Chapter 6 Best Linear Unbiased Estimate (BLUE)

Academic year: 2021



Motivation for BLUE

Except for the Linear Model case, the optimal MVU estimator might:

1. not even exist
2. be difficult or impossible to find

⇒ Resort to a sub-optimal estimate. BLUE is one such sub-optimal estimate.

Idea for BLUE:

1. Restrict the estimate to be linear in the data x
2. Restrict the estimate to be unbiased
3. Find the best one (i.e., the one with minimum variance)

Advantage of BLUE: needs only the 1st and 2nd moments of the PDF (mean & covariance)

Disadvantage of BLUE: sub-optimal (in general)

6.3 Definition of BLUE (scalar case)

Observed data: x = [x[0] x[1] . . . x[N−1]]^T

PDF: p(x; θ) depends on unknown θ

BLUE constrained to be linear in the data:

  θ̂_BLU = Σ_{n=0}^{N−1} a_n x[n] = a^T x

Choose the a_n's to give:

1. an unbiased estimator
2. then minimize the variance

[Figure (note: this is not Fig. 6.1): variance comparison of estimator classes. The BLUE has the smallest variance among all linear unbiased estimators, but the MVUE may be nonlinear and achieve an even lower variance.]

6.4 Finding The BLUE (Scalar Case)

1. Constrain to be linear:

  θ̂ = Σ_{n=0}^{N−1} a_n x[n]

2. Constrain to be unbiased:

  θ = E{θ̂} = Σ_{n=0}^{N−1} a_n E{x[n]}   (using the linear constraint)

Q: When can we meet both of these constraints?

Finding BLUE for Scalar Linear Observations

Consider the scalar-parameter linear observation model:

  x[n] = θ s[n] + w[n]  ⇒  E{x[n]} = θ s[n]

Then for the unbiased condition we need:

  E{θ̂} = Σ_{n=0}^{N−1} a_n s[n] θ = θ  ⇒  need  a^T s = 1

This tells how to choose the weights in the BLUE estimator form θ̂ = Σ_{n=0}^{N−1} a_n x[n].

Now… given that these constraints are met… we need to minimize the variance!! Given that C is the covariance matrix of x we have:

  var(θ̂_BLU) = var(a^T x) = a^T C a
Goal: minimize a^T C a subject to a^T s = 1

Constrained optimization ⇒ (Appendix 6A) use Lagrange multipliers:

Minimize J = a^T C a + λ(a^T s − 1)

Set ∂J/∂a = 2Ca + λs = 0  ⇒  a = −(λ/2) C⁻¹ s

Apply the constraint:  a^T s = 1  ⇒  −(λ/2) s^T C⁻¹ s = 1  ⇒  −λ/2 = 1/(s^T C⁻¹ s)

  ⇒  a = C⁻¹s / (s^T C⁻¹ s)

  θ̂_BLUE = a^T x = s^T C⁻¹ x / (s^T C⁻¹ s),   var(θ̂) = 1/(s^T C⁻¹ s)
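As a quick numerical sketch of this result (all values below are illustrative, not from the text; C is taken diagonal for simplicity), the weights a = C⁻¹s/(s^T C⁻¹ s) can be checked for unbiasedness and compared against naive least-squares weights:

```python
import numpy as np

# Scalar BLUE for x[n] = theta*s[n] + w[n] with noise covariance C.
rng = np.random.default_rng(0)
theta = 2.5                                  # true parameter (illustrative)
s = np.array([1.0, 0.5, 2.0, 1.5])           # known signal samples s[n]
C = np.diag([1.0, 4.0, 0.25, 2.0])           # noise covariance (diagonal here)

Cinv_s = np.linalg.solve(C, s)
a = Cinv_s / (s @ Cinv_s)                    # BLUE weights: C^{-1}s / (s^T C^{-1} s)
assert np.isclose(a @ s, 1.0)                # unbiasedness constraint a^T s = 1
var_blue = 1.0 / (s @ Cinv_s)                # var(theta_hat) = 1/(s^T C^{-1} s)

# Any other unbiased linear weights do no better, e.g. least-squares weights:
a_ls = s / (s @ s)
var_ls = a_ls @ C @ a_ls

# Monte-Carlo check of mean and variance of the BLUE
L = np.linalg.cholesky(C)
est = [a @ (theta * s + L @ rng.standard_normal(4)) for _ in range(200_000)]
print(np.mean(est), np.var(est), var_blue, var_ls)
```

The Monte-Carlo mean should land on θ and the empirical variance on 1/(s^T C⁻¹ s), which is below the variance of the least-squares weights whenever the noise is not white.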

Applicability of BLUE

We just derived the BLUE under the following:

1. Linear observations, but with no constraint on the noise PDF
2. No knowledge of the noise PDF other than its mean and covariance!!

What does this tell us???

BLUE is applicable to linear observations… but the noise need not be Gaussian!!! (as was assumed in the Ch. 4 Linear Model)

And all we need are the 1st and 2nd moments of the PDF!!!

But… we'll see in the Example that we can often linearize a nonlinear model!!!

6.5 Vector Parameter Case: Gauss-Markov Thm

Gauss-Markov Theorem:

If the data can be modeled as having linear observations in noise:

  x = Hθ + w

with H a known matrix and w zero-mean noise with known covariance C (the PDF is otherwise arbitrary & unknown), then the BLUE is:

  θ̂_BLUE = (H^T C⁻¹ H)⁻¹ H^T C⁻¹ x

and its covariance is:

  C_θ̂ = (H^T C⁻¹ H)⁻¹
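A small numerical sketch of the theorem (sizes and values are illustrative, not from the text). The key structural fact is that the BLUE's linear map G = (H^T C⁻¹ H)⁻¹ H^T C⁻¹ satisfies GH = I, which is exactly the unbiasedness condition for every θ:

```python
import numpy as np

# Gauss-Markov sketch: x = H theta + w, w zero-mean with known covariance C.
rng = np.random.default_rng(1)
theta = np.array([1.0, -2.0])               # true parameter (illustrative)
H = rng.standard_normal((20, 2))            # known observation matrix
C = np.diag(rng.uniform(0.5, 3.0, 20))      # known noise covariance (diagonal here)

Cinv_H = np.linalg.solve(C, H)
cov_blue = np.linalg.inv(H.T @ Cinv_H)      # C_theta_hat = (H^T C^{-1} H)^{-1}
G = cov_blue @ Cinv_H.T                     # theta_hat = G x
assert np.allclose(G @ H, np.eye(2))        # G H = I  <=>  unbiased for any theta

x = H @ theta + np.sqrt(np.diag(C)) * rng.standard_normal(20)
theta_hat = G @ x                           # typically close to theta
print(theta_hat)
```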

Ex. 4.3: TDOA-Based Emitter Location

[Figure: emitter Tx at (xs, ys); receivers Rx1 at (x1, y1), Rx2 at (x2, y2), Rx3 at (x3, y3) receive delayed copies s(t − t1), s(t − t2), s(t − t3). Each TDOA defines a hyperbola: τ12 = t2 − t1 = constant, τ23 = t3 − t2 = constant. TDOA = Time-Difference-of-Arrival]

Assume that the i-th Rx can measure its TOA: t_i

Then… from the set of TOAs… compute TDOAs

Then… from the set of TDOAs… estimate the location (xs, ys)

We won't worry about "how" they do that.

Also… there are TDOA systems that never measure TOAs at all; they measure the TDOAs directly (e.g., by cross-correlating pairs of received signals).

TOA Measurement Model

Assume measurements of TOAs at N receivers (only 3 shown above): t0, t1, … , tN−1. There are measurement errors.

TOA measurement model:

  t_i = T_o + R_i/c + ε_i,   i = 0, 1, . . . , N−1

where
  T_o = time the signal was emitted
  R_i = range from Tx to Rx_i
  c = speed of propagation (for EM: c = 3×10^8 m/s)
  ε_i = measurement noise ⇒ zero-mean, variance σ², independent (but PDF unknown)
        (variance determined from the estimator used to estimate the t_i's)

Now use R_i = [(x_s − x_i)² + (y_s − y_i)²]^(1/2):

  t_i = f_i(x_s, y_s) = T_o + (1/c)[(x_s − x_i)² + (y_s − y_i)²]^(1/2) + ε_i   ← Nonlinear model

Linearization of TOA Model

So… we linearize the model so we can apply BLUE  ⇒  θ = [δx_s δy_s]^T

Assume some rough estimate (x_n, y_n) is available:

  x_s = x_n + δx_s     y_s = y_n + δy_s
  (known + to-estimate perturbation, for each coordinate)

Now use a truncated Taylor series to linearize R_i about (x_n, y_n):

  R_i ≈ R_i^n + A_i δx_s + B_i δy_s

where R_i^n is the range from (x_n, y_n) to Rx_i and the partial derivatives

  A_i = ∂R_i/∂x_s |_(x_n, y_n) = (x_n − x_i)/R_i^n
  B_i = ∂R_i/∂y_s |_(x_n, y_n) = (y_n − y_i)/R_i^n

are known. Apply this to the TOA model:

  t̃_i := t_i − R_i^n/c = T_o + (A_i/c) δx_s + (B_i/c) δy_s + ε_i

with R_i^n/c, A_i/c, and B_i/c all known.
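A sketch of this linearization step with made-up positions (units: meters and seconds; all numbers are illustrative) shows how accurate the truncated Taylor series is when (x_n, y_n) is near the truth:

```python
import numpy as np

# Linearized TOA check (illustrative geometry).
c = 3e8
rx = np.array([[-500.0, 0.0], [0.0, 0.0], [500.0, 0.0]])  # receiver positions
tx = np.array([120.0, 800.0])               # true emitter (xs, ys)
nom = np.array([100.0, 780.0])              # rough estimate (xn, yn)
To = 1e-3                                   # emission time

Rn = np.linalg.norm(nom - rx, axis=1)       # nominal ranges R_i^n
A = (nom[0] - rx[:, 0]) / Rn                # A_i = dR_i/dx at (xn, yn)
B = (nom[1] - rx[:, 1]) / Rn                # B_i = dR_i/dy at (xn, yn)

delta = tx - nom                            # (dxs, dys), the unknowns
t_exact = To + np.linalg.norm(tx - rx, axis=1) / c    # noiseless TOAs
t_tilde = t_exact - Rn / c                  # subtract the known part
t_lin = To + (A * delta[0] + B * delta[1]) / c        # linearized prediction
print(np.abs(t_tilde - t_lin))              # nanosecond-level residuals here
```

The residual t̃_i − (linear model) is the second-order Taylor remainder, so it shrinks quadratically as the rough estimate improves.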

TOA Model vs. TDOA Model

Two options now:

1. Use TOA to estimate 3 parameters: T_o, δx_s, δy_s
2. Use TDOA to estimate 2 parameters: δx_s, δy_s

Generally, the fewer parameters the better… everything else being the same. But… here "everything else" is not the same: Options 1 & 2 have different noise models
  (Option 1 has independent noise)
  (Option 2 has correlated noise)

Conversion to TDOA Model

Use N−1 TDOAs rather than N TOAs. TDOAs:

  τ_i = t̃_i − t̃_(i−1),   i = 1, 2, …, N−1
      = [(A_i − A_(i−1))/c] δx_s + [(B_i − B_(i−1))/c] δy_s + (ε_i − ε_(i−1))
             known                       known               correlated noise

(Note that the common emission time T_o cancels in the differencing.)

In matrix form: x = Hθ + w, with

  x = [τ_1 τ_2 … τ_(N−1)]^T,    θ = [δx_s δy_s]^T

  H = (1/c) [ A_1 − A_0          B_1 − B_0
              A_2 − A_1          B_2 − B_1
                 ⋮                   ⋮
              A_(N−1) − A_(N−2)  B_(N−1) − B_(N−2) ]

  w = Aε,  ε = [ε_0 ε_1 … ε_(N−1)]^T  ⇒  C_w = σ² AA^T

See book for the structure of the differencing matrix A.

Apply BLUE to TDOA Linearized Model

With C_w = σ² AA^T, the BLUE and its covariance are:

  θ̂_BLUE = (H^T C_w⁻¹ H)⁻¹ H^T C_w⁻¹ x = (H^T (AA^T)⁻¹ H)⁻¹ H^T (AA^T)⁻¹ x

  C_θ̂ = (H^T C_w⁻¹ H)⁻¹ = σ² (H^T (AA^T)⁻¹ H)⁻¹

C_θ̂ describes how large the location error is. In the estimate itself, the dependence on σ² cancels out!!!

Things we can now do:

1. Explore the estimation-error covariance for different Tx/Rx geometries
   • Plot error ellipses
2. Analytically explore simple geometries to find trends
   • See next chart (more details in book)
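Putting the pieces together, here is a minimal end-to-end sketch of the linearized TDOA BLUE (geometry, noise level, and nominal point are all made up for illustration; the differencing matrix is called D in the code to avoid clashing with the partials A_i):

```python
import numpy as np

# End-to-end sketch of the linearized TDOA BLUE (all values illustrative).
c = 3e8
rx = np.array([[-500.0, 0.0], [0.0, 0.0], [500.0, 0.0], [0.0, 400.0]])
tx = np.array([120.0, 800.0])        # true emitter location
nom = np.array([100.0, 780.0])       # rough estimate (xn, yn)
N = len(rx)

Rn = np.linalg.norm(nom - rx, axis=1)
AB = (nom - rx) / Rn[:, None]        # row i holds the partials [A_i, B_i]
D = -np.eye(N - 1, N) + np.eye(N - 1, N, 1)   # differencing matrix ("A" above)
H = (D @ AB) / c

rng = np.random.default_rng(2)
sigma = 3e-9                         # TOA noise std dev (seconds)
t = 1e-3 + np.linalg.norm(tx - rx, axis=1) / c + sigma * rng.standard_normal(N)
tau = D @ (t - Rn / c)               # TDOAs; differencing also removes To

M = np.linalg.inv(D @ D.T)           # (AA^T)^{-1}: sigma^2 cancels in theta_hat
theta_hat = np.linalg.solve(H.T @ M @ H, H.T @ M @ tau)
cov = sigma**2 * np.linalg.inv(H.T @ M @ H)   # location-error covariance
print(nom + theta_hat)               # refined estimate of (xs, ys)
```

Note that `theta_hat` is computed without ever using `sigma`: scaling C_w by σ² multiplies both the "numerator" and "denominator" of the BLUE, so only the covariance depends on σ².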

Apply TDOA Result to Simple Geometry

[Figure: three collinear receivers Rx1, Rx2, Rx3 spaced d apart; Tx at range R, with angle α between the baseline and the Tx lines of sight at Rx1 and Rx3]

Then one can show:

  C_θ̂ = c²σ² [ 1/(2cos²α)           0
                    0         3/(2(1 − sinα)²) ]

Diagonal error covariance ⇒ aligned error ellipse (axes along x and y)

And… the y-error is always bigger than the x-error.
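This closed form can be checked numerically against the general BLUE covariance. The geometry below encodes my reading of the figure (Tx on the perpendicular through Rx2, so that the angle between the baseline and the line of sight at Rx1 and Rx3 is α), so treat it as a sketch:

```python
import numpy as np

# Numerical check of the simple-geometry covariance (assumed geometry:
# Rx1, Rx2, Rx3 on a line spaced d; Tx above Rx2 with baseline angle alpha).
c, sigma, d = 3e8, 2e-9, 1000.0
alpha = np.deg2rad(30.0)
rx = np.array([[-d, 0.0], [0.0, 0.0], [d, 0.0]])
tx = np.array([0.0, d * np.tan(alpha)])      # makes the angle at Rx1/Rx3 = alpha

R = np.linalg.norm(tx - rx, axis=1)
AB = (tx - rx) / R[:, None]                  # partials [A_i, B_i] at the Tx point
D = np.array([[-1.0, 1.0, 0.0], [0.0, -1.0, 1.0]])   # TDOA differencing
H = (D @ AB) / c

cov = sigma**2 * np.linalg.inv(H.T @ np.linalg.inv(D @ D.T) @ H)
pred = (c * sigma)**2 * np.diag([1 / (2 * np.cos(alpha)**2),
                                 3 / (2 * (1 - np.sin(alpha))**2)])
print(cov, pred, sep="\n")                   # agree, with ~zero off-diagonals
```

Since cos²α = (1 − sinα)(1 + sinα), the variance ratio is σ_y²/σ_x² = 3(1 + sinα)/(1 − sinα) ≥ 3, which is why the y-error always dominates.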

(16)

0 10 20 30 40 50 60 70 80 90 10-1 100 101 102 103 α (degre es ) σ x /c σ o r σ y /c σ σx σy Rx1 R x2 Rx3 d d

α

R

α

Tx

• Used Std. Dev. to show units of X & Y • Normalized by cσ… get actual values by

multiplying by your specific cσ value
