Applied Statistics and Probability for Engineers, 5th edition


CHAPTER 7 Section 7-2

7-1. The proportion of arrivals for chest pain is 8 among 103 total arrivals, so the proportion = 8/103 = 0.078.

7-2. The proportion is 10/80 = 1/8.

7-3. With µ = 2.565, σ = 0.008 and n = 9, σ_X̄ = σ/√n = 0.008/√9 = 0.00267. Then
P(2.560 ≤ X̄ ≤ 2.570) = P((2.560 − 2.565)/0.00267 ≤ Z ≤ (2.570 − 2.565)/0.00267)
= P(−1.875 ≤ Z ≤ 1.875) = 0.9696 − 0.0304 = 0.9392
7-4. X_i ~ N(100, 10²) and n = 25, so µ_X̄ = 100 and σ_X̄ = σ/√n = 10/√25 = 2. Then
P(96.4 ≤ X̄ ≤ 102) = P[(100 − 1.8(2)) ≤ X̄ ≤ (100 + 2)]
= P((96.4 − 100)/2 ≤ Z ≤ (102 − 100)/2) = P(−1.8 ≤ Z ≤ 1)
= 0.8413 − 0.0359 = 0.8054

7-5. µ_X̄ = 520 kN/m²; σ_X̄ = σ/√n = 25/√6 = 10.206. Then
P(X̄ > 525) = P(Z > (525 − 520)/10.206) = P(Z > 0.4899) = 1 − 0.6879 = 0.3121

7-6. For n = 6: σ_X̄ = 3.5/√6 = 1.429. For n = 49: σ_X̄ = 3.5/√49 = 0.5.
σ_X̄ is reduced by 1.429 − 0.5 = 0.929 psi.
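These sampling-distribution probabilities are easy to spot-check numerically. The sketch below uses only the standard library (the normal CDF Φ built from `math.erf`); the means, standard deviations, and sample sizes are the values quoted in 7-3 and 7-4:

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF written in terms of the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def prob_xbar_between(lo, hi, mu, sigma, n):
    # P(lo <= X-bar <= hi) when X-bar has standard error sigma/sqrt(n).
    se = sigma / sqrt(n)
    return phi((hi - mu) / se) - phi((lo - mu) / se)

p_73 = prob_xbar_between(2.560, 2.570, 2.565, 0.008, 9)   # exercise 7-3
p_74 = prob_xbar_between(96.4, 102.0, 100.0, 10.0, 25)    # exercise 7-4
print(round(p_73, 4), round(p_74, 4))
```

Both values agree with the table-based answers (0.9392 and 0.8054) to about four decimals.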

7-7. µ = 17237, σ = 345 and n = 5, so σ_X̄ = 345/√5 = 154.289. Then
P(17230 ≤ X̄ ≤ 17305) = P((17230 − 17237)/154.289 ≤ Z ≤ (17305 − 17237)/154.289)
= P(−0.045 ≤ Z ≤ 0.44) = 0.6700 − 0.482 = 0.188
7-8. σ_X̄ = σ/√n = 50/√5 = 22.361 psi = standard error of X̄

7-9. σ² = 25, so σ_X̄ = 5/√n. Requiring σ_X̄ ≤ 1.5 gives n ≥ (5/1.5)² = 11.11, so n ≅ 12.

7-10. The X_i are continuous uniform on (0, 1), so
µ_X = (a + b)/2 = (0 + 1)/2 = 1/2 and σ²_X = (b − a)²/12 = (1 − 0)²/12 = 1/12
Let Y = X̄ with n = 12. Then µ_Y = 1/2 and σ²_Y = σ²_X/n = (1/12)/12 = 1/144, so
Y = X̄ ~ N(1/2, 1/144), approximately, using the central limit theorem.

7-11. n = 36 and the X_i are discrete uniform on {1, 2, 3}, so
µ_X = (a + b)/2 = (3 + 1)/2 = 2 and σ²_X = [(b − a + 1)² − 1]/12 = [(3 − 1 + 1)² − 1]/12 = 8/12 = 2/3
µ_X̄ = 2, σ_X̄ = √(2/3)/√36 = √(2/3)/6 = 0.1361, and z = (X̄ − µ)/(σ/√n). Then
P(2.1 < X̄ < 2.5) = P((2.1 − 2)/0.1361 < Z < (2.5 − 2)/0.1361) = P(0.7348 < Z < 3.6742)
= Φ(3.6742) − Φ(0.7348) = 1 − 0.7688 = 0.2312

7-12. µ_X = 8.2 minutes, σ_X = 1.5 minutes, n = 49
µ_X̄ = 8.2 mins, σ_X̄ = σ_X/√n = 1.5/√49 = 0.2143 mins

Using the central limit theorem, X is approximately normally distributed.

a) P(X̄ < 10) = P(Z < (10 − 8.2)/0.2143) = P(Z < 8.4) = 1
b) P(5 < X̄ < 10) = P((5 − 8.2)/0.2143 < Z < (10 − 8.2)/0.2143) = P(−14.932 < Z < 8.4) = 1 − 0 = 1
c) P(X̄ < 6) = P(Z < (6 − 8.2)/0.2143) = P(Z < −10.27) = 0

7-13. n₁ = 16, µ₁ = 75, σ₁ = 8 and n₂ = 9, µ₂ = 70, σ₂ = 12
X̄₁ − X̄₂ ~ N(µ₁ − µ₂, σ₁²/n₁ + σ₂²/n₂) = N(75 − 70, 8²/16 + 12²/9) = N(5, 20)
a) P(X̄₁ − X̄₂ > 4) = P(Z > (4 − 5)/√20) = P(Z > −0.2236) = 1 − P(Z ≤ −0.2236) = 1 − 0.4115 = 0.5885
b) P(3.5 ≤ X̄₁ − X̄₂ ≤ 5.5) = P((3.5 − 5)/√20 ≤ Z ≤ (5.5 − 5)/√20)
= P(−0.3354 ≤ Z ≤ 0.1118) = 0.5445 − 0.3687 = 0.1759
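The same Φ helper checks the two-sample calculation in 7-13, where the difference of sample means has mean 5 and variance 20:

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu_d = 75 - 70                        # mean of X1-bar minus X2-bar
sd_d = sqrt(8**2 / 16 + 12**2 / 9)    # sqrt(sigma1^2/n1 + sigma2^2/n2) = sqrt(20)

p_a = 1.0 - phi((4 - mu_d) / sd_d)                          # part (a)
p_b = phi((5.5 - mu_d) / sd_d) - phi((3.5 - mu_d) / sd_d)   # part (b)
print(round(p_a, 4), round(p_b, 4))
```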

7-14. If µ_B = µ_A, then X̄_B − X̄_A is approximately normal with mean 0 and variance σ²_B/25 + σ²_A/25 = 20.48. Then
P(X̄_B − X̄_A ≥ 3.5) = P(Z ≥ (3.5 − 0)/√20.48) = P(Z ≥ 0.773) = 0.2196
The probability that X̄_B exceeds X̄_A by 3.5 or more is not that unusual when µ_B and µ_A are equal. Therefore, there is not strong evidence that µ_B is greater than µ_A.

7-15. X̄_high − X̄_low ~ N(60 − 55, 4²/16 + 4²/16) = N(5, 2). Then
P(X̄_high − X̄_low ≥ 2) = P(Z ≥ (2 − 5)/√2) = P(Z ≥ −2.12) = 1 − 0.0170 = 0.983

Section 7-3

7-16. a) SE Mean = s/√n = 1.815/√20 = 0.4058
Variance = s² = 1.815² = 3.294
b) Estimate of mean of population = sample mean = 50.184

7-17. a) SE Mean = s/√N = 10.25/√25 = 2.05, so N = 25
Mean = Sum/N = 3761.70/25 = 150.468
Variance = s² = 10.25² = 105.0625
Sum of Squares: SS = (N − 1)s² = (25 − 1)(105.0625) = 2521.5
b) Estimate of mean of population = sample mean = 150.468

7-18. a) b)

7-19. With θ̂ = c·Σ(X_i − X̄)²,
E(θ̂) = c·E[Σ(X_i − X̄)²] = c(n − 1)σ²
Bias = E(θ̂) − σ² = c(n − 1)σ² − σ² = [(n − 1)c − 1]σ²

7-20. Let X̄₁ be the average of 2n observations and X̄₂ the average of n observations. Then
E(X̄₁) = (1/(2n))·Σ E(X_i) = (2nµ)/(2n) = µ
E(X̄₂) = (1/n)·Σ E(X_i) = (nµ)/n = µ
so X̄₁ and X̄₂ are unbiased estimators of µ.
The variances are V(X̄₁) = σ²/(2n) and V(X̄₂) = σ²/n, so
MSE(Θ̂₁)/MSE(Θ̂₂) = (σ²/(2n))/(σ²/n) = 1/2

Since both estimators are unbiased, examination of the variances would conclude that X1 is the

“better” estimator with the smaller variance.

7-21. Θ̂₁ = (X₁ + X₂ + ⋯ + X₇)/7 and Θ̂₂ = (2X₁ − X₆ + X₄)/2
E(Θ̂₁) = (1/7)[E(X₁) + E(X₂) + ⋯ + E(X₇)] = (1/7)(7µ) = µ
E(Θ̂₂) = (1/2)[2E(X₁) − E(X₆) + E(X₄)] = (1/2)[2µ − µ + µ] = µ
a) Both Θ̂₁ and Θ̂₂ are unbiased estimates of µ since the expected values of these statistics are equivalent to the true mean, µ.
b) V(Θ̂₁) = V[(X₁ + ⋯ + X₇)/7] = (1/49)[V(X₁) + ⋯ + V(X₇)] = (1/49)(7σ²), so V(Θ̂₁) = σ²/7
V(Θ̂₂) = V[(2X₁ − X₆ + X₄)/2] = (1/4)[4V(X₁) + V(X₆) + V(X₄)] = (1/4)(4σ² + σ² + σ²) = (1/4)(6σ²), so V(Θ̂₂) = (3/2)σ²
Since both estimators are unbiased, the variances can be compared to select the better estimator. The variance of Θ̂₁ is smaller than that of Θ̂₂, so Θ̂₁ is the better estimator.
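A short Monte Carlo run illustrates 7-21's conclusion. The population N(10, 2²) is an arbitrary choice for illustration; both estimators center on µ, but Θ̂₂ is far noisier:

```python
import random
import statistics

random.seed(7)
mu, sigma = 10.0, 2.0
est1, est2 = [], []
for _ in range(20000):
    x = [random.gauss(mu, sigma) for _ in range(7)]
    est1.append(sum(x) / 7)                    # Theta1-hat: mean of all 7
    est2.append((2 * x[0] - x[5] + x[3]) / 2)  # Theta2-hat: (2X1 - X6 + X4)/2
# Expect both means near 10; variances near sigma^2/7 and (3/2)sigma^2.
print(round(statistics.mean(est1), 2), round(statistics.mean(est2), 2))
print(round(statistics.variance(est1), 3), round(statistics.variance(est2), 3))
```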

7-22. Since both Θ̂₁ and Θ̂₂ are unbiased, the variances of the estimators can be compared to select the better estimator. The variance of Θ̂₂ is smaller than that of Θ̂₁, thus Θ̂₂ may be the better estimator.
Relative Efficiency = MSE(Θ̂₁)/MSE(Θ̂₂) = V(Θ̂₁)/V(Θ̂₂) = 10/4 = 2.5

7-23. E(Θ̂₁) = θ and E(Θ̂₂) = θ/2, so
Bias(Θ̂₂) = E(Θ̂₂) − θ = θ/2 − θ = −θ/2
V(Θ̂₁) = 10, V(Θ̂₂) = 4
For unbiasedness, use Θ̂₁ since it is the only unbiased estimator. As for minimum variance and efficiency we have:
Relative Efficiency = [V(Θ̂₁) + Bias₁²]/[V(Θ̂₂) + Bias₂²]
where the bias of Θ̂₁ is 0. Thus,
Relative Efficiency = (10 + 0)/(4 + (−θ/2)²) = 40/(16 + θ²)

Use Θ̂₁ when 40/(16 + θ²) ≤ 1, that is, when 40 ≤ 16 + θ², i.e., 24 ≤ θ², which gives θ ≤ −4.899 or θ ≥ 4.899. If −4.899 < θ < 4.899, then use Θ̂₂.
For unbiasedness, use Θ̂₁. For efficiency, use Θ̂₁ when θ ≤ −4.899 or θ ≥ 4.899, and use Θ̂₂ when −4.899 < θ < 4.899.

7-24. E(Θ̂₁) = θ, no bias, V(Θ̂₁) = 12 = MSE(Θ̂₁)
E(Θ̂₂) = θ, no bias, V(Θ̂₂) = 10 = MSE(Θ̂₂)
E(Θ̂₃) ≠ θ, biased, MSE(Θ̂₃) = 6 [note that this includes the bias²]
To compare the three estimators, calculate the relative efficiencies:
MSE(Θ̂₁)/MSE(Θ̂₂) = 12/10 = 1.2; since rel. eff. > 1, use Θ̂₂ as the estimator for θ
MSE(Θ̂₁)/MSE(Θ̂₃) = 12/6 = 2; since rel. eff. > 1, use Θ̂₃ as the estimator for θ
MSE(Θ̂₂)/MSE(Θ̂₃) = 10/6 = 1.8; since rel. eff. > 1, use Θ̂₃ as the estimator for θ
Conclusion: Θ̂₃ is the most efficient estimator, but it is biased. Θ̂₂ is the best "unbiased" estimator.

7-25. n₁ = 20, n₂ = 10, n₃ = 8

Show that S² is unbiased:
S² = (20S₁² + 10S₂² + 8S₃²)/38
E(S²) = (1/38)[20E(S₁²) + 10E(S₂²) + 8E(S₃²)] = (1/38)(20σ² + 10σ² + 8σ²) = (38σ²)/38 = σ²
Therefore, S² is an unbiased estimator of σ².

7-26. Show that Σ(X_i − X̄)²/n is a biased estimator of σ²:
a) E[Σ(X_i − X̄)²/n] = (1/n)E[ΣX_i² − nX̄²] = (1/n)[ΣE(X_i²) − nE(X̄²)]
= (1/n)[n(µ² + σ²) − n(µ² + σ²/n)] = (1/n)(n − 1)σ² = σ² − σ²/n
∴ Σ(X_i − X̄)²/n is a biased estimator of σ².
b) Bias = E[Σ(X_i − X̄)²/n] − σ² = (σ² − σ²/n) − σ² = −σ²/n
c) Bias decreases as n increases.

7-27. a) Show that X̄² is a biased estimator of µ². Using E(X̄²) = V(X̄) + [E(X̄)]²:
E(X̄²) = V[(1/n)ΣX_i] + [E((1/n)ΣX_i)]² = (1/n²)ΣV(X_i) + [(1/n)ΣE(X_i)]²
= (1/n²)(nσ²) + [(1/n)(nµ)]² = σ²/n + µ²
Therefore, X̄² is a biased estimator of µ².
b) Bias = E(X̄²) − µ² = σ²/n + µ² − µ² = σ²/n
c) Bias decreases as n increases.
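The bias in 7-26 is easy to see by simulation. In the sketch below, the population N(5, 4) and n = 5 are arbitrary illustration choices; it compares dividing the sum of squares by n against dividing by n − 1:

```python
import random

random.seed(1)
mu, sigma2, n, reps = 5.0, 4.0, 5, 40000
biased_sum, unbiased_sum = 0.0, 0.0
for _ in range(reps):
    x = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(x) / n
    ss = sum((xi - xbar) ** 2 for xi in x)
    biased_sum += ss / n          # divide by n: biased low
    unbiased_sum += ss / (n - 1)  # divide by n - 1: unbiased
mean_biased = biased_sum / reps      # approaches sigma2 - sigma2/n = 3.2
mean_unbiased = unbiased_sum / reps  # approaches sigma2 = 4.0
print(round(mean_biased, 2), round(mean_unbiased, 2))
```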

7-28. a) The average of the 26 observations provided can be used as an estimator of the mean pull force because we know it is unbiased. This value is 336.36 N.

b) The median of the sample can be used as an estimate of the point that divides the population into a “weak” and “strong” half. This estimate is 334.55 N.

c) Our estimate of the population variance is the sample variance, 54.16 N². Similarly, our estimate of the population standard deviation is the sample standard deviation, 7.36 N. d) The estimated standard error of the mean pull force is 7.36/√26 = 1.44 N. This value is the standard deviation, not of the pull force, but of the mean pull force of the sample.

e) No connector in the sample has a pull force measurement under 324 N.

7-29. Descriptive Statistics

Variable N Mean Median TrMean StDev SE Mean Oxide Thickness 24 423.33 424.00 423.36 9.08 1.85


a) The mean oxide thickness, as estimated by Minitab from the sample, is 423.33 Angstroms. b) Standard deviation for the population can be estimated by the sample standard deviation, or 9.08

Angstroms.

c) The standard error of the mean is 1.85 Angstroms. d) Our estimate for the median is 424 Angstroms.

e) Seven of the measurements exceed 430 Angstroms, so our estimate of the proportion requested is 7/24 = 0.2917 7-30. a)

E(p̂) = E(X/n) = (1/n)E(X) = (1/n)(np) = p
b) We know that the variance of p̂ is p(1 − p)/n, so its standard error must be √[p(1 − p)/n]. To estimate this parameter we would substitute our estimate of p into it.

7-31. a) E(X̄₁ − X̄₂) = E(X̄₁) − E(X̄₂) = µ₁ − µ₂
b) s.e. = √(V(X̄₁ − X̄₂)) = √(V(X̄₁) + V(X̄₂)) = √(σ₁²/n₁ + σ₂²/n₂)
This standard error could be estimated by using the estimates for the standard deviations of populations 1 and 2.
c) E(S_p²) = E[((n₁ − 1)S₁² + (n₂ − 1)S₂²)/(n₁ + n₂ − 2)]
= [(n₁ − 1)E(S₁²) + (n₂ − 1)E(S₂²)]/(n₁ + n₂ − 2)
= [(n₁ − 1)σ₁² + (n₂ − 1)σ₂²]/(n₁ + n₂ − 2)
which equals σ² when σ₁² = σ₂² = σ².

7-32. a) E(µ̂) = E[αX̄₁ + (1 − α)X̄₂] = αE(X̄₁) + (1 − α)E(X̄₂) = αµ + (1 − α)µ = µ
b) s.e.(µ̂) = √(V[αX̄₁ + (1 − α)X̄₂]) = √[α²V(X̄₁) + (1 − α)²V(X̄₂)]
= √[α²σ₁²/n₁ + (1 − α)²σ₂²/n₂] = σ√[α²/n₁ + a(1 − α)²/n₂]
using σ₁² = σ² and σ₂² = aσ².
c) The value of α that minimizes the standard error is α = an₁/(an₁ + n₂).

d) With a = 4 and n₁ = 2n₂, the value of α to choose is 8/9. The arbitrary value α = 0.5 is too small and will result in a larger standard error. With α = 8/9 the standard error is
s.e.(µ̂) = σ√[(8/9)²/(2n₂) + 4(1/9)²/n₂] = 0.667σ/√n₂
If α = 0.5 the standard error is
s.e.(µ̂) = σ√[(0.5)²/(2n₂) + 4(0.5)²/n₂] = 1.0607σ/√n₂

7-33. a) E(X₁/n₁ − X₂/n₂) = E(X₁)/n₁ − E(X₂)/n₂ = (n₁p₁)/n₁ − (n₂p₂)/n₂ = p₁ − p₂
b) V(X₁/n₁ − X₂/n₂) = p₁(1 − p₁)/n₁ + p₂(1 − p₂)/n₂, and the standard error is the square root of this quantity.
c) An estimate of the standard error could be obtained by substituting X₁/n₁ for p₁ and X₂/n₂ for p₂ in the equation shown in (b).
d) Our estimate of the difference in proportions is 0.01
e) The estimated standard error is 0.0413
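Part (c)'s substitution is a one-liner. The counts below are hypothetical, chosen only so that p̂₁ − p̂₂ = 0.01 as in part (d); they are not the data behind the 0.0413 quoted above:

```python
from math import sqrt

def se_diff_props(x1, n1, x2, n2):
    # Estimated standard error of p1-hat minus p2-hat, substituting the
    # sample proportions into the formula from part (b).
    p1, p2 = x1 / n1, x2 / n2
    return sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

p_diff = 26 / 200 - 24 / 200          # hypothetical: 0.13 - 0.12 = 0.01
se = se_diff_props(26, 200, 24, 200)
print(round(p_diff, 2), round(se, 4))
```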


Section 7-4

7-34. f(x) = p(1 − p)^(x−1), so
L(p) = Π p(1 − p)^(x_i − 1) = p^n (1 − p)^(Σx_i − n)
ln L(p) = n ln p + (Σx_i − n) ln(1 − p)
d ln L(p)/dp = n/p − (Σx_i − n)/(1 − p) ≡ 0
n(1 − p) − p(Σx_i − n) = 0
p̂ = n/Σx_i

7-35. f(x) = e^(−λ)λ^x/x!, so
L(λ) = Π e^(−λ)λ^(x_i)/x_i! = e^(−nλ)λ^(Σx_i)/Πx_i!
ln L(λ) = −nλ + (Σx_i) ln λ − ln(Πx_i!)
d ln L(λ)/dλ = −n + (Σx_i)/λ ≡ 0
λ̂ = (Σx_i)/n = X̄
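The closed-form answer λ̂ = x̄ can be confirmed by brute force: evaluate the Poisson log-likelihood on a grid and locate its maximum (the counts below are a hypothetical sample):

```python
from math import lgamma, log

data = [2, 4, 1, 3, 0, 2, 5, 3]  # hypothetical Poisson counts
n = len(data)
xbar = sum(data) / n

def log_lik(lam):
    # ln L = -n*lambda + (sum x_i) ln(lambda) - sum ln(x_i!)
    return -n * lam + sum(data) * log(lam) - sum(lgamma(x + 1) for x in data)

grid = [0.01 * k for k in range(1, 1001)]  # search lambda in (0, 10]
best = max(grid, key=log_lik)
print(xbar, round(best, 2))
```

The grid maximum lands on the sample mean, as the derivative argument predicts.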

7-36. f(x) = (θ + 1)x^θ, so
L(θ) = Π (θ + 1)x_i^θ = (θ + 1)^n (x₁x₂⋯x_n)^θ
ln L(θ) = n ln(θ + 1) + θΣ ln x_i
∂ ln L(θ)/∂θ = n/(θ + 1) + Σ ln x_i ≡ 0
θ̂ = −n/Σ ln x_i − 1

7-37. f(x) = λe^(−λ(x−θ)) for x ≥ θ, so
L(λ, θ) = Π λe^(−λ(x_i−θ)) = λⁿ exp(−λ(Σx_i − nθ))
ln L(λ, θ) = n ln λ − λΣx_i + nλθ
∂ ln L(λ, θ)/∂λ = n/λ − Σx_i + nθ ≡ 0
λ̂ = n/(Σx_i − nθ̂) = 1/(x̄ − θ̂)
The other parameter θ cannot be estimated by setting the derivative of the log likelihood with respect to θ to zero, because the log likelihood is a linear function of θ. The range of the likelihood is important. The joint density function, and therefore the likelihood, is zero for θ > Min(X₁, X₂, …, X_n). The term +nλθ in the log likelihood is maximized by taking θ as large as possible within the range of nonzero likelihood. Therefore, the log likelihood is maximized when θ is estimated with Min(X₁, X₂, …, X_n), so that θ̂ = x_min.

b) Example: Consider traffic flow and let the time that has elapsed between one car passing a fixed point and the instant that the next car begins to pass that point be considered time headway. This headway can be modeled by the shifted exponential distribution.

Example in Reliability: Consider a process where failures are of interest. Suppose that a unit is put into operation at x = 0, but no failures will occur until θ period of operation. Failures will occur only after the time θ.
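Both estimates from 7-37 are direct to compute once data are in hand (the lifetimes below are hypothetical):

```python
data = [3.2, 2.1, 4.8, 2.6, 3.9, 2.3]  # hypothetical lifetimes

theta_hat = min(data)                   # MLE of the threshold theta
xbar = sum(data) / len(data)
lam_hat = 1.0 / (xbar - theta_hat)      # MLE of lambda: 1/(x-bar - theta-hat)
print(theta_hat, round(lam_hat, 3))
```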

7-38. f(x) = (x/θ)e^(−x²/(2θ)), so
L(θ) = Π (x_i/θ)e^(−x_i²/(2θ))
ln L(θ) = Σ ln x_i − n ln θ − (1/(2θ))Σx_i²
d ln L(θ)/dθ = −n/θ + (1/(2θ²))Σx_i²
Setting the last equation equal to zero and solving for θ yields
θ̂ = Σx_i²/(2n)

7-39. E(X) = (0 + a)/2 = a/2, and setting this equal to X̄ = (1/n)ΣX_i gives the moment estimator â = 2X̄.
The expected value of this estimate is the true parameter, so it is unbiased. This estimate is reasonable in one sense because it is unbiased. However, there are obvious problems. Consider the sample x₁ = 1, x₂ = 2 and x₃ = 10. Now x̄ = 4.33 and â = 2x̄ = 8.667. This is an unreasonable estimate of a, because clearly a ≥ 10.

7-40. a) ∫ from −1 to 1 of c(1 + θx) dx = c[x + θx²/2] from −1 to 1 = 2c = 1
so that the constant c should equal 0.5.
b) E(X) = (1/2)∫ from −1 to 1 of x(1 + θx) dx = θ/3, so equating this to X̄ gives
Θ̂ = 3X̄ = (3/n)ΣX_i
c) E(Θ̂) = E(3X̄) = 3E(X̄) = 3(θ/3) = θ, so the estimator is unbiased.
d) L(θ) = Π (1/2)(1 + θx_i)
ln L(θ) = −n ln 2 + Σ ln(1 + θx_i)
d ln L(θ)/dθ = Σ x_i/(1 + θx_i)
Setting this derivative equal to zero gives an equation with no closed-form solution; the maximum likelihood estimate must be found numerically.

7-41. a) E(X²) = 2θ. Equating E(X²) to (1/n)ΣX_i² gives the moment estimator
Θ̂ = (1/(2n))ΣX_i²
b) L(θ) = Π (x_i/θ)e^(−x_i²/(2θ))
ln L(θ) = Σ ln x_i − n ln θ − (1/(2θ))Σx_i²
d ln L(θ)/dθ = −n/θ + (1/(2θ²))Σx_i²
Setting the last equation equal to zero, the maximum likelihood estimate is
Θ̂ = (1/(2n))ΣX_i²
and this is the same result obtained in part (a).
c) Setting 0.5 = ∫ from 0 to a of f(x) dx = 1 − e^(−a²/(2θ)) and solving for a gives
a = √(2θ ln 2)

We can estimate the median (a) by substituting our estimate for θ into the equation for a. 7-42. a)

The maximum of the sample cannot be unbiased as an estimator of a, since it will always be less than a.
b) Bias = na/(n + 1) − a = −a/(n + 1), which approaches 0 as n → ∞.
c) 2X̄
d) P(Y ≤ y) = P(X₁ ≤ y, …, X_n ≤ y) = [P(X₁ ≤ y)]ⁿ = (y/a)ⁿ. Thus, f(y) is as given:
f(y) = ny^(n−1)/aⁿ for 0 ≤ y ≤ a
Thus, Bias = E(Y) − a = na/(n + 1) − a = −a/(n + 1).
e) For any n > 1, n(n + 2) > 3n, so the variance of â₂ is less than that of â₁. It is in this sense that the second estimator is better than the first.

7-43. a) L(β, δ) = Π (β/δ)(x_i/δ)^(β−1) exp[−(x_i/δ)^β]
ln L(β, δ) = n ln β − n ln δ + (β − 1)Σ ln(x_i/δ) − Σ(x_i/δ)^β
b) ∂ ln L(β, δ)/∂β = n/β + Σ ln(x_i/δ) − Σ(x_i/δ)^β ln(x_i/δ)
∂ ln L(β, δ)/∂δ = −nβ/δ + (β/δ)Σ(x_i/δ)^β
Upon setting ∂ ln L(β, δ)/∂δ equal to zero, we obtain
δ^β = (Σx_i^β)/n and δ = [(Σx_i^β)/n]^(1/β)
Upon setting ∂ ln L(β, δ)/∂β equal to zero and substituting for δ, we obtain the implicit equation
β = [Σx_i^β ln x_i / Σx_i^β − (1/n)Σ ln x_i]^(−1)

c) Numerical iteration is required.
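One workable numerical scheme is bisection on the implicit β equation, followed by the closed-form δ̂. The sample below is hypothetical; a library root-finder would do equally well:

```python
from math import log

data = [1.2, 0.8, 2.5, 1.7, 0.9, 3.1, 1.4, 2.0]  # hypothetical sample
n = len(data)
mean_log = sum(log(x) for x in data) / n

def score(beta):
    # Zero exactly when beta satisfies the implicit MLE equation of part (b).
    s0 = sum(x ** beta for x in data)
    s1 = sum((x ** beta) * log(x) for x in data)
    return s1 / s0 - mean_log - 1.0 / beta

lo, hi = 0.1, 50.0  # score(lo) < 0 < score(hi) for a nondegenerate sample
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if score(mid) < 0:
        lo = mid
    else:
        hi = mid
beta_hat = 0.5 * (lo + hi)
delta_hat = (sum(x ** beta_hat for x in data) / n) ** (1.0 / beta_hat)
print(round(beta_hat, 3), round(delta_hat, 3))
```

Bisection is reliable here because the score function is increasing in β: the weighted mean Σx^β ln x / Σx^β grows with β, and so does −1/β.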

7-44. a) Using the results from Example 7-12 we obtain that the estimate of the mean is 423.33 and the estimate of the variance is 82.4464

b)

The function seems to have a ridge and its curvature is not too pronounced. The maximum value for std deviation is at 9.08, although it is difficult to see on the graph.

c) When n is increased to 40, the graph looks the same although the curvature is more pronounced. As n increases, it is easier to determine the maximum value for the standard deviation is on the graph.

[Surface plot of the log-likelihood (log L) versus the mean and standard deviation omitted]

7-45. From Example 7-16, the posterior distribution for µ is normal with mean
µ̃ = [(σ²/n)µ₀ + σ₀²x̄]/(σ₀² + σ²/n)
and variance
σ₀²(σ²/n)/(σ₀² + σ²/n)
The Bayes estimator for µ goes to the MLE as n increases. This follows since σ²/n goes to 0, and the estimator approaches σ₀²x̄/σ₀² (the σ₀²'s cancel). Thus, in the limit µ̂ = x̄.

7-46. a) Because
f(x|µ) = (1/√(2πσ²))e^(−(x−µ)²/(2σ²)) and f(µ) = 1/(b − a) for a ≤ µ ≤ b,
the joint distribution is
f(x, µ) = [1/((b − a)√(2πσ²))]e^(−(x−µ)²/(2σ²))
for −∞ < x < ∞ and a ≤ µ ≤ b. Then
f(x) = [1/(b − a)]∫ from a to b of (1/√(2πσ²))e^(−(x−µ)²/(2σ²)) dµ
and this integral is recognized as a normal probability. Therefore,
f(x) = [1/(b − a)][Φ((b − x)/σ) − Φ((a − x)/σ)]
where Φ(x) is the standard normal cumulative distribution function. Then
f(µ|x) = f(x, µ)/f(x) = (1/√(2πσ²))e^(−(x−µ)²/(2σ²)) / [Φ((b − x)/σ) − Φ((a − x)/σ)]
b) The Bayes estimator is
µ̃ = ∫ from a to b of µ(1/√(2πσ²))e^(−(x−µ)²/(2σ²)) dµ / [Φ((b − x)/σ) − Φ((a − x)/σ)]
Let v = x − µ. Then dv = −dµ and
µ̃ = ∫ from x−b to x−a of (x − v)(1/√(2πσ²))e^(−v²/(2σ²)) dv / [Φ((b − x)/σ) − Φ((a − x)/σ)]
= x − ∫ from x−b to x−a of v(1/√(2πσ²))e^(−v²/(2σ²)) dv / [Φ((b − x)/σ) − Φ((a − x)/σ)]
Let w = v²/(2σ²). Then dw = (v/σ²) dv, and the remaining integral can be evaluated to give
µ̃ = x + σ[e^(−(a−x)²/(2σ²)) − e^(−(b−x)²/(2σ²))] / (√(2π)[Φ((b − x)/σ) − Φ((a − x)/σ)])

7-47. a) f(x|λ) = e^(−λ)λ^x/x! for x = 0, 1, 2, …, and the prior is the gamma density
f(λ) = [(m + 1)/λ₀]^(m+1) λ^m e^(−(m+1)λ/λ₀)/Γ(m + 1) for λ > 0
Then
f(x, λ) = [(m + 1)/λ₀]^(m+1) λ^(m+x) e^(−λ[1 + (m+1)/λ₀]) / [Γ(m + 1)x!]
This last density is recognized to be a gamma density as a function of λ. Therefore, the posterior distribution of λ is a gamma distribution with parameters m + x + 1 and 1 + (m + 1)/λ₀.
b) The mean of the posterior distribution can be obtained from the results for the gamma distribution:
E(λ|x) = (m + x + 1)/[1 + (m + 1)/λ₀] = λ₀(m + x + 1)/(λ₀ + m + 1)

7-48. a) From Example 7-16, the Bayes estimate is
µ̃ = [(16/25)(4) + (1)(4.85)]/(16/25 + 1) = 4.518
b) µ̂ = x̄ = 4.85
The Bayes estimate appears to underestimate the mean.
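The Example 7-16 formula used in 7-48 and 7-49 is worth wrapping as a helper (a sketch; the arguments shown are the 7-48 values as reconstructed above):

```python
def bayes_normal_mean(xbar, mu0, var_prior, var_sampling_mean):
    # Posterior mean of a normal mean under a normal prior: a weighted
    # average of the prior mean mu0 and the sample mean x-bar, with
    # weights var_sampling_mean (= sigma^2/n) and var_prior (= sigma_0^2).
    w = var_sampling_mean
    return (w * mu0 + var_prior * xbar) / (var_prior + w)

mu_tilde = bayes_normal_mean(xbar=4.85, mu0=4.0, var_prior=1.0,
                             var_sampling_mean=16.0 / 25.0)
print(round(mu_tilde, 3))
```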

7-49. a) From Example 7-16,
µ̃ = [(0.0045)(2.28) + (0.018)(2.29)]/(0.0045 + 0.018) = 2.288
b) µ̂ = x̄ = 2.29
The Bayes estimate is very close to the MLE of the mean.

7-50. a) f(x|λ) = λe^(−λx) for x ≥ 0, and f(λ) = 0.01e^(−0.01λ). Then
f(x₁, x₂, λ) = λ²e^(−λ(x₁+x₂))(0.01)e^(−0.01λ) = 0.01λ²e^(−λ(x₁+x₂+0.01))
As a function of λ, this is recognized as a gamma density with parameters 3 and x₁ + x₂ + 0.01. Therefore, the posterior mean for λ is
λ̃ = 3/(x₁ + x₂ + 0.01) = 0.00133
b) Using the Bayes estimate for λ,
P(X < 1000) = ∫ from 0 to 1000 of 0.00133e^(−0.00133x) dx = 1 − e^(−1.33) = 0.736

Supplemental Exercises

7-51. f(x₁, x₂, …, x_n) = λⁿe^(−λΣx_i) for x₁ > 0, x₂ > 0, …, x_n > 0

7-52. f(x₁, x₂, x₃, x₄, x₅) = [1/(2πσ²)^(5/2)] exp[−Σ(x_i − µ)²/(2σ²)]

7-53. f(x₁, x₂, x₃, x₄) = 1 for 0 ≤ x₁ ≤ 1, 0 ≤ x₂ ≤ 1, 0 ≤ x₃ ≤ 1, 0 ≤ x₄ ≤ 1

7-54. X̄₁ − X̄₂ ~ N(100 − 105, 2.0²/25 + 2.5²/30) = N(−5, 0.3683)

7-55. X ~ N(50, 144) and n = 36, so σ_X̄ = 12/√36 = 2. Then
P(47 ≤ X̄ ≤ 53) = P((47 − 50)/2 ≤ Z ≤ (53 − 50)/2) = P(−1.5 ≤ Z ≤ 1.5)
= 0.9332 − 0.0668 = 0.8664

No, because Central Limit Theorem states that with large samples (n ≥ 30), X is approximately

normally distributed.

7-56. Assume X is approximately normally distributed. Then
P(X̄ > 34) = 1 − P(X̄ ≤ 34) = 1 − P(Z ≤ (34 − 38)/(0.7/√9)) = 1 − P(Z ≤ −17.14) = 1 − 0 = 1

7-57. z = (x̄ − µ)/(s/√n) = (52 − 50)/(√2/√16) = 5.6569
P(Z > z) ≈ 0. The results are very unusual.

7-58. P(X̄ ≤ 37) = P(Z ≤ −5.36) = 0

7-59. Binomial with p equal to the proportion of defective chips and n = 100.

7-60. E[aX̄₁ + (1 − a)X̄₂] = aE(X̄₁) + (1 − a)E(X̄₂) = aµ + (1 − a)µ = µ
V[aX̄₁ + (1 − a)X̄₂] = a²V(X̄₁) + (1 − a)²V(X̄₂) = a²σ²/n₁ + (1 − a)²σ²/n₂
∂V/∂a = 2aσ²/n₁ − 2(1 − a)σ²/n₂ ≡ 0
Solving, a(1/n₁ + 1/n₂) = 1/n₂, so a = n₁/(n₁ + n₂).

7-61. f(x) = [1/(2θ³)]x²e^(−x/θ), so
L(θ) = Π [1/(2θ³)]x_i²e^(−x_i/θ)
ln L(θ) = −3n ln θ − n ln 2 + 2Σ ln x_i − (1/θ)Σx_i
d ln L(θ)/dθ = −3n/θ + (1/θ²)Σx_i
Making the last equation equal to zero and solving for θ, we obtain
Θ̂ = Σx_i/(3n)

as the maximum likelihood estimate.

7-62. f(x) = θx^(θ−1), so
L(θ) = Π θx_i^(θ−1) = θⁿ(Πx_i)^(θ−1)
ln L(θ) = n ln θ + (θ − 1)Σ ln x_i
d ln L(θ)/dθ = n/θ + Σ ln x_i
Making the last equation equal to zero and solving for θ, we obtain the maximum likelihood estimate
Θ̂ = −n/Σ ln x_i

7-63. f(x) = (1/θ)x^((1−θ)/θ), so
L(θ) = Π (1/θ)x_i^((1−θ)/θ)
ln L(θ) = −n ln θ + [(1/θ) − 1]Σ ln x_i
d ln L(θ)/dθ = −n/θ − (1/θ²)Σ ln x_i
Making the last equation equal to zero and solving for the parameter of interest, we obtain the maximum likelihood estimate
Θ̂ = −(1/n)Σ ln x_i

To show that Θ̂ is unbiased:
E(Θ̂) = E[−(1/n)Σ ln X_i] = −(1/n)Σ E(ln X_i) = −(1/n)(n)(−θ) = θ
since
E(ln X_i) = ∫ from 0 to 1 of (ln x)(1/θ)x^((1/θ)−1) dx = −θ
which follows by integration by parts with u = ln x and dv = (1/θ)x^((1/θ)−1) dx.
Therefore, Θ̂ is an unbiased estimator of θ.

7-64. a) b)

7-65. µ̂ = x̄ = (23.5 + 15.6 + 17.4 + ⋯ + 28.7)/10 = 21.9
Demand for all 5000 houses is θ = 5000µ, so
θ̂ = 5000x̄ = 5000(21.9) = 109,500
The proportion estimate is

Mind-Expanding Exercises

7-66. P(X₁ = 0, X₂ = 0) = (N − M)(N − M − 1)/[N(N − 1)]
P(X₁ = 0, X₂ = 1) = (N − M)M/[N(N − 1)]
P(X₁ = 1, X₂ = 0) = M(N − M)/[N(N − 1)]
P(X₁ = 1, X₂ = 1) = M(M − 1)/[N(N − 1)]
P(X₂ = 0) = P(X₂ = 0|X₁ = 0)P(X₁ = 0) + P(X₂ = 0|X₁ = 1)P(X₁ = 1)
= [(N − M − 1)/(N − 1)][(N − M)/N] + [(N − M)/(N − 1)][M/N] = (N − M)/N
P(X₂ = 1) = P(X₂ = 1|X₁ = 0)P(X₁ = 0) + P(X₂ = 1|X₁ = 1)P(X₁ = 1)
= [M/(N − 1)][(N − M)/N] + [(M − 1)/(N − 1)][M/N] = M/N
Because P(X₂ = 0|X₁ = 0) = (N − M − 1)/(N − 1) is not equal to P(X₂ = 0) = (N − M)/N, X₁ and X₂ are not independent.

7-67. a) c_n = √((n − 1)/2) · Γ((n − 1)/2)/Γ(n/2)
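Part (a)'s constant can be evaluated directly with `math.gamma`; the sketch below reproduces the two values quoted in part (b):

```python
from math import gamma, sqrt

def c_n(n):
    # Unbiasing constant: E(c_n * S) = sigma for a normal sample of size n.
    return sqrt((n - 1) / 2.0) * gamma((n - 1) / 2.0) / gamma(n / 2.0)

print(round(c_n(10), 4), round(c_n(25), 4))
```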

b) When n = 10, c_n = 1.0281. When n = 25, c_n = 1.0105. So S is a fairly good estimator for the standard deviation even when relatively small sample sizes are used.

7-68.

a) Z_i = Y_i − X_i, so Z_i is N(0, 2σ²). Let σ*² = 2σ². The likelihood function is
L(σ*²) = Π (1/(σ*√(2π)))e^(−z_i²/(2σ*²)) = [1/(2πσ*²)^(n/2)] exp[−(1/(2σ*²))Σz_i²]
The log-likelihood is
ln[L(σ*²)] = −(n/2)ln(2πσ*²) − (1/(2σ*²))Σz_i²
d ln[L(σ*²)]/dσ*² = −n/(2σ*²) + (1/(2σ*⁴))Σz_i² = 0
σ̂*² = (1/n)Σz_i² = (1/n)Σ(y_i − x_i)²
But σ*² = 2σ², so the MLE is
σ̂² = (1/(2n))Σ(y_i − x_i)²
b) E(σ̂²) = (1/(4n))Σ E[(Y_i − X_i)²] = (1/(4n))Σ[E(Y_i²) − 2E(Y_i)E(X_i) + E(X_i²)]
= (1/(4n))[nσ² − 0 + nσ²] = (2nσ²)/(4n) = σ²/2
So the estimator is biased. The bias is independent of n.
c) An unbiased estimator of σ² is given by 2σ̂².

7-69. From Chebyshev's inequality,
P(|X̄ − µ| ≥ cσ/√n) ≤ 1/c²
Then,
P(|X̄ − µ| < cσ/√n) ≥ 1 − 1/c²
Given an ε, n and c can be chosen sufficiently large that the last probability is near 1 and cσ/√n is equal to ε.

7-70. a) P(X_(n) ≤ t) = P(X_i ≤ t for i = 1, …, n) = [F(t)]ⁿ
P(X_(1) > t) = P(X_i > t for i = 1, …, n) = [1 − F(t)]ⁿ

Then,
P(X_(1) ≤ t) = 1 − [1 − F(t)]ⁿ
b) Differentiating the cumulative distribution functions from part (a):
f_X(n)(t) = ∂[F(t)]ⁿ/∂t = n[F(t)]^(n−1)f(t)
f_X(1)(t) = ∂(1 − [1 − F(t)]ⁿ)/∂t = n[1 − F(t)]^(n−1)f(t)
c) Because F(0) = 1 − p for a Bernoulli random variable:
P(X_(1) = 0) = F_X(1)(0) = 1 − [1 − F(0)]ⁿ = 1 − pⁿ
P(X_(n) = 1) = 1 − F_X(n)(0) = 1 − [F(0)]ⁿ = 1 − (1 − p)ⁿ
d) P(X ≤ t) = F(t) = Φ((t − µ)/σ). From part (b),
f_X(n)(t) = n[Φ((t − µ)/σ)]^(n−1) · (1/(σ√(2π)))e^(−(t−µ)²/(2σ²))
f_X(1)(t) = n[1 − Φ((t − µ)/σ)]^(n−1) · (1/(σ√(2π)))e^(−(t−µ)²/(2σ²))
e) P(X ≤ t) = 1 − e^(−λt). From part (b),
F_X(1)(t) = 1 − e^(−nλt) and f_X(1)(t) = nλe^(−nλt)
F_X(n)(t) = [1 − e^(−λt)]ⁿ and f_X(n)(t) = n[1 − e^(−λt)]^(n−1)λe^(−λt)

7-71.

(

( )

)

(

( )

)

n n n t PX F t t X F

P () ≤ = ()≤ −1 = from Exercise 7-62 for 0≤ ≤t 1. If Y=F(X(n)), then fY(y)=nyn−1,0≤ y≤1. Then, 1 ) ( 1 0 + = =

n n dy ny Y E n

( )

(

)

(

( )

)

n

t

t

F

X

P

t

X

F

P

(1)

=

(1)

−1

=

1

(

1

)

from Exercise 7-62 for 0≤ ≤t 1.

If Y=F(X(1)), then ( )= (1 ) −1,0 1 y t n y fY n . Then, 1 1 ) 1 ( ) ( 1 0 1 + = − =

n dy y yn Y

E n where integration by parts is used. Therefore,

1 1 )] ( [ 1 )] ( [ () = + (1) = + n X F E and n n X F E n 7-72. ( ) [ ( 2 ) ( 2) 2 ( 1)] 1 1 1 + + − = − + =

i i i i n i X X E X E X E k V E 2 2 2 2 2 2 1 1 2 ) 1 ( ) 2 ( σ µ µ σ µ σ − = − + + + =

− = n k k n i Therefore, k=2(n11)

7-73. a) The traditional estimate of the standard deviation, S, is 3.26. The mean of the sample is 13.43 so the values of

X

i

X

corresponding to the given observations are 3.43, 1.43, 4.43, 0.57, 4.57, 1.57 and 2.57. The median of these new quantities is 2.57 so the new estimate of the standard deviation is 3.81; slightly larger than the value obtained with the traditional estimator.

b) Making the first observation in the original sample equal to 50 produces the following results. The traditional estimator, S, is equal to 13.91. The new estimator remains unchanged.

(23)

7-74. a) ) ... )( ( ... ... 1 2 3 1 2 1 1 2 3 1 2 1 2 3 1 2 1 1 2 1 1 − − − + + − + − + − + − + + − + − + + + − + − + + − + + = r r r r r X X X X X X X r n X X X X X X X X X X X X X X X X T

Because X1 is the minimum lifetime of n items, λ

n X

E( 1)= 1 .

Then, X2 − X1 is the minimum lifetime of (n-1) items from the memoryless property of the exponential and λ ) 1 ( 1 ) ( 2− 1 = n X X E . Similarly, λ ) 1 ( 1 ) ( − −1 = + k n X X E k k . Then, λ λ λ λ r r n r n n n n n T E r = + − + − + + − − + = ) 1 ( 1 ... ) 1 ( 1 ) ( and µ λ = = ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ 1 r T E r

b) V(Tr/r)=1/(λ2r)is related to the variance of the Erlang distribution

2

/ )

(X r λ

V = . They are related by the value (1/r2). The censored variance is (1/r2) times the

Figure

Updating...

References

Related subjects :