CHAPTER 7

Section 7-2
7-1. The proportion of arrivals for chest pain is 8 among 103 total arrivals, so the proportion = 8/103.
7-2. The proportion is 10/80 = 1/8.
7-3. X̄ is normal with μ = 2.565 and σ_X̄ = σ/√n = 0.008/√9:
P(2.560 ≤ X̄ ≤ 2.570) = P((2.560 − 2.565)/(0.008/√9) ≤ Z ≤ (2.570 − 2.565)/(0.008/√9))
= P(−1.875 ≤ Z ≤ 1.875) = 0.9696 − 0.0304 = 0.9392
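Normal-probability calculations like the one in 7-3 can be sanity-checked with the standard library alone, since `erf` gives the standard normal CDF. The sketch below re-uses the exercise's values (μ = 2.565, σ = 0.008, n = 9):

```python
import math

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def prob_xbar_between(lo, hi, mu, sigma, n):
    # P(lo <= Xbar <= hi) when Xbar ~ N(mu, sigma^2 / n)
    se = sigma / math.sqrt(n)
    return phi((hi - mu) / se) - phi((lo - mu) / se)

p = prob_xbar_between(2.560, 2.570, mu=2.565, sigma=0.008, n=9)
```

The result agrees with the table-based 0.9392 to three decimal places.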
7-4. X_i ~ N(100, 10²), n = 25: μ_X̄ = μ = 100, σ_X̄ = σ/√n = 10/√25 = 2
P(96.4 ≤ X̄ ≤ 102) = P((96.4 − 100)/2 ≤ Z ≤ (102 − 100)/2) = P(−1.8 ≤ Z ≤ 1)
= P(Z ≤ 1) − P(Z ≤ −1.8) = 0.8413 − 0.0359 = 0.8054
7-5. μ_X̄ = 520 kN/m²; σ_X̄ = σ/√n = 25/√6 = 10.206
P(X̄ ≥ 525) = P(Z ≥ (525 − 520)/10.206) = P(Z ≥ 0.4899) = 1 − P(Z ≤ 0.4899) = 1 − 0.6879 = 0.3121
7-6. With σ = 3.5 psi:
n = 6: σ_X̄ = 3.5/√6 = 1.429;  n = 49: σ_X̄ = 3.5/√49 = 0.5
σ_X̄ is reduced by 0.929 psi.
7-7. μ_X̄ = μ = 17237; σ_X̄ = σ/√n = 345/√5 = 154.289
P(17230 ≤ X̄ ≤ 17305) = P((17230 − 17237)/154.289 ≤ Z ≤ (17305 − 17237)/154.289)
= P(−0.045 ≤ Z ≤ 0.44) = Φ(0.44) − Φ(−0.045) = 0.670 − 0.482 = 0.188
7-8. σ_X̄ = σ/√n = 50/√5 = 22.361 psi = standard error of X̄
7-9. σ² = 25, so σ = 5. To obtain σ_X̄ = 1.5, n = (σ/σ_X̄)² = (5/1.5)² ≈ 11.11, so take n = 12.
7-10. Let Y = X̄ − 6, where the X_i are uniform on (0, 1) and n = 12:
μ_X = (a + b)/2 = (0 + 1)/2 = 1/2,  σ²_X = (b − a)²/12 = 1/12
σ²_X̄ = σ²_X/n = (1/12)/12 = 1/144,  σ_X̄ = 1/12
μ_Y = μ_X̄ − 6 = 1/2 − 6 = −5½,  σ²_Y = σ²_X̄ = 1/144
Y = X̄ − 6 ~ N(−5½, 1/144), approximately, using the central limit theorem.
7-11. n = 36, with X discrete uniform on {1, 2, 3}:
μ_X = (3 + 1)/2 = 2,  σ²_X = [(3 − 1 + 1)² − 1]/12 = 8/12 = 2/3
μ_X̄ = 2,  σ_X̄ = √(2/3)/√36 = √(2/3)/6,  z = (X̄ − μ)/(σ/√n)
P(2.1 < X̄ < 2.5) = P((2.1 − 2)/(√(2/3)/6) < Z < (2.5 − 2)/(√(2/3)/6))
= P(0.7348 < Z < 3.6742) = Φ(3.6742) − Φ(0.7348) = 1 − 0.7688 = 0.2312
7-12. μ_X = 8.2 minutes, σ_X = 1.5 minutes, n = 49
μ_X̄ = μ_X = 8.2 mins,  σ_X̄ = σ_X/√n = 1.5/√49 = 0.2143 mins
Using the central limit theorem, X̄ is approximately normally distributed.
a) P(X̄ < 10) = P(Z < (10 − 8.2)/0.2143) = P(Z < 8.4) ≅ 1
b) P(5 < X̄ < 10) = P((5 − 8.2)/0.2143 < Z < (10 − 8.2)/0.2143) = P(−14.932 < Z < 8.4) = 1 − 0 = 1
c) P(X̄ < 6) = P(Z < (6 − 8.2)/0.2143) = P(Z < −10.27) ≅ 0
7-13. n₁ = 16, μ₁ = 75, σ₁ = 8;  n₂ = 9, μ₂ = 70, σ₂ = 12
X̄₁ − X̄₂ ~ N(μ₁ − μ₂, σ₁²/n₁ + σ₂²/n₂) = N(75 − 70, 8²/16 + 12²/9) = N(5, 20)
a) P(X̄₁ − X̄₂ > 4) = P(Z > (4 − 5)/√20) = P(Z > −0.2236) = 1 − P(Z ≤ −0.2236) = 1 − 0.4115 = 0.5885
b) P(3.5 ≤ X̄₁ − X̄₂ ≤ 5.5) = P((3.5 − 5)/√20 ≤ Z ≤ (5.5 − 5)/√20)
= P(−0.3354 ≤ Z ≤ 0.1118) = Φ(0.1118) − Φ(−0.3354) = 0.5445 − 0.3687 = 0.1759
714. If µB =µA, then XB−XA is approximately normal with mean 0 and variance _{25} _{25} 20.48
2 2 = + A B σ σ _{. } Then,
_{P}
(
_{X}
−
_{X}
>
3
.
5
)
=
_{P}
(
_{Z}
>
3_{20}.5−_{.}_{48}0)
=
_{P}
(
_{Z}
>
0
.
773
)
=
0
.
2196
A BThe probability that
X
_{B} exceedsX
_{A} by 3.5 or more is not that unusual when µB and µA areequal. Therefore, there is not strong evidence that µB is greater than µA.
983 . 0 0170 . 0 1 ) 12 . 2 ( 1 ) ( ) 2 ( ) 2 , 5 ( ~ ) , 55 60 ( ~ ) ( 2 5 2 16 4 16 42 2 = − = − ≤ − = ≥ = ≥ − + − − − _{P} _{Z} Z P X X P N N X X low high low high Section 73 716. a) 2 2
1.815
SE Mean σ
0.4058
20
Variance σ
1.815
3.294
Xs
n
=
=
=
=
=
=
=
b) Estimate of mean of population = sample mean = 50.184 717. a)
s
SE Mean
10.25
2.05
N
25
N
=
→
N
=
→
=
2 23761.70
Mean
150.468
25
Variance
10.25
105.0625
Sum of Squares
Variance
105.0625
2521.5
1
25 1
S
ss
SS
n
=
=
=
=
=
=
→
=
→
=
−
−
b) Estimate of mean of population = sample mean = 150.468 718. a) b) 719.
( )
(
(
_{1})
2)
2 1θ
n/
(
1)
/
iE
=
E
∑
_{=}X
−
X
c
=
n
−
σ
c
( )
(
1
)
2 2 21
Bias
E
θ
θ
n
(
n
1)
c
c
σ
σ
σ
−
−
=
− =
=
−
720. E( )
_{⎟⎟}=( )
µ =µ ⎠ ⎞ ⎜⎜ ⎝ ⎛ = ⎟ ⎟ ⎟ ⎟ ⎠ ⎞ ⎜ ⎜ ⎜ ⎜ ⎝ ⎛ =∑
_{∑}
= = _{n} n X E n n X E X n i i n i i 2 2 1 2 1 2 2 1 2 1 1 E( )
_{⎟⎟}=( )
µ =µ ⎠ ⎞ ⎜⎜ ⎝ ⎛ = ⎟ ⎟ ⎟ ⎟ ⎠ ⎞ ⎜ ⎜ ⎜ ⎜ ⎝ ⎛ =∑
_{∑}
= = _{n} n X E n n X E X n i i n i i _{1} _{1} 1 12 , X1and X2 are unbiased estimators of µ.
The variances are V
( )
2n 2 1 =σ X and V
( )
n 2 2 =σ2 1 2 / 2 / ) ˆ ( ) ˆ ( 2 2 2 1 _{=} _{=} _{=} n n n n MSE MSE σ σ Θ Θ
Since both estimators are unbiased, examination of the variances would conclude that X1 is the
“better” estimator with the smaller variance.
721. E
( )
Θ =[
+ + +]
= = (7µ)=µ 7 1 )) ( 7 ( 7 1 ) ( ) ( ) ( 7 1 ˆ 7 2 1 1 E X E X E X E X E( )
Θ =[
+ +]
= [2µ−µ+µ]=µ 2 1 ) ( ) ( ) 2 ( 2 1 ˆ 7 6 1 2 E X E X E Xa) Both Θˆ1and Θˆ2 are unbiased estimates of µ since the expected values of these statistics are equivalent to the true mean, µ.
b) V
( )
(
)
2 2 7 2 1 2 7 2 1 1 7 1 ) 7 ( 49 1 ) ( ) ( ) ( 7 1 7 ... ˆ _{σ} _{σ} Θ _{⎥}= + + + = = ⎦ ⎤ ⎢ ⎣ ⎡ + + + =V X X X V X V X V X 7 ) ˆ ( 2 1 σ Θ = V V( )
(
)
(4 ( ) ( ) ( )) 4 1 ) ( ) ( ) 2 ( 2 1 2 2 ˆ 4 6 1 4 6 1 2 4 6 1 2 V X V X V X V X V X V X X X X V _{⎥}= + + = + + ⎦ ⎤ ⎢ ⎣ ⎡ − + = Θ = 1(
)
4 4 2 2 2 σ +σ +σ = 1 4 6 2 ( σ ) 2 3 ) ˆ ( 2 2 σ Θ = VSince both estimators are unbiased, the variances can be compared to select the better estimator. The variance of Θˆ1is smaller than that of Θˆ2,Θˆ1is the better estimator.
722. Since both Θˆ1and Θˆ2 are unbiased, the variances of the estimators can compared to select the better estimator. The variance of θ_{2} is smaller than that of θˆ1thus θ2 may be the better estimator.
Relative Efficiency = 2.5 4 10 ) ˆ ( ) ( ) ˆ ( ) ˆ ( 2 1 2 1 _{=} _{=} _{=} Θ Θ Θ Θ V V MSE MSE 723. E(Θˆ_{1})=θ E(Θˆ_{2})=θ/2 θ Θ − =E(ˆ_{2}) Bias = θ θ 2− = − θ 2 V(Θˆ1)= 10 V(Θˆ2)= 4
For unbiasedness, use Θˆ1since it is the only unbiased estimator. As for minimum variance and efficiency we have:
Relative Efficiency = 2 2 2 1 2 1 ) ) ˆ ( ( ) ) ˆ ( ( Bias V Bias V + + Θ Θ
where bias for θ_{1}is 0. Thus, Relative Efficiency =
(
)
(10 0) 4 2 40 16 2 2 + +⎛_{⎝⎜}− ⎞_{⎠⎟} ⎛ ⎝ ⎜ ⎜ ⎞ ⎠ ⎟ ⎟ = + θ θUse Θˆ1, when 40 16 2 1 ( +θ )≤ 40≤(16+θ2) 24≤ θ2 θ ≤ −4 899. or θ ≥ 4 899. If −4 899. < <θ 4 899. then use Θˆ2.
For unbiasedness, use Θˆ1. For efficiency, use Θˆ1when θ ≤ −4 899. or θ ≥ 4 899. and use Θˆ2 when
−4 899. < <θ 4 899. . 724. E(Θˆ_{1})=θ No bias V(Θˆ_{1})=12=MSE(Θˆ_{1}) θ Θˆ )= ( _{2} E No bias V(Θˆ_{2})=10=MSE(Θˆ_{2}) θ Θˆ )≠ ( _{3}
E Bias MSE(Θˆ_{3})=6 [note that this includes (bias2)]
To compare the three estimators, calculate the relative efficiencies:
2 . 1 10 12 ) ˆ ( ) ˆ ( 2 1 _{=} _{=} Θ Θ MSE
MSE _{, since rel. eff. > 1 use }
2
ˆ
Θ as the estimator for θ
2 6 12 ) ˆ ( ) ˆ ( 3 1 _{=} _{=} Θ Θ MSE
MSE _{, since rel. eff. > 1 use }
3
ˆ
Θ as the estimator for θ
8 . 1 6 10 ) ˆ ( ) ˆ ( 3 2 _{=} _{=} Θ Θ MSE
MSE _{, since rel. eff. > 1 use }
3
ˆ
Θ as the estimator for θ Conclusion:
3
ˆ
Θ is the most efficient estimator with bias, but it is biased. Θˆ2 is the best “unbiased” estimator. 725. n1 = 20, n2 = 10, n3 = 8
Show that S2_{ is unbiased: }
( )
(
) ( ) ( )
(
)
(
2)
(
2)
2 3 2 2 2 1 2 3 2 2 2 1 2 3 2 2 2 1 2 38 38 1 8 10 20 38 1 8 10 20 38 1 38 8 10 20 σ σ σ σ σ + + = = = + + = ⎟⎟ ⎠ ⎞ ⎜⎜ ⎝ ⎛ + + = S E S E S E S S S E S ETherefore, S2_{ is an unbiased estimator of σ}2_{ . }
726. Show that
(
)
n X X n i i 2 1∑
= − _{is a biased estimator of σ}2_{ : } a)
(
)
(
)
( )
( )
(
)
(
)
n n n n n n n n n n X nE X E n X n X E n n X X E n i n i i n i i n i i 2 2 2 2 2 2 2 1 2 2 2 2 2 1 2 1 2 1 2 ) ) 1 (( 1 1 1 1 1 σ σ σ σ µ σ µ σ µ σ µ − = − = − − + = ⎟⎟ ⎠ ⎞ ⎜⎜ ⎝ ⎛ ⎟⎟ ⎠ ⎞ ⎜⎜ ⎝ ⎛ + − + = ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ _{−} = ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ _{−} = ⎟ ⎟ ⎟ ⎟ ⎠ ⎞ ⎜ ⎜ ⎜ ⎜ ⎝ ⎛ −∑
∑
∑
∑
= = = = ∴(
)
n X X_{i} 2∑
− _{is a biased estimator of σ}2_{ . } b) Bias =(
)
n n n X n X E i 2 2 2 2 2 2 2 _{σ} σ σ σ σ = − − =− − ⎥ ⎥ ⎦ ⎤ ⎢ ⎢ ⎣ ⎡_{∑}
_{−}c) Bias decreases as n increases.
727. a) Show that 2
X is a biased estimator of µ2. Using E X
( )
2 =V X( )+[
E X( )]
2( )
( )
(
)
(
2 2 2) ( )
2 2 2 2 2 2 2 2 1 2 2 2 1 1 2 2 1 2 2 1 1 1 1 1 µ σ µ σ µ σ µ σ + = + = + = ⎟ ⎟ ⎠ ⎞ ⎜ ⎜ ⎝ ⎛ ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ + = ⎟ ⎟ ⎠ ⎞ ⎜ ⎜ ⎝ ⎛ ⎥ ⎦ ⎤ ⎢ ⎣ ⎡ ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ + ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ = ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ =∑
∑
∑
∑
= = = = n X E n n n n n n n n X E X V n X E n X E n i n i i n i i n i i Therefore, 2 X is a biased estimator of µ.2 b) Bias =( )
n n X E 2 2 2 2 2 2 _{−}_{µ} _{=}σ _{+}_{µ} _{−}_{µ} _{=}σ c) Bias decreases as n increases.728. a) The average of the 26 observations provided can be used as an estimator of the mean pull force because we know it is unbiased. This value is 336.36 N.
b) The median of the sample can be used as an estimate of the point that divides the population into a “weak” and “strong” half. This estimate is 334.55 N.
c) Our estimate of the population variance is the sample variance or 54.16 N2_{. Similarly, our } estimate of the population standard deviation is the sample standard deviation or 7.36 N. d) The estimated standard error of the mean pull force is 7.36/26½_{ = 1.44. This value is the } standard deviation, not of the pull force, but of the mean pull force of the sample.
e) No connector in the sample has a pull force measurement under 324 N.
729. Descriptive Statistics
Variable N Mean Median TrMean StDev SE Mean Oxide Thickness 24 423.33 424.00 423.36 9.08 1.85
a) The mean oxide thickness, as estimated by Minitab from the sample, is 423.33 Angstroms. b) Standard deviation for the population can be estimated by the sample standard deviation, or 9.08
Angstroms.
c) The standard error of the mean is 1.85 Angstroms. d) Our estimate for the median is 424 Angstroms.
e) Seven of the measurements exceed 430 Angstroms, so our estimate of the proportion requested is 7/24 = 0.2917 730. a)
np
p
n
X
E
n
n
X
E
p
E
(
ˆ
)
=
(
)
=
1
(
)
=
1
=
b) We know that the variance of
pˆ
isn
p
p
⋅
(
1
−
)
so its standard error must be
n
p
p
⋅
(
1
−
)
. To estimate this parameter we would substitute our estimate of p into it.
731. a)
E
(
X
_{1}−
X
_{2})
=
E
(
X
_{1})
−
E
(
X
_{2})
=
µ
_{1}−
µ
_{2} b) 2 2 2 1 2 1 2 1 2 1 2 1)
(
)
(
)
2
(
,
)
(
.
.
n
n
X
X
COV
X
V
X
V
X
X
V
e
s
=
−
=
+
+
=
σ
+
σ
This standard error could be estimated by using the estimates for the standard deviations of populations 1 and 2. c) 2 2 2 1 1 2 2 2 2 1 1 2 2 1 2 1 2 2 2 1 2 2 2 1 1 2 2 1 2 1 2
(
1)
(
1)
1
(
)
(
1) (
) (
1)
(
)
2
2
1
2
(
1)
(
1)
)
2
2
pn
S
n
S
E S
E
n
E S
n
E S
n
n
n
n
n
n
n
n
n
n
σ
σ
n
n
σ
σ
⎛
− ⋅
+
− ⋅
⎞
_{⎡}
_{⎤}
=
_{⎜}
_{⎟}
=
_{⎣}
−
+
− ⋅
_{⎦}
+
−
+
−
⎝
⎠
+ −
⎡
⎤
=
_{⎣}
− ⋅
+
− ⋅
_{⎦}
=
=
+ −
+ −
732. a)E
(
µ
ˆ
)
=
E
(
α
X
_{1}+
(
1
−
α
)
X
_{2})
=
α
E
(
X
_{1})
+
(
1
−
α
)
E
(
X
_{2})
=
αµ
+
(
1
−
α
)
µ
=
µ
b) 2 1 1 2 2 2 1 2 2 1 2 1 2 1 2 2 2 2 2 1 2 1 2 2 2 1 2 2 1 ) 1 ( ) 1 ( ) 1 ( ) ( ) 1 ( ) ( ) ) 1 ( ( ) ˆ .( . n n an n n a n n n X V X V X X V e s α α σ σ α σ α σ α σ α α α α α µ − + = − + = − + = − + = − + =c) The value of alpha that minimizes the standard error is:
1 2 1 an n an + = α
d) With a = 4 and n1=2n2, the value of alpha to choose is 8/9. The arbitrary value of α=0.5 is too small and will result in a larger standard error. With α=8/9 the standard error is
2 1 2 2 2 2 2 2 1 667 . 0 2 8 ) 9 / 1 ( ) 9 / 8 ( ) ˆ .( . n n n n e s µ =σ + = σ
If α=0.5 the standard error is
2 1 2 2 2 2 2 2 1 0607 . 1 2 8 ) 5 . 0 ( ) 5 . 0 ( ) ˆ .( . n n n n e s µ =σ + = σ 733. a) ( ) 1 ( ) 1 ( ) 1 1 2 2 1 2 ( 1 2) 2 1 1 1 2 2 1 1 2 2 1 1 _{n} _{p} _{p} _{p} _{E} _{p} _{p} n p n n X E n X E n n X n X E − = − = − = − = − b) 2 2 2 1 1 1(1 ) (1 ) n p p n p p − _{+} −
c) An estimate of the standard error could be obtained substituting 1 1 n X for 1
p
and 2 2 n X for 2p
in the equation shown in (b).d) Our estimate of the difference in proportions is 0.01 e) The estimated standard error is 0.0413
Section 74 734.
(
)
=
(
1
−
)
x−1p
p
x
f
n x n n i x i n i ip
p
p
p
p
L
− = − ∑ =−
=
−
=
∏
1)
1
(
)
1
(
)
(
1 1∑
∑
∑
∑
∑
∑
= = = = = ==
−
=
−
+
−
−
=
−
⎟
⎠
⎞
⎜
⎝
⎛
_{−}
−
−
=
≡
−
−
−
=
−
⎟
⎠
⎞
⎜
⎝
⎛
−
+
=
n i i n i i n i i n i i n i i n i ix
n
p
x
p
n
p
p
pn
x
p
np
n
p
p
n
x
p
n
p
p
n
x
p
n
p
p
L
p
n
x
p
n
p
L
1 1 1 1 1 1ˆ
0
)
1
(
0
)
1
(
)
1
(
0
0
1
)
(
ln
)
1
ln(
ln
)
(
ln
∂
∂
735.!
)
(
x
e
x
f
xλ
λ −=
∏
∏
= − = − ∑==
=
_{n} i i i x n n i i xx
e
x
e
L
n i i 1 1!
_{!}
)
(
λ
λ
λ
1 λ λ n x n x x n x n d L d x x e n L n i i n i i n i i n i i i n i n i i∑
∑
∑
∑
∑
∑
= = = = = = = = = + − ≡ + − = − + − = 1 1 1 1 1 1 ˆ 0 0 1 ) ( ln ! ln ln ln ) ( ln λ λ λ λ λ λ λ λ λ736. _{f}_{(}_{x}_{)}_{=}_{(}_{θ}_{+}_{1}_{)}_{x}θ θ θ θ θ θ θ θ θ θ i n i n n i i x x x x L
∏
∏
= = + = × + × + = + = 1 2 1 1 ) 1 ( ... ) 1 ( ) 1 ( ) 1 ( ) ( 1 ln ˆ ln 1 0 ln 1 ) ( ln ln ) 1 ln( ... ln ln ) 1 ln( ) ( ln 1 1 1 1 2 1 − − = − = + = + + = + + = + + + + =∑
∑
∑
∑
= = = = i n i i n i i n i i n i x n x n x n L x n x x n L θ θ θ ∂θ θ ∂ θ θ θ θ θ θ 737. _{(} _{)}_{=}_{λ} −λ(x−θ) e x f for x≥θ ⎟ ⎟ ⎠ ⎞ ⎜ ⎜ ⎝ ⎛ − − − − = − − _{=}∑
= _{=}∑
= =_{∏}
n i n i n x n x n n i x _{e} _{e} e L 1 1 ) ( 1 ) ( ) ( θ λ θ λ θ λ_{λ}
_{λ}
λ
λ
θ λ θ λ θ λ θ λ λ θ λ θ λ λ λ θ λ − = ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ _{−} = − = ≡ − − = − − =∑
∑
∑
∑
= = = = x n x n n x n n x n d L d n x n L n i i n i i n i i n i i 1 ˆ / ˆ 0 ) , ( ln ln ) , ( ln 1 1 1 1The other parameter θ cannot be estimated by setting the derivative of the log likelihood with respect to θ to zero because the log likelihood is a linear function of θ. The range of the likelihood is important. The joint density function and therefore the likelihood is zero for θ <
) , , ,
(X1 X2 Xn
Min … . The term in the log likelihood nλθ is maximized for θ as small as possible
within the range of nonzero likelihood. Therefore, the log likelihood is maximized for θestimated with Min(X_{1},X_{2},…,X_{n}) so that θˆ x= min
b) Example: Consider traffic flow and let the time that has elapsed between one car passing a fixed point and the instant that the next car begins to pass that point be considered time headway. This headway can be modeled by the shifted exponential distribution.
Example in Reliability: Consider a process where failures are of interest. Suppose that a unit is put into operation at x = 0, but no failures will occur until θ period of operation. Failures will occur only after the time θ.
738.
∑
∏
∑
∑
−
=
∂
∂
−
−
=
=
= −θ
θ
θ
θ
θ
θ
θ
θ
θ
θn
x
L
n
x
x
L
e
x
L
i n i i i x i i2
1
)
(
ln
ln
2
)
ln(
)
(
ln
)
(
2 1 2Setting the last equation equal to zero and solving for theta yields
n
x
n i i2
ˆ
_{=}
∑
=1θ
739.∑
==
=
−
=
n i iX
X
n
a
X
E
11
2
0
)
(
, therefore:a
ˆ
=
2
X
The expected value of this estimate is the true parameter so it is unbiased. This estimate is
reasonable in one sense because it is unbiased. However, there are obvious problems. Consider the sample x1=1, x2 = 2 and x3 =10. Now ⎯x = 4.37 and aˆ= x2 =8.667. This is an unreasonable estimate of a, because clearly a ≥ 10.
740. a)
c
x
dx
cx
c
x
)
2
c
2
(
1
)
1
(
1 1 1 1 2=
+
=
=
+
− −∫
θ
θ
so that the constant c should equal 0.5 b)
∑
∑
= =⋅
=
=
=
n i i n i iX
n
X
n
X
E
1 11
3
ˆ
3
1
)
(
θ
θ
c)θ
θ
θ
_{⎟⎟}
=
=
=
=
⎠
⎞
⎜⎜
⎝
⎛
⋅
=
_{∑}
=(
3
)
3
(
)
3
3
1
3
)
ˆ
(
1X
E
X
E
X
n
E
E
n i i d)∑
∑
∏
= = =+
=
∂
∂
+
+
=
+
=
n i i i n i i n i iX
X
L
X
n
L
X
L
1 1 1)
1
(
)
(
ln
)
1
ln(
)
2
1
ln(
)
(
ln
)
1
(
2
1
)
(
θ
θ
θ
θ
θ
θ
θ
741. a)
∑
==
=
n i iX
n
X
E
1 2 2_{)}
_{2}
1
(
θ
so∑
= = n i i X n 1 2 2 1 ˆ Θ b)∑
∏
∑
∑
−
=
∂
∂
−
−
=
=
= −θ
θ
θ
θ
θ
θ
θ
θ
θ
θn
x
L
n
x
x
L
e
x
L
i n i i i x i i 2 2 1 2 22
1
)
(
ln
ln
2
)
ln(
)
(
ln
)
(
2Setting the last equation equal to zero, the maximum likelihood estimate is
∑
= = n i i X n 1 2 2 1 ˆ Θand this is the same result obtained in part (a) c)
)
2
ln(
2
)
5
.
0
ln(
2
1
5
.
0
)
(
0 2 / 2θ
θ
θ=
−
=
−
=
=
∫
−a
e
dx
x
f
a aWe can estimate the median (a) by substituting our estimate for θ into the equation for a. 742. a)
aˆ
cannot be unbiased since it will always be less than a.b) bias =
1
1
)
1
(
1
+
=
−
+
+
−
+
n
a
n
n
a
n
na
0
∞ →→
n . c)2
X
d) P(Y≤y)=P(X1, …,Xn ≤ y)=(P(X1≤y))n=
n
a
y ⎟
⎠
⎞
⎜
⎝
⎛
. Thus, f(y) is as given. Thus, bias=E(Y)a =
1
1
−
=
−
+
+
n
a
a
n
an
.e) For any n >1, n(n+2) > 3n so the variance of 2
ˆa
is less than that of 1ˆa
. It is in this sense that the second estimator is better than the first.743. a)
∏
∏
= − ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ − = ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ − −⎟
⎠
⎞
⎜
⎝
⎛
∑
=
⎟
⎠
⎞
⎜
⎝
⎛
=
= n i i x n i x ix
e
e
x
L
i n i i 1 1 1 1 1)
,
(
β δ δ βδ
δ
β
δ
δ
β
δ
β
β β( )
[
]
( ) ( )
_{∑}
∑
∑
∑
−
−
+
=
⎟
⎠
⎞
⎜
⎝
⎛
−
=
= − β δ δ δ β β β δ δ ββ
δ
δ
β
i i i x x i n i xn
x
L
ln
)
1
(
)
ln(
ln
)
,
(
ln
1 1 b)( )
( )( )
1 ) 1 ( ) , ( ln ln ln ) , ( ln +∑
∑
∑
+ − − − = − + = β β β δ δ δ δ β δ β δ ∂δ δ β ∂ β ∂β δ β ∂ i x x x x n n L n L i i i Upon setting ∂δ δ β∂ln L( , )_{ equal to zero, we obtain }
β β β β _{δ} δ / 1 ⎥ ⎥ ⎦ ⎤ ⎢ ⎢ ⎣ ⎡ = =
_{∑}
∑
n x and x n i i Upon setting ∂β δ β∂ln L( , )_{ equal to zero and substituting for δ, we obtain }
⎟ ⎠ ⎞ ⎜ ⎝ ⎛ − = ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ − + − = − + ∑ ∑
∑
∑
∑
∑
∑
∑
∑
n i x i i i i i n i x i i i i x x n x x x n n x n x x n x n β β β β β β β β β β β δ δ δ β ln ln ln ln ) ln (ln 1 ln ln 1 and ⎥ ⎥ ⎦ ⎤ ⎢ ⎢ ⎣ ⎡ + =∑
∑
∑
n x x x x i i i i ln ln 1 β β βc) Numerical iteration is required.
744. a) Using the results from Example 712 we obtain that the estimate of the mean is 423.33 and the estimate of the variance is 82.4464
b)
The function seems to have a ridge and its curvature is not too pronounced. The maximum value for std deviation is at 9.08, although it is difficult to see on the graph.
c) When n is increased to 40, the graph looks the same although the curvature is more pronounced. As n increases, it is easier to determine the maximum value for the standard deviation is on the graph.
435 430 425 7 600000 8 420 Mean 400000 9 200000 10 0 11 12 13 14 415 410 15 log L Std. Dev.
745. From Example 716 the posterior distribution for µ is normal with mean
n
x
n
/
)
/
(
2 2 0 2 0 0 2σ
σ
σ
µ
σ
++
and variancen
n
/
)
/
/(
2 2 2 0 2 0σ
σ
σ
σ
+. The Bayes estimator for µ goes to the MLE as n increases. This
follows since
σ
2/
n
goes to 0, and the estimator approaches _{2}0 2 0
σ
σ
x
(the
σ
_{0}2’s cancel). Thus, in the limitµ
ˆ
=
x
. 746. a) Because 2 2 2 ) (2
1
)

(
σ µσ
π
µ
=
e
−x−x
f
andb
a
b
f
≤
≤
−
=
µ
µ
)
1
for
a
(
, the joint distribution is _{2} 2 2 ) (2
)
(
1
)
,
(
σ µσ
π
µ
− −−
=
e
xa
b
x
f
for ∞ < x < ∞ and a≤ ≤µ b. Then,µ
σ
π
σ µd
e
a
b
x
f
b a x∫
− −−
=
_{2} 2 2 ) (2
1
1
)
(
and this integral is recognized as a normal probability.Therefore,
[
( ) ( )
b_{σ}x a_{σ}x]
a
b
x
f
Φ
−−
Φ
−−
=
1
)
(
where Φ(x) is the standard normal cumulativedistribution function. Then,
( ) ( )
[
_{σ} _{σ}]
σ
π
µ
µ
σ µ x a x b xe
x
f
x
f
x
f
_{−} _{−} −Φ
−
Φ
=
=
−2
)
(
)
,
(
)

(
2 2 2 ) (b) The Bayes estimator is
( ) ( )
[
]
∫
− − −Φ
−
Φ
=
− b a x a x bd
e
x σ σσ
π
µ
µ
µ
σ µ2
~
2 2 2 ) ( . Let v = (x  µ). Then, dv =  dµ and
[
( ) ( )
]
( ) ( )
[
]
( ) ( )
[
]
∫
[
( ) ( )
]
∫
− − − − − − − − − − − − − −Φ
−
Φ
−
Φ
−
Φ
Φ
−
Φ
=
Φ
−
Φ
−
=
a x b x x a x b x a x b b x a x a x b x x a x bdv
ve
x
dv
e
v
x
v v σ σ σ σ σ σ σ σσ
π
σ
π
µ
σ σ2
2
)
(
~
2 2 2 2 2 2 Let w = _{2} 22
σ
v
. Then,dw
[
v]
dv
[
v]
dv
2 2 2 2 σ σ=
=
and( ) ( )
[
]
( ) ( )
_{⎥}⎥⎥ ⎦ ⎤ ⎢ ⎢ ⎢ ⎣ ⎡ Φ − Φ − + = Φ − Φ − = − − − − − − − − − − −∫
σ σ σ σ σ σ σ σ π σ π σ µ x a x b x a x b w b x a x a x b x e e x dw e x 2 2 2 ) ( 2 2 2 ) ( 2 2 2 ) ( 2 2 2 ) ( 2 2 ~ 747. a)!
)
(
x
e
x
f
xλ
λ −=
for x = 0, 1, 2, and)
1
(
1
)
(
0 ) 1 ( 1 0Γ
+
⎟
⎠
⎞
⎜
⎝
⎛ +
=
+ − +m
e
m
f
m m m _{λ}λλ
λ
λ
for λ > 0.Then,
(
)
!
)
1
(
1
)
,
(
_{1} 0 ) 1 ( 1 0x
m
e
m
x
f
_{m} m x m m+
Γ
+
=
_{+} + − − + +λ
λ
λ
λ λ λ .This last density is recognized to be a gamma density as a function of λ. Therefore, the posterior distribution of λ is a gamma distribution with parameters m + x + 1 and 1 + m+ 1
0
λ .
b) The mean of the posterior distribution can be obtained from the results for the gamma distribution to be
[ ]
_{⎟⎟}
⎠
⎞
⎜⎜
⎝
⎛
+
+
+
+
=
+
+
+ +1
1
1
0 0 1 1 0λ
λ
λm
x
m
x
m
m748 a) From Example 716, the Bayes estimate is
4
.
518
1
)
85
.
4
(
1
)
4
(
~
25 16 25 16=
+
+
=
µ
b.)
µ
ˆ
= x
=
4
.
85
The Bayes estimate appears to underestimate the mean.749. a) From Example 716,
2
.
288
018
.
0
0045
.
0
)
29
.
2
)(
018
.
0
(
)
28
.
2
)(
0045
.
0
(
~
_{=}
+
+
=
µ
b)
µ
ˆ
= x
=
2
.
29
The Bayes estimate is very close to the MLE of the mean. 750. a)(

)
=
−,
≥
0
x
e
x
f
λ
λ
λx andf
(
λ
)
=
0
.
01
e
−0.01λ. Then, ) 01 . 0 ( 2 01 . 0 ) ( 2 2 1,
,
)
1 20
.
01
0
.
01
1 2(
=
− x+x −=
− x+x+e
e
e
x
x
f
λ
λ
λ λλ
λ . As a function of λ, this isrecognized as a gamma density with parameters 3 and x1+x2+ .0 01. Therefore, the posterior
mean for λ is
0
.
00133
01
.
0
2
3
01
.
0
3
~
2 1=
+
=
+
+
=
x
x
x
λ
.b) Using the Bayes estimate for λ, P(X<1000)=
∫
−1000 0 00133 .
00133
.
0
e
xdx
=0.736. Supplemental Exercises 751.(
,
,...,
)
10
,
20
,...,
0
1 2 1=
∏
>
>
>
= − n n i x ne
for
x
x
x
x
x
x
f
λ
λi 752._{⎟}
⎠
⎞
⎜
⎝
⎛ ∑
−
⎟⎟
⎠
⎞
⎜⎜
⎝
⎛
=
= − 5 1 2 ) ( 5 5 4 3 2 1 2 2exp
2
1
)
,
,
,
,
(
i xix
x
x
x
x
f
_{σ}µσ
π
753. f(x1,x2,x3,x4)=1 for 0≤x1≤1,0≤x2 ≤1,0≤x3 ≤1,0≤x4 ≤1 754. ) 3683 . 0 , 5 ( ~ ) 30 5 . 2 25 0 . 2 , 105 100 ( ~ 2 2 2 1 − + − − N N X X 755. X ~ N(50,144) 8664 . 0 0668 . 0 9332 . 0 ) 5 . 1 ( ) 5 . 1 ( ) 5 . 1 5 . 1 ( ) 53 47 ( 36 / 12 50 53 36 / 12 50 47 = − = − ≤ − ≤ = ≤ ≤ − = ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ _{≤} _{≤} = ≤ ≤ − − Z P Z P Z P Z P X P
No, because Central Limit Theorem states that with large samples (n ≥ 30), X is approximately
normally distributed.
756. Assume X is approximately normally distributed.
34 38 0.7 / 9
(
34) 1
(
34) 1
(
)
1
(
17.14) 1 0 1
P X
P X
P Z
P Z
−>
= −
≤
= −
≤
= −
≤ −
= − =
757.5
.
6569
16
/
2
50
52
/
=
−
=
−
=
n
s
X
z
µ
P(Z > z) ~0. The results are very unusual.
758. P(X ≤37)=P(Z≤−5.36)=0
759. Binomial with p equal to the proportion of defective chips and n = 100.
760.
E
(
a
X
_{1}+
(
1
−
a
)
X
_{2}=
a
µ
+
(
1
−
a
)
µ
=
µ
1 2 1 1 1 2 1 1 2 1 1 2 1 1 2 2 1 2 2 1 2 2 1 1 1 2 2 2 2 2 2 2 2 2 1 2 2 2 2 2 2 1 2 2 1 ) ( 2 ) ( 2 2 2 2 0 0 ) 2 2 2 )( ( ) ( ) )( 2 ( 2 ) )( 2 1 ( ) ( ) ( ) 1 ( ) ( ] ) 1 ( [ ) ( 2 2 1 2 n n n a n n n a n n n a a n n a n a n n a n n n a X V n n a n a n n a n n a n a n n a a a a X V a X V a X a X a V X V n n + = = + = + + − = ≡ + − = + − + = + − + = + − + = − + = − + = σ ∂ ∂ σ σ σ σ σ σ σ 761.∑
∑
∑
∏
= = = = − + − = ∂ ∂ − + ⎟⎟ ⎠ ⎞ ⎜⎜ ⎝ ⎛ = ∑ ⎟⎟ ⎠ ⎞ ⎜⎜ ⎝ ⎛ = = n i i n i i n i i n i i x n x n L x x n L x e L n i i 1 2 1 1 3 1 2 3 3 ) ( ln ln 2 2 1 ln ) ( ln 2 1 ) ( 1θ
θ
θ
θ
θ
θ
θ
θ
θ
θMaking the last equation equal to zero and solving for theta, we obtain:
n x n i i 3 ˆ_{=}
∑
=1 Θas the maximum likelihood estimate.
762. ∑ ∑ ∏ = = = − + = ∂ ∂ − + = = n i i n i i n i i n x n L x n L x L 1 1 1 1 ) ln( ) ( ln ) ln( ) 1 ( ln ) ( ln ) ( θ θ θ θ θ θ θ θ θ
making the last equation equal to zero and solving for theta, we obtain the maximum likelihood estimate.
∑
= − = _{n} i i x n 1 ) ln( ˆ Θ 763.∑
∑
∏
= = = − − − = ∂ ∂ − + − = = n i i n i i n i i n x n L x n L x L 1 2 1 1 1 ) ln( 1 ) ( ln ) ln( 1 ln ) ( ln 1 ) ( θ θ θ θ θ θ θ θ θ θ θ θmaking the last equation equal to zero and solving for the parameter of interest, we obtain the maximum likelihood estimate.
∑
= − = n i i x n 1 ) ln( 1 ˆ Θθ
θ
θ
θ
= = = − = ⎥ ⎦ ⎤ ⎢ ⎣ ⎡_{−} = ⎥ ⎦ ⎤ ⎢ ⎣ ⎡_{−} =∑
∑
∑
∑
= = = = n n n x E n x E n x n E E n i n i i n i i n i i 1 1 1 1 1 ] ) [ln( 1 ) ln( 1 ) ln( 1 ) ˆ (∫
−=
1 0 1)
(ln
))
(ln(
X
x
x
dx
E
_{i} θ θ letu
=
ln
x
anddv
x
θdx
θ −=
1 then,θ
θθ
θ−
=
−
=
_{∫}
x
−dx
X
E
1 0 1))
(ln(
764. a) Let . Then Therefore and Therefore, is a b) 765.
23.5 15.6 17.4
28.7
21.9
10
x
µ
= =
=
+
+ +
=
Demand for all 5000 houses is θ = 5000µ
5000
5000(21.9) 109,500
θ
=
µ
=
=
The proportion estimate is
MindExpanding Exercises 766. ) 1 ( ) 1 ( ) 0 , 0 ( _{1} _{2} − − = = = N N M M X X P ) 1 ( ) ( ) 1 , 0 ( 1= 2= = −_{−} N N M N M X X P ) 1 ( ) 1 )( ( ) 1 , 1 ( ) 1 ( ) ( ) 0 , 1 ( 2 1 2 1 − − − − = = = − − = = = N N M N M N X X P N N M M N X X P N M N N M N N M N N M N M N X P X X P X P X X P X P N M N M N N M N M N M X P X X P X P X X P X P N M N X P N M X P − = − × − − − + × − − = = = = + = = = = = = − × − + × − − = = = = + = = = = = − = = = = 1 1 1 ) 1 ( ) 1  1 ( ) 0 ( ) 0  1 ( ) 1 ( 1 1 1 ) 1 ( ) 1  0 ( ) 0 ( ) 0  0 ( ) 0 ( ) 1 ( / ) 0 ( 1 1 2 1 1 2 2 1 1 2 1 1 2 2 1 1 Because _{1}1 1 2 0 0) ( = = = M_{N}_{−}− X X P is not equal to N M X
P( _{2} = )0 = , X_{1} and X_{2} are not
independent. 767. a)
)
1
/(
2
)
2
/
(
]
2
/
)
1
[(
−
Γ
−
Γ
=
n
n
n
c
_{n}b) When n = 10, cn = 1.0281. When n = 25, cn = 1.0105. So S is a fairly good estimator for the standard
deviation even when relatively small sample sizes are used. 768.
a)
Z
i= −
Y
iX
i; so is N(0, 2
Z
iσ
2). Let *
σ
2=
2
σ
2. The likelihood function is2 2 2 2 1 1 _{(} _{)} 2 2 * 1 1 2 * 2 / 2
1
( * )
* 2
1
( * 2 )
i n i i n _{z} i z nL
e
e
σ σσ
σ
π
σ
π
= − = −=
∑
=
∏
The loglikelihood is 2 2 _{2} 2 11
ln[ ( * )]
ln(2
* )
2
2 *
n i in
L
σ
πσ
z
σ
== −
−
∑
2 2 2 4 1 2 2 1 2 2 2 1 1
ln[ ( * )]
1
0
*
*
2 *
2
*
1
1
ˆ *
(
)
2
2
n i i n i i n n i i i i id
L
n
z
d
n
z
z
y
x
n
n
σ
σ
σ
σ
σ
σ
= = = == −
+
=
=
=
=
−
∑
∑
∑
∑
2 2But *
σ
=
2
σ
,
so the MLE is2 2 1 2 2 1
1
ˆ
2
(
)
1
ˆ
(
)
2
n i i i n i i iy
x
n
y
x
n
σ
σ
= ==
−
=
−
∑
∑
b) 2 2 1 2 1 2 2 1 2 2 1 2 2 1 2 21
ˆ
(
)
(
)
4
1
(
)
4
1
(
2
)
4
1
[ (
)
(2
)
(
)]
4
1
[
0
]
4
2
4
2
n i i i n i i i n i i i i i n i i i i i n iE
E
Y
X
n
E Y
X
n
E Y
Y X
X
n
E Y
E Y X
E X
n
n
n
n
σ
σ
σ
σ
σ
= = = = =⎡
⎤
=
_{⎢}
−
_{⎥}
⎣
⎦
=
−
=
−
+
=
−
+
=
− +
=
=
∑
∑
∑
∑
∑
So the estimator is biased. Bias is independent of n. c) An unbiased estimator of
σ
2is given by2
σ
ˆ
2769.   1_{2} c X P n c _{⎟}_{≤} ⎠ ⎞ ⎜ ⎝
⎛ _{−}_{µ} _{≥} σ _{ from Chebyshev's inequality. Then, }
2 1 1   c X P n c _{⎟}_{≥} _{−} ⎠ ⎞ ⎜ ⎝ ⎛ _{−}_{µ} _{<} σ _{. Given an ε, n } and c can be chosen sufficiently large that the last probability is near 1 and
n cσ is equal to ε. 770. a)
(
)
(
)
( )
n i n t P X t fori n F t X P ()≤ = ≤ =1,..., =[ ](
)
(
)
( )
n i t fori n Ft X P t X P (1)> = > =1,..., =[1− ]Then,
(
)
( )
n t F t X P (1) ≤ =1−[1− ] b)( )
( )
( )
( )
( )
F( )
t nF( )
t f( )
t t t f t f t F n t F t t f n X X n X X n n 1 1 ] [ ] 1 [ ) ( ) ( ) 1 ( ) 1 ( − − = = − = = ∂ ∂ ∂ ∂ c)(
)
( )
( )
n n X F p F X P (1) =0 = _{(}_{1}_{)} 0 =1−[1− 0] =1− because F(0) = 1  p.(
)
( )
( )
n(
)
n X n F F p X P n = − = − − − = =1 1 _{(}_{)} 0 1 [ 0] 1 1 ) ( d)(
_{≤})
_{=}( )
_{=}Φ[ ]
t−_{σ}µ t F t XP . From Exercise 762, part(b)
( )
{
[ ]
}
( )
{ }
[ ]
_{2} 2 2 ) ( ) ( 2 2 2 ) ( ) 1 ( 2 1 2 1 1 1 1 σ µ σ µ σ π Φ σ π Φ σµ σµ − − − − − − − − = − = t n t e n t f e n t f n t X n t X e)(
)
t e t XP ≤ =1− −λ. From Exercise 762, part (b)
( )
nt X t e F = 1− − λ ) 1 (( )
t n Xt
n
e
f
=
λ
− λ ) 1 (( )
t n X t e F n) [1 ] ( λ − − =( )
t n t X t n e e f n λ λ −_{λ} − − − = _{[}_{1} _{]} 1 ) ( 771.(
( )
)
(
( )
)
n n n t PX F t t X FP () ≤ = ()≤ −1 = from Exercise 762 for 0≤ ≤t 1. If Y=F(X(n)), then fY(y)=nyn−1,0≤ y≤1. Then, 1 ) ( 1 0 + = =
_{∫}
n n dy ny Y E n( )
(
)
(
( )
)
nt
t
F
X
P
t
X
F
P
_{(}_{1}_{)}≤
=
_{(}_{1}_{)}≤
−1=
1
−
(
1
−
)
from Exercise 762 for 0≤ ≤t 1.If Y=F(X_{(}_{1}_{)}), then ( )_{=} (1_{−} ) −1,0_{≤} _{≤}1 y t n y fY n . Then, 1 1 ) 1 ( ) ( 1 0 1 + = − =
_{∫}
− n dy y yn YE n where integration by parts is used. Therefore,
1 1 )] ( [ 1 )] ( [ () = _{+} (1) = _{+} n X F E and n n X F E n 772. ( ) [ ( 2 ) ( 2) 2 ( _{1})] 1 1 1 + + − = − + =
_{∑}
i i i i n i X X E X E X E k V E 2 2 2 2 2 2 1 1 2 ) 1 ( ) 2 ( σ µ µ σ µ σ − = − + + + =_{∑}
− = n k k n i Therefore, k=_{2}_{(}_{n}1_{−}_{1}_{)}773. a) The traditional estimate of the standard deviation, S, is 3.26. The mean of the sample is 13.43 so the values of
X
_{i}−
X
corresponding to the given observations are 3.43, 1.43, 4.43, 0.57, 4.57, 1.57 and 2.57. The median of these new quantities is 2.57 so the new estimate of the standard deviation is 3.81; slightly larger than the value obtained with the traditional estimator.b) Making the first observation in the original sample equal to 50 produces the following results. The traditional estimator, S, is equal to 13.91. The new estimator remains unchanged.
774. a) ) ... )( ( ... ... 1 2 3 1 2 1 1 2 3 1 2 1 2 3 1 2 1 1 2 1 1 − − − + + − + − + − + − + + − + − + + + − + − + + − + + = r r r r r X X X X X X X r n X X X X X X X X X X X X X X X X T
Because X1 is the minimum lifetime of n items, _{λ}
n X
E( 1)= 1 .
Then, X2 − X1 is the minimum lifetime of (n1) items from the memoryless property of the exponential and λ ) 1 ( 1 ) ( 2− 1 = _{−} n X X E . Similarly, λ ) 1 ( 1 ) ( − −1 = _{−} _{+} k n X X E k k . Then, λ λ λ λ r r n r n n n n n T E _{r} = + − + − + + − − + = ) 1 ( 1 ... ) 1 ( 1 ) ( and µ λ = = ⎟ ⎠ ⎞ ⎜ ⎝ ⎛ 1 r T E r
b) V(T_{r}/r)=1/(λ2r)is related to the variance of the Erlang distribution
2
/ )
(X r λ
V = . They are related by the value (1/r2). The censored variance is (1/r2) times the