
Kobe University Repository: Kernel
Title: MSE performance of the weighted average estimators consisting of shrinkage estimators
Authors: Namba, Akio / Ohtani, Kazuhiro
Citation: Communications in Statistics - Theory and Methods, 47(5): 1204-1214
Issue date: 2018
Resource type: Journal Article
Resource version: author (Accepted Manuscript)
Rights: This is an Accepted Manuscript of an article published by Taylor & Francis in Communications in Statistics - Theory and Methods in 2018, available online: http://www.tandfonline.com/10.1080/03610926.2017.1316860
DOI: 10.1080/03610926.2017.1316860
URL: http://www.lib.kobe-u.ac.jp/handle_kernel/90004934

MSE Performance of the Weighted Average Estimators Consisting of Shrinkage Estimators

Akio Namba* and Kazuhiro Ohtani†
Graduate School of Economics, Kobe University, Rokko, Nada-ku, Kobe 657-8501, Japan

Abstract: In this paper we consider a regression model and propose estimators that are weighted averages of two estimators among three estimators: the Stein-rule (SR), the minimum mean squared error (MMSE), and the adjusted minimum mean squared error (AMMSE) estimators. It is shown that one of the proposed estimators has smaller mean squared error (MSE) than the positive-part Stein-rule (PSR) estimator over a moderate region of the parameter space when the number of regression coefficients is small (i.e., 3), and that its MSE performance is comparable to that of the PSR estimator even when the number of regression coefficients is not so small.

Key words: Mean squared error, Stein-rule estimator, Minimum mean squared error estimator, Adjusted minimum mean squared error estimator, Weighted average estimator

* Akio Namba, Graduate School of Economics, Kobe University, Rokko, Nada-ku, Kobe 657-8501, Japan. E-mail: namba@econ.kobe-u.ac.jp
† Kazuhiro Ohtani, Graduate School of Economics, Kobe University, Rokko, Nada-ku, Kobe 657-8501, Japan. E-mail: kotani@econ.kobe-u.ac.jp

1. Introduction

In the context of linear regression, the Stein-rule (SR) estimator proposed by Stein (1956) and James and Stein (1961) dominates the ordinary least squares (OLS) estimator in terms of mean squared error (MSE) if the number of regression coefficients is at least three. Though the SR estimator dominates the OLS estimator, Baranchik (1970) showed that the SR estimator is itself dominated by the positive-part Stein-rule (PSR) estimator. Also, as one of the improved estimators, Theil (1971) proposed the minimum mean squared error (MMSE) estimator. However, since Theil's (1971) MMSE estimator includes unknown parameters, Farebrother (1975) suggested an operational variant of the MMSE estimator, obtained by replacing the unknown parameters with their OLS estimators. As an extension of the MMSE estimator, Ohtani (1996b) considered the adjusted minimum mean squared error (AMMSE) estimator, obtained by adjusting the degrees of freedom of the operational variant of the MMSE estimator. He showed by numerical evaluations that the AMMSE estimator has smaller MSE than the SR and PSR estimators in a wide region of the noncentrality parameter when k ≤ 5, where k is the number of regression coefficients. In particular, when k = 3, the MSE of the AMMSE estimator can be much smaller than that of the PSR estimator for small values of the noncentrality parameter. Therefore, Ohtani (1999) considered a heterogeneous pre-test estimator such that the AMMSE estimator is used if the null hypothesis that all the regression coefficients are zero (in other words, that the noncentrality parameter is zero) is accepted, and the SR estimator is used if the null hypothesis is rejected. Although the results were obtained by numerical evaluations, he showed that this heterogeneous pre-test estimator dominates the PSR estimator when k = 3 and the critical value of the pre-test is chosen appropriately.
Extending the result of Ohtani (1999), Namba (2000) proposed another heterogeneous pre-test estimator and numerically showed that it has smaller MSE than the PSR estimator even when k ≥ 4. However, since the estimators considered by Ohtani (1999) and Namba (2000) connect two different estimators via a pre-test based on the F-statistic, they are not smooth. This is a drawback their estimators share with the PSR estimator, because smoothness is required for an estimator to be admissible. In this paper, we propose estimators that are weighted averages of different kinds of estimators. The proposed estimators are smooth since they do not incorporate a pre-test. In the next section, we introduce the estimators, and we derive an explicit formula for their MSEs in section 3. Using this formula, we examine the MSE performance of the estimators by numerical evaluations in section 4. Our numerical results show that one of our estimators has smaller MSE than the PSR estimator over a moderate region of the parameter space. Finally, some concluding remarks are given in section 5.

2. Model and estimators

Consider a linear regression model,

y = X\beta + \epsilon, \qquad \epsilon \sim N(0, \sigma^2 I_n), \quad (1)

where y is an n × 1 vector of observations on the dependent variable, X is an n × k matrix of full column rank of observations on nonstochastic independent variables, β is a k × 1 vector of coefficients, and ϵ is an n × 1 vector of normal error terms with E[ϵ] = 0 and E[ϵϵ′] = σ²Iₙ. The ordinary least squares (OLS) estimator of β is

b = S^{-1} X' y, \quad (2)

where S = X′X. In the context of linear regression, the Stein-rule (SR) estimator proposed by Stein (1956) is defined as

b_{SR} = \left( 1 - \frac{a e'e}{b'Sb} \right) b, \quad (3)

where e = y − Xb and a is a constant such that 0 ≤ a ≤ 2(k − 2)/(ν + 2), with ν = n − k. If we use the loss function

L(\bar{\beta}; \beta) = (\bar{\beta} - \beta)' S (\bar{\beta} - \beta), \quad (4)

where β̄ is any estimator of β, the SR estimator dominates the OLS estimator in terms of mean squared error (MSE) for k ≥ 3. As is shown in James and Stein (1961), the MSE of the SR estimator is minimized when a = (k − 2)/(ν + 2); we use this value of a hereafter. Although the SR estimator dominates the OLS estimator, Baranchik (1970) showed that the SR estimator is further dominated by the positive-part Stein-rule (PSR) estimator defined as

b_{PSR} = \max\left[ 0, \; 1 - \frac{a e'e}{b'Sb} \right] b. \quad (5)

As one of the improved estimators, Theil (1971) proposed the minimum mean squared error (MMSE) estimator. However, since Theil's (1971) MMSE estimator includes unknown parameters, Farebrother (1975) suggested the following operational variant of the MMSE estimator:

b_{M} = \frac{b'Sb}{b'Sb + e'e/\nu} \, b. \quad (6)

Hereafter, we simply call the operational variant of the MMSE estimator the MMSE estimator. There are several studies of the MMSE estimator and its variants; some examples are Vinod (1976), Dwivedi and Srivastava (1978), Tracy and Srivastava (1994) and Ohtani (1996a, 1996b, 1997). Ohtani (1996a)

derived the exact MSE of the MMSE estimator and a sufficient condition for the MMSE estimator to dominate the OLS estimator. Furthermore, as an extension of the MMSE estimator, Ohtani (1996b) considered the following estimator, obtained by adjusting the degrees of freedom of b′Sb (i.e., k):

b_{AM} = \frac{b'Sb/k}{b'Sb/k + e'e/\nu} \, b. \quad (7)

We call this estimator the adjusted MMSE (AMMSE) estimator. Ohtani (1996b) showed by numerical evaluations that if k ≤ 5 the AMMSE estimator has smaller MSE than the SR, PSR and MMSE estimators in a wide region of the noncentrality parameter, defined as λ = β′Sβ/σ². Thus, Ohtani (1999) considered the following heterogeneous pre-test estimator:

\hat{\beta}_{PT}(\tau) = I(F \le \tau) \, b_{AM} + I(F > \tau) \, b_{SR}, \quad (8)

where F = (b′Sb/k)/(e′e/ν) is the test statistic for H₀: β = 0 against H₁: β ≠ 0, τ is the critical value of the pre-test, and I(A) is an indicator function such that I(A) = 1 if the event A occurs and I(A) = 0 otherwise. He showed by numerical evaluations that the heterogeneous pre-test estimator dominates the PSR estimator for k = 3 if the critical value of the pre-test is chosen appropriately. Also, extending the result of Ohtani (1999), Namba (2000) proposed another pre-test estimator, obtained by replacing the AMMSE estimator with the 2SHI estimator proposed by Tran Van Hoa and Chaturvedi (1990), and showed the dominance of the proposed estimator over the PSR estimator by numerical evaluations. Though the estimators proposed by Ohtani (1999) and Namba (2000) dominate the PSR estimator, they are not smooth because of the pre-test. This is a drawback of their estimators, since smoothness is a necessary condition for the admissibility of an estimator. Thus, in this paper, we propose the following two estimators, which are weighted averages of the estimators introduced above:

\hat{\beta}_{WA1} = \frac{F}{1+F} \, b_{M} + \left( 1 - \frac{F}{1+F} \right) b_{AM}, \quad (9)

\hat{\beta}_{WA2} = \frac{F}{1+F} \, b_{SR} + \left( 1 - \frac{F}{1+F} \right) b_{AM}. \quad (10)
These estimators are smooth since they do not incorporate any pre-test. As with β̂_PT(τ) given in (8), b_AM plays an important role in both β̂_WA1 and β̂_WA2 when the value of F is small. In the next section, we derive explicit formulae for the MSEs of β̂_WA1 and β̂_WA2.

3. MSE of the estimators

In this section, we derive the formulae for the MSEs of β̂_WA1 and β̂_WA2.
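To make the definitions above concrete, the following minimal Python sketch computes all of the estimators of section 2 from simulated data and forms the weighted averages (9) and (10). The design matrix, coefficient vector and random seed are hypothetical choices of ours, used only for illustration.

```python
import numpy as np

# Hypothetical simulated data: n = 30 observations, k = 3 regressors.
rng = np.random.default_rng(0)
n, k = 30, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)

S = X.T @ X
b = np.linalg.solve(S, X.T @ y)              # OLS estimator, eq. (2)
e = y - X @ b                                # OLS residual vector
nu = n - k
a = (k - 2) / (nu + 2)                       # MSE-minimizing choice of a

bSb, ee = b @ S @ b, e @ e
shrink = 1.0 - a * ee / bSb
b_SR = shrink * b                            # Stein-rule, eq. (3)
b_PSR = max(0.0, shrink) * b                 # positive-part Stein-rule, eq. (5)
b_M = bSb / (bSb + ee / nu) * b              # MMSE, eq. (6)
b_AM = (bSb / k) / (bSb / k + ee / nu) * b   # AMMSE, eq. (7)

F = (bSb / k) / (ee / nu)                    # test statistic for H0: beta = 0
w = F / (1.0 + F)                            # smooth weight used in (9) and (10)
beta_WA1 = w * b_M + (1.0 - w) * b_AM        # eq. (9)
beta_WA2 = w * b_SR + (1.0 - w) * b_AM       # eq. (10)
```

Note that both weighted averages are scalar multiples of b whose factors are smooth functions of F: the weight F/(1 + F) moves continuously from 0 to 1 as F grows, so b_AM dominates the average for small F, mirroring the role of the pre-test in (8) without its discontinuity.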

Noting that b′Sb/e′e = kF/ν, substituting (3), (6) and (7) into (9) and (10), and performing some calculations, we have

\hat{\beta}_{WA1} = \left[ \frac{F^2}{(F+1)(F+1/k)} + \frac{F}{(1+F)^2} \right] b \quad (11)

and

\hat{\beta}_{WA2} = \left[ \frac{F - a\nu/k}{F+1} + \frac{F}{(1+F)^2} \right] b. \quad (12)

Thus, the MSE of β̂_WA1 is given by

MSE[\hat{\beta}_{WA1}] = E[(\hat{\beta}_{WA1} - \beta)' S (\hat{\beta}_{WA1} - \beta)]
= E\left[ \left\{ \frac{F^4}{(F+1)^2 (F+1/k)^2} + \frac{2F^3}{(F+1)^3 (F+1/k)} + \frac{F^2}{(F+1)^4} \right\} b'Sb \right] - 2 E\left[ \left\{ \frac{F^2}{(F+1)(F+1/k)} + \frac{F}{(F+1)^2} \right\} \beta'Sb \right] + \beta'S\beta. \quad (13)

Similarly,

MSE[\hat{\beta}_{WA2}] = E\left[ \left\{ \frac{(F-a\nu/k)^2}{(F+1)^2} + \frac{2F(F-a\nu/k)}{(F+1)^3} + \frac{F^2}{(F+1)^4} \right\} b'Sb \right] - 2 E\left[ \left\{ \frac{F-a\nu/k}{F+1} + \frac{F}{(F+1)^2} \right\} \beta'Sb \right] + \beta'S\beta. \quad (14)

Here, we define the functions H(p, q, r, m; α) and J(p, q, r, m; α) as

H(p,q,r,m;\alpha) = E\left[ \frac{F^r}{(F+1)^p (F+\alpha)^q} (b'Sb)^m \right] \quad (15)

and

J(p,q,r,m;\alpha) = E\left[ \frac{F^r}{(F+1)^p (F+\alpha)^q} (b'Sb)^m \beta'Sb \right]. \quad (16)

Then, we can express the MSEs of β̂_WA1 and β̂_WA2 as

MSE[\hat{\beta}_{WA1}] = H(2,2,4,1;1/k) + 2H(3,1,3,1;1/k) + H(4,0,2,1;1/k) - 2J(1,1,2,0;1/k) - 2J(2,0,1,0;1/k) + \beta'S\beta \quad (17)

and

MSE[\hat{\beta}_{WA2}] = H(2,-2,0,1;-a\nu/k) + 2H(3,-1,1,1;-a\nu/k) + H(4,0,2,1;-a\nu/k) - 2J(1,-1,0,0;-a\nu/k) - 2J(2,0,1,0;-a\nu/k) + \beta'S\beta. \quad (18)

As is shown in Appendix A, explicit formulae for H(p, q, r, m; α) and J(p, q, r, m; α) are given by

H(p,q,r,m;\alpha) = (2\sigma^2)^m \left( \frac{k}{\nu} \right)^{p+q-r} \sum_{i=0}^{\infty} w_i(\lambda) G_i(p,q,r,m;\alpha), \quad (19)

J(p,q,r,m;\alpha) = \beta'S\beta \, (2\sigma^2)^m \left( \frac{k}{\nu} \right)^{p+q-r} \sum_{i=0}^{\infty} w_i(\lambda) G_{i+1}(p,q,r,m;\alpha), \quad (20)

where w_i(λ) = exp(−λ/2)(λ/2)^i / i! and

G_i(p,q,r,m;\alpha) = \frac{\Gamma((\nu+k)/2+m+i)}{\Gamma(\nu/2)\,\Gamma(k/2+i)} \int_0^1 \frac{t^{k/2+m+r+i-1} (1-t)^{\nu/2+p+q-r-1}}{[t+k(1-t)/\nu]^p \, [t+\alpha k(1-t)/\nu]^q} \, dt. \quad (21)

Substituting (19) and (20) into (17) and (18), we obtain explicit formulae for the MSEs of β̂_WA1 and β̂_WA2. Since further theoretical analysis of these MSEs is difficult, we execute numerical evaluation in the next section.

4. Numerical results

In this section, we compare the MSEs of the estimators by numerical evaluations.¹ The parameter values used in the numerical evaluations were k = 3, 4, 5, 8, 10, n = 20, 30, 50, and various values of λ. The numerical evaluations were executed on a personal computer using FORTRAN code. In evaluating the integral in G_i(p, q, r, m; α) given in (21), we used Simpson's rule with 500 equal subdivisions. The infinite series in H(p, q, r, m; α) and J(p, q, r, m; α) were judged to have converged when the increment of the series became smaller than 10⁻¹². In order to compare the MSE performances of the estimators, we evaluated the relative MSE, defined as MSE(β̄)/MSE(b), where β̄ is any estimator of β; thus, the estimator β̄ has smaller MSE than the OLS estimator when the value of the relative MSE is smaller than unity. Since the results for k = 3, k = 5 and n = 30 are qualitatively typical, we show the results only for these cases. Figure 1 shows the results for k = 3 and n = 30. We can see that all estimators considered here except b_AM dominate the OLS estimator b in terms of MSE, and that β̂_WA2 has smaller MSE than the other estimators except b_AM when λ is small (i.e., λ ≤ 2.0). Moreover, β̂_WA2 has smaller MSE than b_PSR when λ is small to moderate (i.e., λ ≤ 8.0). Though b_AM has the smallest MSE among all the estimators when λ is small to moderate (i.e., λ ≤ 12.0), it has slightly larger MSE than the OLS estimator for large values of λ (i.e., λ ≥ 22.0).
Also, though β̂_WA2 has smaller MSE than β̂_WA1 around λ = 0, the region where β̂_WA1 has smaller MSE than b_PSR (i.e., λ ≤ 10.0) is slightly wider than that of β̂_WA2 (i.e., λ ≤ 8.0). Comparing β̂_WA1 and β̂_WA2, the MSE of β̂_WA2 around λ = 0 is smaller than that of β̂_WA1, while the MSE of β̂_WA2 is slightly larger than that of β̂_WA1 for λ ≥ 3.0. Weighing this gain of MSE against this loss, β̂_WA2 may be preferred to b_PSR and β̂_WA1.

¹ As suggested by an anonymous referee, we also calculated the squared lengths of bias of the estimators under simplified assumptions; see Appendix B for details.

Figure 2 shows the results for k = 5 and n = 30. Comparing Figures 1 and 2, we see that when k increases from 3 to 5, the MSE performance changes substantially. Though all estimators considered here

[Figure 1: Relative MSEs of β̂_WA1, β̂_WA2, b_SR, b_PSR, b_M and b_AM for k = 3 and n = 30, plotted against λ.]

dominate the OLS estimator, β̂_WA1 is dominated by b_PSR, and neither β̂_WA1 nor β̂_WA2 retains smaller MSE than b_PSR around λ = 0. Also, β̂_WA2 dominates β̂_WA1 and b_M, and β̂_WA2 has smaller MSE than b_PSR for 0.4 ≤ λ ≤ 22.0. This indicates that the MSE performances of the two estimators β̂_WA2 and b_PSR are comparable. When it is difficult to derive the exact distribution of an estimator, the bootstrap methods proposed by Efron (1979) have often been applied (some examples are Chi and Judge (1985), Brownstone (1990) and Yi (1991)). However, as is shown in Zaman (1996), the bootstrap methods are not valid when an estimator is not smooth. Since the MSE performances of β̂_WA2 and b_PSR are comparable, and β̂_WA2 is smooth while b_PSR is not, β̂_WA2 can be a good smooth alternative to the PSR estimator, in particular when the bootstrap methods are applied.

5. Concluding remarks

In this paper, we proposed estimators for regression coefficients that are weighted averages of two shrinkage estimators. Our numerical results show that the estimator β̂_WA2, a weighted average of the SR and AMMSE estimators, has smaller MSE than the PSR estimator over a moderate region

[Figure 2: Relative MSEs of β̂_WA1, β̂_WA2, b_SR, b_PSR, b_M and b_AM for k = 5 and n = 30, plotted against λ.]

of the parameter space when k = 3. Even when k > 3, β̂_WA2 has MSE performance comparable to that of the PSR estimator b_PSR. Also, the proposed estimators β̂_WA1 and β̂_WA2 have smaller MSE than the OLS estimator for all parameter values considered in this paper. Moreover, because of their structure, the proposed estimators are smooth; compared with the PSR estimator, this is a virtue of the proposed estimators. In light of these results, it may be possible to construct smooth estimators with even better performance by taking weighted averages of other estimators. However, the search for such estimators is beyond the scope of this research and remains a problem for future work.

Appendix A

First, we derive the formula for H(p, q, r, m; α). Let u₁ = b′Sb/σ² and u₂ = e′e/σ². Then u₁ follows the noncentral chi-square distribution with k degrees of freedom and noncentrality parameter λ = β′Sβ/σ², and u₂ follows the chi-square distribution with ν = n − k degrees of freedom. Since F = νu₁/(ku₂), H(p, q, r, m; α) can be expressed as

H(p,q,r,m;\alpha) = E\left[ \frac{(\nu u_1 / (k u_2))^r}{(\nu u_1/(k u_2) + 1)^p \, (\nu u_1/(k u_2) + \alpha)^q} (\sigma^2 u_1)^m \right]

= \nu^r k^{p+q-r} (\sigma^2)^m \sum_{i=0}^{\infty} K_i \int_0^\infty \!\! \int_0^\infty \frac{u_1^{k/2+m+r+i-1} u_2^{\nu/2+p+q-r-1}}{(\nu u_1 + k u_2)^p (\nu u_1 + \alpha k u_2)^q} \exp[-(u_1+u_2)/2] \, du_1 \, du_2, \quad (22)

where

K_i = \frac{w_i(\lambda)}{2^{(\nu+k)/2+i} \, \Gamma(\nu/2) \, \Gamma(k/2+i)}, \quad (23)

and w_i(λ) = exp(−λ/2)(λ/2)^i / i!. Making use of the change of variables v₁ = u₁/u₂ and v₂ = u₂, the integral in (22) reduces to

\int_0^\infty \!\! \int_0^\infty \frac{v_1^{k/2+m+r+i-1} \, v_2^{(\nu+k)/2+m+i-1}}{(\nu v_1 + k)^p (\nu v_1 + \alpha k)^q} \exp[-(v_1+1)v_2/2] \, dv_1 \, dv_2. \quad (24)

Again, making use of the change of variable z = (1 + v₁)v₂/2, (24) reduces to

2^{(\nu+k)/2+m+i} \, \Gamma((\nu+k)/2+m+i) \int_0^\infty \frac{v_1^{k/2+m+r+i-1}}{(\nu v_1+k)^p (\nu v_1+\alpha k)^q (v_1+1)^{(\nu+k)/2+m+i}} \, dv_1. \quad (25)

Further, making use of the change of variable t = v₁/(1 + v₁), substituting (25) into (22) and performing some manipulations, (22) reduces to (19) in the text.

Next, we derive the formula for J(p, q, r, m; α). Differentiating H(p, q, r, m; α) given in (19) with respect to β, we obtain

\frac{\partial H(p,q,r,m;\alpha)}{\partial \beta} = (2\sigma^2)^m \left( \frac{k}{\nu} \right)^{p+q-r} \sum_{i=0}^{\infty} \frac{\partial w_i(\lambda)}{\partial \beta} \, G_i(p,q,r,m;\alpha)
= (2\sigma^2)^m \left( \frac{k}{\nu} \right)^{p+q-r} \sum_{i=0}^{\infty} \left[ -\frac{S\beta}{\sigma^2} w_i(\lambda) + \frac{S\beta}{\sigma^2} w_{i-1}(\lambda) \right] G_i(p,q,r,m;\alpha)
= -\frac{S\beta}{\sigma^2} H(p,q,r,m;\alpha) + \frac{S\beta}{\sigma^2} (2\sigma^2)^m \left( \frac{k}{\nu} \right)^{p+q-r} \sum_{i=0}^{\infty} w_i(\lambda) \, G_{i+1}(p,q,r,m;\alpha), \quad (26)

where we define w₋₁(λ) = 0. Expressing H(p, q, r, m; α) in terms of b′Sb and e′e, we have

H(p,q,r,m;\alpha) = E\left[ \frac{(\frac{\nu}{k}\frac{b'Sb}{e'e})^r}{(\frac{\nu}{k}\frac{b'Sb}{e'e}+1)^p (\frac{\nu}{k}\frac{b'Sb}{e'e}+\alpha)^q} (b'Sb)^m \right]
= \int_0^\infty \!\! \int_{\mathbb{R}^k} \frac{(\frac{\nu}{k}\frac{b'Sb}{e'e})^r}{(\frac{\nu}{k}\frac{b'Sb}{e'e}+1)^p (\frac{\nu}{k}\frac{b'Sb}{e'e}+\alpha)^q} (b'Sb)^m f_N(b) f_e(e'e) \, db \, d(e'e), \quad (27)

where f_e(e′e) is the density function of e′e, and

f_N(b) = \frac{1}{(2\pi)^{k/2} |\sigma^2 S^{-1}|^{1/2}} \exp\left[ -\frac{(b-\beta)'S(b-\beta)}{2\sigma^2} \right]. \quad (28)

Differentiating H(p, q, r, m; α) given in (27) with respect to β, we obtain

\frac{\partial H(p,q,r,m;\alpha)}{\partial \beta} = \frac{1}{\sigma^2} E\left[ \frac{(\frac{\nu}{k}\frac{b'Sb}{e'e})^r}{(\frac{\nu}{k}\frac{b'Sb}{e'e}+1)^p (\frac{\nu}{k}\frac{b'Sb}{e'e}+\alpha)^q} (b'Sb)^m \, Sb \right] - \frac{S\beta}{\sigma^2} H(p,q,r,m;\alpha). \quad (29)

Equating (26) and (29), and premultiplying by β′, we obtain (20) in the text.

Appendix B

In this appendix, we explain how to calculate the squared lengths of bias of the estimators under some simplifying conditions. In order to make the calculation tractable, we assume S = X′X = I and β = β₀[1, 1, …, 1]′ = β₀ι, where β₀ is a scalar and ι is a k × 1 column vector whose elements are all 1. In addition, we assume σ² = 1 for simplicity. These conditions are assumed only when we calculate the squared lengths of bias of the estimators. Under these assumptions, we have

λ = β′Sβ/σ² = β₀² ι′ι = kβ₀².

Also, under the above conditions, equation (16) with m = 0 can be transformed as follows:

J(p,q,r,0;\alpha) = E\left[ \frac{F^r}{(F+1)^p (F+\alpha)^q} \beta'Sb \right] = E\left[ \frac{F^r}{(F+1)^p (F+\alpha)^q} \beta_0 \iota' b \right] = \beta_0 \iota' E\left[ \frac{F^r}{(F+1)^p (F+\alpha)^q} b \right].

Moreover, under the same conditions, all k elements of F^r/((F+1)^p (F+α)^q) b follow the same distribution, and accordingly all elements of E[F^r/((F+1)^p (F+α)^q) b] are equal. Thus, if we denote each element of E[F^r/((F+1)^p (F+α)^q) b] by ξ, we have

J(p, q, r, 0; α) = kβ₀ξ.

Similarly, if we denote each element of E[F^{r′}/((F+1)^{p′} (F+α′)^{q′}) b] by ζ, we have

J(p′, q′, r′, 0; α′) = kβ₀ζ.

Thus, we obtain

\left\{ E\left[ \frac{F^r}{(F+1)^p (F+\alpha)^q} b \right] \right\}' E\left[ \frac{F^{r'}}{(F+1)^{p'} (F+\alpha')^{q'}} b \right] = k\xi\zeta = \frac{J(p,q,r,0;\alpha) \, J(p',q',r',0;\alpha')}{k\beta_0^2}.

Since the bias of β̂_WA1 is

Bias[\hat{\beta}_{WA1}] = E\left[ \frac{F^2}{(F+1)(F+1/k)} b \right] + E\left[ \frac{F}{(F+1)^2} b \right] - \beta,

the squared length of the bias is given by

Bias[\hat{\beta}_{WA1}]' \, Bias[\hat{\beta}_{WA1}] = \left\{ E\left[ \frac{F^2}{(F+1)(F+1/k)} b \right] \right\}' E\left[ \frac{F^2}{(F+1)(F+1/k)} b \right] + \left\{ E\left[ \frac{F}{(F+1)^2} b \right] \right\}' E\left[ \frac{F}{(F+1)^2} b \right] + \beta'\beta

+ 2\left\{ E\left[ \frac{F^2}{(F+1)(F+1/k)} b \right] \right\}' E\left[ \frac{F}{(F+1)^2} b \right] - 2\beta' E\left[ \frac{F^2}{(F+1)(F+1/k)} b \right] - 2\beta' E\left[ \frac{F}{(F+1)^2} b \right]
= J(1,1,2,0;1/k)^2 / (k\beta_0^2) + J(2,0,1,0;1/k)^2 / (k\beta_0^2) + k\beta_0^2 + 2 J(1,1,2,0;1/k) \, J(2,0,1,0;1/k) / (k\beta_0^2) - 2J(1,1,2,0;1/k) - 2J(2,0,1,0;1/k).

The squared lengths of the bias vectors of the other estimators are obtained in a similar way. Using the formulae obtained above, we calculated the squared bias lengths of the estimators defined as Bias[β̃]′Bias[β̃]/MSE[b], where β̃ is any estimator of β. The results for k = 3 and 5, n = 30, and various values of λ = kβ₀² are given in Figures 3 and 4. From the figures, we can see the following:

1. The bias of the MMSE estimator is smaller than that of the AMMSE estimator. As a whole, the bias of the MMSE estimator is small compared with those of the other estimators.

2. The SR estimator has smaller bias than the AMMSE estimator for k = 3; however, the former has larger bias than the latter for k = 5.

3. Generally, the bias of β̂_WA1 lies between those of the MMSE and AMMSE estimators, and the bias of β̂_WA2 lies between those of the AMMSE and SR estimators.

The bias of an estimator is mostly determined by its shrinkage factor, which is a function of F; further, F tends to be large when λ is large. Since β̂_WA1 is a weighted average of b_M and b_AM, and β̂_WA2 is a weighted average of b_SR and b_AM, most of the above results on the bias vectors are as expected.

Acknowledgment

The authors are grateful to anonymous referees for helpful comments and suggestions. This work was supported by JSPS KAKENHI Grant Numbers 23243038 and 26780136.

[Figure 3: Squared lengths of bias of β̂_WA1, β̂_WA2, b_SR, b_PSR, b_M and b_AM (relative to MSE[b]) for k = 3 and n = 30, plotted against λ.]

[Figure 4: Squared lengths of bias of β̂_WA1, β̂_WA2, b_SR, b_PSR, b_M and b_AM (relative to MSE[b]) for k = 5 and n = 30, plotted against λ.]
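The bias curves of Figures 3 and 4 can be spot-checked by direct simulation: under the simplifying assumptions of Appendix B (S = I, σ² = 1, β = β₀ι), b ~ N(β, I_k) and e′e ~ χ²_ν independently, so the bias of β̂_WA1 is easy to approximate by Monte Carlo. The Python sketch below is illustrative only; the replication count, seed and β₀ = 0.5 are our own arbitrary choices, not values from the paper.

```python
import numpy as np

# Monte Carlo approximation of Bias[beta_WA1]' Bias[beta_WA1] under
# Appendix B's assumptions: S = I, sigma^2 = 1, beta = beta0 * iota.
rng = np.random.default_rng(42)
k, nu = 3, 27                        # so n = 30, as in Figure 3
beta0 = 0.5
beta = beta0 * np.ones(k)            # lambda = k * beta0**2

reps = 200_000
b = beta + rng.normal(size=(reps, k))    # b ~ N(beta, I_k)
ee = rng.chisquare(nu, size=reps)        # e'e ~ chi^2_nu, independent of b

bSb = np.sum(b * b, axis=1)              # b'Sb with S = I
F = (bSb / k) / (ee / nu)
w = (F / (1.0 + F))[:, None]
b_M = (bSb / (bSb + ee / nu))[:, None] * b
b_AM = ((bSb / k) / (bSb / k + ee / nu))[:, None] * b
wa1 = w * b_M + (1.0 - w) * b_AM         # beta_WA1 of eq. (9)

bias = wa1.mean(axis=0) - beta
print("squared bias length:", float(bias @ bias))
```

Dividing the printed value by MSE[b] = k (which holds under these assumptions) gives an approximation to a single point, at λ = kβ₀², on the β̂_WA1 curve of Figure 3.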

References

Baranchik, A. J. (1970) A family of minimax estimators of the mean of a multivariate normal distribution. The Annals of Mathematical Statistics 41, 642–645.

Brownstone, D. (1990) Bootstrapping improved estimators for linear regression models. Journal of Econometrics 44, 171–187.

Chi, X. W. & G. Judge (1985) On assessing the precision of Stein's estimator. Economics Letters 18, 143–148.

Dwivedi, T. & V. Srivastava (1978) On the minimum mean squared error estimators in a regression model. Communications in Statistics - Theory and Methods 7, 487–494.

Efron, B. (1979) Bootstrap methods: Another look at the jackknife. Annals of Statistics 7, 1–26.

Farebrother, R. W. (1975) The minimum mean square error linear estimator and ridge regression. Technometrics 17, 127–128.

James, W. & C. Stein (1961) Estimation with quadratic loss. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1, pp. 361–379. University of California Press, Berkeley.

Namba, A. (2000) MSE dominance of the PT-2SHI estimator over the positive-part Stein-rule estimator in regression. Journal of Statistical Planning and Inference 89, 175–185.

Ohtani, K. (1996a) Exact small sample properties of an operational variant of the minimum mean squared error estimator. Communications in Statistics - Theory and Methods 25, 1223–1231.

Ohtani, K. (1996b) On an adjustment of degrees of freedom in the minimum mean squared error estimator. Communications in Statistics - Theory and Methods 25, 3049–3058.

Ohtani, K. (1997) Minimum mean squared error estimation of each individual coefficient in a linear regression model. Journal of Statistical Planning and Inference 62, 301–316.

Ohtani, K. (1999) MSE performance of a heterogeneous pre-test estimator. Statistics & Probability Letters 41, 65–71.

Stein, C. (1956) Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, Volume 1, pp. 197–206. University of California Press, Berkeley.

Theil, H. (1971) Principles of Econometrics. John Wiley, New York.

Tracy, D. S. & A. K. Srivastava (1994) Comparison of operational variants of best homogeneous and heterogeneous estimators in linear regression. Communications in Statistics - Theory and Methods 23, 2313–2322.

Van Hoa, T. & A. Chaturvedi (1990) Further results on the two-stage hierarchical information (2SHI) estimators in the linear regression models. Communications in Statistics - Theory and Methods 19, 4697–4704.

Vinod, H. (1976) Simulation and extension of a minimum mean squared error estimator in comparison with Stein's. Technometrics 18, 491–496.

Yi, G. (1991) Estimating the variability of the Stein estimator by bootstrap. Economics Letters 37, 293–298.

Zaman, A. (1996) Statistical Foundations for Econometric Techniques. Academic Press.
