prediction interval coverage probability

Top PDF results for "prediction interval coverage probability":

Bayesian forecasting of mortality rates by using latent Gaussian models

To assess the quality of the prediction intervals obtained for the future probabilities of death, we calculated the empirical coverage probabilities of the prediction intervals, their mean width, and the mean interval score. The quality of the mean forecasts was assessed using the root-mean-squared error of the predicted means. For a fixed prediction horizon k and age z, the empirical coverage probability of the prediction interval obtained from a given model was computed as the proportion of the 25 − k intervals that include the observed probability of death at age z in year T + k, for T = 1989, …, 2013 − k. The mean width of the prediction interval is the sample mean of the 25 − k interval widths, and the mean interval score is the sample mean of the scoring rule called the interval score; see equation (43) in Gneiting and Raftery (2007). As explained in Gneiting and Raftery (2007), the interval score rewards the forecaster for narrow prediction intervals and incurs a penalty, proportional to the significance level of the interval, whenever the observation misses the interval; lower mean interval scores are therefore better. See also the online supplementary material of the present paper for a more detailed presentation of the interval score.
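
These three diagnostics are straightforward to compute. A minimal sketch in Python, assuming short vectors of interval endpoints and observed death probabilities for one age and horizon (the array values are illustrative, not from the paper); the interval score follows equation (43) of Gneiting and Raftery (2007):

```python
import numpy as np

def interval_score(lower, upper, obs, alpha=0.05):
    """Interval score of Gneiting and Raftery (2007), eq. (43): the
    interval width plus a penalty of 2/alpha per unit the observation
    falls outside the central (1 - alpha) prediction interval."""
    width = upper - lower
    below = (2.0 / alpha) * np.maximum(lower - obs, 0.0)
    above = (2.0 / alpha) * np.maximum(obs - upper, 0.0)
    return width + below + above

# Hypothetical interval endpoints and observed death probabilities
# for one age z and horizon k (values are placeholders).
lower = np.array([0.010, 0.011, 0.012])
upper = np.array([0.014, 0.015, 0.016])
obs   = np.array([0.013, 0.016, 0.012])

coverage   = np.mean((obs >= lower) & (obs <= upper))  # empirical coverage
mean_width = np.mean(upper - lower)                    # mean interval width
mean_score = np.mean(interval_score(lower, upper, obs))
```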

Bayesian and frequentist methods for approximate inference in generalized linear mixed models

The plug-in method seems to have the smallest bias. The second set of priors (inverse gamma for the partial sill and uniform for the other two) tends to have lower bias in the tails of the distribution, which matters most when a prediction interval is constructed. The first set of priors (all uniform) also has a small bias. On the other hand, the standard deviation is too large to allow a clear answer as to which prior should be preferred. It is also interesting to count, over the 1000 simulations, the cases in which the absolute bias of the predictive density under one prior is smaller than that under a different prior. This comparison is presented in Table 7.5. According to that table, in combination with Table 7.4, the second set of priors results in the predictive distribution with the smallest coverage probability bias among the Bayesian predictive densities. The plug-in method also results in low coverage probability bias.
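
A pairwise comparison of this kind amounts to counting, prior against prior, how often one achieves the smaller absolute bias. A minimal sketch, with randomly generated bias values standing in for the actual simulation output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-simulation absolute coverage-bias values for two
# priors over 1000 simulated data sets (placeholders, not study data).
abs_bias_prior1 = np.abs(rng.normal(0.00, 0.02, size=1000))
abs_bias_prior2 = np.abs(rng.normal(0.01, 0.02, size=1000))

# Number of simulations in which prior 1 beats prior 2.
wins = np.sum(abs_bias_prior1 < abs_bias_prior2)
print(f"prior 1 has smaller absolute bias in {wins} of 1000 runs")
```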

Asymptotic multivariate kriging using estimated parameters with Bayesian prediction methods for non-linear predictands

Smith and Zhu (2004) consider several properties of predictive inference which, while not limited to spatial processes, have useful applications in spatial statistics. They develop a second-order expansion for predictive distributions in Gaussian processes with estimated covariances, where the covariance parameters are obtained by restricted maximum likelihood estimation. Smith and Zhu focus on the estimation of quantiles of the predictive distribution and their application to prediction intervals. They consider both a "plug-in" approach and a Bayesian approach. The Bayesian approach proves superior in the tails of the distribution regardless of the prior implemented. The second-order coverage probability bias is also considered, and a frequentist correction is established that has zero second-order coverage probability bias. This is analogous to the existence of a "matching prior" for the Bayesian method. Another key result is an expression for the expected length of a prediction interval. Smith and Zhu provide, for the univariate normal predictive distribution, the original development of the methods that this dissertation extends to the non-linear multivariate case.

Measuring inter-rater reliability for nominal data – which coefficients and confidence intervals are appropriate?

For Krippendorff's alpha the theoretical distribution is not known, not even an asymptotic one [28]. However, the empirical distribution can be obtained by the bootstrap approach. Krippendorff proposed an algorithm for bootstrapping [28, 29], which is also implemented in the SAS and SPSS macros from Hayes [28, 30]. The proposed algorithm differs from the one described for Fleiss' K above in three respects. First, the algorithm weights for the number of ratings per individual to account for missing values. Second, the random sample is not drawn from the N observations, each of which contains the associated assessments of all raters; instead, it is drawn from the coincidence matrix, which is needed for the estimation of Krippendorff's alpha (see Additional file 1). This means that the dependencies between the raters are not taken into account. The third difference is that Krippendorff keeps the expected disagreement fixed, and only the observed disagreement is calculated anew in each bootstrap step. We performed simulations for a sample size of N = 100 observations, which showed that the empirical and the theoretical coverage probability differ considerably (median empirical coverage probability of 60%). Therefore, we decided to use in our study the same bootstrap algorithm for Krippendorff's alpha as for Fleiss' K (in the following labelled the standard approach). This leads to a vector of the bootstrap estimates sorted by size, Â_B = (Â_[1], …, Â_[B]), from which the bootstrap confidence limits are taken as the empirical α/2 and 1 − α/2 quantiles.
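
The standard approach described here is a percentile bootstrap over whole observations. A minimal sketch, assuming a placeholder agreement statistic (`agreement_stat` below stands in for Fleiss' K or Krippendorff's alpha, which are more involved):

```python
import numpy as np

def agreement_stat(ratings):
    """Placeholder for an agreement coefficient such as Fleiss' K;
    here simply the raw agreement rate between two raters."""
    return np.mean(ratings[:, 0] == ratings[:, 1])

def percentile_bootstrap_ci(ratings, B=2000, alpha=0.05, seed=0):
    """Resample whole observations (rows), keeping each subject's
    ratings together, then take empirical quantiles of the estimates."""
    rng = np.random.default_rng(seed)
    n = ratings.shape[0]
    est = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)   # sample subjects with replacement
        est[b] = agreement_stat(ratings[idx])
    est.sort()                             # the sorted vector (Â_[1], …, Â_[B])
    return np.quantile(est, alpha / 2), np.quantile(est, 1 - alpha / 2)

# Toy data: 100 subjects rated by 2 raters on a 3-category scale.
rng = np.random.default_rng(1)
ratings = rng.integers(0, 3, size=(100, 2))
print(percentile_bootstrap_ci(ratings))
```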

Comparison of Different Confidence Intervals of Intensities for an Open Queueing Network with Feedback

expected length of confidence intervals. They were the first to apply the bootstrapping technique and the concept of relative coverage to a queueing system. They studied five estimation approaches for the intensity of a queueing system with distribution-free inter-arrival and service times over short runs, and introduced a new measure, called relative coverage, to assess the efficiency of confidence intervals.


Associations between dairy cow inter service interval and probability of conception

Figure 1 (caption, truncated): the distribution of inter-service intervals (ISIs) from a large dataset of UK dairy cows, showing the number of inseminations (left axis), both those resulting in a pregnancy (black bars) and th…


Comparison of various Wind Turbine Generators

This approach combines the advantages of the probabilistic and area-based methods. Probabilistic methods depend on a predefined, fixed probability to decide whether to rebroadcast a packet, but the problem is how to set that rebroadcast probability. Since all nodes use the same value, it is critical to identify and categorise the nodes in the various regions and adjust their rebroadcast probabilities appropriately, so that the rebroadcast probability can be determined dynamically. Dynamic probabilistic broadcasting based on coverage area and neighbour confirmation uses the coverage area to adjust the rebroadcast probability, and uses neighbour confirmation to check that all neighbours have received the broadcast packet; if some have not, the packet is forwarded to those nodes and a suitable probability is determined. For this, the author uses three steps to determine or adjust the rebroadcast probability.
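
As an illustration of the idea only (not the author's exact three-step scheme), the rebroadcast probability can be scaled by the fraction of additional area a rebroadcast would cover; the bounds and scaling below are assumptions:

```python
import random

def rebroadcast_probability(extra_coverage_ratio, p_min=0.1, p_max=0.9):
    """Scale the rebroadcast probability with the fraction of new area
    a rebroadcast would cover (0 = area already covered, 1 = entirely
    uncovered). The bounds p_min and p_max are illustrative."""
    return p_min + (p_max - p_min) * extra_coverage_ratio

def should_rebroadcast(extra_coverage_ratio, unconfirmed_neighbours):
    # Always forward if some neighbours have not confirmed reception.
    if unconfirmed_neighbours:
        return True
    return random.random() < rebroadcast_probability(extra_coverage_ratio)
```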

Prediction of Diabetes using Probability Approach

The problem addressed in this work is predicting whether a person in a dataset is diabetic or non-diabetic by applying a Bayesian network. The problem is solved using the primary attributes. The dataset variables used for the prediction of diabetes are the fasting plasma glucose concentration in an oral glucose tolerance test, the casual plasma glucose concentration, and the diastolic blood pressure (mmHg); whether the person is diabetic is the decision variable.
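
A minimal sketch of this kind of classifier, using Gaussian naive Bayes as a stand-in for the paper's Bayesian network; the feature values are invented and loosely follow the attributes listed above:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Toy training rows: [fasting glucose, casual glucose, diastolic BP (mmHg)]
X = np.array([[85, 110, 70], [160, 200, 90], [95, 120, 75], [150, 190, 88]])
y = np.array([0, 1, 0, 1])  # decision variable: 1 = diabetic, 0 = non-diabetic

model = GaussianNB().fit(X, y)
print(model.predict_proba([[140, 180, 85]]))  # class probabilities for a new person
```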


Neural Network Probability Estimation for Broad Coverage Parsing

In history-based models (Black et al., 1993), the probability estimate for each derivation decision d_i is conditioned on the previous derivation decisions d_1, …, d_(i−1), which is called t…
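
The conditioning structure being described is the usual chain-rule factorization of a derivation's probability:

```latex
P(d_1, \dots, d_n) \;=\; \prod_{i=1}^{n} P\left(d_i \mid d_1, \dots, d_{i-1}\right)
```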


Applying FDTD to the Coverage Prediction of WiMAX Femtocells

The finite-difference time-domain (FDTD) [1] method for electromagnetic simulation is today one of the most efficient computational approximations to the Maxwell equations. Its accuracy has motivated several attempts to apply it to the prediction of radio coverage [2, 3], though one of its main limitations is still that FDTD requires a highly time-consuming algorithm. Furthermore, the deployment of metropolitan wireless networks in recent years has triggered the need for radio network planning tools that help operators design and optimize their wireless infrastructure. These tools rely on accurate descriptions of the underlying physical channel in order to perform trustworthy link- and system-level simulations with which to study network performance. To increase the reliability of these tools, accurate radiowave propagation models are therefore necessary.

Coverage probability bias, objective Bayes and the likelihood principle

In the present paper we discuss objective Bayes methods which have some justification in terms of repeated sampling performance characteristics. More specifically, throughout this paper we shall use the term 'objective Bayes' to mean any Bayesian procedure that can be justified on the basis of small coverage probability bias; that is, any Bayesian procedure that is well calibrated. Here we use the term 'coverage probability' to refer to the frequentist probability of some statement either about a model parameter or about a future observation. The rationale behind these ideas is that the resulting Bayesian statements are endowed with additional frequentist validity. A major aim of the paper is to elucidate some of the issues underlying objective Bayes construction via coverage probability bias, with a view to the future development of this approach, especially in relation to objective prior construction for multiparameter models.
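
The notion being invoked can be written down concretely: for a Bayesian credible set C_α(X) of nominal level 1 − α, the coverage probability bias at parameter θ is

```latex
b(\theta) \;=\; \Pr_{\theta}\left[\theta \in C_{\alpha}(X)\right] \;-\; (1 - \alpha)
```

and a well-calibrated procedure is one for which b(θ) stays close to zero across the parameter space; the same definition applies with a future observation in place of θ.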

The Association of Gender, Age, and Coping with Internalizing Symptoms in Youth with Sickle Cell Disease

A few studies have compared restrictive and inclusive MI. For example, Kwon (2011) examined the impact of listwise deletion, mean substitution, restrictive and inclusive EM algorithms (a FIML approach using the EM algorithm), and restrictive and inclusive MI on the second level of a two-level MLM where the probability of missingness was MAR. Results showed that the number of level-2 predictors and the sample size did not affect the bias of the MDTs, while the proportion of missing data did: as the proportion of missing data increased, the relative bias among the MDTs tended to increase for most fixed effects and some random effects. Further, inclusive MI and listwise deletion generally outperformed the other MDTs, producing "practically acceptable" bias in most fixed effects that were highly related to missingness; however, listwise deletion produced the largest RMSE and the widest confidence intervals. Restrictive and inclusive EM performed well except with a large proportion of missing data (30%). Lastly, restrictive MI and mean substitution produced bias even with a small proportion of missing data (less than 15%).

Probability Model of Forward Birth Interval and Its Application

The derived model is applied to real observed data taken from the Demographic Survey of Varanasi (Rural), India. In the observed distribution, forward birth intervals are larger for females with longer marital duration. Further, to avoid the possible incidence of sterility and heterogeneity in fertility characteristics, only females with a marital duration of 10 to 20 years have been included. As a close approximation for the present surveyed population, we have taken four observed point values of the PPA, namely 3 months, 6 months, 12 months and 18 months, with respective proportions of females b_1 = 0.25, b_2 = 0.35, b_3 = 0.320 and b_4 = 0.080, such that b_1 + b_2 + b_3 + b_4 = 1.

Financial Intermediation and Economic Growth: Bank Credit Maturity and Its Determinants

Since z_(1−α/2) is approximately equal to 2 when α = 0.05, Ĵ_AC may be regarded as an adjusted estimate of the difference between two binomial proportions, obtained by adding two successes and two failures to the Bernoulli observations. Ĵ_AC is also a difference between two correlated proportions, which makes its variance difficult to derive, so the most commonly used Wald interval is not directly applicable here. In this thesis we propose using the bootstrap method to estimate the variance of Ĵ_AC. We summarize the procedure for computing the bootstrap variance in the following steps:
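
The steps themselves fall outside this excerpt; the following is a generic sketch of a bootstrap variance estimate for a difference of two correlated proportions (the paired-data layout and resampling scheme are assumptions, not the thesis's exact procedure):

```python
import numpy as np

def bootstrap_variance(pairs, B=2000, seed=0):
    """Estimate the variance of a difference of two correlated
    proportions by resampling the paired Bernoulli outcomes jointly."""
    rng = np.random.default_rng(seed)
    n = pairs.shape[0]
    diffs = np.empty(B)
    for b in range(B):
        sample = pairs[rng.integers(0, n, size=n)]  # resample pairs together
        diffs[b] = sample[:, 0].mean() - sample[:, 1].mean()
    return diffs.var(ddof=1)

# Toy paired outcomes (columns are the two correlated Bernoulli series).
rng = np.random.default_rng(1)
x = rng.binomial(1, 0.6, size=200)
y = x ^ rng.binomial(1, 0.2, size=200)  # flip x with prob. 0.2 -> correlated
print(bootstrap_variance(np.column_stack([x, y])))
```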

Estimation of AUC from 0 to Infinity in Serial Sacrifice Designs

Simulations using sample sizes of 3 and 5 per time point, which are typical for this type of study, indicate better coverage probabilities for bootstrap-t confidence intervals under normally and log-normally distributed errors. Asymptotic confidence intervals based on the normal distribution are therefore not recommended for such small sample sizes because of the substantial lack of coverage. With a theoretical sample size of 100 animals per time point, both the asymptotic and the bootstrap-t confidence intervals indicate sufficient coverage. For asymmetrically distributed statistics like AUC_(0−∞) the bootstrap-t some…
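
For reference, the bootstrap-t interval studentizes each resampled estimate and reads off empirical quantiles of the resulting t-statistics; a generic sketch for a mean-like statistic (not the paper's AUC estimator):

```python
import numpy as np

def bootstrap_t_ci(x, B=2000, alpha=0.05, seed=0):
    """Bootstrap-t (studentized) confidence interval for the mean."""
    rng = np.random.default_rng(seed)
    n, est = len(x), x.mean()
    se = x.std(ddof=1) / np.sqrt(n)
    t_stats = np.empty(B)
    for b in range(B):
        xb = x[rng.integers(0, n, size=n)]
        se_b = xb.std(ddof=1) / np.sqrt(n)
        t_stats[b] = (xb.mean() - est) / se_b
    lo, hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    # Quantiles enter reversed: the interval is (est - hi*se, est - lo*se).
    return est - hi * se, est - lo * se

# Toy skewed sample of size 5, mimicking log-normal errors.
x = np.random.default_rng(1).lognormal(mean=0.0, sigma=0.5, size=5)
print(bootstrap_t_ci(x))
```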

Efficient Coverage and Routing in Wireless Sensor Networks using Firefly Algorithm

In the fourth simulation we calculated the value of the fitness function, which is equal to the product of the residual energy possessed by the network and the probability of covering the POIs, for both the modified firefly algorithm and the greedy algorithm. Because of its greed, the greedy algorithm always tries to achieve a better coverage probability by using more sensor nodes, and as a result consumes more energy. But with the energy constraint in mind, the energy of the sensor nodes must be conserved as far as possible. The modified algorithm, having no such greed, gives equal preference to both parameters and yields better results. Also, the convergence parameter (ξ) in the fitness function makes the computations converge as early as possible, which in turn increases the residual energy of the network. The graph is shown in Fig. 4.
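
The fitness being described is a simple product of residual energy and coverage probability. A toy sketch under stated assumptions (independent per-POI coverage and a multiplicative role for ξ are guesses, not the paper's exact formulation):

```python
def fitness(residual_energy, coverage_probs, xi=1.0):
    """Fitness = residual network energy x probability of covering
    the POIs; xi is the convergence parameter (role assumed here)."""
    p_all = 1.0
    for p in coverage_probs:
        p_all *= p          # assume each POI is covered independently
    return xi * residual_energy * p_all

print(fitness(residual_energy=0.8, coverage_probs=[0.9, 0.95, 0.85]))
```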

High Dimensional Methods in Statistics, Data Mining and Finance.

will possibly result in a longer list of candidates than is required. We then need to select from this longer list the candidates that deserve to be contained in the region, based on the likelihood ratio criterion. Since this involves moving from a bigger set to a smaller set, some over-estimation of the region is possible, and the simulation study discussed later does report slight over-coverage of the bootstrap-based confidence region. One downside of the bootstrap method is that, to make a sufficiently exhaustive list of quantile functions, so that the list contains the target confidence set well within it, we need to repeat the bootstrap resampling a sufficiently large number of times. Many of these replications go completely wasted and unused, as their candidate quantile functions will not satisfy the likelihood ratio criterion and hence will not be included in the confidence region. However, the likelihood ratio must be calculated for every replication, whether or not the corresponding candidate quantile function is part of the target confidence region, and the associated heavy optimization must be carried out as well. This makes the bootstrap procedure very time consuming and slower than the smoothing method, which is also confirmed in the simulation study discussed in the next section.

Prediction Based on Generalized Order Statistics from a Mixture of Rayleigh Distributions Using MCMC Algorithm

…of the MTR distribution based on GOS. A conjugate prior is assumed to carry out the Bayesian analysis. The results are specialized to the upper record values. The rest of the article is organized as follows. Section 2 deals with the derivation of the maximum likelihood estimators of the involved parameters. Sections 3 and 4 deal with maximum likelihood (point and interval) prediction and Bayes (point and interval) prediction in the one-sample and two-sample schemes, respectively. In Section 5, the numerical results and the concluding remarks are presented.

Germination Biology and the Ecology of Annual Plants

If we assume that unoccupied microsites occur independently at random in each time interval, that the probability of germinating in an unoccupied microsite is g, and the probability …


Coverage prediction and optimization algorithms for indoor environments

The planning algorithm predicts the indoor coverage by means of path loss prediction based on the IDP model [10]. This model is a compromise between semi-empirical models that only consider the "direct" ray between transmitter Tx and receiver Rx (e.g., the Motley-Keenan multi-wall model [29]) and ray-tracing models in which hundreds of rays and their interactions with the environment are investigated. In the IDP model, propagation focuses on the dominant path between transmitter and receiver, i.e., the path along which the signal encounters the smallest obstruction in terms of path loss. It takes into account the length of the path, the number and type of interactions (e.g., reflection, transmission, etc.), the material properties of the objects encountered along the path, and so on. The approach of using the IDP model is justified by the fact that more than 95% of the received energy is contained in only two or three rays [10]. According to [10], predictions made by IDP models reach the accuracy of ray-tracing models or even exceed it. In Section 9, model predictions will be compared with ray-tracing tool predictions.
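
To make the ingredients concrete, here is a toy sketch of a dominant-path loss computation (this is not the IDP model from [10]; the reference loss, path loss exponent and per-interaction penalties are invented values):

```python
import math

# Assumed per-interaction losses in dB (illustrative, not from [10]).
INTERACTION_LOSS_DB = {"wall_transmission": 6.0, "reflection": 3.0}

def dominant_path_loss(length_m, interactions, pl0_db=40.0, n=2.0):
    """Toy dominant-path loss: a log-distance term along the path
    length plus a fixed penalty per interaction encountered."""
    loss = pl0_db + 10.0 * n * math.log10(max(length_m, 1.0))
    loss += sum(INTERACTION_LOSS_DB[kind] for kind in interactions)
    return loss

print(dominant_path_loss(15.0, ["wall_transmission", "wall_transmission"]))
```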
