However, ensemble forecasts are often still underdispersive: the forecasts cluster around a central value with low variance. As a result, the forecast interval becomes too narrow and often fails to contain the observations, so ensemble calibration is required (Schmeits and Kok, 2010). Several methods for calibrating ensemble forecasts are Bayesian Model Averaging (BMA), Geostatistical Output Perturbation (GOP), and Spatial BMA, which combines BMA and GOP.
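As a rough sketch of how BMA-based calibration widens an underdispersive ensemble (in the spirit of the BMA calibration literature; the members, weights, and spread below are invented for illustration, and in practice they are fitted to training data), the calibrated forecast density is a weighted mixture of densities centered on the individual ensemble members:

```python
# Minimal sketch of BMA-style ensemble calibration (heavily simplified).
# The calibrated forecast density is a weighted mixture of normals centered
# on the ensemble members, which widens the narrow raw-ensemble interval.
import math

members = [21.0, 21.4, 21.1, 20.8]   # underdispersive raw ensemble (e.g. temperatures)
weights = [0.4, 0.3, 0.2, 0.1]       # hypothetical BMA weights (sum to 1)
sigma = 1.5                          # assumed fitted per-member spread

def bma_pdf(y):
    """Calibrated predictive density: weighted mixture of normal densities."""
    return sum(w * math.exp(-0.5 * ((y - m) / sigma) ** 2)
               / (sigma * math.sqrt(2 * math.pi))
               for w, m in zip(weights, members))

# The mixture assigns non-negligible density to an observation of 23.5,
# which lies well outside the raw ensemble range [20.8, 21.4].
print(bma_pdf(23.5))
```

The key point is that the mixture's spread comes from the per-member variance as well as the disagreement among members, so observations outside the raw ensemble range are no longer ruled out.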


We show in this paper why researchers ought to pay particular attention to the issues of model uncertainty and data poolability in their panel data applications. We focus on the identification of robust determinants of current account balances (CABs). Applying Bayesian Model Averaging, we adopt a flexible modelling approach to highlight that (i) some determinants have limited relevance when accounting for model uncertainty; (ii) slope homogeneity is unlikely to be a valid assumption; (iii) cross-sectional and time-series relationships can diverge. We explain why cross-sectional estimates are valuable, even in the potential presence of omitted variable bias, and suggest a way to assess the effects of unobserved country heterogeneity.


Applications of Bayesian model averaging in economics have primarily appeared in the empirical growth regression literature, owing to the large number of possible determinants of growth and the limited theoretical guidance on model and variable selection. Fernandez et al. (2001) employ a Markov chain Monte Carlo Model Composition (MC3) technique to perform BMA for cross-country growth regressions. In a related paper, Ley and Steel (2009) recommend a hierarchical prior for the prior inclusion probability of each variable rather than a fixed probability, which has strong implications for model size, and also argue against the use of the unit information prior (UIP). Eicher et al. (2007) investigate the effects of twelve different prior distribution assumptions on the results of BMA methods, concluding that although priors affect the selection of models, the economic impact of the variables, as measured by the posterior means of regression coefficients, is very stable across priors. Excellent general discussions of the BMA procedure can be found in Raftery et al. (1997), Hoeting et al. (1999), and Montgomery and Nyhan (2010).


ABSTRACT: Some recent cross-country cross-sectional analyses have employed Bayesian Model Averaging to tackle the issue of model uncertainty. Bayesian model averaging has become an important tool in empirical settings with large numbers of potential regressors and a relatively limited number of observations. We examine the effect of a variety of prior assumptions on inference, on the posterior inclusion probabilities of regressors, and on predictive performance. Bayesian model averaging (BMA) has become a widely accepted way of accounting for model uncertainty in regression models. To implement BMA, however, a prior is usually specified in two parts: a prior for the regression parameters and a prior over the model space. The choice of prior specification is therefore paramount in Bayesian inference. Unfortunately, in practice, most Bayesian analyses are performed with so-called non-informative priors (i.e., priors constructed by some formal rule), and arbitrariness in the choice of prior, or the choice of an inappropriate prior, often leads to badly behaved posteriors. It is therefore imperative to study the effect of the choice of priors in Bayesian model averaging. Six candidate parameter priors, namely the unit information prior (UIP), risk inflation criterion (RIC), Bayesian risk inflation criterion (BRIC), Hannan-Quinn criterion (HQ), empirical Bayes (EBL), and hyper-g, and three model priors, uniform, beta-binomial, and binomial, were examined in this study. The performance of the resulting eighteen cases was judged using posterior inference, posterior inclusion probabilities of regressors, and predictive performance. Analyses were carried out using datasets with 9 potential drivers of growth for 126 countries from 2010 to 2014. Finally, our analysis shows that the EBL parameter prior with a random model prior robustly identifies far more growth determinants than the other priors.


Abstract—Bayesian filters can be made robust to outliers if their solutions are developed under the assumption of heavy-tailed distributed noise. However, in the absence of outliers, these robust solutions perform worse than standard filters based on the Gaussian assumption. In this work, we develop a novel robust filter that adopts both Gaussian and multivariate t-distributions to model the outlier-contaminated measurement noise. The effects of these distributions are combined within a Bayesian Model Averaging (BMA) framework. Moreover, to reduce the computational complexity of the proposed algorithm, a restricted variational Bayes (RVB) approach handles the multivariate t-distribution instead of its standard iterative VB (IVB) counterpart. The performance of the proposed filter is compared against a standard cubature Kalman filter (CKF) and a robust CKF (employing the IVB method) in a representative simulation example concerning target tracking using range and bearing measurements. In the presence of outliers, the proposed algorithm shows a 38% improvement over the CKF in terms of root-mean-square error (RMSE) and is computationally 2.5 times more efficient than the robust CKF.

Abstract: The depreciation of the Malaysian Ringgit (MYR) from 2013 to 2016 brought many negative impacts on Malaysia's economy, such as the depreciation of export value and the appreciation of import value. These impacts become more severe as the exchange rate of the MYR against the currency of Malaysia's biggest trading partner, the Chinese Yuan Renminbi (CNY), increases. This study is conducted to explain the movement in the exchange rate of the Malaysian Ringgit against the Chinese Yuan Renminbi (MYR/CNY). The four macroeconomic factors used to build the estimation models for the MYR/CNY exchange rate in this study are relative current account balance, relative trade openness, relative sovereign debt, and crude oil price. The estimation models are built using two different methods: Bayesian Model Averaging (BMA) and multiple linear regression (MLR). A comparison of the results from the two models in terms of model accuracy shows that the BMA model performs better than the MLR model in estimating the MYR/CNY exchange rate.

Abstract: In this study, a hybrid JPSN-AR model is proposed based on binomial smoothing (BS) and Bayesian model averaging (BMA) techniques. The aim is to determine which combination technique produces the best forecasting performance for the proposed hybrid model. The forecasting performance measures employed are the mean absolute percentage error (MAPE) and the root mean square error (RMSE). The results revealed that the best performance of the proposed hybrid model was achieved by combining the Jordan pi-sigma neural (JPSN) network model and an autoregressive (AR) time series model with the Bayesian model averaging (BMA) technique. Although the hybrid model produced by the binomial smoothing technique also achieved good forecasting performance, the Bayesian model averaging (BMA) technique puts the proposed hybrid model at its best. Simulations in this study were carried out using MATLAB software version 8.03.


Abstract: We examine the issue of variable selection in linear regression modeling, where we have a potentially large number of possible covariates and economic theory offers insufficient guidance on how to select the appropriate subset. Bayesian Model Averaging presents a formal Bayesian solution to dealing with model uncertainty. Our main interest here is the effect of the prior on the results, such as posterior inclusion probabilities of regressors and predictive performance. We combine a Binomial-Beta prior on model size with a g-prior on the coefficients of each model. In addition, we assign a hyperprior to g, as the choice of g has been found to have a large impact on the results. For the prior on g, we examine the Zellner-Siow prior and a class of Beta shrinkage priors, which covers most choices in the recent literature. We propose a benchmark Beta prior, inspired by earlier findings with fixed g, and show it leads to consistent model selection. Inference is conducted through a Markov chain Monte Carlo sampler over model space and g. We examine the performance of the various priors in the context of simulated and real data. For the latter, we consider two important applications in economics, namely cross-country growth regression and returns to schooling. Recommendations to applied users are provided.


In the Bayesian model averaging approach, the aim is to iterate over all of those plausible models to find the probability that each model is correct, given the historical data. Figure 2 shows this probability distribution for our test problem. The highest point in that distribution represents the most probable model, at parameters w0 = 0.32 and w1 = 6.9964. But picking the highest point in the probability distribution (i.e., the best-fit model) ignores all those other less probable models that are still plausible, throwing away much of the information in the historical data.
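The contrast between the best-fit prediction and the posterior-averaged prediction can be sketched as follows. This is a toy line fit y = w0 + w1·x on synthetic data over a coarse parameter grid; the data, grid, and parameter values are illustrative, not the paper's:

```python
# Sketch: predicting with the single most probable (MAP) model vs.
# averaging predictions over the whole posterior on a parameter grid.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = 0.3 + 7.0 * x + rng.normal(scale=0.5, size=x.size)  # synthetic data

w0_grid = np.linspace(-1, 2, 31)
w1_grid = np.linspace(4, 10, 61)
W0, W1 = np.meshgrid(w0_grid, w1_grid, indexing="ij")

# Log-likelihood of every grid point under i.i.d. Gaussian noise (sd 0.5).
resid = y[None, None, :] - (W0[..., None] + W1[..., None] * x[None, None, :])
loglik = -0.5 * np.sum(resid**2, axis=-1) / 0.5**2

post = np.exp(loglik - loglik.max())
post /= post.sum()  # posterior over the grid (flat prior)

x_new = 1.5
map_idx = np.unravel_index(post.argmax(), post.shape)
y_map = W0[map_idx] + W1[map_idx] * x_new   # best-fit prediction only
y_bma = np.sum(post * (W0 + W1 * x_new))    # posterior-averaged prediction
print(y_map, y_bma)
```

With tight, unimodal data the two predictions nearly coincide; the averaged version starts to differ, and to carry more honest uncertainty, when several distinct parameter regions remain plausible.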

In the study, sixteen covariates in matrix X were assessed. The first question we attempted to answer was which variables should be included in the model. Secondly, we evaluated their importance in the estimation of aircraft departure delay. One way was to do inference on a single model that included all variables, but this proved to be inefficient and infeasible. We therefore employed Bayesian model averaging (BMA), which takes the problem through estimating models for all possible combinations of {X} and then constructing a weighted average over all of them. Since X contained sixteen (16) potential variables as determinants, this meant estimating 2^16 (65,536) variable combinations and thus the same number of models. The model weights for this averaging stem from the posterior model probabilities that arise from Bayes' theorem.
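The enumerate-and-weight procedure described above can be sketched as follows, shrunk to 3 covariates (2^3 = 8 models) and with a BIC approximation standing in for the exact posterior model probabilities. The synthetic data and the BIC shortcut are assumptions for illustration, not the study's actual computation:

```python
# Sketch of exhaustive BMA over all covariate subsets: fit every model,
# weight it by an (approximate) posterior model probability, and sum the
# weights of the models containing each variable to get its posterior
# inclusion probability (PIP).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=n)  # only covariate 0 matters

def fit_ols_bic(X_sub, y):
    """OLS fit with intercept; returns coefficients and BIC."""
    A = np.column_stack([np.ones(len(y)), X_sub])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    sigma2 = resid @ resid / len(y)
    bic = len(y) * np.log(sigma2) + A.shape[1] * np.log(len(y))
    return beta, bic

bics, subsets = [], []
for r in range(k + 1):
    for subset in itertools.combinations(range(k), r):
        _, bic = fit_ols_bic(X[:, list(subset)], y)
        subsets.append(subset)
        bics.append(bic)

bics = np.array(bics)
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()  # approximate posterior model probabilities

# PIP of each covariate: total weight of the models that include it.
pip = np.array([sum(w for w, s in zip(weights, subsets) if j in s)
                for j in range(k)])
print(pip)
```

With 16 covariates the same loop runs over 65,536 subsets, which is why exhaustive enumeration is only feasible for modest k; beyond that, samplers such as MC3 explore the model space instead.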

The hierarchical Bayesian model explored in this paper has a hyperprior on the covariate inclusion probability θ and a hyperprior on g, which leads to an integral for the marginal likelihood that is solved by running the MCMC sampler over models and g jointly. The advantage of these hyperpriors is to make the analysis more robust with respect to often arbitrary prior assumptions. We now allow the data to inform us on variable inclusion probabilities and the appropriate region for g. This will affect the model size penalty (and, to a much lesser extent, the lack-of-fit penalty) for each given model. Putting a prior on both θ and g makes the analysis naturally adaptive and avoids the information paradox (Liang et al., 2008), which affects analyses with fixed g. It is important to stress that we are not claiming that using hyperpriors on θ and g performs better (according to some criterion) than a model with fixed values of θ and g that are chosen to optimize this same criterion. The point is rather that we don't know what the "optimal" values are and getting them wrong can lead to very poor performance. We feel the model used here with the recommended priors on g can be considered a safe "automatic" choice for use in Bayesian Model Averaging in the types of linear regression problems that typically arise in a variety of econometric settings.
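Written out in the conventional notation for this setup (the standard g-prior formulation with Bernoulli inclusion indicators, not reproduced verbatim from the paper), the prior structure being described is roughly:

```latex
% Model M_j includes k_j of the K candidate covariates.
% g-prior on the slopes, Bernoulli(theta) inclusion with a Beta hyperprior
% on theta, and a hyperprior p(g) on g (e.g. Zellner-Siow or Beta shrinkage):
\beta_j \mid g, \sigma, M_j \sim \mathrm{N}\!\left(0,\; g\,\sigma^2 \left(X_j' X_j\right)^{-1}\right),
\qquad
P(M_j \mid \theta) = \theta^{k_j}\,(1-\theta)^{K - k_j},
\qquad
\theta \sim \mathrm{Beta}(a, b), \quad g \sim p(g).
```

Integrating over θ and g is what makes the model size penalty (and, weakly, the lack-of-fit penalty) data-driven rather than fixed in advance.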


Abstract: We examine the issue of variable selection in linear regression modeling, where we have a potentially large number of possible covariates and economic theory offers insufficient guidance on how to select the appropriate subset. In this context, Bayesian Model Averaging presents a formal Bayesian solution to dealing with model uncertainty. Our main interest here is the effect of the prior on the results, such as posterior inclusion probabilities of regressors and predictive performance. We combine a Binomial-Beta prior on model size with a g-prior on the coefficients of each model. In addition, we assign a hyperprior to g, as the choice of g has been found to have a large impact on the results. For the prior on g, we examine the Zellner-Siow prior and a class of Beta shrinkage priors, which covers most choices in the recent literature. We propose a benchmark Beta prior, inspired by earlier findings with fixed g, and show it leads to consistent model selection. The effect of this prior structure on penalties for complexity and lack of fit is described in some detail. Inference is conducted through a Markov chain Monte Carlo sampler over model space and g. We examine the performance of the various priors in the context of simulated and real data. For the latter, we consider two important applications in economics, namely cross-country growth regression and returns to schooling. Recommendations to applied users are provided.


In both theoretical and empirical studies, many different kinds of variables have been considered significant determinants of the Gini coefficient. In this research, therefore, we apply the method of Bayesian Model Averaging (BMA) to investigate the effects on the Gini coefficient of the influential factors considered in previous studies. We use Stata to obtain the BMA estimates.

This paper examines the role of long-standing institutions – identified through geography, disease ecology, colonial legacy, and some direct measures of political and economic governance – on human development and its non-income components across countries. The study employs a novel econometric technique, Bayesian Model Averaging, that allows us to select the relevant predictors by experimenting with a host of competing sets of variables. It constructs estimates as a weighted average of OLS estimates for every possible combination of included variables. This is particularly useful when there is model uncertainty and theory provides only weak guidance on the selection of appropriate predictors. Of the 25 variables that we tried, three stand out in terms of their degree of importance and their robustness across various specifications: malaria ecology, the KKZ index of good governance, and the fertility rate. Our finding on the dominant and robust role of malaria ecology in explaining differences in human development across countries, even in the presence of variables that directly and indirectly measure the quality of institutions, is striking. It shows that malaria ecology has a direct negative impact on human development, and this effect appears to be over and above its effect via institutions. Some of the other measures of climate and geography, as well as those of colonial legacy, are important as long as we do not control for direct measures of the performance of political and economic institutions, such as the KKZ index of good governance and the democracy score. Once we control for these and other conditioning variables – public spending on health and education, fertility rates, and measures of health infrastructure – the importance of geography and colonial legacy disappears.


This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions, and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid the statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very flexible and can be easily adapted to analyze any of the different priors that have been proposed in the Bayesian instrumental variables literature. We show how to calculate the probability of any relevant restriction (e.g., the posterior probability that over-identifying restrictions hold) and discuss diagnostic checking using the posterior distribution of discrepancy vectors. We illustrate our methods in a returns-to-schooling application.


In this paper, we focus on model selection with a Bayesian approach. Model selection has posed significant challenges for many statisticians; numerous strategies have been developed [3-5], and yet no universally agreed-upon standard has emerged. In conventional model selection, a single model is typically selected based on P-values, and only those variables ‘selected’ by the model are considered. Also, because a single universally approved model selection strategy is unavailable, different approaches are used, which can result in different subsets of variables selected in a final model and, in turn, different results and conclusions. Bayesian model averaging (BMA) is a solution that closes an important methodological gap and obviates the need for complicated or sometimes confusing modeling strategies. Instead of focusing on a single model and a few factors, BMA considers all models with non-negligible probabilities, and the posterior probabilities for all variables are summarized at the end.
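The summary step described above can be illustrated with a toy example. The model probabilities, variable names, and coefficients below are hypothetical, chosen only to show how per-variable summaries fall out of per-model posteriors:

```python
# Toy illustration: given posterior probabilities for a handful of candidate
# models, the posterior inclusion probability (PIP) of a variable is the
# total probability of the models that contain it, and the model-averaged
# coefficient treats the coefficient as zero in models that exclude it.
models = [
    # (posterior model probability, {variable: coefficient})
    (0.50, {"age": 0.8, "bmi": 1.2}),
    (0.30, {"age": 0.7}),
    (0.15, {"bmi": 1.5, "smoker": 2.0}),
    (0.05, {}),  # null model
]

variables = {"age", "bmi", "smoker"}
pip = {v: sum(p for p, coefs in models if v in coefs) for v in variables}
avg_coef = {v: sum(p * coefs.get(v, 0.0) for p, coefs in models)
            for v in variables}
print(pip)       # e.g. "age" appears in models with total probability 0.80
print(avg_coef)
```

No single model is ever committed to: a variable with high PIP is supported across many plausible models, while one that appears only in low-probability models is automatically discounted.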

of the latter always exceeds the fraction of best performances. This behaviour is also illustrated by the fact that {Min, Mean, Max} for the LPS of the full model is {0.74, 1.77, 3.77} for g = 1/n and {0.67, 2.15, 5.82} for g = 1/k². For comparison, the null model leads to {1.67, 2.05, 2.72}. Having established that BMA is the strategy to adopt for prediction, we can focus on its forecast performance to compare predictive ability across the different prior settings. Mean values of LPS (over all 100 samples) are not that different, but the maximum values indicate that fixing θ at 0.5 is the riskiest strategy. This suggests using random θ. In addition, BMA always performs better than the other prediction strategies for g = 1/k². Indeed, the worst-case scenario for BMA appears to be θ = 0.5 with g = 1/n. Finally, the choice of m leaves the random-θ results almost unaffected, but has a substantial effect on the cases with fixed θ, in line with our expectations.
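For readers unfamiliar with the metric, the log predictive score (LPS) used above can be sketched as the average negative log of the predictive density evaluated at the realized out-of-sample values, so lower is better. The Gaussian predictive densities in this sketch are illustrative, not taken from the paper:

```python
# Sketch of a log predictive score: average negative log predictive density
# at the realized values (lower is better). Overconfident forecasts are
# penalized heavily when the realization falls in their thin tails.
import math

def lps(pred_means, pred_sds, realized):
    """Average negative log Gaussian predictive density."""
    total = 0.0
    for mu, sd, y in zip(pred_means, pred_sds, realized):
        total += 0.5 * math.log(2 * math.pi * sd**2) + (y - mu) ** 2 / (2 * sd**2)
    return total / len(realized)

# A well-calibrated forecast scores better (lower) than an overconfident one.
good = lps([1.0, 2.0, 3.0], [1.0, 1.0, 1.0], [1.2, 1.7, 3.4])
overconfident = lps([1.0, 2.0, 3.0], [0.1, 0.1, 0.1], [1.2, 1.7, 3.4])
print(good, overconfident)
```

This asymmetry is why the maximum LPS over samples, not just the mean, is informative: it exposes prior settings that occasionally produce badly overconfident predictive densities.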


Only two other variables appear to have some role in explaining Midwestern rural murder rates: earnings per job, which is one of several potential measures of income, and the percent of households living in multi-family developments. Here again, both variables have relationships with murder rates opposite to what theory would lead us to expect. One would expect higher income levels to be associated with lower crime (murder in this case), but the data suggest the opposite holds. Following the same logic as for population density, we expected that more people living in compact areas, such as multi-family residential developments like apartment buildings, would increase conflict and crime. The rural data support the opposite. Three other variables that are in the highest posterior probability model are not statistically significant when the final model is estimated via least squares. Finally, the percent of variance explained in the rural crime rate is relatively low, ranging from about 12 to 18 percent, but this is fairly consistent with other studies examining the murder rate.
