Simulation-Based Inference on Mixture Experiments

Technology has been improving at an exponential rate to meet the demands of industry. Much of this progress has focused on increasing processing power and speed, enabling multitasking and computationally heavy operations that were once possible only in theory. Simulation experiments are one such domain. Traditionally, research in this field was restricted to the theoretical side of statistics: applying these techniques to data sets and extracting meaningful results demanded substantial time and cost, which was one of their biggest drawbacks. Today, computers ship with substantial processing power at a modest price, because such power has become a necessity for modern applications, multitasking, and efficient programming; a social networking site, for instance, may rely on complex algorithms, neural networks, and artificial intelligence in ways that were once unimaginable. Simulation methods that were previously considered infeasible therefore offer considerable research potential and modern applications to statistical problems. Monte Carlo simulation methods help solve complex problems using random sampling, optimization, numerical procedures, and probability distributions. Simulation is generally applied when a closed-form solution cannot be obtained with the usual mathematical tools, and we use this technique in our research for exactly that reason.
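As a concrete illustration of the random-sampling idea described above, the short sketch below estimates a one-dimensional integral by Monte Carlo. The integrand and sample size are chosen purely for illustration and are not taken from the original text.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Monte Carlo estimate of integral_0^1 exp(-x^2) dx:
# draw uniform samples and average the integrand.
n = 100_000
x = rng.uniform(0.0, 1.0, size=n)
values = np.exp(-x**2)

estimate = values.mean()
std_error = values.std(ddof=1) / np.sqrt(n)
print(f"estimate = {estimate:.5f} +/- {std_error:.5f}")  # true value ~ 0.74682
```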

Computer Simulation-Based Designs for Industrial Engineering Experiments

The proposed semi-LHDs have appealing properties. First, semi-LHDs inherit the space-filling property of their built-in LHDs. Second, with extra points added to the marginal points of the built-in LHDs, semi-LHDs yield good prediction in the edge area. In practice, the behavior of an industrial engineering experiment on (or near) the edge of the experimental domain is usually of great importance to practitioners, since it may be unstable or variable enough to influence the statistical inference. The proposed designs not only spread points over the experimental region but also devote effort to the edge area, which benefits prediction on both the edge and the interior of the experimental region.
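The exact semi-LHD construction is not given in this excerpt. The sketch below only illustrates the general idea of starting from a space-filling Latin hypercube sample and appending extra boundary points, using scipy's quasi-Monte Carlo module; the number of points and the choice of corner points as "edge" points are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import qmc

# Space-filling base design: a Latin hypercube sample in [0, 1]^2.
sampler = qmc.LatinHypercube(d=2, seed=0)
base_design = sampler.random(n=9)

# Illustrative "edge" augmentation: append the corners of the domain,
# so the combined design also covers the boundary region.
corners = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
design = np.vstack([base_design, corners])

print(design.round(3))
```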

Simulation based Bayesian econometric inference: principles and some recent computational advances

To illustrate the advantages of the ARDS methods, we investigate a mixture model for the analysis of economic growth in the USA, which is also considered by Bauwens et al. (2004). Bauwens et al. (2004) compare the performance of the ARDS methods with the (independence chain) Metropolis-Hastings algorithm and importance sampling with a Student-t candidate distribution (with 5 degrees of freedom). They compare estimation results after a given computing time with the 'true' results (estimation results after many more draws) and inspect the graphs of estimated marginal densities resulting from different sampling methods. Here we take another approach to investigating the accuracy of different simulation methods given the same computing time: for each simulation method, we repeat the simulation process ten times with different random seeds and then compute the standard deviations of the ten estimates of the posterior means. We note that in these empirical examples the mixture process refers to the data space; however, such mixture processes may give rise to bimodality and skewness in the parameter space.
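A minimal sketch of this repeated-seed accuracy check is given below. Since the actual ARDS code and model are not part of the excerpt, the posterior simulator here is a stand-in (self-normalized importance sampling with a Student-t candidate for a toy posterior mean); only the repeat-and-tabulate pattern is the point.

```python
import numpy as np

def estimate_posterior_mean(seed, n_draws=5_000):
    """Stand-in for one run of a posterior simulator.

    Importance sampling for the mean of a N(1, 1) 'posterior' using a
    Student-t(5) candidate centered at 1, as a placeholder for whichever
    sampler is being benchmarked.
    """
    rng = np.random.default_rng(seed)
    draws = rng.standard_t(df=5, size=n_draws) + 1.0
    log_target = -0.5 * (draws - 1.0) ** 2
    log_candidate = -(5 + 1) / 2 * np.log(1.0 + (draws - 1.0) ** 2 / 5.0)
    weights = np.exp(log_target - log_candidate)
    return np.sum(weights * draws) / np.sum(weights)

# Repeat the simulation with ten different random seeds and report the
# standard deviation of the ten posterior-mean estimates.
estimates = [estimate_posterior_mean(seed) for seed in range(10)]
print("mean of estimates:", np.mean(estimates))
print("std dev across seeds:", np.std(estimates, ddof=1))
```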

Dynamic Simulation of PEM Water Electrolysis and Comparison with Experiments

The electrolysis of water is the dissociation of water molecules into hydrogen and oxygen. A simple model was developed to explain the current and potential characteristics of electrolysis based on charge and mass balance as well as Butler-Volmer kinetics on the electrode surfaces [30]. A complete dynamic model based on the conservation of the molar balance at the anode and the cathode was developed [29], but it has not been experimentally validated. Predictive models were built using neural-network-based adaptive neuro-fuzzy inference systems for the hydrogen generation rate and PEM electrolyzer system efficiency [27].
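The excerpt cites Butler-Volmer kinetics without giving the expression. As a hedged reminder, the standard Butler-Volmer relation for electrode current density is sketched below; the exchange current density, transfer coefficients, and temperature are illustrative values, not parameters from the cited models.

```python
import numpy as np

F = 96_485.0  # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def butler_volmer(eta, i0=1e-3, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Standard Butler-Volmer current density (A/cm^2) as a function of the
    activation overpotential eta (V). Parameter values are illustrative."""
    return i0 * (np.exp(alpha_a * F * eta / (R * T))
                 - np.exp(-alpha_c * F * eta / (R * T)))

print(butler_volmer(np.array([0.0, 0.05, 0.1])))
```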

Inference and Properties of Mixture Two Extreme Lower Bound Distributions

The bias and mean square errors of the estimates are calculated based on 10,000 Monte Carlo simulations, and the results are reported in Table 2 and Table 3. We see that in most of the cases considered, the mean square errors of the estimated parameters decrease as n increases.
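The mixture distribution itself is not given in this excerpt, so the sketch below uses a simple exponential-rate maximum-likelihood estimator as a stand-in, only to show how bias and mean square error are tallied over Monte Carlo replications.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

true_rate = 2.0
n = 50                 # sample size
replications = 10_000  # Monte Carlo replications

# MLE of an exponential rate, used only as a stand-in estimator.
estimates = np.empty(replications)
for r in range(replications):
    sample = rng.exponential(scale=1.0 / true_rate, size=n)
    estimates[r] = 1.0 / sample.mean()

bias = estimates.mean() - true_rate
mse = np.mean((estimates - true_rate) ** 2)
print(f"bias = {bias:.4f}, MSE = {mse:.4f}")
```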


Genetic Mapping by Bulk Segregant Analysis in Drosophila: Experimental Design and Simulation-Based Inference

ABSTRACT Identifying the genomic regions that underlie complex phenotypic variation is a key challenge in modern biology. Many approaches to quantitative trait locus mapping in animal and plant species suffer from limited power and genomic resolution. Here, I investigate whether bulk segregant analysis (BSA), which has been successfully applied for yeast, may have utility in the genomic era for trait mapping in Drosophila (and other organisms that can be experimentally bred in similar numbers). I perform simulations to investigate the statistical signal of a quantitative trait locus (QTL) in a wide range of BSA and introgression mapping (IM) experiments. BSA consistently provides more accurate mapping signals than IM (in addition to allowing the mapping of multiple traits from the same experimental population). The performance of BSA and IM is maximized by having multiple independent crosses, more generations of interbreeding, larger numbers of breeding individuals, and greater genotyping effort, but is less affected by the proportion of individuals selected for phenotypic extreme pools. I also introduce a prototype analysis method for simulation-based inference for BSA mapping (SIBSAM). This method identifies significant QTL and estimates their genomic confidence intervals and relative effect sizes. Importantly, it also tests whether overlapping peaks should be considered as two distinct QTL. This approach will facilitate improved trait mapping in Drosophila and other species for which hundreds or thousands of offspring (but not millions) can be studied.

Bayesian Inference of a Finite Mixture of Inverse Weibull Distributions with an Application to Doubly Censoring Data

In this article, we have considered Bayesian inference for the inverse Weibull mixture distribution based on doubly type II censored data. The prior belief of the model is represented by independent gamma and beta priors, and by inverse Levy and beta priors, on the scale and mixing proportion parameters. Numerical results of the simulation study presented in Tables 1-8 expose salient properties of the proposed Bayes estimators. The parameters of the mixture distributions have been over- or under-estimated in different cases: in general, larger parameter values have been overestimated and smaller values underestimated in the majority of cases. However, it is reassuring to observe that the estimated values converge to the true values and the posterior risks tend to decrease as the sample size increases, which indicates that the proposed estimators are consistent. Smaller (larger) values of the parameter representing one component of the mixture impose a positive (negative) impact on the estimation of the parameter representing the other component of the mixture distribution. Larger values of the mixing parameter (p1) impose a positive impact on the

Planning for LVC Simulation Experiments

The use of Live, Virtual and Constructive (LVC) simulations is increasingly being examined for potential analytical use, particularly in test and evaluation. In addition to system-focused tests, LVC simulations provide a mechanism for conducting joint mission testing and system-of-systems testing when fiscal and resource limitations prevent the accumulation of the density and diversity of assets required for these complex and comprehensive tests. LVC simulations consist of a set of entities that interact with each other within a situated environment (i.e., a world), each represented by a mixture of computer-based models, real people and real physical assets. The physical assets often consist of geographically dispersed test assets that are interconnected by persistent networks and augmented by virtual and constructive entities to create the joint test environment under evaluation. LVC experiments are generally not statistically designed, but they should be. Experimental design methods are discussed, followed by additional design considerations when planning experiments for LVC. Some useful experimental designs are proposed, and a case study is presented to illustrate the benefits of using statistical experimental design methods for LVC experiments. The case study covers only the planning portion of experimental design; the results will be presented in a subsequent paper.

The Effect of Simulation based Learning on Prospective Teachers' Inference Skills in Teaching Probability

Conceptual errors in the learning of probability concepts can affect important personal decisions in daily life [3]. Coutinho [5] and Batanero and Godino [2] have suggested that the development of ideas of probability should be based on three basic concepts: chance, randomness and the interpretation of probability. Batanero and Godino [2] have pointed to the need to emphasize the variability of small samples by comparing the results obtained by each student, and to create situations in which to observe the unpredictability of individual results in a random experiment. Students should become aware of the convergence phenomenon by considering the pooled results of the class and then comparing the reliability of small and large samples. In this respect, the use of computer resources in the classroom is an important tool for increasing the number of random samples available for examination. Various researchers have proposed the use of computers in probability training as a means to understand abstract or difficult concepts and improve students' skills [15, 10]. Batanero and Diaz [1] emphasize that students conduct simulations in computer courses to help them solve simple probability problems that would not be possible with physical experiments. Simulation, when combined with the use of technology, is the most appropriate strategy to focus better on the concepts and reduce the technical calculations [3].
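A minimal classroom-style sketch of the convergence phenomenon mentioned above is given below: it simulates a fair coin for several "students" and shows how much more the relative frequency of heads varies in small samples than in large ones. The sample sizes and number of students are illustrative assumptions, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Relative frequency of heads for 20 simulated "students", showing how
# small samples vary far more than large ones.
for n in (10, 100, 10_000):
    flips = rng.integers(0, 2, size=(20, n))   # 20 students, n flips each
    freqs = flips.mean(axis=1)
    print(f"n = {n:6d}: min = {freqs.min():.3f}, max = {freqs.max():.3f}")
```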

Simulation Based Inference for Dynamic Multinomial Choice Models

A finite order polynomial will in general provide only an approximation to the true future component. Hence, it is important to investigate the extent to which misspecification of the future component may affect inference for the model's structural parameters. Below we report the outcome of some Monte Carlo experiments that shed light on this issue. The experiments are conducted under both correctly and incorrectly specified future components. We find that the Geweke-Keane approach performs extremely well when F_H(·) is correctly specified, and still very well under a misspecified future component. In particular, we find that assuming the future component is a polynomial when it is actually generated by rational expectations leads to only “second order” difficulties in two senses. First, it has a small effect on inferences with regard to the structural parameters of the payoff functions. Second, the decision rules inferred from the data in the misspecified model are very close to the optimal rule, in the sense that agents using the suboptimal rule incur ‘small’ lifetime payoff losses.

Efficient deep neural network inference for embedded systems:A mixture of experts approach

This thesis seeks to offer an alternative approach to executing DNN models on embedded systems. The aim is to design a generalisable approach to DNN inference optimisation, making on-device inference feasible without incurring a penalty to model precision, even when compared to complex DNNs such as ResNet_v2_152. It is not always clear which DNN is best for the task at hand on embedded devices, so the suggested approach utilises multiple DNNs. Central to the approach is the design of an adaptive scheme to determine, at runtime, which of the available DNNs is the best fit for the input and evaluation criterion. The key insight is that the optimal model (the model able to give the correct output in the fastest time) depends on the input data and the evaluation criterion. In fact, as a by-product, utilising multiple DNN models can increase accuracy in some cases. In essence, for a simple input (an image taken under good lighting conditions with a contrasting background, or a short sentence with little punctuation) a simple, fast DNN model is sufficient, whereas a more complex input requires a more complex model. Similarly, if an accurate output with high confidence is required, a more sophisticated but slower DNN model needs to be employed; otherwise, a simple model provides satisfactory results. Given the diverse and evolving nature of user requirements, application workloads, and DNN models themselves, the best model selection strategy is likely to change over time. This ever-evolving nature makes automatic design of statistical machine learning models highly attractive: models can easily be updated to adapt to the changing application context, and a user simply needs to supply a set of candidate features.
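To make the runtime-selection idea concrete, here is a minimal sketch of a cheap selector choosing between candidate DNNs based on a single input feature. The model names, the `input_complexity` feature, and the 0.7 threshold are hypothetical placeholders, not the thesis's actual premodel.

```python
# A minimal sketch of adaptive model selection: a cheap "premodel" inspects
# a feature of the input and picks one of several candidate DNNs.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Candidate:
    name: str
    latency_ms: float
    run: Callable[[object], str]

def select_model(input_complexity: float, candidates: List[Candidate],
                 need_high_confidence: bool) -> Candidate:
    """Pick the cheapest candidate expected to handle the input.

    input_complexity is a stand-in feature (e.g. image contrast or sentence
    length); the rule below is illustrative, not the thesis's scheme.
    """
    ranked = sorted(candidates, key=lambda c: c.latency_ms)
    if need_high_confidence or input_complexity > 0.7:
        return ranked[-1]   # most sophisticated (slowest) model
    return ranked[0]        # fast model suffices for simple inputs

fast = Candidate("mobilenet_v2", 20.0, lambda x: "label_fast")
slow = Candidate("resnet_v2_152", 200.0, lambda x: "label_accurate")
chosen = select_model(input_complexity=0.3, candidates=[fast, slow],
                      need_high_confidence=False)
print("selected:", chosen.name)
```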

Design and Simulation of Gamma Spectrometry Experiments in the CROCUS Reactor

Abstract—Gamma rays in nuclear reactors, arising either from fission or decay processes, significantly contribute to the heating and dose of the reactor components. Zero power research reactors offer the possibility to measure gamma rays in a purely neutronic environment, allowing for validation experiments of computed spectra, dose estimates, reactor noise and prompt to delayed gamma ratios. This data then contributes to models, code validation and photoatomic nuclear data evaluation. In order to contribute to the aforementioned experimental data, gamma detection capabilities are being added to the CROCUS reactor facility. The CROCUS reactor is a two-zone, uranium-fueled, light water moderated facility operated by the Laboratory for Reactor Physics and Systems Behaviour (LRS) at the Swiss Federal Institute of Technology Lausanne (EPFL). With a maximum power of 100 W, it is a zero power reactor used for teaching and research, most recently for intrinsic and induced neutron noise studies. For future gamma detection applications in the CROCUS reactor, an array of four detectors, two large 5”x10” Bismuth Germanate (BGO) scintillators and two smaller Cerium Bromide (CeBr3) scintillators, was acquired. The BGO detectors are to

On the consistency of scale among experiments, theory, and simulation

Abstract. As a tool for addressing problems of scale, we consider an evolving approach known as the thermodynamically constrained averaging theory (TCAT), which has broad applicability to hydrology. We consider the case of modeling of two-fluid-phase flow in porous media, and we focus on issues of scale as they relate to various measures of pressure, capillary pressure, and state equations needed to produce solvable models. We apply TCAT to perform physics-based data assimilation to understand how the internal behavior influences the macroscale state of two-fluid porous medium systems. A microfluidic experimental method and a lattice Boltzmann simulation method are used to examine a key deficiency associated with standard approaches. In a hydrologic process such as evaporation, the water content will ultimately be reduced below the irreducible wetting-phase saturation determined from experiments. This is problematic since the derived closure relationships cannot predict the associated capillary pressures for these states. We demonstrate that the irreducible wetting-phase saturation is an artifact of the experimental design, caused by the fact that the boundary pressure difference does not approximate the true capillary pressure. Using averaging methods, we compute the true capillary pressure for fluid configurations at and below the irreducible wetting-phase saturation. Results of our analysis include a state function for the capillary pressure expressed as a function of fluid saturation and interfacial area.

Perfect posterior simulation for mixture and hidden Markov models

Our approach will be to use and extend a collection of techniques for perfect simulation. Central to all of our methodology will be the use of the read-once coupling from the past (roCFTP) algorithm for perfect simulation; see [9]. At the core of this algorithm is the construction of uniformly ergodic blocks of Markov chain update rules. The main challenge is to construct these blocks of updates so that they have a significant coalescence probability, that is, the probability that a block maps the entire state space into just one state should be non-negligible.
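The sketch below is a toy illustration of the coalescence check described above, not the roCFTP algorithm of the excerpt: for a small finite state space, a block of common-random-number updates is applied from every starting state, and the block "coalesces" if all chains end in the same state. The chain (a reflected random walk), the block length, and the number of trials are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def random_walk_update(state, u, n_states=5):
    """One reflected random-walk update driven by a shared uniform u."""
    step = 1 if u < 0.5 else -1
    return min(max(state + step, 0), n_states - 1)

def block_coalesces(block_randomness, n_states=5):
    """Apply the same block of updates from every start; check coalescence."""
    states = list(range(n_states))
    for u in block_randomness:
        states = [random_walk_update(s, u, n_states) for s in states]
    return len(set(states)) == 1

block_length = 20
trials = 1_000
hits = sum(block_coalesces(rng.random(block_length)) for _ in range(trials))
print(f"estimated coalescence probability: {hits / trials:.3f}")
```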


Slope Optimal Designs for Third Degree Kronecker Model Mixture Experiments

From Table 2, at (1, 0, 0, 0) and (½, ½, 0, 0) there was no difference between the two designs in their D-, E- and A-efficiency; however, UWSCD was 8.42% and 10.09% more T-efficient than WSCD at the respective blends. For the ternary mixture (0.333, 0.333, 0.333, 0), WSCD was 2.13%, 20% and 26.32% more D-, E- and A-efficient than UWSCD, respectively, but 9.33% less T-efficient. At the point (¼, ¼, ¼, ¼), UWSCD was 8.33%, 12.5%, 12.9% and 6.66% more D-, E-, A- and T-efficient, respectively, than WSCD. Generally, the D-, E-, A- and T-optimal values for Uniformly Weighted Simplex Centroid (UWSC) designs were better than those of Weighted Simplex Centroid (WSC) designs for four ingredients.

Generic machine learning inference on heterogenous treatment effects in randomized experiments

Abstract. We propose strategies to estimate and make inference on key features of heterogeneous effects in randomized experiments. These key features include best linear predictors of the effects using machine learning proxies, average effects sorted by impact groups, and average characteristics of the most and least impacted units. The approach is valid in high dimensional settings, where the effects are proxied by machine learning methods. We post-process these proxies into estimates of the key features. Our approach is generic; it can be used in conjunction with penalized methods, deep and shallow neural networks, canonical and new random forests, boosted trees, and ensemble methods. Our approach is agnostic and does not make unrealistic or hard-to-check assumptions; we do not require conditions for consistency of the ML methods. Estimation and inference rely on repeated data splitting to avoid overfitting and achieve validity. For inference, we take medians of p-values and medians of confidence intervals resulting from many different data splits, and then adjust their nominal level to guarantee uniform validity. This variational inference method is shown to be uniformly valid and quantifies the uncertainty coming from both parameter estimation and data splitting. The inference method could be of substantial independent interest in many machine learning applications. An empirical application to the impact of micro-credit on economic development illustrates the use of the approach in randomized experiments. An additional application to the impact of gender discrimination on wages illustrates the potential use of the approach in observational studies, where machine learning methods can be used to condition flexibly on very high-dimensional controls.
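The repeated-splitting-and-medians step can be sketched as follows. The per-split estimator here is a simple difference in means on a random half of a toy randomized experiment, standing in for the paper's ML-proxy post-processing; the final nominal-level adjustment (e.g. doubling the median p-value) is only indicated in a comment, since the exact correction is part of the paper rather than this excerpt.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Toy randomized experiment with a constant treatment effect of 0.5.
n = 800
treatment = rng.integers(0, 2, size=n)
outcome = 0.5 * treatment + rng.normal(size=n)

def one_split(seed):
    """Estimate the average effect on a random half of the data
    (a stand-in for the paper's per-split estimation step)."""
    idx = np.random.default_rng(seed).permutation(n)[: n // 2]
    t, y = treatment[idx], outcome[idx]
    est = y[t == 1].mean() - y[t == 0].mean()
    se = np.sqrt(y[t == 1].var(ddof=1) / (t == 1).sum()
                 + y[t == 0].var(ddof=1) / (t == 0).sum())
    p = 2 * (1 - stats.norm.cdf(abs(est / se)))
    return est, p, (est - 1.96 * se, est + 1.96 * se)

results = [one_split(s) for s in range(100)]
p_values = [r[1] for r in results]
lowers = [r[2][0] for r in results]
uppers = [r[2][1] for r in results]

# Aggregate across splits: medians of p-values and of CI endpoints.
# (The nominal level would then be adjusted, e.g. the median p-value doubled.)
print("median p-value:", np.median(p_values))
print("median CI:", (np.median(lowers), np.median(uppers)))
```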

Simulation and Experiments of a W-Band Extended Interaction Oscillator Based on a Pseudospark-Sourced Electron Beam

To reduce the complexity of the simulation, a stable beam current and a time-varying beam voltage, as shown in Fig. 9(a), were used to simulate the beam-wave interaction in the EIO circuit [14]. To reduce the computation time, the self-focusing of the beam in the ion channel was not included in the simulation, nor was the plasma generated by the pseudospark discharge. Instead, an axial magnetic field of 0.4 T was applied to guide the electron beam. Typically, the full discharge process takes ∼30-100 ns as the voltage changes from the applied 34 kV to 0 kV. The voltage change rate was ∼1 kV/ns and can be varied by adjusting the capacitance of the external capacitor C_ext used to maintain the discharge. When the

The Evolution of Trust and Reputation: Results from Simulation Experiments

Our scenario experiments reveal that trust and cooperation can evolve under two conditions: some minimal fraction of buyers must make use of the sellers' reputation in their buying strategies, and trustworthy sellers must be given opportunities to gain a good reputation through their cooperative behavior. Buyer strategies that account for both requirements turn out to be successful and conducive to a market based on trust and cooperation. Nonetheless, our artificial market is not immune to deceitful sellers. The results from round-robin tournaments with many different buyer and seller strategies show that seller strategies which first build up a good reputation and subsequently exploit trustful buyers are successful. Even buyer strategies which cooperate only with sellers of high reputation are no remedy for deceitful sellers, since other, less restrictive buyer strategies are more successful. Ultimately, however, buyer strategies which apply the expected value principle succeed in the buyer population.

Randomized parcellation based inference

conducted with each of those parcellations samples correctly the set of regions that display some activation for the effect considered. One way to achieve this is to take bootstrap samples of subjects and apply Ward's clustering algorithm to their contrast maps, to build brain parcellations that best summarize the data subsamples, i.e. so that the parcel-level mean signal summarizes the signal within each parcel, in each subject. If enough subjects are used, all the parcellations offer a good representation of the whole dataset. It is important that the bootstrap scheme generates parcellations with enough entropy (Varoquaux et al., 2012). Spatial models try to address the problem of imperfect voxel-to-voxel correspondence after coregistration of the subjects in the same reference space. Our approach is clearly related to anisotropic smoothing (Sol et al., 2001), in the sense that the obtained parcels are not spherical and, in the aggregation of the signals of voxels in a given parcel, certain directions are preferred. Unlike smoothing or spatial modeling applied as a preprocessing, our statistical inference embeds the spatial modeling in the analysis and decreases the number of tests and their dependencies. In addition to the expected increase in sensitivity, the randomization of the parcellations ensures better reproducibility of the results, unlike inference on one fixed parcellation. Last, the C_t(v, P) statistic is reliable in the sense that it does not depend on side effects such as the parcel size. This is formally checked in Appendix B.
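A minimal sketch of the bootstrap-plus-Ward parcellation step is given below, using scikit-learn's agglomerative clustering with Ward linkage on synthetic stand-in contrast maps. The data shapes and number of parcels are illustrative, and the spatial connectivity constraint used on real brain images is omitted for brevity.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(seed=0)

# Toy stand-in for subject contrast maps: n_subjects x n_voxels.
n_subjects, n_voxels, n_parcels = 30, 200, 10
contrast_maps = rng.normal(size=(n_subjects, n_voxels))

parcellations = []
for b in range(5):  # a few bootstrap parcellations, for illustration
    # Bootstrap subjects, then cluster voxels (columns) with Ward linkage.
    boot = rng.integers(0, n_subjects, size=n_subjects)
    voxel_features = contrast_maps[boot].T          # n_voxels x n_subjects
    labels = AgglomerativeClustering(
        n_clusters=n_parcels, linkage="ward").fit_predict(voxel_features)
    parcellations.append(labels)

# Parcel-level mean signal for each subject under the first parcellation.
labels = parcellations[0]
parcel_means = np.stack(
    [contrast_maps[:, labels == k].mean(axis=1) for k in range(n_parcels)],
    axis=1)
print(parcel_means.shape)  # (n_subjects, n_parcels)
```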

Causal inference based on counterfactuals

Especially in the fields of psychology, social sciences and economics, structural equation models (SEMs) with latent variables are frequently used for causal modelling. These models consist of (a) parameters for the relations among the latent variables, (b) parameters for the relations among latent and observed variables and (c) distributional parameters for the error terms within the equations. Pearl [30] has shown that certain nonparametric SEMs are logically equivalent to counterfactual models and has demonstrated how they can be regarded as a "language" for interventions in a system. Furthermore, these models are useful to structure and reduce variance, for example to reduce measurement error when several items on a questionnaire are assumed to represent a common dimension. There are, however, several practical problems with the use of SEMs. First, in an under-determined system of equations, several assumptions are necessary to identify the parameters (i.e. to make the estimates unique). In psychological applications, the assumptions tend to be justified only partially [53] and models with alternative assumptions are often not considered [54]. The results, on the other hand, may be very sensitive to these assumptions [55], and currently there is no way to model uncertainty in these assumptions. Besides, the coefficients from these models are sometimes not interpretable as measures of conditional dependencies (i.e. regression coefficients), for instance if there are loops in a model [56]. Finally, the meaning of the latent variables sometimes remains obscure, and, in economic applications, results from certain structural equation models have been found to fail to recur in experiments [57].
