As a rule, Monte Carlo methods are not competitive with classical numerical methods for solving systems of linear equations (some special cases where Monte Carlo methods can be…

[1-4] introduce basic Monte Carlo theory and the Monte Carlo method from two perspectives and give the corresponding state transition equations. The direct simulation method is component-based: it completes the state transition of the system by sampling the transition time (lifetime or repair time) of each component. The indirect simulation method is system-based: it uses the sum of all component transition rates as the transition rate of the whole system, and discrete sampling then determines which component transitions and into which state. Both methods are applicable in normal circumstances; however, the direct simulation method is more convenient when there is correlation between components or when the failure rate changes with the system state.
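As a rough illustration of the direct (component-based) method described above, the sketch below samples each component's failure and repair times independently and checks the system state at a mission time. The two-component series system, its rates, and the mission time are illustrative assumptions, and treating components independently sets aside the correlated case the excerpt mentions.

```python
# Direct simulation sketch: sample each component's up/down history by
# drawing exponential lifetimes and repair times, then check whether
# every component of a series system is up at t_mission.
# All rates and the mission time are illustrative, not from the paper.
import random

def direct_simulation(fail_rates, repair_rates, t_mission, n_runs=20000, seed=0):
    """Estimate P(all components up at t_mission) for a series system."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_runs):
        all_up = True
        for lam, mu in zip(fail_rates, repair_rates):
            t, up = 0.0, True
            while True:                  # sample this component's history
                rate = lam if up else mu  # next event: failure or repair
                t += rng.expovariate(rate)
                if t > t_mission:
                    break
                up = not up
            if not up:
                all_up = False
        ok += all_up
    return ok / n_runs

p = direct_simulation([0.1, 0.2], [1.0, 1.0], t_mission=5.0)
print(round(p, 2))  # near the steady-state availability (1/1.1)*(1/1.2) ≈ 0.76
```

For these rates the transient decays quickly, so the estimate is close to the product of the components' steady-state availabilities.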

The absence of a classical econometric forecasting model that can be fully validated, owing to the lack of a comprehensive database covering the acknowledged minimum number of terms (e.g. the Durbin–Watson test, which requires a series of at least 15 terms, is relevant in this respect), required the authors to build and use another solution: simulation with the Monte Carlo method. Practical needs may require an estimate, forecast or decision in situations of significant uncertainty, which, according to several opinions and reviews in the scientific literature of the last two decades (Jackel, 2002; Glasserman, 2004; Robert & Casella, 2004; Del Moral, Doucet, & Jasra, 2006; Mun, 2006; Creal, 2012), leads to the implementation of

of simulation using the Monte Carlo method. The path simulation is carried out according to the constraint conditions of a callable convertible discount bond. The theoretical value is obtained from a large number of experiments. As the convertible bond's constraints increase, the paths become much more complex. The main purpose of this article is to provide some ideas for the pricing of China's convertible bonds.
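A heavily simplified sketch of this kind of path-based pricing is shown below: stock paths are simulated under geometric Brownian motion, an issuer-call constraint is checked along each path, and discounted payoffs are averaged. The call rule, the payoff at call, and every parameter value are illustrative assumptions, not the article's actual constraint set.

```python
# Path simulation for a callable convertible discount bond (sketch).
# Assumed rules: if the stock hits the call trigger, the holder receives
# max(call price, conversion value); otherwise, at maturity the holder
# receives max(face value, conversion value). All numbers are illustrative.
import math
import random

def mc_convertible(S0=100, face=100, conv_ratio=1.0, call_trigger=130,
                   call_price=105, r=0.03, sigma=0.25, T=1.0,
                   n_steps=50, n_paths=20000, seed=0):
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        S, t, payoff = S0, 0.0, None
        for _ in range(n_steps):
            z = rng.gauss(0.0, 1.0)
            S *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
            t += dt
            if S >= call_trigger:  # issuer calls; holder converts if better
                payoff = max(call_price, conv_ratio * S) * math.exp(-r * t)
                break
        if payoff is None:         # bond held to maturity
            payoff = max(face, conv_ratio * S) * math.exp(-r * T)
        total += payoff
    return total / n_paths

print(round(mc_convertible(), 2))
```

Adding more constraints (put provisions, reset clauses, conversion windows) only changes the per-step checks, which is why the path framework scales to the more complex cases the excerpt mentions.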

regions [16]. This model has a large overall area that gives it very distinct aggregation properties. It is reasonable that real asphaltene fractions might contain varying degrees of both types of molecules or a significant amount of intermediate structures. In this article, the aggregation behavior of colloidal asphaltene-resin-solvent systems is described and the size distribution of asphaltene aggregates is predicted using Monte Carlo simulation. Asphaltene molecules together with resin compounds are considered in crude oil. The basis of this method is minimization of the total molecular energies in the crude oil. Two different potential functions are used to study the intermolecular interactions, and the effect of the oil medium on these interactions is accounted for by introducing parameters into the applied potential functions.

In this research, we successfully apply the uncoupled winding number formulation of path integral Monte Carlo theory to the torsional degrees of freedom in the molecules ethane, n-butane, n-octane, and enkephalin. This torsional PIMC technique offers a significant reduction in computational cost for systems in which vibrational degrees of freedom may be safely neglected. Employment of the PIMC method is simplified by the observation that contributions to calculated properties will be negligible for winding numbers greater than zero. For a simple ethane model potential, the PIMC result recovers the exact internal energy value obtained with a variational technique. For n-butane, n-octane, and enkephalin, the PIMC converged to the quantum mechanical limit with only two or three Trotter beads. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the PIMC technique.

3.1 The Generation of True Random Numbers
The Monte Carlo method solves physical and mathematical problems by repeated statistical experiments. When a problem is addressed with the Monte Carlo method, the solution is often constructed as the mathematical expectation of a certain random variable. This random variable is derived from a hypothetical experiment carried out on a computer, and the arithmetic mean of its sampled values is used as an approximate solution of the problem. It is worth noting that Monte Carlo simulation places high requirements on random numbers: some pseudo-random numbers may introduce errors into the entire simulation and its predicted results. Scholars at home and abroad generally employ Matlab, Excel and other software to generate random numbers; these are pseudo-random numbers and will cause inaccuracy in some predicted results. Therefore, we use the mixed congruential method to generate random numbers and carry out randomness tests, to get true random numbers.
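The mixed congruential method mentioned above can be sketched in a few lines: it iterates x_{n+1} = (a·x_n + c) mod m and scales to [0, 1). The modulus, multiplier, and increment below are the well-known Numerical Recipes parameters, chosen for illustration; the paper's own parameters are not stated in this excerpt.

```python
# Mixed (linear) congruential generator: x_{n+1} = (a*x_n + c) mod m.
# Parameters here are the Numerical Recipes LCG constants (illustrative).

def mixed_congruential(seed, n, a=1664525, c=1013904223, m=2**32):
    """Generate n uniform(0,1) variates with a mixed congruential generator."""
    x = seed
    out = []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x / m)
    return out

# Crude randomness check: the sample mean of U(0,1) draws should be near 0.5.
u = mixed_congruential(seed=42, n=10000)
print(abs(sum(u) / len(u) - 0.5) < 0.02)  # True
```

A full randomness test suite would also check uniformity (e.g. a chi-square test over bins) and serial correlation, which is presumably what the authors' "randomness test" involves.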

A simple example clarifies the nature of the Monte Carlo method in its simplest, unsophisticated form as a simple analog experiment, which can be carried out in close relationship to the physics involved. Suppose one is asked to determine the value of π. Several approaches can be adopted for this purpose, simpler than the Buffon's-needle approach used by Laplace.
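One such simpler approach (not Buffon's needle) is the standard dartboard experiment: sample points uniformly in the unit square and count the fraction that falls inside the quarter circle, which converges to π/4.

```python
# Monte Carlo estimate of pi: the fraction of uniform points in the
# unit square with x^2 + y^2 <= 1 estimates the quarter-circle area pi/4.
import random

def estimate_pi(n, seed=0):
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n)
                 if rng.random()**2 + rng.random()**2 <= 1.0)
    return 4.0 * inside / n

print(abs(estimate_pi(100_000) - 3.14159) < 0.05)  # True
```

The error shrinks like 1/sqrt(n), so each extra digit of accuracy costs roughly a hundredfold more samples; this is the classic illustration of Monte Carlo convergence behaviour.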

Abstract. One of the issues raised in cloud computing is the datacenter locating problem, and one of the effective factors in designing data center locations is the amount of data volume and the referrals to it. This depends on the number of customers or users who are going to use that data center, which is a probabilistic quantity. In this paper, bandwidth simulation is described by considering the bandwidth system as a queuing system and simulating it with the Monte Carlo method. We explain how to simulate bandwidth consumption in different static and dynamic simulation states for a real computer system, and we show that the bandwidth required at an endpoint in cloud computing can be calculated with different Gamma distribution parameters.
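A hedged sketch of the idea follows: per-user bandwidth demand is modelled as Gamma-distributed and the total endpoint demand is estimated by Monte Carlo, provisioning at a high percentile. The user count, shape/scale values, and the 95th-percentile rule are illustrative assumptions, not the paper's parameters.

```python
# Monte Carlo bandwidth sizing sketch: sum Gamma-distributed per-user
# demands per trial, then take a high percentile over trials as the
# provisioning estimate. All parameter values are illustrative.
import random

def simulate_bandwidth(n_users, shape, scale, n_trials=10000, seed=1):
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        totals.append(sum(rng.gammavariate(shape, scale)
                          for _ in range(n_users)))
    totals.sort()
    return totals[int(0.95 * n_trials)]  # 95th-percentile provisioning estimate

print(round(simulate_bandwidth(n_users=50, shape=2.0, scale=1.5), 1))
# ≈ 175 for these parameters (mean demand 150, sd 15)
```

Because the sum of independent Gamma(k, θ) variables with a common scale is again Gamma-distributed, the simulated percentile can be cross-checked analytically here; the Monte Carlo approach earns its keep once demands become dependent or time-varying.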

Since we do not know the law at time T of the killed diffusion, the explicit computation of ν is clearly not possible, and we resort to Monte Carlo methods to estimate ν. In this simulation study, we investigate the performance of the estimator of ν produced by the exact Monte Carlo method (hereafter E1). The plots in Figure 1 compare E1 with the estimators based on the continuous Euler scheme (E2) and on the discrete Euler scheme (E3). In particular, given a sufficiently large Monte Carlo sample (10^6), for different choices of the starting point y0 and the barrier values a and b, we have computed the estimates of E1 (dotted line) and the estimates produced by E2 and E3 for different discretization intervals. We have then plotted the values of E2 (cross) and E3 (circle) versus the number of discretization intervals. As expected, the values of E2 and E3 converge to E1 as the number of discretization intervals increases. Indeed, it was shown by Gobet (2000) that, for killed diffusions, the weak approximation error of Euler schemes decreases to 0 as the number of discretization intervals increases. When the Monte Carlo sample size is large enough, the Monte Carlo error is negligible and the estimated values are affected mainly by the discretization error. In this context the distance between the values of E2 and E3 and the dotted line is a good representation of the (weak) discretization error affecting the Euler schemes, and their convergence to the dotted line reflects the theoretical convergence of the corresponding expected values. Furthermore, in line with Gobet's conclusions, we notice that the estimates based on the continuous Euler scheme show better convergence than the estimates based on the discrete Euler scheme.
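To make the discrete Euler scheme (E3) concrete, the sketch below simulates a diffusion on a time grid and kills each path the first time a grid point leaves the interval (a, b). The drift, volatility, and all parameter values are illustrative assumptions; checking only grid points is exactly why E3 carries a larger discretization bias than the continuous scheme E2, which also accounts for excursions between grid points.

```python
# Discrete Euler scheme for a killed diffusion (sketch): advance the
# Euler step, kill the path if a grid point falls outside (a, b),
# and estimate the survival probability up to T.
import math
import random

def discrete_euler_survival(y0, a, b, T=1.0, n_steps=100,
                            n_paths=20000, mu=0.0, sigma=1.0, seed=0):
    """Estimate P(path stays in (a, b) up to T), monitored at grid points only."""
    rng = random.Random(seed)
    dt = T / n_steps
    survived = 0
    for _ in range(n_paths):
        y = y0
        alive = True
        for _ in range(n_steps):
            y += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if not (a < y < b):  # killed at this grid point
                alive = False
                break
        survived += alive
    return survived / n_paths

p = discrete_euler_survival(y0=0.0, a=-2.0, b=2.0)
print(round(p, 2))  # grid monitoring slightly overestimates true survival
```

Increasing `n_steps` reduces the discretization bias, mirroring the convergence of E3 toward E1 described above.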

In general, mechanical reliability refers to the ability of a mechanical product to perform its specified function over the required service period, and it can be expressed as a measurable value. The two common methods for measuring reliability are the moments method and Monte Carlo simulation. Compared with the moments method, Monte Carlo simulation has the advantages of high universality and high precision [11]. However, the Monte Carlo method relies on a large number of simulations and calculations, so it is not well suited to predicting the reliability of complex numerical models such as finite element models. As the case in this study does not involve a complicated numerical model, and the numerical-control cutting width is an explicit expression, Monte Carlo simulation is an appropriate tool here. Therefore, the chatter stability of the CNC turning operation was predicted and compared in terms of cutting width. Thus, the reliability of the turning system stability can be described as a multidimensional integral, given by:

This article establishes new results on (i) the structure of optimal portfolios, (ii) the behavior of hedging terms and (iii) the relevant numerical simulation methods. Our approach rests on obtaining explicit formulas for the Malliavin derivatives of diffusion processes, formulas that simplify their numerical simulation and ease the computation of the hedging components of optimal portfolios. One of our procedures uses a transformation of the underlying processes that eliminates the stochastic integrals from the representation of the Malliavin derivatives and ensures the existence of an exact weak approximation. This transformation thus improves the performance of Monte Carlo methods in the numerical implementation of portfolio policies derived by probabilistic methods. Our approach is flexible and can be used even when the dimension of the underlying state-variable space is large. The method is applied to bivariate and trivariate models in which uncertainty is described by diffusion processes for the market price of risk, the interest rate and the other relevant factors. After calibrating the model to the data, we examine the behavior of the optimal portfolio and of the hedging components with respect to parameters such as risk aversion, the investment horizon, the interest rate and the market price of risk. We demonstrate the importance of hedging demands. Risk aversion and the investment horizon emerge as determining factors with a substantial impact on the size of the optimal portfolio and on its economic properties.

5.2 Future work
We now suggest possible improvements to the current system. In the current system, the simulation performed for each input job is independent; i.e., if two users submit jobs with the same input parameters, the system cannot recognize that the two jobs are alike and performs the simulation twice, once for each job. To minimize such unnecessary use of computing resources and to make the system smarter, data mining and pattern recognition techniques can be used to identify and categorize jobs. Using these techniques, we can add the capability to check whether the input parameters of a new job exactly match those of an existing job (already executed or in the process of execution) and, if so, provide the results without actually performing the simulation for the new job. In the other case, where the input parameters of a new job partially match those of existing jobs, the system can take the available SNR-point data and simulate only the SNR points for which data is not readily available. Only when both cases fail need the simulation for the job be performed, thus saving computing resources.
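The caching behaviour described above can be sketched as a result store keyed by input parameters and SNR point: exact matches reuse everything, partial matches simulate only the missing points. The `run_simulation` function and the parameter/result shapes are hypothetical placeholders, not the actual system's API.

```python
# Sketch of simulation-job caching: exact input matches reuse stored
# results; partial matches simulate only the missing SNR points.
# run_simulation and the key format are hypothetical placeholders.

_results_cache = {}  # (params, snr) -> result

def run_simulation(params, snr):
    # Placeholder for the real (expensive) simulation run.
    return hash((params, snr)) % 1000

def get_results(params, snr_points):
    results, simulated = {}, 0
    for snr in snr_points:
        key = (params, snr)
        if key not in _results_cache:      # simulate only missing points
            _results_cache[key] = run_simulation(params, snr)
            simulated += 1
        results[snr] = _results_cache[key]
    return results, simulated

# First job simulates all points; a partially matching job simulates one.
_, n1 = get_results(("qpsk", 0.5), [0, 5, 10])
_, n2 = get_results(("qpsk", 0.5), [0, 5, 10, 15])  # only SNR 15 is new
print(n1, n2)  # 3 1
```

A production version would also need to handle jobs still in flight (so two concurrent identical jobs share one simulation), which is the "in the process of execution" case mentioned above.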

plan 2008; Jia 2009). Monte Carlo simulation generates a sample by drawing from a hypothesised analytical distribution. One of its biggest advantages is that successive replications generate a collection of samples with the same distributional properties as the original data (Everaert 2011; Gitman 2009). There are some disadvantages too: results depend on whether the distributional assumption is correct, the rate of convergence is slow, and the method is time-consuming and computationally intensive. On the other hand, Monte Carlo simulation is attractive relative to other numerical techniques because it is flexible, easy to implement and modify, and its error convergence rate is independent of the dimension of the problem (Charnes 2000). Since the convergence rate of Monte Carlo methods is generally independent of the number of state variables, they become viable as the underlying models (asset prices and volatilities, interest rates) and the derivative contracts themselves (defined on path-dependent functions or multiple assets) become more complicated (Fu et al. 2001; Jia 2009). A key specification in Monte Carlo simulations is the probability distributions of the various sources of risk. The implications of different investment policy decisions can be assessed through simulated time. In addition, Monte Carlo simulation is widely used to develop estimates of Value at Risk (DeFusco et al. 2001). This methodology simulates the profit and loss performance of the portfolio over a specified horizon many times. Boyle (1977) was the first to propose a Monte Carlo simulation approach for European option valuation. The method is based on the idea that simulating price trajectories can approximate the probability distributions of terminal asset values. Option cash flows are computed for each simulation run and then averaged.
The discounted average cash flow, using the risk-free interest rate, represents a point estimate of the option value.
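Boyle's approach for a European call can be sketched in a few lines: simulate terminal asset prices under risk-neutral geometric Brownian motion, average the payoffs, and discount at the risk-free rate. The parameter values below are illustrative.

```python
# Boyle-style Monte Carlo valuation of a European call: average the
# discounted payoffs max(S_T - K, 0) over simulated terminal prices.
import math
import random

def mc_european_call(S0, K, r, sigma, T, n_paths=100000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n_paths  # discounted average cash flow

price = mc_european_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(round(price, 2))  # near the Black-Scholes value of about 10.45
```

Because the Black-Scholes formula gives a closed-form answer for this contract, the European call is mainly a validation case; the same simulation loop then extends to the path-dependent and multi-asset contracts where no closed form exists.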

bilayer arrangement can be efficiently simulated. Our starting structure is a crystalline-like DPPC bilayer with straight fatty acyl chains and identical conformations for the 64 lipid molecules that comprise our system (see Section 2.2 and Figure 5A). We wanted to determine whether our local-move MC technique is able to produce conformational moves large enough to create a structural transition from a crystalline to a fluid-like state in the bilayer. As shown in Figure 5B, after only 10,000 MC steps the high molecular order typical of the crystal-like structure is lost and the acyl chains of individual PC molecules show large conformational differences. Individual molecules are tightly packed because the box size in the NPT-ensemble simulation is rapidly adjusted (see also the accompanying video sequence). After 20,000 MC cycles, the fatty acyl chains become more disordered (Figure 5C), and after 40,000 MC steps the acyl chains of the phospholipids show large structural variation compared with the starting configuration. These changes indicate that the bilayer system made a transition to a fluid lipid bilayer. The simulated membrane model system is driven into a state of slight undulations, as suggested by the wave-like appearance of the head-group regions (Figure 5D). This mesoscopic organisation has previously been described for long MD simulations of fluid DPPC membranes [25]. However, due to the simple solvent representation used in our MC simulation, we cannot rule out that these phenomena are caused, or at least influenced, by the simple solvent description. Starting from the crystalline-like ordered structure shown in Figure 5A, we next determined whether our MC algorithm leads to equilibration of the DPPC bilayer in terms of system enthalpy. As shown in Figure 5E, the system enthalpy drops to the equilibrium value in less than 10,000 MC steps, which are simulated in about one day on a Pentium PC.
Thus, no energy minimization of the bilayer is required, as is often performed prior to extensive MD simulations (see [114] as an example). A plateau value around −250 kcal/mol is obtained, which remains stable during the simulation run. The PDF of the system enthalpy, p_k(H), is well approximated by a Gaussian function with mean −256.7 and SD = 54.1 kcal/mol (Figure 5F) [112]. Using again the relation between the fluctuation in energy (or enthalpy, as the membrane simulation was done in the constant-NPT ensemble; see Section 2.2 above) and the heat capacity, we can derive a value of c_p = 821.63
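The fluctuation relation used in that last step is c_p = Var(H) / (k_B T²) for the NPT ensemble. The sketch below applies it to synthetic Gaussian enthalpy samples mimicking the reported distribution; the temperature (300 K is assumed here, as the excerpt does not state it) and the samples themselves are illustrative, so the resulting number is not expected to reproduce the paper's c_p value.

```python
# Heat capacity from enthalpy fluctuations: c_p = Var(H) / (kB * T^2).
# The 300 K temperature and the synthetic samples are assumptions.
import random

KB_KCAL = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def heat_capacity(enthalpies, temperature):
    n = len(enthalpies)
    mean = sum(enthalpies) / n
    var = sum((h - mean) ** 2 for h in enthalpies) / n
    return var / (KB_KCAL * temperature ** 2)

# Synthetic Gaussian enthalpies with mean -256.7 and SD 54.1 kcal/mol,
# mimicking the distribution reported for Figure 5F.
rng = random.Random(0)
H = [rng.gauss(-256.7, 54.1) for _ in range(50000)]
print(round(heat_capacity(H, 300.0), 1))
# ≈ 54.1**2 / (KB_KCAL * 300**2) ≈ 16.4 kcal/(mol*K) under these assumptions
```

The point of the sketch is only the mechanics: once the enthalpy trace is Gaussian and stationary, a single variance estimate converts directly into a heat capacity.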

Chapter 3: Method
Data Generation
Two-level multiple-group multivariate normal data were generated in Mplus 7.1 (Muthén & Muthén, 1998-2011). A grouping variable was generated at level-two to represent treatment versus control classrooms. The individual-level and cluster-level factor structures were identical. That is, invariance was assumed between level-one and level-two, regardless of the grouping variable. The two groups had identical population parameters except for the factor loading parameter of noninvariance. The current study generated data for three separate models, referred to as Replication Study, Extension Study 1, and Extension Study 2 (see Figures 1 – 3).

Figure 5—Units vs Time (Raw Mode)
Line "A" in Figure 5 and Figure 6, "Units vs Time (Regression Mode)," represents the actual data to be forecasted. In both graphs, all the forecasts start at time 0 and end at time 125. The annual growth and annualized volatility parameters of Line "A" are used to generate the other forecast lines. While the other forecasts seem to mimic the characteristics of Line "A," their margin of error is large: nine of the lines each explain less than 10 percent of the data represented by Line "A," whereas Line "R1" explains more than 80 percent. This line can be considered a regression by Monte Carlo.

between model categories of methodological perspectives. The first category is bottom-up sector analysis, applicable where microscopic data are available (Johansson, 1995; Bellasio et al., 2007; Junevičius et al., 2011; Praveen and Arasan, 2013; Hilmola, 2013; Domanovszky, 2014; Meszaros and Torok, 2014). This method can also be used to estimate emissions based on passenger travel behaviour in city areas (He et al., 2013). The second method is top-down analysis. The transport sector's CO2 emissions