Section 3 of Chapter 2, and the curves presented in Figure 3.5 are the results of superimposing many segments of the random walk trajectory. As discussed in Chapter 2, estimates of the expectation values of the potential and kinetic energy can be obtained from the asymptotic behaviour of the curves presented in Figure 3.5. The values of these quantities are ⟨V⟩ = 2370 ± 50 K/molecule and ⟨T⟩ = 3400 ± 100 K/molecule, giving the total ground-state energy of the water dimer as ⟨E0⟩ = 5770 ± 100 K/molecule. To within the statistical fluctuations inherent in these calculations, this agrees with the average value of Vref, ⟨Vref⟩ = 5730 ± 20 K/molecule, obtained during the run. As discussed earlier, this agreement provides a self-consistent check of the quantum Monte Carlo method and indicates that the virial theorem is satisfied. The eigenvalue estimate calculated from the average reference energy has less statistical uncertainty than the value obtained from the sum of the potential and kinetic energies, since its evaluation does not rely on the ψ₀² generation procedure.
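The role of the running reference energy as a ground-state estimator can be sketched with a minimal diffusion Monte Carlo example. This is not the water-dimer calculation itself; it uses a 1-D harmonic oscillator (V = x²/2, exact ground-state energy 0.5 in natural units), and the population-control gain of 0.1 is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_target = 0.01, 500
x = rng.normal(size=n_target)        # initial walker positions
E_ref = 0.5                          # running reference energy
E_hist = []
for step in range(4000):
    x = x + rng.normal(scale=np.sqrt(dt), size=x.size)   # diffusion step
    w = np.exp(-dt * (0.5 * x**2 - E_ref))               # branching weight, V = x^2/2
    copies = (w + rng.random(x.size)).astype(int)        # stochastic rounding to integer copies
    x = np.repeat(x, copies)                             # birth/death of walkers
    E_ref += 0.1 * np.log(n_target / max(x.size, 1))     # population-control feedback
    if step >= 1000:                                     # discard equilibration
        E_hist.append(E_ref)
E0 = np.mean(E_hist)                 # growth estimator of the ground-state energy
```

Averaging E_ref over the run gives the eigenvalue estimate directly, without any separate evaluation of ⟨V⟩ and ⟨T⟩, which is why its statistical uncertainty is smaller.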

There are two main reasons for running this package on the computational Grid: (i) quantum problems are very computationally intensive; (ii) the inherently parallel nature of Monte Carlo applications makes efficient use of Grid resources. Grid (distributed) Monte Carlo applications require that the underlying random number streams in each subtask be statistically independent. The efficient application of quasi-Monte Carlo algorithms entails additional difficulties due to the possibility of job failures and the inhomogeneous nature of the Grid resource. In this paper we study the quasi-random approach in SALUTE and the performance of the corresponding algorithms on the grid, using the scrambled Halton, Sobol and Niederreiter sequences. A large number of tests have been performed on the EGEE and SEEGRID grid infrastructures using a specially developed grid implementation scheme. Novel results for the energy and density distribution, obtained in the inhomogeneous case with an applied electric field, are presented.
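As a toy illustration of the quasi-random idea (a plain Halton sequence here, not the scrambled sequences or the SALUTE package itself), low-discrepancy points can replace pseudo-random ones in an integral estimate; the integrand x·y is an arbitrary example:

```python
import numpy as np

def halton(n, base):
    # radical-inverse (van der Corput) sequence in the given base
    seq = np.empty(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

# 2-D Halton points with coprime bases 2 and 3
pts = np.column_stack([halton(2048, 2), halton(2048, 3)])
# quasi-Monte Carlo estimate of the integral of x*y over [0,1]^2 (exact value 1/4)
est = (pts[:, 0] * pts[:, 1]).mean()
```

For smooth integrands the quasi-random error decays roughly like (log n)^d/n, compared with n^{-1/2} for pseudo-random sampling.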


I entered North Carolina State University as a physics major. I worked one semester for the WebAssign project, and immediately sought out a place to work in a ‘real’ lab. I approached Dr. Jacqueline Krim for a place as an undergraduate researcher and she agreed. I started in my second semester at NCSU. The first day, I leveled all the desks in her office and painted her filing cabinet. I spent some time helping her group move in and set up the lab, since she had just moved from Northeastern University. After a while, I did perform some research in her lab, none of which was probably worthy of publishing, but I very much enjoyed it. I worked on various projects: finding a quartz crystal that would still oscillate at 500 °C, building an ultra-high-vacuum vapor deposition chamber, and ripping postdoc Brian Mason’s carefully constructed superconductivity-dependent friction experiment to pieces.


confidence functions, there are random variables that cannot be considered independent, and some variables do not follow normal distributions. Moreover, some confidence functions Z are nonlinear, so it is difficult to find the probability distribution law of Z, and Z cannot simply be treated as a function of many normally distributed random variables. In addition, the monitoring status and statistics of reservoirs still have many shortcomings: many reservoirs have no monitoring equipment, or have it but take only intermittent measurements. Analysis of monitoring data from reservoirs with extensive records, such as Hoa Binh, Vinh Son, Phu Ninh, Tri An and Yen Lap, shows that most of the observation records are not long enough to meet the requirements of the statistical problem. In addition, statistical analysis of water level, water column permeability and displacement in two directions (vertical and horizontal) at the five above-mentioned reservoirs shows that these variables hardly follow the normal distribution law. Therefore, the application of reliability theory to level II probability problems for dam safety calculations under Vietnamese conditions still has many limitations, and further research is needed. Calculations at level III find the reliability of works in these cases, using either an analytical method or the Monte Carlo method. In principle, it is possible to calculate the safety probability analytically, but this is of limited use because of the difficulty of determining the joint probability density function of the random variables. Therefore, in level III calculations, the Monte Carlo method is often used to determine the safety probability [2].
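A level III calculation of this kind reduces to estimating P(Z < 0) by sampling. A minimal sketch with an illustrative limit-state function Z = R − S follows; the resistance and load distributions below are assumptions for illustration, not fitted to any reservoir:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
# assumed (illustrative) marginal distributions
R = rng.lognormal(mean=2.0, sigma=0.1, size=n)   # resistance
S = rng.gumbel(loc=5.0, scale=0.6, size=n)       # extreme load
Z = R - S                                         # limit-state function
p_f = np.mean(Z < 0.0)                            # Monte Carlo failure probability
se = np.sqrt(p_f * (1.0 - p_f) / n)               # standard error of the estimate
```

No joint density is needed: each sample draws all random variables, evaluates Z, and counts failures, which is exactly why Monte Carlo avoids the analytical difficulty noted above.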

Most atomic theories arise from the independent-electron shell model, which assumes that every electron moves in a field combining the nucleus and the mean distribution of the other electrons. In this model, the effect of two-electron repulsion is neglected. This leads to inaccurate results for the calculated energies of the helium atom and its ions. The study of the effects of two-electron correlation has been a subject of interest in atomic physics. The ground state of helium and its ions is calculated using wave functions formed as an orbital product times a correlation function depending on the distance between the two electrons. These wave functions depend on several variable parameters, which should satisfy the variational principle to give improved values for the energies. The integration of functions of the inter-electronic distance is difficult, so the topic of electron correlation is studied using numerical methods. One of the most important numerical methods is the variational Monte Carlo (VMC) method [1]. It is based on a combination of two ideas, namely, the variational principle and the Monte Carlo evaluation of integrals using importance
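A minimal VMC sketch for the simplest case, the hydrogen atom with trial function ψ = e^{−αr}, illustrates the two ideas; this toy is chosen because its local energy is known in closed form, and is not the correlated helium wave functions discussed above. The value α = 0.8 is an arbitrary assumption (the exact minimum is at α = 1):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.8                          # variational parameter (assumed value)

def local_energy(r):
    # E_L = -alpha^2/2 + (alpha - 1)/r for psi = exp(-alpha*r), atomic units
    return -0.5 * alpha**2 + (alpha - 1.0) / r

pos = rng.normal(size=3)
energies = []
for step in range(40000):
    trial = pos + 0.4 * rng.normal(size=3)               # Metropolis proposal
    r_old, r_new = np.linalg.norm(pos), np.linalg.norm(trial)
    if rng.random() < np.exp(-2.0 * alpha * (r_new - r_old)):
        pos = trial                                      # accept with |psi|^2 ratio
    if step >= 4000:                                     # discard equilibration
        energies.append(local_energy(np.linalg.norm(pos)))
E = np.mean(energies)   # variational energy; analytic value is alpha^2/2 - alpha = -0.48
```

The Metropolis walk samples |ψ|², so averaging the local energy gives the variational integral without computing any multi-dimensional integral explicitly; the same machinery carries over to correlated helium trial functions.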


The Monte Carlo method is intensively used in many areas of scientific research. From computational physics to fluid dynamics, its use has grown exponentially with the introduction of computationally powerful computers. Truly, the Monte Carlo method is of great interest for solving systems with unknown analytical solutions. In the real world, more often than not, straightforward analytical solutions are not readily available, so empirical modeling and numerical simulation are very helpful for better understanding the physical problems involved. While in the past such modeling and numerical simulations were not very accurate, ongoing research and advances in computational power have led to ever more complex, higher-quality models that better approximate the physical problems. Even though computational power has grown exponentially over the years, it has not been able to keep up with the ever increasing demands of the improved models developed by researchers.


Named after the district of Monaco renowned for gambling, the Monte Carlo method refers to a technique in which random numbers are used to solve problems. The inception of the method emerged in a game of solitaire, in which Stan Ulam considered the use of successive random operations to estimate the probability of a successful outcome. With new advancements in electronic computing occurring at the time, Ulam envisioned the application of such a statistical sampling approach to a variety of problems in mathematical physics and proposed the method to John von Neumann in a 1946 correspondence. The method initially boasted the appeal of being an efficient means to approximate integrals that cannot be solved analytically. With the establishment of several variations, the Monte Carlo method has since progressed to form a class of algorithms able to address complex problems in a wide range of disciplines [1,13].
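The integral-approximation idea can be sketched in a few lines; the integrand below is an arbitrary example whose antiderivative is not elementary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.random(n)                    # uniform samples on [0, 1]
fx = np.exp(-x**2)                   # integrand with no elementary antiderivative
est = fx.mean()                      # Monte Carlo estimate of the integral
err = fx.std(ddof=1) / np.sqrt(n)    # 1-sigma statistical error
# reference value: sqrt(pi)/2 * erf(1) ≈ 0.74682
```

The error shrinks as n^{-1/2} regardless of dimension, which is what makes the approach attractive for integrals that defeat analytical and grid-based methods.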


projections onto Legendre polynomials. As the angular fluxes along different angles are mutually independent within each sweep, assuming Cartesian geometry and explicit boundary conditions, the potential for a high degree of parallel processing exists. There are also several iterative acceleration techniques that can be applied when solving the transport equation with the discrete ordinates approximation. Many of these depend on efficiently solving some form or variation of the spherical harmonics equations to estimate what the final flux should look like. All this leads to discrete ordinates methods often being seen as a good middle ground: in computational efficiency they easily beat continuous-energy Monte Carlo but are often slower than diffusion solutions, and in solution accuracy they describe angular flux effects better than diffusion but retain the energy, space, and angle discretization errors that are not incurred in Monte Carlo simulations. The latter instead incurs quantifiable statistical errors.


The main motivation behind this paper is to investigate the applicability and efficiency of a conceptually simple MC method for solving machining optimization problems. Although the aforementioned meta-heuristic optimization methods have proved that they can obtain good optimization solutions, the average process engineer in a real machining environment may not feel familiar with these methods, which require deeper knowledge of artificial intelligence and optimization. Moreover, in some cases specialized software or programming skills are needed. With the increasing number of meta-heuristic optimization methods and their combinations that have emerged in recent years, it has become very difficult even for researchers to become familiar with all of them. For these reasons, as noted by Besseris (2008), in practice only the simplest optimization tools eventually prove workable in the factory. In that sense, this paper is an attempt to investigate the applicability of the MC method for solving machining optimization problems. The proposed optimization procedure was employed to search for the optimal combinations of machining parameters for six machining processes, i.e. drilling, turning, turn-milling, AWJ machining, electrochemical discharge machining (ECDM), and electrochemical micromachining (EMM). The obtained optimization solutions were compared with those obtained by past researchers using meta-heuristic algorithms.
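A conceptually simple MC optimization of this kind amounts to uniform random search over the parameter box. The sketch below uses an invented surrogate cost function and invented parameter bounds purely for illustration; it is not any of the six machining models from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(v, f):
    # illustrative machining-cost surrogate (assumed, not from any cited study):
    # a term falling with cutting speed v, a feed-rate penalty, and a wear term
    return 1.0e3 / v + 50.0 * f**2 + 0.01 * v

v_lo, v_hi = 50.0, 400.0     # cutting speed bounds (assumed)
f_lo, f_hi = 0.05, 0.5       # feed bounds (assumed)
best_c, best_vf = np.inf, None
for _ in range(100_000):     # pure Monte Carlo random search
    v = rng.uniform(v_lo, v_hi)
    f = rng.uniform(f_lo, f_hi)
    c = cost(v, f)
    if c < best_c:
        best_c, best_vf = c, (v, f)
```

For this surrogate the analytic minimum is about 6.45 (at v ≈ 316, f at its lower bound), so the quality of the random-search answer is easy to audit, which is the practical appeal the paper points to: no tuning, no algorithmic machinery, just sampling and bookkeeping.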


Neutrinos have played an important role in particle physics since their discovery half a century ago. They have been used to elucidate the structure of the electroweak symmetry groups, to illuminate the quark nature of hadrons, and to confirm our models of astrophysical phenomena. With the discovery of neutrino oscillations using atmospheric, solar, accelerator, and reactor neutrinos, these elusive particles now take center stage as the objects of study themselves. Precision measurements of the lepton mixing matrix, the search for lepton charge-parity (CP) violation, and the determination of the neutrino masses and hierarchy will be major efforts in HEP for several decades. The cost of this next generation of experiments will be significant, typically tens to hundreds of millions of dollars. A comprehensive, thoroughly tested neutrino event generator package plays an important role in the design and execution of these experiments, since this tool is used to evaluate the feasibility of proposed projects and estimate their physics impact, make decisions about detector design and optimization, analyze the collected data samples, and evaluate systematic errors. With the advent of high-intensity neutrino beams from proton accelerators, we have entered a new era of high-statistics, precision neutrino experiments which will require a new level of accuracy in our knowledge, and simulation, of neutrino interaction physics.


Abstract. The emerging field of high energy atmospheric physics (HEAP) includes terrestrial gamma-ray flashes, electron–positron beams and gamma-ray glows from thunderstorms. Similar emissions of high energy particles occur in pulsed high voltage discharges. Understanding these phenomena requires appropriate models for the interaction of electrons, positrons and photons of up to 40 MeV energy with atmospheric air. In this paper, we benchmark the performance of the Monte Carlo codes Geant4, EGS5 and FLUKA developed in other fields of physics and of the custom-made codes GRRR and MC-PEPTITA against each other within the parameter regime relevant for high energy atmospheric physics. We focus on basic tests, namely on the evolution of monoenergetic and directed beams of electrons, positrons and photons with kinetic energies between 100 keV and 40 MeV through homogeneous air in the absence of electric and magnetic fields, using a low energy cutoff of 50 keV. We discuss important differences between the results of the different codes and provide plausible explanations. We also test the computational performance of the codes. The Supplement contains all results, providing a first benchmark for present and future custom-made codes that are more flexible in including electrodynamic interactions.


To perform radiation transport calculations, the probabilities of nuclear processes, such as neutron interaction or γ-ray production, are provided as input data to the simulation. This greatly saves computational resources. However, important correlations between neutrons and γ-rays cannot be taken into account exactly. A Monte Carlo simulation of the nuclear reaction process is able to provide fully correlated information, i.e. all the neutrons and γ-rays emitted in an event obey the energy and angular momentum conservation laws. We have been developing a technique to solve the statistical Hauser-Feshbach theory [1] by means of a stochastic method, called Monte Carlo Hauser-Feshbach (MCHF) [2, 3]. In our previous papers, the spin dependence in the compound nucleus decay probabilities was summed before we performed the Monte Carlo simulations, which means the angular momenta are conserved only on average. This paper describes an extension of our MCHF calculations. A computer program, CGM (Cascading Gamma-rays and Multiplicity) [4], has been developed at Los Alamos, which can be combined with other transport simulation programs as an event generator [5]. CGM runs in both stochastic and deterministic modes with a very fine energy grid. This is particularly important for understanding neutron emission at very low energies. In this paper we also discuss the neutron evaporation process that can be studied with CGM.
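The per-event conservation that an event-generator approach provides can be illustrated with a toy gamma-cascade sampler. The fractional-energy distribution below is an arbitrary assumption, not the Hauser-Feshbach decay probabilities used by CGM; the point is only that each sampled event conserves total energy exactly, unlike averaged input data:

```python
import numpy as np

rng = np.random.default_rng(0)

def cascade(E0, E_cut=0.1):
    """Sample one toy gamma cascade from excitation energy E0 down to the ground state."""
    gammas, E = [], E0
    while E > E_cut:
        Eg = E * rng.beta(2.0, 2.0)   # assumed fractional-energy distribution
        gammas.append(Eg)
        E -= Eg
    gammas.append(E)                   # final transition to the ground state
    return gammas

event = cascade(6.0)   # gamma energies of one event; they sum to exactly 6.0
```

A transport code consuming such events sees correlated multiplicities and energies event by event, rather than independent draws from averaged spectra.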


An unexpectedly high sensitivity of the nodal quality to the details of the pair orbital at large distances is also observed. Although the suboptimal orbitals used in the 4-atom RN-DMC simulations are modified only in their long-range tails (see the upper row of Fig. 4.6), the fixed-node energies rise by sizeable amounts. This suggests an explanation for the relatively slow convergence of the released-node energy: the long-range tails of the pair orbital affect the nodal hypersurfaces, although the energy cost of displacing the nodal hypersurfaces is surprisingly low. One can further deduce that this makes the released-node method quite challenging to apply, since it requires correcting the nodal surface change by sampling low-density regions with walkers travelling large distances. This is, however, difficult to achieve since the diffusive motion of walkers is slow, proportional to t^{1/2}, while the growth of the noise is fast, proportional to exp(Δ_BF t), where Δ_BF is the difference between the bosonic and fermionic ground-state energies.


bitals of the up-spin electrons depend on the positions of the down-spin electrons and vice versa. This idea surfaced again much later in connection with the quantum-mechanical description of “backflow.” Classical backflow is related to the flow of a fluid around a large impurity. Its quantum analog was discussed by Feynman [12] and Feynman and Cohen [13] in the contexts of excitations in ⁴He and the effective mass of a ³He impurity in liquid ⁴He. They argued that the energy would be lowered if the ⁴He atoms executed a flow pattern around the moving ³He impurity which prevented the atoms overlapping significantly. This effect was shown to correspond to the requirement that the local current of particles be conserved. They recognized that, without backflow, the effective mass of the ³He impurity would equal the bare mass, and incorporating backflow led to a substantial increase in the effective mass. It turns out that the mathematical form obtained by incorporating backflow into a single-determinant wave function is related to the wave functions considered by Wigner and Seitz [11].


Typically, the posterior distribution is intractable, in the sense that direct sampling is unavailable. One way to circumvent this problem is to use a Markov chain Monte Carlo (MCMC) approach to sample from the posterior distribution [40, 18, 11, 32, 9]. However, for large-scale applications where the number of input parameters is typically large and the solution of the forward model expensive, MCMC methods require careful tuning and may become infeasible in practice.
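A minimal random-walk Metropolis sketch shows the idea; the target below is a standard normal stand-in for a posterior, since any real forward model is problem-specific, and the proposal scale of 1.0 is an arbitrary (untuned) choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # stand-in unnormalized log-posterior (standard normal)
    return -0.5 * x * x

x, chain = 0.0, []
for i in range(50_000):
    prop = x + rng.normal(scale=1.0)               # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop                                   # Metropolis accept/reject
    if i >= 5_000:                                 # discard burn-in
        chain.append(x)
m, s = np.mean(chain), np.std(chain)               # posterior mean and spread
```

Each step needs one evaluation of the (unnormalized) posterior; when that evaluation means solving an expensive forward model, the tens of thousands of steps above are exactly what makes plain MCMC infeasible at scale.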


Abstract— A simple computational procedure has been developed using Monte Carlo for allocating redundancy among subsystems so as to achieve maximum reliability of a complex system subject to multiple constraints, which may be linear, nonlinear separable, or nonseparable. Two examples of linear, nonlinear separable and nonseparable constraints, comprising twenty problems, are solved.
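A Monte Carlo redundancy-allocation sketch of this kind samples candidate redundancy levels, discards infeasible ones, and keeps the most reliable feasible design. The three-subsystem series system below, with its reliabilities, costs, and budget, is an invented example, not one of the paper's twenty problems:

```python
import numpy as np

rng = np.random.default_rng(0)
# illustrative 3-subsystem series system (all values are assumptions)
r = np.array([0.80, 0.85, 0.90])   # single-component reliabilities
c = np.array([2.0, 3.0, 1.0])      # per-component costs
C = 25.0                           # linear cost constraint: sum(c * n) <= C

best_R, best_n = 0.0, None
for _ in range(50_000):            # Monte Carlo search over redundancy levels
    n = rng.integers(1, 6, size=3)             # 1..5 parallel components each
    if (c * n).sum() > C:
        continue                               # discard infeasible candidates
    R = np.prod(1.0 - (1.0 - r) ** n)          # series-of-parallel reliability
    if R > best_R:
        best_R, best_n = R, n
```

Nonlinear or nonseparable constraints change only the feasibility test, which is why the same sampling loop handles all the constraint types mentioned.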

Since the neighbor nodes within the range of the transmission radius can communicate with each other, the known information from anchor nodes can be used to assist the blind node’s localization [19]. The Monte Carlo localization method is based on Bayes filtering theory, and the main idea is that, by utilizing new observations from adjacent anchor nodes within range, the sample and filter steps are repeated until enough valid samples are obtained. Then the blind node can estimate its current location as it completes the movement [20]. The blind node’s localization can therefore be transferred into a posterior probability density function. Let t be a discrete time series, x_t
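One sample-and-filter step of this scheme can be sketched as follows; the field size, speed bound, anchor position, and transmission radius are all assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
v_max = 1.0                                # assumed max node speed per time step
anchor = np.array([5.0, 5.0])              # anchor heard => node is within radius
radius = 3.0

particles = rng.uniform(0.0, 10.0, size=(N, 2))      # prior samples over the field
# sample step: propagate each particle by a bounded random displacement
particles += rng.uniform(-v_max, v_max, size=(N, 2))
# filter step: keep only samples consistent with the new anchor observation
keep = np.linalg.norm(particles - anchor, axis=1) <= radius
valid = particles[keep]
# (in full MCL the sample/filter steps repeat until enough valid samples remain)
estimate = valid.mean(axis=0)              # location estimate for the blind node
```

The surviving particles approximate the posterior density over the blind node's position, and the estimate is simply its mean.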
