Tandem solar cells have demonstrated the potential to increase the efficiency of solar energy conversion. The detailed balance principle introduced by Shockley and Queisser (1961) was later applied by De Vos (1980) to tandem structures. Although the mathematical formulation is simple, its resolution is rather complex, yet fairly accurate. In this work I describe a simple Monte Carlo (MC) technique to determine the detailed-balance limiting efficiency of tandem solar cell stacks. This statistical method uses a simple sampling scheme that is adequate to resolve the complex equation system describing a large number of junctions without further approximations. In current-matched tandem solar cells, the band gap of each sub-cell has to be chosen so that the same current flows through every sub-cell. The findings of this study focus on 10 stacked junctions, but the algorithm can be applied to a larger number of sub-cells. The simulation is carried out under four different conditions: a black-body spectrum, the AM1.5G and AM1.5D spectra, and maximum concentration. The method employed in this study provides a useful tool for researchers to assess the optimum band-gap arrangements of current-constrained solar cell stacks together with a predicted efficiency limit. The results show that the MC technique agrees with the findings of previous studies.
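The sampling idea can be sketched for a two-junction, series-connected stack: draw random band-gap pairs, score each pair with an efficiency model, and keep the best. The scoring function below is a deliberately crude illustration (black-body photon flux with the voltage taken equal to the gap), not the paper's full detailed-balance calculation; all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 8.617e-5 * 6000.0              # sun temperature expressed in eV
E = np.linspace(0.1, 6.0, 2000)     # photon energy grid, eV
dE = E[1] - E[0]

flux = E**2 / (np.exp(E / kT) - 1.0)   # black-body photon flux (arb. units)
power_in = np.sum(E * flux) * dE       # incident power (arb. units)

def efficiency(top, bottom):
    """Crude series-stack efficiency; the top (larger) gap absorbs first."""
    j_top = np.sum(flux[E >= top]) * dE
    j_bot = np.sum(flux[(E >= bottom) & (E < top)]) * dE
    j = min(j_top, j_bot)              # series connection: smallest current
    return j * (top + bottom) / power_in

best_eff, best_gaps = 0.0, (0.0, 0.0)
for _ in range(5000):                  # simple uniform Monte Carlo sampling
    g1, g2 = rng.uniform(0.5, 3.0, size=2)
    top, bottom = max(g1, g2), min(g1, g2)
    eff = efficiency(top, bottom)
    if eff > best_eff:
        best_eff, best_gaps = eff, (top, bottom)
```

The same loop extends to N gaps by sorting each sampled gap vector and taking the minimum of the N sub-cell currents.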


To generate samples of K(x) at the nodes of the computational domain, we first need to generate samples of the Gaussian field Z(x) at those nodes. One of the most popular methods for generating (Gaussian-distributed) realizations of Z(x) is the Karhunen–Loève (KL) expansion method [2,13–16,21,22]. This method provides an approximation (due to the truncation of an infinite series) of the permeability field at all points of the continuous domain, which can afterwards be sampled on any grid. To avoid adding extra errors (arising from the truncation of the KL expansion) to the model and to produce more accurate representations of the hydraulic conductivity, alternative methods might be considered, for instance the circulant embedding algorithm [23–25]. The circulant embedding method provides fast and exact representations of the Gaussian field but requires the fast Fourier transform, and is thus not straightforward to implement. Two alternatives to the circulant embedding method for producing exact decompositions of the covariance matrix associated with the correlation function given in (2.5) are the Cholesky method [11,25,26] and the KL decomposition [22,27]. These methods are not recommended for covariance functions that are differentiable at zero lag distance, e.g. the squared exponential (or Gaussian) correlation function [22,28]. In those cases, the associated covariance matrix is likely to become extremely ill-conditioned [29,30]. They can also be inappropriate for problems in which the simulator necessitates an extremely fine discretization of the computational domain [30], but this does not apply to the problem considered in this paper.
Conversely, the main advantage of this approach is that it requires only a single eigendecomposition of the covariance matrix, the results of which are stored and used to generate new realizations of the permeability field very cheaply; furthermore, its implementation is simple and straightforward. In this paper, we opt for the KL decomposition method, which is described briefly next (for full details, see e.g. [22,25]).
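A minimal sketch of this "decompose once, sample cheaply" scheme, assuming a 1-D grid and an exponential covariance with illustrative correlation length `ell` (the paper's actual domain and covariance may differ):

```python
import numpy as np

n, ell = 100, 0.2
x = np.linspace(0.0, 1.0, n)
# exponential covariance matrix on the grid nodes
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# done once: eigendecomposition of C, stored for all later samples
lam, V = np.linalg.eigh(C)
lam = np.clip(lam, 0.0, None)       # guard against tiny negative round-off
A = V * np.sqrt(lam)                # column scaling, so that A @ A.T == C

rng = np.random.default_rng(42)

def sample_field():
    """Draw one realization Z(x) with covariance C (one mat-vec per sample)."""
    return A @ rng.standard_normal(n)

Z = sample_field()
K = np.exp(Z)                       # log-normal conductivity K = exp(Z)
```

Each additional realization costs only a matrix-vector product and a vector of standard normal draws, which is the cheapness the text refers to.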


The procedure to be used when applying local mode analysis to the problem of vibrations in molecular clusters is not so obvious. Intermolecular bonds are generally rather loosely defined structures, so there are no local coordinates that give a simple description of the intermolecular modes. The situation for the intramolecular modes, however, is more promising. In a cluster, the motions involved in intramolecular vibrations are usually localized on a particular molecule. The local coordinates used in local mode calculations on the isolated molecules are therefore likely to still provide a good representation of the intramolecular vibrations of the cluster. This idea forms the basis of an effective potential, or "frozen field", local mode method for studying how intramolecular motions are perturbed by clustering. The method was developed and applied to water clusters by Reimers and Watts (1984b) and has since been used by Miller, Watts and Ding (1984) to interpret the intramolecular vibrational spectra of nitrous oxide clusters.


Ideally, one could compute g directly via thermodynamic integration along a reversible thermodynamic path between polymorphs. Such paths are difficult to obtain in general, but have been realized for simple transformations in atomic solids [8,9]. Instead, one normally resorts to computing the absolute free energy g of each solid phase, most commonly by connecting it via a fictitious path to an Einstein crystal [10]. This is largely routine, particularly for atomic crystals [11], although other methods are available [12,13]. However, obtaining a small g by subtracting two much larger numbers is less than ideal, particularly as the uncertainty in the absolute g of each phase can be laborious to quantify.

is to generate a random path of the stock price, then calculate the option payoff along that simulated path. By repeating these two steps over and over, a large number of sample paths is generated and the payoff of each is calculated. In the risk-neutral world, discounting at the risk-free interest rate then yields the price of the callable convertible discount bond, obtained as the simple arithmetic mean of the discounted payoffs.
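The two-step procedure can be sketched for a plain European call under geometric Brownian motion (a deliberate simplification of the callable convertible bond in the text; all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # assumed market inputs
n_steps, n_paths = 252, 20000
dt = T / n_steps

# Step 1: generate random paths of the stock price (risk-neutral drift r).
z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z,
                      axis=1)
S_T = S0 * np.exp(log_paths[:, -1])                 # terminal prices

# Step 2: payoff per path, then discount at the risk-free rate and take
# the simple arithmetic mean over all sampled paths.
payoffs = np.maximum(S_T - K, 0.0)
price = np.exp(-r * T) * payoffs.mean()
```

For the callable convertible bond, only the payoff rule in step 2 changes; the path generation and the discounted arithmetic mean are the same.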

While we focused only on a post-experiment MCM uncertainty analysis (the reporting phase), the general methodology may be applied to the pre- and during-experiment phases, including planning, design, debugging, construction, execution, and data analysis. Moreover, the MCM is a suitable and straightforward way to perform an uncertainty analysis on complex WEC models. We therefore recommend that future guidelines and codes pertinent to uncertainty analysis for WECs, such as those developed by the International Towing Tank Conference (ITTC) and the International Electrotechnical Commission (IEC), incorporate the MCM and provide a simple practical example.


The paper focuses on analyzing the charging characteristics of different EV types, including the charging time, initial charge capacity, and initial charging time, and on predicting the future quantities of the different EV types. Based on the charging power and charging characteristics of each EV type, it uses the Monte Carlo simulation method to predict the EV charging load curve and analyzes the impact of the EV charging load on the power grid. On this basis, the estimated level of future EV charging load in China is calculated and analyzed. The results show that with the development of EVs in China, the impact of EV charging load on the grid will continue to grow, and the large peak-valley difference provides substantial potential for coordinating the charging of PEVs.
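The Monte Carlo load-curve construction can be sketched as follows: for each simulated EV, draw a charging start time and an energy demand, then accumulate charger power on a 24-hour grid. The distributions and parameters below are illustrative assumptions, not the paper's fitted Chinese EV data:

```python
import numpy as np

rng = np.random.default_rng(7)
n_ev = 10000
power_kw = 7.0                       # assumed charger rating
load = np.zeros(24 * 60)             # aggregate load in kW, per minute

start = rng.normal(19 * 60, 90, n_ev) % (24 * 60)   # evening-peaked starts
energy = rng.uniform(5.0, 30.0, n_ev)               # kWh needed per EV
duration = energy / power_kw * 60.0                 # charging time, minutes

for s, d in zip(start.astype(int), duration.astype(int)):
    idx = np.arange(s, s + d) % (24 * 60)           # wrap past midnight
    load[idx] += power_kw

peak_minute = int(load.argmax())                    # when the peak occurs
```

Peak-valley difference and grid-impact statistics then follow directly from the simulated `load` array.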


of the option price similar to ours, but he follows a different approach from ours. In this paper we compare the spectral expansion of the option price with the Monte Carlo method. We show that, using the spectral expansion, it is possible to determine the computational complexity of the problem and thus to manage it, unlike with the Monte Carlo method.


The current dynamic and turbulent manufacturing environment has forced companies that compete globally to change their traditional methods of conducting business. Recent developments in manufacturing and business operations have led to the adoption of preventive maintenance techniques based on systems and processes that support global competitiveness. This paper employs a Monte Carlo normal-distribution model, which interacts with the developed Obudulu model, to assess the reliability and maintenance of an injection moulding machine. The failure rate, reliability, and standard deviations are the reliability parameters used. The Monte Carlo normal distribution was used to analyse the reliability and failure rate of the entire system. The results show that the failure rate increases with running time, owing to wear caused by poor lubrication, while system reliability decreases with increasing time (years). The Obudulu model was used to evaluate the variance ratio of failure between system components under preventive maintenance and those outside it. The results show that at a reliability of +0.3 and a failure rate of −0.02, preventive maintenance should be performed. Interaction between the Monte Carlo normal distribution and the Obudulu model shows that the total system reliability is 0.489 (49%) when maintained and 0.412 (41%) when not maintained. Quality of production also increased during preventive maintenance, while system downtime was greatly reduced. These models were programmed with a Monte Carlo Excel tool package, producing graphs of reliability and failure rate for each system.


As in previous kinetic Monte Carlo algorithms, propagating an epitaxial system forward in time requires event selection and updating of data structures in each iteration of the algorithm. With the atoms of an epitaxial system defined to occupy positions on a lattice, the KMC variant introduced by Bortz, Kalos, and Lebowitz [2], termed the n-fold way, provides an appropriate scheme. The n-fold way algorithm was formulated to simulate an Ising spin system, which is a d-dimensional lattice where each site is assigned a spin variable assuming the value of +1 or −1. A given site moves to a new state by flipping its spin from one value to the other; this is accomplished either by reversing its own spin or by interchanging its spin with the unlike spin of a neighbor. In the case of an epitaxial system, the spin variables may indicate the presence (+1) or absence (−1) of an atom in a given lattice site; a spin interchange would, therefore, correspond to the diffusion of an atom.
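One iteration of the rejection-free (n-fold way) scheme can be sketched as: select an event with probability proportional to its rate, then advance time by an exponentially distributed increment set by the total rate. The rate table below is an illustrative placeholder, not a real epitaxial rate catalog:

```python
import numpy as np

rng = np.random.default_rng(3)
rates = np.array([1.0, 0.5, 0.5, 2.0])   # per-event rates (e.g. hop rates)
R = rates.sum()                           # total rate of the system
cum = np.cumsum(rates)                    # cumulative rate table

def kmc_step(t):
    """One rejection-free step: return (selected_event, new_time)."""
    # event selection: find where a uniform draw lands in the cumulative sum
    u = rng.random() * R
    event = int(np.searchsorted(cum, u, side="right"))
    # time update: every step fires an event, dt ~ Exp(R)
    dt = -np.log(rng.random()) / R
    return event, t + dt

t = 0.0
counts = np.zeros(rates.size, dtype=int)
for _ in range(40000):
    event, t = kmc_step(t)
    counts[event] += 1
```

In a real simulation the rate table is updated after each event (the data-structure update the text mentions); here it is static so that the selection statistics are easy to check.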


Abstract. Models of hydrogeology must deal with both heterogeneity and lack of data. We consider in this paper a flow and transport model for an inert solute. The conductivity is a random field following a stationary log-normal distribution with an exponential or Gaussian covariance function with a very small correlation length. The quantities of interest studied here are the expectation of the spatial mean velocity, the equivalent permeability, and the macro-spreading. In particular, the asymptotic behavior of the plume is characterized, leading to large simulation times and consequently to large physical domains. Uncertainty is dealt with using a classical Monte Carlo method, which turns out to be very efficient thanks to the ergodicity of the conductivity field and the very large domain. These large-scale simulations are achieved by means of high-performance computing algorithms and tools.


For the Monte Carlo analysis, the model varies the weights assigned to each variable, applying random values within a range defined by the user. Instead of calculating just one result, the model simulates many possible weight values between a minimum and a maximum and compares the results across the many simulated scenarios. In those parts where the results changed more in response to the randomly selected weights, the level of uncertainty can be said to be higher. Using the tools developed by Jankowski and Ligmann-Zielinska (2014), two maps were produced: an evaluation map, a suitability map for walkability in the territory resulting from equal weight values for all variables, and an analysis of the uncertainty of the results. The Suitability Analysis Map was produced using the multi-criteria evaluation method, and the Sensitivity Evaluation Map was generated using the Monte Carlo method to calculate the uncertainty.
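The weight-sensitivity procedure can be sketched as: draw criterion weights inside user-defined bounds, recompute the weighted suitability score per draw, and use the per-cell spread of scores as the uncertainty measure. The criterion "maps" below are random placeholders standing in for the walkability rasters:

```python
import numpy as np

rng = np.random.default_rng(11)
n_cells, n_criteria = 500, 4
criteria = rng.random((n_cells, n_criteria))    # normalized criterion maps

w_min, w_max = 0.1, 0.4                         # user-defined weight range
n_runs = 1000
scores = np.empty((n_runs, n_cells))
for i in range(n_runs):
    w = rng.uniform(w_min, w_max, n_criteria)
    w /= w.sum()                                # weights must sum to one
    scores[i] = criteria @ w                    # weighted suitability score

suitability = criteria.mean(axis=1)             # equal-weight evaluation map
uncertainty = scores.std(axis=0)                # sensitivity/uncertainty map
```

Cells with a large `uncertainty` value are exactly those whose ranking depends strongly on the chosen weights.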


In this thesis, we propose an efficient probabilistic approach for cooperative multi-robot localization in indoor environments. Our approach is based on Monte Carlo localization (MCL), which has been applied with great practical success to single-robot localization. The robots, capable of sensing and exchanging information with one another, localize themselves by maintaining their own belief functions through a clustering-based MCL algorithm. Our newly developed information exchange mechanism synchronizes each robot's belief whenever one robot detects another, in order to speed up the localization process with higher accuracy. Our approach prevents the localization from suffering the problem of delayed integration by comparing the beliefs of both robots at each detection, avoiding unnecessary information exchange. We utilize the information extracted from a clustering component that analyzes the distribution of the whole particle set to quantify a robot's belief and transfer information across robots. In addition, by analyzing how concentrated the particles are, a robot can determine whether it has been localized by itself, instead of relying on human observers. In our approach, the robots themselves are implicitly used as landmarks rather than only external landmarks, which further facilitates the localization process. Experimental results, obtained in both real and simulated environments using two robots, demonstrate that our approach significantly reduces the uncertainty compared with single-robot localization.


particles and B is the offset. The second term obviously decreases in magnitude as the Monte Carlo (MC) simulation uses increasing numbers of particles; thus, one ultimately recovers the correct osmotic coefficient as the number of particles grows. We investigated the causes of this size dependence and attempted to find ways to eliminate it. We found that B depends on the particular inverse Monte Carlo (IMC) protocol used, including the truncation parameters associated with the Ewald summation (as would be expected), the number of MC steps at each iteration, and the criterion used to choose a final IMC solution. More work will be needed to establish definitive protocols for removing this size dependence. This report does not describe the details of this new finding.
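The size-dependence picture described above implies that, if a measured quantity behaves as phi(N) = phi_inf + B/N, both the offset B and the infinite-size value follow from a linear fit against 1/N. A minimal sketch with synthetic "measurements" (the values are illustrative, not the report's data):

```python
import numpy as np

N = np.array([100.0, 200.0, 400.0, 800.0])   # particle numbers simulated
phi = 0.93 + 12.0 / N                        # synthetic osmotic coefficients

# least-squares fit of phi = phi_inf + B * (1/N); polyfit returns
# coefficients highest degree first, i.e. (slope, intercept)
B, phi_inf = np.polyfit(1.0 / N, phi, 1)
```

The intercept `phi_inf` is the extrapolated infinite-particle value, and the slope `B` quantifies the finite-size offset the text discusses.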


Abstract. Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling, and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate the two. We discuss the application of Monte Carlo methods in experimental physics and point to landmarks in the literature for the curious reader.
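The simplest of the listed algorithms, rejection sampling, can be shown in a few lines: draw from a proposal q, and accept with probability p(x)/(M q(x)) where M bounds p/q. The target density here (p(x) = 3x² on [0, 1]) is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(5)

def p(x):
    """Target density on [0, 1]; integrates to 1."""
    return 3.0 * x**2

M = 3.0                        # envelope constant: max of p/q on [0, 1]

def rejection_sample(n):
    """Draw n samples from p using a uniform proposal q(x) = 1."""
    out = []
    while len(out) < n:
        x = rng.random()                    # proposal draw from q
        if rng.random() < p(x) / M:         # accept with prob p(x)/(M q(x))
            out.append(x)
    return np.array(out)

samples = rejection_sample(20000)
```

The acceptance rate is 1/M, which is why rejection sampling degrades quickly in high dimensions and motivates importance sampling and MCMC.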


The noise term c can also be measured from noise events. Noise events are randomly triggered events during data taking used to measure the pedestal offset and noise. The noise term c is the RMS of the reconstructed energy distribution of the noise events. Figure 5.7 shows the contribution of the noise to the energy resolution per beam energy. The average noise for both electrons and positrons is 2 MIPs. This value is converted to GeV by scaling it with the electromagnetic conversion factor a determined in section 5.3. Using this method, the noise term c is between 53 and 56 MeV, which is roughly a factor of 4 smaller than the values obtained from the fit but agrees with the iron calorimeter result. This is not yet understood and requires additional study.


The refinement procedure is currently limited to using a single response distribution when calculating updated parameter distributions. An important issue is the calibration of model parameters specific to the dairy herd in which the risk of tetany is estimated. In the model, Mg intake is determined by feed intake and pasture characteristics, which may change from day to day due to changes in feed management practice on the farm. Measuring feed and Mg intake directly is not practical for a commercial dairy herd. A possible method of calibrating the model to a specific herd is to take measurements of Mg in samples of urine on successive days, then estimate parameters for feed and Mg intake that minimise the error between the simulated and measured urinary Mg excretion.
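The calibration idea can be sketched as a least-squares search: pick the intake parameter that minimises the squared error between simulated and measured urinary Mg over successive days. The one-parameter linear "model" and the measurement values below are illustrative stand-ins for the herd model, not its actual equations or data:

```python
import numpy as np

days = np.arange(1, 8)
# assumed daily urinary Mg measurements (illustrative units)
measured = np.array([2.1, 2.0, 2.3, 2.2, 2.4, 2.1, 2.2])

def simulate_urinary_mg(intake):
    """Toy stand-in model: excretion proportional to Mg intake."""
    return 0.1 * intake * np.ones_like(days, dtype=float)

# grid search over candidate intake values, minimising squared error
candidates = np.linspace(10.0, 40.0, 301)
errors = [np.sum((simulate_urinary_mg(c) - measured) ** 2)
          for c in candidates]
best_intake = candidates[int(np.argmin(errors))]
```

With a real herd model the grid search would be replaced by a proper optimiser, but the objective (simulated-vs-measured squared error over successive days) is the same.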

In this work we present a LODE estimator inspired by Gleser (1981) and by the Total Least Squares procedure introduced by Golub and Van Loan (1980) and Van Huffel (2002), in which the estimates of the structural-form parameters are obtained using the m right singular vectors associated with the m smallest singular values. The performance of this estimator is very good in terms of both bias and RMSE. In particular, it should be noted that it works better than 3SLS and FIML in the case of small samples and high correlation. A quite substantial improvement in the performance of FI LODE is shown in standard situations like the one considered in the Monte Carlo experiment presented here.
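The Total Least Squares construction referenced above can be sketched for a single equation y ≈ Xb: the TLS solution comes from the right singular vector associated with the smallest singular value of the augmented matrix [X | y]. The data below are synthetic, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 3
X = rng.standard_normal((n, m))
b_true = np.array([1.0, -2.0, 0.5])
y = X @ b_true + 0.01 * rng.standard_normal(n)   # small noise

Z = np.hstack([X, y[:, None]])                   # augmented matrix [X | y]
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
v = Vt[-1]                                       # right singular vector of
                                                 # the smallest singular value
b_tls = -v[:m] / v[m]                            # TLS coefficient estimate
```

The LODE estimator generalises this by using the m right singular vectors for the m smallest singular values, one per structural equation.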
