we should provide some model for the structure of the remaining part of the incoming hadron. This is a subtle problem that cannot be treated in a rigorous way in QCD. The crudest approach one can think of is to force initial-state gluons at the end of the shower to arise from a quark in backward evolution, and then let the remaining diquark in the incoming proton, carrying the leftover momentum of the initial hadron, hadronize with the remaining particles in the event. In other approaches, if the backward shower stops with a gluon, the remaining quarks in the incoming hadron are put into a colour-octet state, and this system is broken up according to various rules to yield objects that the hadronization mechanism can handle. There is some evidence that, in order to represent the activity of the underlying event in a reasonable way, the effect of multiparton interactions must also be included. In other words, one must assume that the remnants of the incoming hadrons can undergo relatively hard collisions. Even this phenomenon is implemented with phenomenological models in shower Monte Carlo programs. Among the ingredients entering these models, one assumes that partons have a given transverse distribution in a hadron. The cross section for secondary interactions is assumed to be given by the partonic cross section with an appropriate cutoff in transverse momentum. This cutoff has to be carefully tuned, since the partonic cross section diverges as the cutoff goes to zero. The momentum of the spectator partons has to be properly rescaled to account for the momentum taken away by the parton that initiates the spacelike shower. The flavour and colour of the spectators also have to be properly adjusted.
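The need to tune the transverse-momentum cutoff can be illustrated with a toy calculation. The sketch below integrates a schematic 2 → 2 QCD spectrum proportional to 1/pT⁴ and shows how the integrated cross section diverges as the cutoff is lowered, and how a smooth regulator pT0 (a common alternative in MPI models) keeps it finite; the normalization and the function names are illustrative, not any specific generator's implementation.

```python
def dsigma_dpt2(pt2, pt0_sq=0.0):
    """Toy 2->2 QCD spectrum ~ 1/pT^4, optionally regularized by replacing
    1/pT^4 with 1/(pT^2 + pT0^2)^2 (schematic; real generators also
    include running couplings and parton distribution functions)."""
    return 1.0 / (pt2 + pt0_sq) ** 2

def sigma_mpi(pt_min, pt0=0.0, pt_max=10.0, n=100000):
    """Midpoint-rule integral of the toy spectrum over pT^2 (GeV^2)."""
    lo, hi = pt_min**2, pt_max**2
    dpt2 = (hi - lo) / n
    return sum(dsigma_dpt2(lo + (i + 0.5) * dpt2, pt0**2)
               for i in range(n)) * dpt2

# The integrated cross section grows without bound as the cutoff is
# lowered, which is why the cutoff (or the regulator pT0) must be tuned:
for pt_min in (2.0, 1.0, 0.5, 0.2):
    print(pt_min, sigma_mpi(pt_min))
print("regularized:", sigma_mpi(0.0, pt0=1.0))  # finite even at pt_min = 0
```

Lowering pt_min makes the integral blow up like 1/pt_min², while the regularized version stays finite at zero cutoff.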
At this luminosity the interaction rate amounts to about 200 kHz, assuming a total inelastic pp cross section of 70 mb. The TPC records tracks from interactions that occurred within 88 µs before and after the triggered bunch crossing. Hence, on average, about 40 events will pile up during the drift time of the TPC, before and after the trigger. However, on average only half of these tracks will be recorded, because the other half fall outside the acceptance; the total data volume therefore corresponds only to the equivalent of about 20 complete events. The charged-particle density at mid-rapidity in pp collisions at the nominal LHC centre-of-mass energy of √s = 14 TeV is expected to be about 7 particles per unit of pseudorapidity, resulting in a total of ∼250 (400) charged tracks within the TPC acceptance |η| < 0.9 (or within the extended acceptance |η| < 1.5, when also including tracks with only a short path through the TPC). Clearly, tracking under such pile-up conditions remains feasible, since the occupancy is more than an order of magnitude below the design value of the TPC.
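The quoted pile-up figures follow from a one-line estimate, reproduced here as a sanity check (numbers taken from the text above):

```python
rate = 200e3        # inelastic pp interaction rate [Hz]
drift_time = 88e-6  # TPC drift time [s]

# Pile-up: interactions within one drift time before or after the trigger.
pileup = rate * 2 * drift_time   # ~35, i.e. roughly the quoted 40 events
# Only about half of each pile-up event's tracks lie inside the
# acceptance, so the data volume is about half as many full events.
equivalent_events = pileup / 2   # ~18, i.e. roughly the quoted 20 events
print(pileup, equivalent_events)
```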
by the LHC data. An extension of the HDPM model of CORSIKA has been developed on the basis of recent measurements by LHCb, CMS and TOTEM. The new model, GHOST, involving four-source particle production, is presented here. It correctly reproduces the pseudorapidity distributions of charged secondaries and helps in describing the data in the mid- and forward-rapidity regions, especially in the complex case of TOTEM. In parallel, simulations of cascades and EAS are also carried out in order to understand unexplained results in the energy distributions of very-high-energy γ's.
Fluctuations in systems away from thermal equilibrium represent a long-standing problem in statistical physics [Onsager and Machlup, 1953]. Well-known examples of systems in which non-equilibrium fluctuations play a particularly important role include lasers [Keay et al., 1995], proteins [Serpersu and Tsong, 1983], Josephson junctions [Kautz, 1996], and chemical reactions [Smelyanskiy et al., 1999b]. Activated processes are of great importance here: noise-induced escape is a transition from one state to another, which in a chemical system, for example, corresponds to a reaction [Smelyanskiy et al., 1999b; Huber and Kim, 1996]. In non-equilibrium systems, where the symmetries of detailed balance are broken, no general methods exist for calculating even basic quantities such as the probability distribution. This is a case where numerical and asymptotic theoretical methods for investigating the probability distribution are of particular importance.
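A minimal numerical illustration of noise-induced escape is Euler-Maruyama integration of an overdamped Langevin equation in a double-well potential; the potential U(x) = x⁴/4 − x²/2 and all parameter values below are standard textbook choices, not a system from the text.

```python
import math, random

def escape_events(D=0.25, dt=1e-3, steps=200000, seed=1):
    """Euler-Maruyama integration of overdamped Langevin dynamics,
    dx = -U'(x) dt + sqrt(2 D dt) N(0,1), in the double-well potential
    U(x) = x^4/4 - x^2/2. Counts noise-induced escapes from the left
    well (x = -1) over the barrier to the right well."""
    rng = random.Random(seed)
    x = -1.0
    escapes = 0
    for _ in range(steps):
        x += -(x**3 - x) * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)
        if x > 1.0:          # reached the right well: record and reset
            escapes += 1
            x = -1.0
    return escapes

print(escape_events())
```

Histogramming the visited positions in the same loop gives a numerical estimate of the stationary probability distribution, which is exactly the kind of quantity that lacks a general closed form once detailed balance is broken.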
The 15th International Workshop on Meson Physics - MESON 2018 - took place in Cracow, Poland, from 7 to 12 June 2018. We were celebrating the 15th edition of this conference; therefore the opening of the conference was devoted to memories of previous editions, and during the banquet an anniversary cake was served. At this point, we also want to recall briefly the history of this series of conferences. The tradition of MESON conferences dates back to 1991. The main initiator of the first MESON conference was Walter Oelert, and it was co-chaired by Eckart Grosse and Andrzej Magiera. The first three meetings were thematically related to the physics program planned for the COSY accelerator and started at GSI. The meetings were attended by many experts in meson physics, and the program was not limited only to the low-energy meson physics the organizers dealt with. We then realized that meetings of this kind were very useful for the whole community involved in various aspects of meson physics, and therefore decided to continue our meetings, extending their program beyond standard meson physics. Since 1996 our conferences have carried their current name, MESON, and have become traditional meetings of the meson community. Since the beginning, the Jagiellonian University and Forschungszentrum Jülich have been the conference organizers; the third organizer, GSI Darmstadt, was replaced by LNF-INFN Frascati in 2000, and a fourth organizer, INP-PAS Cracow, joined in 2010. Over the years the chairmen have also changed. Among them, the largest number of conferences were presided over by Carlo Guaraldo, Hartmut Machner, Lucjan Jarczyk, St. Kistryn, Andrzej Magiera and Hans Ströher. The number of participants, about 70 at the beginning, increased and stabilized at about 180 in recent conferences. Over 1200 physicists from all over the world have participated in the conferences, and more than a thousand contributions have been published in the conference proceedings.
While object-oriented physics generators in other fields of high-energy physics evolved from well-established legacy systems, in neutrino physics no such 'canonical' MC exists. Until quite recently, most neutrino experiments developed their own neutrino event generators, due partly to the wide variety of energies, nuclear targets, detectors, and physics topics being simulated. Without doubt these generators, the most commonly used of which have been GENEVE, NEUT, NeuGEN, NUANCE and NUX, played an important role in the design and exploration of the previous and current generation of accelerator neutrino experiments. Tuning on the basis of unpublished data from each group's own experiment has not been unusual, making it virtually impossible to perform a global, independent evaluation of the state of the art in neutrino interaction physics simulations. Moreover, limited manpower and the fragility of the overextended software architectures meant that many of these legacy physics generators did not keep up with the latest theoretical ideas and experimental measurements. A more recent development in the field has been the direct involvement of theory groups in the development of neutrino event generators, such as the NuWRO and GiBUU packages, and the inclusion of neutrino scattering in the long-established FLUKA hadron scattering package.
To examine the boundary condition problems discussed above we have performed an importance-sampled calculation with a system of 108 particles. A time step of Δt = 0.05 × 10⁻⁵ s was used, and the initial ensemble contained 100 systems distributed according to the variational wave function. In Figure 4.4 we show the effects of extrapolating the radial distribution function obtained from this run by using the variational distribution. Again the results are compared with GFMC values. Good agreement between the diffusion Monte Carlo and GFMC results is generally observed. The two simulations were performed at slightly different densities, ρ_DMC = 0.4 and ρ_GFMC = 0.401, so the small variations in the results are probably associated with this density difference. Agreement between the predicted eigenvalues (including only long-range corrections) is also good: E_DMC = −6.78 ± 0.06 and E_GFMC = −6.743 ± 0.033 K/molecule. Whitlock et al. (1979) have obtained a perturbation estimate of the three-body correction, and at this density they give ⟨V_3b⟩ = 0.206 ± 0.002 K/molecule, or about 3% of the two-body values given above. When the three-body correction is made, both quantum Monte Carlo calculations give ground-state energies approximately 0.5 K/molecule higher than the experimental value E_exp = −7.00 K/molecule (Roach, Ketterson and Woo (1970)). This discrepancy is a result of the inadequacy of the Lennard-Jones potential (Whitlock et al. (1981)).
models where the algebra is an algebra of matrices. Thus one can construct perfectly computable models of random geometry that are not lattice models. Moreover, the standard model of particle physics has a non-commutative geometry using exactly the same framework [4, 11], so the hope is that it will be easy to combine the two into a unified model of gravity and particle physics. The technical details of the Dirac operators, observable functions and Monte Carlo method are given in section 2. The results for the action Tr D² are given in section 3, where it is explained how the results relate to the standard theory of Gaussian random matrices. Actions including a Tr D⁴
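The connection between the action Tr D² and Gaussian random matrices can be made concrete with a small Metropolis sampler. The sketch below samples a real symmetric matrix with weight e^{−g Tr D²}; it is a toy for the Gaussian limit only, with assumed parameters, and not the full Dirac-operator ensembles of the text.

```python
import math, random

def metropolis_trace_d2(n=4, g=1.0, sweeps=4000, step=0.6, seed=0):
    """Metropolis sampling of a real symmetric n x n matrix D with
    action S(D) = g * Tr(D^2). Returns an estimate of <Tr D^2>,
    whose analytic value is n/(2g) + n(n-1)/(4g)."""
    rng = random.Random(seed)
    D = [[0.0] * n for _ in range(n)]
    samples = []
    for sweep in range(sweeps):
        for i in range(n):
            for j in range(i, n):
                old = D[i][j]
                new = old + rng.uniform(-step, step)
                mult = 1.0 if i == j else 2.0  # off-diagonals enter Tr D^2 twice
                dS = g * mult * (new**2 - old**2)
                if dS <= 0 or rng.random() < math.exp(-dS):
                    D[i][j] = D[j][i] = new
        samples.append(sum(D[i][j] ** 2 for i in range(n) for j in range(n)))
    burn = sweeps // 4
    return sum(samples[burn:]) / len(samples[burn:])

print(metropolis_trace_d2())  # analytic value for n=4, g=1 is 5
```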
The Standard Model of elementary particles needs an extension to explain several observed facts, one of which is that neutrinos are massive. The B-L model (baryon number minus lepton number) is a simple extension of the Standard Model which plays an important role in various physics scenarios beyond the Standard Model (SM). Firstly, the gauged U(1)_{B-L} symmetry group is contained in a Grand Unified Theory (GUT) described by an SO(10) group. Secondly, the scale of the B-L symmetry breaking is related to the mass scale of the heavy right-handed Majorana neutrino mass terms [2,3], providing the well-known see-saw mechanism of light-neutrino mass generation. Three heavy singlet (right-handed) neutrinos ν_h
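The see-saw mechanism invoked here takes the standard type-I form (a textbook relation, not quoted from this text): with Dirac mass matrix m_D and heavy Majorana mass scale M_R,

```latex
m_\nu \simeq -\, m_D\, M_R^{-1}\, m_D^{T} \;\sim\; \frac{m_D^{2}}{M_R},
```

so light-neutrino masses are suppressed by the heavy right-handed scale associated with B-L breaking.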
Interesting physics processes are likely to be rare, so a high luminosity (collision rate) is a driving factor in the design of a collider. The beam repetition rate is inherently lower at a linear collider than at a circular collider (for example, 40 MHz at the LHC vs. 50 Hz at CLIC). Thus, to achieve high luminosity at a linear collider, a smaller spot size is required. This, however, results in strong electromagnetic radiation (beamstrahlung) caused by the opposing beams interacting with each other, which dilutes the luminosity spectrum and creates large backgrounds.
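The trade-off between repetition rate and spot size follows from the standard geometric luminosity formula. The sketch below evaluates it with illustrative CLIC-like numbers (the parameter values are assumptions for illustration, not official design parameters, and beam-beam enhancement factors are neglected):

```python
import math

def luminosity(f_rep, n_b, N, sigma_x, sigma_y):
    """Geometric collider luminosity L = f_rep * n_b * N^2 / (4 pi sx sy)."""
    return f_rep * n_b * N**2 / (4 * math.pi * sigma_x * sigma_y)

# Illustrative CLIC-like numbers (assumptions, not official parameters):
L = luminosity(f_rep=50.0,            # train repetition rate [Hz]
               n_b=312,               # bunches per train
               N=3.7e9,               # particles per bunch
               sigma_x=45e-9 * 100,   # horizontal spot size [cm]
               sigma_y=1e-9 * 100)    # vertical spot size [cm]
print(f"{L:.2e} cm^-2 s^-1")
```

With a repetition rate four to five orders of magnitude below the LHC's, only nanometre-scale vertical spot sizes bring L back to the 10³⁴ cm⁻² s⁻¹ range, which is precisely what drives the beamstrahlung problem.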
This paper analyzes the effects of CDO issuance on the default risk of banks. Previous literature showed that the overall riskiness of a bank can increase when it sells part of the loans in its portfolio by issuing a CDO of which it retains the equity tranche. Using Monte Carlo simulations, this paper confirms previous results but also highlights that they can change substantially if one modifies the hypothesis regarding how the proceeds of securitizations are reinvested. The assessment of the effects of securitizations on bank stability is thus mainly a matter of empirical research. Using data for Italian banks, I provide evidence that securitization activity has been a relevant factor in changing the composition of the asset side of banks' balance sheets. Results also show that these changes have probably contributed to lowering the average ex-ante riskiness of Italian banks. I also compare the riskiness of loans that have been securitized with that of new loans granted by the same securitizing banks, using loan-by-loan data. Results show that new loans are on average riskier than loans that have been securitized, pointing to an increasing amount of risk borne by banks as a consequence of the reinvestment of the proceeds of securitizations.
A cell is modeled as a slab of tissue with a single nanoparticle attached to its surface (Figure 1). An analytical model and a Monte Carlo simulation were compared for various source energies, nanoparticle materials, and concentrations. Endothelial cell lines of the tumor vasculature are the most likely to be exposed to nanoparticles. For both approaches, an endothelial cell was modeled as a 10 × 10 × 2 µm slab composed of 10% hydrogen, 11% carbon, 2% nitrogen, and 76% oxygen, following the International Commission on Radiation Units and Measurements. X-ray energies from 10 to 900 keV were chosen to model the low-energy photon beam. Nanoparticle concentrations of 5, 10, 15, and 20 mg/g of tissue, evenly distributed through the cell, were chosen based on their usage
In this paper, we investigate three existing methods for outperforming MC, namely multilevel Monte Carlo (MLMC), quasi-Monte Carlo (QMC) and multilevel quasi-Monte Carlo (MLQMC). We apply these methodologies to the problem of travel-time estimation in heterogeneous porous media. This is of central importance in a range of engineering applications, from groundwater management to groundwater remediation. It also involves the development of mathematical models for reactive transport in porous media. These models are used to assess, for instance, groundwater contamination, CO2 sequestration, and residence-time distributions. The QoI considered in this study will be the result
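The MLMC idea referred to above can be sketched in a few lines: the expectation on the finest level is written as a telescoping sum, and each correction term is estimated independently with fewer samples on the expensive levels. The toy level hierarchy below is hypothetical (a one-dimensional stand-in for a porous-media solver), chosen only so the exact answer is known.

```python
import random

def mlmc_estimate(sampler, L=3, n_samples=(4000, 1000, 250, 60), seed=0):
    """Generic multilevel Monte Carlo mean estimator,
        E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}],
    with each term estimated independently; fine levels need far fewer
    samples because the corrections have small variance."""
    rng = random.Random(seed)
    total = 0.0
    for l in range(L + 1):
        n = n_samples[l]
        acc = 0.0
        for _ in range(n):
            fine, coarse = sampler(l, rng)  # correlated pair on levels l, l-1
            acc += fine - (coarse if l > 0 else 0.0)
        total += acc / n
    return total

# Toy "level-l model": P_l = X + X^2 / 2^l with X ~ N(1, 0.1); fine and
# coarse evaluations share the same X, so each correction is small.
def toy_sampler(l, rng):
    x = rng.gauss(1.0, 0.1)
    fine = x + x * x / 2**l
    coarse = (x + x * x / 2**(l - 1)) if l > 0 else 0.0
    return fine, coarse

est = mlmc_estimate(toy_sampler)
print(est)  # exact answer: E[X] + E[X^2]/8 = 1 + 1.01/8 = 1.12625
```

The same skeleton applies when `sampler` runs a flow solver on two mesh levels with a shared random permeability field; QMC and MLQMC replace `rng` with a low-discrepancy sequence.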
The pair correlation function is a measure of the microscopic structure of matter. Thermodynamic quantities that depend on the pair potential can be extracted directly from the pair correlation function, so it provides a suitable benchmark for validating molecular-scale simulations. Here we simulate simple homogeneous equilibrium electrolytes at concentrations of physiological interest using three quite different simulation methodologies: Equilibrium Monte Carlo (EMC), Molecular Dynamics (MD), and Boltzmann Transport Monte Carlo (BTMC). The ion-ion pair correlation functions computed for both monovalent and divalent electrolytes agree very well among the three methodologies.
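Whatever the simulation methodology, the pair correlation function is computed the same way from particle configurations: histogram pair distances and normalize by the ideal-gas shell count. A self-contained sketch (tested on an uncorrelated gas, for which g(r) ≈ 1; the grid and box parameters are arbitrary choices):

```python
import math, random

def pair_correlation(positions, box, dr=0.1):
    """Radial distribution function g(r) for particles in a cubic
    periodic box (minimum-image convention), for r up to box/2."""
    n = len(positions)
    r_max = box / 2
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                dx = positions[i][a] - positions[j][a]
                dx -= box * round(dx / box)   # minimum image
                d2 += dx * dx
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 1
    rho = n / box**3
    g = []
    for k in range(nbins):
        shell = 4.0 / 3.0 * math.pi * (((k + 1) * dr)**3 - (k * dr)**3)
        ideal = 0.5 * n * rho * shell   # expected pair count in the shell
        g.append(hist[k] / ideal)
    return g

# Sanity check on an ideal (uncorrelated) gas: g(r) should be ~1.
rng = random.Random(2)
pts = [[rng.uniform(0, 10) for _ in range(3)] for _ in range(400)]
g = pair_correlation(pts, box=10.0)
print(sum(g[10:50]) / 40)   # ~1 away from the noisy small-r bins
```

For an electrolyte, running separate histograms over like and unlike ion pairs gives the ion-ion correlation functions compared in the text.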
A no-slip boundary condition was imposed on the interface between the bound platelets and the fluid. For small, dense platelet deposits, the flow within the clot is likely to be small; for larger structures, pressure-driven flow through the porous clot can contribute to the transport of soluble species. The structure of a clot in vivo is not well understood, nor are the transport properties within it. In principle, the pore size within the clot structure could be predicted from a three-dimensional simulation, and the flow through the structure could be resolved at the fluid pore level. However, platelets may not maintain an idealized shape inside the clot structure, which will change the pore-size distribution, and after approximately 10 minutes platelets will also spread out on the surface. Furthermore, resolving fluid flow at the pore level is computationally costly, so a more coarse-grained approach may be more appropriate. Leiderman and Fogelson used a Brinkman term within the Navier-Stokes equations for the porous flow through the clot structure. A similar scheme is easily adapted to simulations using lattice Boltzmann methods. Xu et al. described flow within the clot using Darcy's law, coupled to the usual Navier-Stokes equations in the rest of the domain. However, with current experimental data, it is unclear what the appropriate parameters for these models are, or whether intra-clot flow is important to the dynamics of clot growth under the conditions studied here.
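For reference, the Brinkman modification mentioned above adds a drag term to the incompressible Navier-Stokes momentum equation (standard form; the permeability field k(x) is a model input whose values are not given in this text):

```latex
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} - \frac{\mu}{k(\mathbf{x})}\,\mathbf{u},
```

with k small (strong drag, Darcy-like behavior) inside dense clot regions and k → ∞ (the drag term vanishing) in open fluid, so a single equation covers both regions without an explicit interface.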
This paper examines the forecasting power of various model-free option-implied volatilities for future returns and realized volatility via both Monte Carlo simulations and an empirical study using SPX options. By decomposing the model-free implied volatility into different components using various segments of the out-of-the-money (OTM) put and call options, this study ascertains the role of each component. The paper provides a simulation study of the impact of the strike range and strike increment on the predictive power of the implied volatilities. Results show that: first, the forecast accuracy for future volatility improves with the range of strikes; second, the strike range exerts a negative impact on the predictive power of the implied volatilities for future returns; third, a finer partition of strikes tends to improve the performance of implied volatilities in predicting returns. These findings warrant the application of an interpolation and extrapolation scheme to enhance the forecasting power of implied volatilities for future volatility, while only an interpolation method is needed in the case of return predictions.
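The model-free implied volatility discussed here is typically computed from a discretized strip of OTM option prices; a CBOE-style sketch is shown below. The strike/price grid is hypothetical, for illustration only, and the paper's own decomposition into strike segments would simply restrict the sum to a sub-range of strikes.

```python
import math

def model_free_variance(otm_quotes, F, K0, r, T):
    """CBOE-style discretization of the model-free implied variance:
        sigma^2 = (2 e^{rT}/T) * sum_i (dK_i / K_i^2) * Q(K_i)
                  - (1/T) * (F/K0 - 1)^2
    otm_quotes: list of (strike, OTM option mid-price) sorted by strike;
    K0 is the first strike at or below the forward F."""
    ks = [k for k, _ in otm_quotes]
    acc = 0.0
    for i, (k, q) in enumerate(otm_quotes):
        if i == 0:
            dk = ks[1] - ks[0]
        elif i == len(ks) - 1:
            dk = ks[-1] - ks[-2]
        else:
            dk = (ks[i + 1] - ks[i - 1]) / 2.0   # central strike spacing
        acc += dk / k**2 * q
    return (2.0 * math.exp(r * T) / T) * acc - (F / K0 - 1.0) ** 2 / T

# Hypothetical 30-day strike/price grid, for illustration only:
quotes = [(80, 0.5), (90, 1.8), (100, 4.0), (110, 1.5), (120, 0.4)]
var = model_free_variance(quotes, F=100.5, K0=100, r=0.01, T=30 / 365)
print(math.sqrt(var))   # annualized model-free implied volatility
```

Widening the strike range adds terms to the sum while shrinking the increment refines dK, which is the mechanism behind the strike-range and partition effects studied in the paper.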
through application of a discrete-event system model that simulates life-cycle scenarios of terminal operations. The system must include the reliability of the units, based on general historical data or the experts' best judgement, and the operating policies are also included in the model. The computer model can then be run through a life-cycle of the terminal. Since the model includes events set by a random-number generator, each run produces a different result. Following the Monte Carlo method, the model must be run a sufficient number of times to generate a probability distribution; typically more than 100 simulations are required.
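The scheme just described can be sketched with a toy reliability model: one discrete-event run alternates random up-times and repair times for a unit over its operating life, and repeating the run many times yields the empirical distribution. All parameter values below (MTBF, MTTR, horizon) are illustrative assumptions, not terminal data.

```python
import random

def lifecycle_downtime(mtbf=2000.0, mttr=48.0, horizon=8760.0, rng=random):
    """One discrete-event life-cycle run for a single terminal unit:
    alternating exponential up-times (mean mtbf hours) and repair
    times (mean mttr hours) over one year of operating hours.
    Returns the total downtime in hours."""
    t, down = 0.0, 0.0
    while t < horizon:
        t += rng.expovariate(1.0 / mtbf)       # time to next failure
        if t >= horizon:
            break
        repair = rng.expovariate(1.0 / mttr)   # repair duration
        down += min(repair, horizon - t)
        t += repair
    return down

def monte_carlo(n_runs=500, seed=7):
    """Repeat the life-cycle run to build an empirical downtime
    distribution -- the Monte Carlo step, well above the ~100-run
    minimum mentioned in the text."""
    rng = random.Random(seed)
    return sorted(lifecycle_downtime(rng=rng) for _ in range(n_runs))

results = monte_carlo()
print(results[len(results) // 2])   # median annual downtime in hours
```

Sorting the results gives quantiles directly, so availability targets can be read off the empirical distribution rather than from a single deterministic run.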