Some topics in theoretical high-energy physics

The standard model of the early universe has recently been used to place constraints on the masses and lifetimes of possible nearly-stable heavy neutrino-like particles.

Topics in theoretical condensed matter physics

Using similar arguments, we have given a simple proof that all local conserved quantities of a quadratic time-independent hamiltonian with delocalizing dynamics are themselves quadratic, and hence that the GGE density operator of such a system is gaussian. We have described how to construct the GGE out of mode occupation numbers in a manner that properly accounts for degeneracies in the mode spectrum. Under an additional assumption on the initial state (needed to avoid having to deal with hydrodynamic timescales comparable to the system size), we have shown that the local 2-point function of the system relaxes to its GGE value with a power law whose exponent can typically be extracted from the local density of single-particle levels at the band edge. Combined with our gaussification results, this proves relaxation to the GGE for a large class of quadratic systems and a large family of initial states, and also gives quantitative information about how local observables relax. We find that, if the initial state has a density wave of some conserved quantity, the system generically relaxes first to a gaussian state, and then, with a smaller inverse power of time, to the GGE. If the initial state is not ordered in this sense, “gaussification” and relaxation to the GGE occur with the same powers of time and cannot be distinguished as easily in general.
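
For reference, the GGE built from mode occupation numbers has the standard gaussian form (a schematic statement of the construction described above; the Lagrange multipliers λ_n are fixed by the initial expectation values, and degenerate modes require the extra care the excerpt mentions):

$$\rho_{\rm GGE} = \frac{1}{Z}\,\exp\Big(-\sum_n \lambda_n\,\hat{n}_n\Big), \qquad \operatorname{Tr}\!\big(\rho_{\rm GGE}\,\hat{n}_n\big) = \langle \hat{n}_n \rangle_{t=0},$$

where the $\hat{n}_n$ are the occupation numbers of the single-particle modes and $Z$ normalizes the state.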

Topics in theoretical particle physics and cosmology

The other model parameters, namely the width d, circumference L, cut-off scale M_5 and overall coupling g, are treated as fixed. This treatment must appear quite arbitrary, and it really is. Among the myriad of other possibilities are to scan some or all of these parameters, to allow the up-sector and the down-sector to have different values of g or to allow their couplings to be scanned independently, to choose a different width d for different wavefunctions, etc. An extreme version of the landscape would allow everything to scan, leaving no fixed parameters to be input by hand. We adopt the assumption above—namely four fixed parameters—out of a practical attitude. Nothing is known about the distribution of these parameters within string theory, let alone the appropriate weighting factors coming from cosmological evolution and environmental selection effects. If the distribution of these parameters is sharply peaked around a certain point in the four-dimensional parameter space, then these parameters can be treated as effectively fixed. Such a possibility is not implausible, because some toy landscapes predict Gaussian distributions for some parameters, and moreover, cosmological evolution may yield exponentially steep weight factors.
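
To make the "scan versus fix" distinction concrete, a toy Monte Carlo sketch follows. Everything in it is an assumption made for illustration: in models of this general type, effective couplings often arise from overlaps of wavefunctions of width d, and the point being illustrated is only that a sharply peaked prior makes a scanned parameter behave as effectively fixed.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Toy observable: an overlap-suppressed coupling, the generic form for
# two Gaussian wavefunctions of width d separated by dx. Illustrative
# only; not the specific model of the text.
def coupling(g, d, dx=1.0):
    return g * np.exp(-dx**2 / (4.0 * d**2))

# Case 1: all parameters fixed by hand (the treatment adopted above).
y_fixed = coupling(g=1.0, d=0.3)

# Case 2: scan d over a sharply peaked (assumed) landscape distribution.
d_scanned = rng.normal(loc=0.3, scale=0.003, size=n_samples)  # 1% width
y_scanned = coupling(g=1.0, d=d_scanned)

# With a sharply peaked prior, the scanned result is effectively fixed.
print("fixed:  ", y_fixed)
print("scanned:", y_scanned.mean(), "+/-", y_scanned.std())
```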

Some topics in statistical physics

$$g^{(2)}(1,2) = e^{-\beta u(r_{12},\,\omega_1,\,\omega_2)}\; y(r_{12}) \qquad (1.17)$$

The approximation has been applied to a water model where the interaction energy is a Lennard-Jones potential plus Coulomb interactions between point charges in the molecule. Numerical difficulties have prevented the solution of the PY equation for this potential however, and Monte Carlo studies [16] indicate that the approximation is not good for large orientational anisotropies in the pair potential. It is however the correct low-density form and should be a good approximation in weakly anisotropic systems. Ben-Naim [17] has also solved the PY
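
As a concrete illustration of the kind of pair potential described here, a minimal Python sketch (the parameter values, units and charge placements are assumed for illustration; the text does not specify the particular water model):

```python
import numpy as np

K_E = 332.06  # Coulomb constant in kcal*angstrom/(mol*e^2)

def lennard_jones(r, epsilon=0.155, sigma=3.15):
    """LJ interaction between two molecular centers (kcal/mol)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def coulomb(charges_a, sites_a, charges_b, sites_b):
    """Sum of Coulomb terms between the point charges of two molecules."""
    energy = 0.0
    for qa, ra in zip(charges_a, sites_a):
        for qb, rb in zip(charges_b, sites_b):
            energy += K_E * qa * qb / np.linalg.norm(ra - rb)
    return energy

def pair_energy(r_centers, charges_a, sites_a, charges_b, sites_b):
    """Total pair potential u(1,2): LJ between centers plus Coulomb."""
    return lennard_jones(r_centers) + coulomb(charges_a, sites_a,
                                              charges_b, sites_b)
```

The orientational anisotropy the excerpt discusses enters through the positions of the charge sites relative to the molecular centers: rotating one molecule changes the Coulomb sum but not the central LJ term.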

The Some Common Problems Of High Energy Physics, Gravitation And Cosmology

The envelope of this process is the locus of points of its maximum; it is a sinusoidal quantity and it is at rest in all reference frames. In other words, its phase velocity equals zero in any reference frame, i.e. it is relativistically invariant (only by means of this are the results of relativistic dynamics absolutely correct). If we change the reference frame, we obtain a different value for the wavelength of the envelope, but it remains motionless. As the computation shows, the wavelength of the envelope is exactly equal to the de Broglie wavelength, and the dependence of this wavelength on the packet velocity is the same! As you can see, the whole Unified Unitary Quantum Theory rests on the resolute exploitation of this basic idea. It should be stressed that this periodic appearing and disappearing of particles has no counterpart in standard Quantum Mechanics, since there an immovable packet does not oscillate. The requirement of relativistic invariance, which must be the main requirement for any theory, specifies the idea further. It states the following: if the Lord were to excite a wave packet in the space continuum with his finger and then take it away, the packet would go on oscillating, like a membrane or a string after an impact. The frequency $\omega_S$ of these free oscillations is very high: it is proportional to the rest energy of the particle and it is equal to the frequency of the so
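
For orientation, the textbook relations invoked here (standard formulas, not specific to the theory being described): the de Broglie wavelength of a packet moving with velocity $v$, and the frequency scale set by the rest energy:

$$\lambda_{\rm dB} = \frac{h}{p} = \frac{h}{\gamma m v}, \qquad \omega_0 = \frac{m c^2}{\hbar}, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.$$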

Topics in Theoretical Astrophysics

objects captured at large distances. If there were some other mechanism that could put an inspiraling object onto an ergodic geodesic, there is the question of how the ergodicity could be identified in practice. Detection of EMRIs will rely on matched filtering or possibly time-frequency techniques [1]. In either case, it will probably not be possible to identify the gravitational radiation as being emitted from an ergodic orbit, but only that radiation from a regular orbit has ceased. It is clear from Figure 2.9 that during an ergodic phase the emitted power is spread among many harmonics, which will consequently not be individually resolvable. This radiation will increase the broadband power in our detector, whereas if the orbit had plunged the radiated power would rapidly die away. However, the energy released during a typical EMRI is comparatively low, so it is unlikely that we could identify the presence of such broadband power over the instrumental noise. Therefore, the chances are that we will not be able to distinguish observationally between an inspiral that “ends” at a transition into an ergodic phase and one which ends by plunging into a black hole.
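
A toy numerical illustration of the detectability argument (all numbers assumed; this is not an EMRI waveform model): the same total signal power is much harder to notice when spread over many harmonics, because no single frequency bin stands out above the noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 1000          # frequency bins in our toy spectrum
noise_sigma = 1.0      # per-bin noise level (assumed)
total_power = 100.0    # total signal power, same in both cases (assumed)

noise = rng.normal(0.0, noise_sigma, n_bins)

# Regular orbit: power concentrated in a few resolvable harmonics.
concentrated = np.zeros(n_bins)
concentrated[[10, 20, 30]] = np.sqrt(total_power / 3.0)

# Ergodic orbit: the same power spread over many harmonics.
spread = np.zeros(n_bins)
spread[:500] = np.sqrt(total_power / 500.0)

for label, signal in [("concentrated", concentrated), ("spread", spread)]:
    data = signal + noise
    # Significance of the strongest bin, in units of the noise sigma.
    print(label, "max bin SNR ~", round(np.max(np.abs(data)) / noise_sigma, 1))
```

The concentrated harmonics appear at roughly the 6-sigma level, while the spread case is indistinguishable from the largest noise fluctuation.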

High Energy Physics and Cosmology as Computation

To start with, in quantum mechanics we have quantum particles and quantum waves and hardly anything else. However, there is no number assigned a priori to either a quantum particle or a quantum wave. In string theory we have some definite numbers, because the vague point-like particles of quantum mechanics, which have no spatial extension, are replaced by strings with the dimension of a line, namely D = 1. The nearest we come to a fundamental object resembling a wave in string theory is the world sheet, which has dimension two, so we may start thinking of using two numbers, namely one and two. That way we may regard our 3D space as a string interacting with a world sheet, while spacetime is the interaction of two world sheets. Other interpretations are of course possible and were proposed in different forms. This becomes more interesting as we move to E-infinity and von Neumann-Connes fractal strings [28]-[33]. Here we have, besides the topological dimension, a Hausdorff dimension. Thus the counterpart of a D = 1 string is $D = 1/\phi = 1 + \phi$, the inverse of the Hausdorff dimension $\phi$ of the pre-quantum particle zero set which was replaced by a string in string theory. Numerically it is a string plus a pre-quantum particle, giving rise to what we call a fractal string with a Hausdorff dimension equal to $1 + \phi = 1.618033989$. Topologically it is however a two-dimensional object, because $d_c^{(2)} = 1/\phi = 1.618033989$. As for the fractal object corresponding to the D = 2 world sheet, we have in this case another object with $D = 2 + \phi = 2.618033989$. This may be seen as a fractal string $1 + \phi$ plus one non-fractal string, interacting in union to give rise to $1 + \phi + 1 = 2 + \phi$, or simply a pre-quantum particle in union with a classical string world sheet (2), giving rise to what we may call a fractal world sheet
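
The golden-mean arithmetic used above can be checked directly; a minimal sketch:

```python
import math

# Golden mean as used in E-infinity theory: phi = (sqrt(5) - 1) / 2.
phi = (math.sqrt(5) - 1) / 2

print(phi)        # 0.6180339887498949
print(1 / phi)    # 1.618033988749895  -> equals 1 + phi
print(1 + phi)    # 1.618033988749895  (fractal string dimension)
print(2 + phi)    # 2.618033988749895  (fractal world sheet dimension)

# Defining identity of the golden mean: 1/phi = 1 + phi,
# equivalently phi**2 + phi - 1 = 0.
assert abs(1 / phi - (1 + phi)) < 1e-12
```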

Sizes and distances in high energy physics

Vladimir Petrov, Institute for High Energy Physics, NRC Kurchatov Institute, Protvino, Russia

Abstract. This is a critical discussion of the physical relevance of some space-time characteristics which are in current use in high energy physics.

The sociology of theoretical physics

Pickering refers to the first stage as ‘the old physics’ of the early 1960s, which “was characterised by its common sense approach to elementary particle phenomena. Experimenters explored high cross-section processes, and theorists constructed models of what they reported.” In fact, the construction of these models was a massive effort of data-fitting, where “the use of conservation laws, symmetry principles and group theory brought some order into the proliferation of particles”. Pickering shows how the field-theoretic, symmetry-based explanations that laid the ground for the Standard Model appeared as a result of trying to gain further insight into the ‘population explosion’ of particle physics, whereby the number of fundamental particles discovered in accelerator experiments grew from a handful to over seventy types in just a few years. Pickering points out that the ‘symmetry’ school that gave rise to the Standard Model was one of two major theories to tackle the population explosion problem, and that in fact by the later part of the ’60s the alternative classification known as S-matrix/bootstrap theory dominated the field in terms of publication numbers. Eventually this alternative was surpassed by the field-theory/symmetry approach, but during the late 1960s the nowadays dominant field theory approach was not yet established truth. The most sociologically relevant point is the transformation in terms of levels of belief. In time, the theoretical data-fitting model started gaining distance from experiment and, importantly, started making successful predictions of unseen phenomena, until eventually it mutated into a ‘theory’ that could advance without being tied to the data-fitting process.

Economics and Theoretical Physics

Anyone who can afford to can put a solar panel or windmill in their backyard and enjoy free energy; however, because these renewable energy sources have a cost structure, they fit in with mainstream energy sources. The absence of this cost factor is what makes zero-cost energies hypothetically impractical. Therefore, even if modern-day inventors were able to invent devices able to produce energy at no cost or at very low cost, it is likely that governments would inevitably levy specific rates on energy obtained in this way. For example, if you could stick a rod in the ground that costs ten bucks and power your household or business forever without spending another cent on energy, this would be an example of zero-cost energy. Though the energy costs nothing to access, businesses and households may at some point still be required by law to have their consumption metered and to pay the government and a supplier something for the energy they consume, ensuring there is no revenue vacuum in the evolution from one technology platform to another. A government may set the price of this metered free energy such that it fits in with the price of other conventional sources of renewable and non-renewable energy, allowing a safer introduction of these new technologies and whatever additional benefits they may bring by further diversifying humanity’s energy sources. Even if a person invented a device that allows water to be poured into a car instead of gasoline, energy obtained in this way would be classified as zero-cost energy and still be required to be measured and effectively priced by government. It might be cheaper, but it would not be allowed to arbitrarily devastate other energy sources in the existing renewable and non-renewable energy sector. In this regard the price of zero-cost energy of this kind would be known even

In theoretical physics, paradoxes are

By fitting some fine details of the pattern of masses, one can get an estimate of what the quark masses are and how much their masses are contributing to the mass of the proton and neutron. It turns out that what I call QCD Lite—the version in which you put the u and d quark masses to zero, and ignore the other quarks entirely—provides a remarkably good approximation to reality. Since QCD Lite is a theory whose basic building blocks have zero mass, this result quantifies and makes precise the idea that most of the mass of ordinary matter—90% or more—arises from pure energy, via $m = E/c^2$.
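
A rough version of the arithmetic behind the “90% or more” claim, using present-day light-quark masses for orientation (the excerpt itself quotes no numbers):

$$m_u \approx 2\ \mathrm{MeV},\quad m_d \approx 5\ \mathrm{MeV} \;\Rightarrow\; 2m_u + m_d \approx 9\ \mathrm{MeV} \ll m_p \approx 938\ \mathrm{MeV},$$

so the valence-quark rest masses account for only about 1% of the proton mass; essentially all the rest is the energy of the quark and gluon fields.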

Support vector machines in high-energy physics

Fig. 10: Results for the comparison between neural networks and SVMs for a flavour separation problem from the OPAL experiment at LEP [18]

7 Conclusion

After the presentation of the basics of support vector machines, tools and results, this conclusion includes some advice on when and why to use SVMs. Firstly, it is probably not possible to replace an existing neural network algorithm that works fine with an SVM and directly get better performance. Neural networks still perform at least as well as out-of-the-box SVM solutions in HEP problems, and they are usually faster when the data is noisy and the number of support vectors is therefore high. This is a specific problem in HEP and should be addressed, for example, in the preprocessing step. Another point of importance in HEP, which has to be clarified, is the robustness against training data that does not exactly reproduce real data. On the other hand, SVMs are a new tool for HEP that is easy to use, even for a non-expert. So it is worth trying them on new problems and comparing the results with other out-of-the-box multivariate analysis methods. The advantage of SVMs is that they are theoretically well understood, which also opens the way for new, interesting developments, and if more people use them in physics, new solutions may lead to classifiers that are better than today’s algorithms. It has to be kept in mind that neural networks, now common in data analysis, took quite some time to become accepted and tailored for HEP problems. Furthermore, the kernel trick is a general concept. Having experience with kernels suitable for SVMs in HEP opens the possibility to use other kernel methods like the ones mentioned in Section 2.1. For those who want to know more about tools, documentation and publications on this topic, a good starting point is the website www.kernel-machines.org.
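
As a hands-on starting point of the kind the conclusion recommends, a minimal sketch comparing an out-of-the-box SVM with a small neural network on toy data (scikit-learn is an assumed choice; the text names no library, and a real HEP study would use proper physics samples and preprocessing):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in for a signal/background separation problem.
X, y = make_classification(n_samples=4000, n_features=10,
                           n_informative=6, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    # RBF-kernel SVM, used "out of the box" as the text suggests.
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    # Small feed-forward neural network for comparison.
    "NN": make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(20,),
                                      max_iter=2000, random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "test accuracy:", round(model.score(X_test, y_test), 3))
```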

Introduction to neural networks in high energy physics

Figure 8. Backpropagation of errors in a neural network. (Figure adapted from [4].)

4.3 Overtraining revisited

We have seen that a network with a sufficient number of hidden neurons can evolve into a representation of the optimal discriminating function for a given problem by adapting to the training data set during the learning phase - but what if the network adapts too well to the training data? This situation may be compared to a fitting problem where one tries to describe a set of data points, which were generated from a linear model with some noise on top (the noise is usually associated with an uncertainty of the measurement), by polynomials of varying order. Even though the underlying model is linear, a polynomial of sufficient order will always be able to pass through all the data points and thus produce smaller residuals. But this supposedly better fit will have low predictive power since it does not generalize well: an interpolation (or, even more extreme, an extrapolation) using this polynomial will yield predictions which typically differ significantly from the underlying model. Overfitting has occurred. In the realm of neural networks, a behavior like this is called overtraining, and we have already encountered it in our discussion of neuron training, where it led to overly confident class assignments.
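
The polynomial analogy is easy to reproduce numerically; a minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy data from an underlying *linear* model, as in the text.
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + 1.0 + rng.normal(0.0, 0.2, x_train.size)

# Held-out points from the same model, to test generalization.
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2.0 * x_test + 1.0

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_rms = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_rms = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    # Degree 9 passes through all 10 points (train RMS ~ 0) but
    # interpolates poorly: the held-out RMS grows. That is overfitting.
    print(f"degree {degree}: train RMS {train_rms:.3f}, test RMS {test_rms:.3f}")
```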

High-energy neutrino interaction physics with IceCube

We determined the energy range for which this measurement applied by studying the change in likelihood as we turned off Earth absorption, first starting from very low energies, working upward, and then starting from very high energies, working downward. The points where the likelihood worsened by 2ΔLLH = 1 gave us the minimum and maximum energies respectively, 6.3 TeV and 980 TeV. Figure 2 shows this result, along with previous lower-energy results from accelerator experiments. The data do not show a large rise, as would be expected in some BSM theories, particularly those involving leptoquarks or additional rolled-up spatial dimensions.
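
Schematically, the procedure described (finding where the log-likelihood worsens by a fixed amount as a threshold is moved) looks like the sketch below; the likelihood curve here is an assumed stand-in, not IceCube's:

```python
import numpy as np

# Toy stand-in for 2*dLLH as a function of a log10(energy) threshold.
# In the real analysis this comes from refitting with Earth absorption
# turned off below (or above) the threshold; here we just assume a
# smooth curve with a minimum, purely for illustration.
log10_e = np.linspace(3.0, 6.5, 500)          # energy thresholds in log10(GeV)
two_delta_llh = (log10_e - 4.8) ** 2          # assumed parabola, best fit at 10^4.8 GeV

# The measurement "applies" where the fit is not worse than the best
# point by more than 2*dLLH = 1.
inside = log10_e[two_delta_llh <= 1.0]
print(f"range: 10^{inside.min():.2f} GeV to 10^{inside.max():.2f} GeV")
```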

Theory construction in high-energy particle physics

models have been split into phenomenological and dynamical, while theory has been split into dynamical models and theoretical framework. There are a few reasons for this. First, the term “model” is ambiguous. In the first sense, we can understand the term as used in model theory. Then a model is simply an interpretation of a theory. Take, as an example, the theory of general relativity. Mathematically, any model of the theory is given by a specification of a tuple $\langle M, g_{\mu\nu}, T_{\mu\nu}\rangle$, including the manifold $M$, a pseudo-Riemannian metric tensor $g_{\mu\nu}$, and a stress-energy tensor encoding the matter content, $T_{\mu\nu}$. In terms of model theory, the class of these models satisfying the Einstein equations constitutes the theory of general relativity, and any particular specification is a model of the theory.[3] Hence, an FLRW cosmological solution to the Einstein equations is a model of general relativity, though it forms the basis for the theory of cosmology. This is not usually the sense of the word “model” meant in the modeling literature in philosophy of science. This second meaning usually refers to partial constructions—with input from the relevant theory, other auxiliary theories, and perhaps phenomenology—meant to more directly model some proper subsystem that falls under a particular theory. The terminology schematized in the figure is partially meant to disambiguate between these two senses of “model”: model-theoretic models would fall under the class of dynamical models, and the theory specified by a particular class of models would be the theoretical framework. Models aimed at representing specific phenomena in the world fit under the heading of phenomenological models. The divisions in Figure 2.1B could likely be split even finer, though my primary focus is on the mutual interaction between all classes, as indicated by the double-arrows. The case of the discovery of parity violation will help to highlight the mutual interactions between, and evolution of, experiment, phenomenological models, and theoretical framework in HEP.
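
In the model-theoretic sense just described, the class of models of general relativity can be written schematically as the triples satisfying the Einstein equations:

$$\mathcal{M} = \langle M,\, g_{\mu\nu},\, T_{\mu\nu} \rangle \quad\text{with}\quad R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu},$$

and an FLRW cosmology is one such triple whose metric is spatially homogeneous and isotropic.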

Nonextensive statistical mechanics and high energy physics

Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA

Abstract. The use of the celebrated Boltzmann-Gibbs entropy and statistical mechanics is justified for ergodic-like systems. In contrast, complex systems typically require more powerful theories. We will provide a brief introduction to nonadditive entropies (characterized by indices like q which, in the q → 1 limit, recover the standard Boltzmann-Gibbs entropy) and the associated nonextensive statistical mechanics. We then present some recent applications to systems such as high-energy collisions, black holes and others. In addition to that, we clarify and illustrate the neat distinction that exists between Lévy distributions and q-exponential ones, a point which occasionally causes some confusion in the literature, very particularly in the LHC literature.
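
For reference, the nonadditive entropy referred to here, in its standard discrete form, together with its Boltzmann-Gibbs limit and the q-exponential that replaces the usual exponential in the resulting distributions:

$$S_q = k\,\frac{1-\sum_{i=1}^{W} p_i^{\,q}}{q-1}, \qquad \lim_{q\to 1} S_q = -k\sum_{i=1}^{W} p_i \ln p_i = S_{\rm BG}, \qquad e_q^{x} \equiv \big[1+(1-q)\,x\big]^{1/(1-q)}.$$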

High Energy Atmospheric Physics: Ball Lightning

Following Tesla’s vacuum hypothesis [4] [5], we suppose that when a sudden and very powerful TGF passes through the air and strikes the surface of the Earth, “the tremendous expansion of some portions of the air and subsequent rapid cooling and condensation gives rise to the creation of partial vacua in the places of greatest development of heat. These vacuous spaces, owing to the properties of the gas, are most likely to assume the shape of hollow spheres when, upon cooling, the air from all around rushes in to fill the cavity created by the explosive dilatation and subsequent contraction”.

2009 European School of High-energy Physics

… (45), such that $a_n$ is suppressed by $(\Lambda_{\rm QCD}/m_B)^n$. In principle we can calculate all the $a_n$ and get a very precise prediction. It is helpful that $a_1 = 0$. The calculation has been done for $n = 2$ and $n = 3$. The exclusive approach overcomes the problem of the b quark mass by looking at specific hadronic decays, in particular $B \to D \ell \bar\nu$ and $B \to D^* \ell \bar\nu$. Here the problem is that the decay cannot be calculated in terms of quarks: it has to be done in terms of hadrons. This is where ‘form factors’ are useful, as we now explain. The way to think about the problem is that a b field creates a free b quark or annihilates a free anti-b quark. Yet inside the meson the b is not free. Thus the operator that we care about, $\bar c \gamma^\mu b$, is not directly related to annihilating the b quark inside the meson. The mismatch is parametrized by form factors. The form factors are functions of the momentum transfer. In general, we need some model to calculate these form factors, as they are related to the strong interaction. In the B case, we can use HQS, which tells us that in the limit $m_B \to \infty$ all the form factors are universal and are given by the (unknown) Isgur-Wise function. The fact that we know something about the form factors makes the theoretical errors rather small, below the 5% level.
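
Eq. (45) itself is cut off in the excerpt; the generic structure of such a heavy-quark expansion (a sketch of the form being described, not the specific equation) is

$$\Gamma = \Gamma_0\left(1+\sum_{n\ge 1} a_n\left(\frac{\Lambda_{\rm QCD}}{m_B}\right)^{\!n}\right), \qquad a_1 = 0,$$

so the leading correction enters only at order $(\Lambda_{\rm QCD}/m_B)^2$, which is why the vanishing of $a_1$ is helpful.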

The World Wide Web and High Energy Physics

A collaboratory facilitates scientific interaction within a group by creating a new, artificial environment in which group members can interact. This new environment must be socially acceptable to those who participate and must improve their ability to work. Many computing tools must be brought together and integrated to allow seamless interaction. Some of these tools are already in wide use, such as WWW and e-mail, while others, like telepresence - the immersive electronic simulation of "being there" - are still being developed.

Some topics in homogenization

The various stages remain the same as for weak convergence, except that the convergence section is somewhat modified and the requirement on the energy $H_1$ is also slightly different. For $H_1$, we need it to be large for an increasingly large proportion of the time, as opposed to at the end of the time interval with increasingly high probability. We can approximate any continuous and bounded $f$, in the limit as time tends to infinity, by functions of compact support, by Lemma 6.1.4. By taking the positive and negative parts of $f$, we can assume the $f$ we are dealing with is positive. Given an $f$ of compact support we can approximate it uniformly by smooth functions of compact support; hence we will assume from now on that $f$ is positive, of compact support and smooth.
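
In symbols, the reduction chain sketched here is the standard approximation argument (notation assumed, not taken from the text):

$$f = f^{+} - f^{-},\quad f^{\pm} = \max(\pm f, 0) \ge 0, \qquad f^{\pm} \approx g \in C_c, \qquad \|g - g*\varphi_\varepsilon\|_\infty \to 0 \ \ (\varepsilon \to 0),$$

where $\varphi_\varepsilon$ is a smooth mollifier, so it suffices to prove the result for $f$ positive, smooth and of compact support.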
