In this thesis a cluster method for evaluating the structure of semiconductor surfaces is formulated. Chadi's total energy algorithm is used to express the total energy of a cluster as a sum of one-electron energies plus a residual energy term. The one-electron energies are calculated within Harrison's Tight-Binding Approximation, using his empirical interatomic matrix elements. The residual energy, being the difference between the ion-ion and electron-electron interaction energies, is treated as a bond-stretching energy summed over all bonds in the cluster. The energy of a bond is evaluated by comparison with the change in energy as a function of bond length as determined by a quantum chemistry calculation. A cluster includes all the atoms that are expected to be displaced from their bulk positions and enough other atoms that the displaced atoms have the correct local bonding. The edge of the cluster is saturated with hydrogen atoms. A form of self-consistency is included by relating the distribution of charge to changes in the atomic term values and iterating until self-consistency is achieved. The model is tested on the Si(111)(2x1), Si(100)(2x1) and GaAs(110)(1x1) surfaces, and then used for calculations on the Ge(111) and β-SiC(100) surfaces.
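A minimal sketch of such a charge self-consistency loop, for a hypothetical two-atom, one-orbital-per-atom model; all parameter values below are illustrative, not the thesis's empirical ones:

```python
import numpy as np

# Toy two-atom tight-binding cluster. Each atomic term value shifts in
# proportion to the excess charge on that atom (parameter U), and the
# charges are iterated to self-consistency with linear mixing.
e0 = np.array([-5.0, -3.0])   # bare atomic term values (eV), illustrative
V = -1.5                      # interatomic matrix element (eV), illustrative
U = 2.0                       # term-value shift per unit excess charge (eV/e)
q0 = np.array([1.0, 1.0])     # neutral-atom occupations
n_el = 2                      # electrons (closed shell in the lowest level)

q = q0.copy()
for _ in range(200):
    e = e0 + U * (q - q0)                     # shifted term values
    H = np.array([[e[0], V], [V, e[1]]])
    w, cvec = np.linalg.eigh(H)
    q_new = n_el * cvec[:, 0] ** 2            # populations from the lowest level
    if np.max(np.abs(q_new - q)) < 1e-10:
        break
    q = 0.5 * q + 0.5 * q_new                 # linear mixing for stability
print(q, w[0])
```

Without the mixing step the iteration can oscillate, since a charge surplus on one atom raises its term value and pushes charge back in the next step; mixing damps this feedback.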
In addition to using EDNNs as a functional, it is arguably more convenient to train an EDNN to learn the mapping between the external potential and the contributing energies of that system, because this avoids calculating a self-consistent charge density with the KS scheme. We have trained EDNNs to predict the exchange, correlation, external, kinetic, and total energies simultaneously using the external potential as input rather than the charge density. Again, in Figure 3 we show true minus predicted versus true for the correlation, exchange, external, kinetic, and total energies for the RND external potentials. Here, it is evident that the charge density is the better input to an EDNN for the 1-electron systems: there is much more spread in the distribution when using external potentials as input than when using charge densities. For 10 electrons, this is not the case. Looking at Table III of Appendix C, we see that for 1, 2, and 3 electrons, regardless of the choice of exchange-correlation functional, it is easier to learn the mapping ρ → E than V → E: the mean absolute errors are lower for all energies. In the case of 10 electrons, the mean absolute errors in the external and total energies are lower for the models that take potentials as input, although the mean absolute errors for the correlation, exchange, and kinetic energies are larger. When training a model on a set of energies, there is a balance among the errors of the energies, since the loss function is the sum of the mean squared errors between the true and predicted energies. When using charge densities as input to the EDNN, we found that the exchange, correlation, and kinetic energies can be predicted with much better accuracy than the external or total energies. When using potentials as input, we found that the accuracy is more evenly balanced across the different energies being predicted.
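The trade-off imposed by the summed loss can be seen in a minimal sketch; the batch of energies below is synthetic, and the five columns stand in for the exchange, correlation, external, kinetic, and total channels:

```python
import numpy as np

# Synthetic predicted and true energies for a batch of 32 systems,
# 5 energy channels per system.
rng = np.random.default_rng(0)
E_true = rng.normal(size=(32, 5))
E_pred = E_true + 0.01 * rng.normal(size=(32, 5))

# The loss is the sum over energy channels of the per-channel MSE, so a
# channel with larger errors dominates the gradient and the optimizer
# trades accuracy between channels during training.
per_channel_mse = np.mean((E_pred - E_true) ** 2, axis=0)
loss = per_channel_mse.sum()
print(per_channel_mse, loss)
```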
of zero coupling. Physically, however, a single ion in solution next to the surface of a lower-dielectric plate should obviously feel the image-charge repulsion even in the absence of any surface charge, and the ion distribution - the probability of finding the ion at any location - should reflect the image-charge interaction through the Boltzmann factor. This was the case studied in the pioneering work of Wagner, and of Onsager and Samaras, on the surface tension of electrolyte solutions. The discrepancy between different theories treating the same physical weak-coupling condition needs to be clarified. 4. The existence and relative importance of nonelectrostatic contributions to the self energy, such as the cavity energy, hydration, and dispersion forces, are still debated. Although the constituent components of the electrostatic self energy have become clear in recent years, the problem then lies in the accurate and consistent treatment of the electrostatic effects. Such a treatment is essential both because the electrostatic part is a major component of the self energy of an ion, and because the relative importance of the nonelectrostatic contributions can only be assessed when the electrostatic contribution is treated accurately.
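The Boltzmann-factor depletion described above can be illustrated with the standard point-image expression for the self energy of an ion near a planar dielectric interface; the parameter values below are illustrative:

```python
import math

e = 1.602e-19            # elementary charge, C
eps0 = 8.854e-12         # vacuum permittivity, F/m
kT = 4.11e-21            # thermal energy at room temperature, J
eps1, eps2 = 80.0, 2.0   # water next to a low-dielectric plate, illustrative

def W(z):
    """Point-image self energy (J) of a monovalent ion at distance z (m)."""
    return e**2 / (16 * math.pi * eps0 * eps1 * z) * (eps1 - eps2) / (eps1 + eps2)

# Boltzmann factor exp(-W/kT): the ion density is depleted near the surface
# even at zero surface charge, as in the Wagner-Onsager-Samaras picture.
depletion = {z: math.exp(-W(z) / kT) for z in (0.5e-9, 1e-9, 5e-9)}
print(depletion)
```

The repulsion decays as 1/z, so the depletion is strongest within the first nanometer and the density approaches its bulk value farther out.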
Nuclei at the proton drip line provide a challenging domain for spectroscopic studies far from stability. Recent advances in tagging techniques have led to the discovery of many excited states, resulting in significant progress in the knowledge of the nuclear structure properties of proton-rich nuclei and providing important input towards the understanding of fundamental symmetries and of the nucleosynthesis of the elements. In fact, the path of the rapid proton capture reactions for the production of nuclei can involve nuclei at the proton drip line, but the reaction rates will depend strongly on details of the structure of nuclei along the path and, in particular, on the knowledge of resonances, since the path is mainly dominated by resonant proton capture. Penning traps have also made it possible to isolate isomeric states and to perform high-precision mass measurements [5–7], leading to a clear definition of proton separation energies, which are very important for establishing the drip lines and for astrophysical calculations of the abundances. In spite of the experimental advances in the area, a great part of the nuclear structure knowledge of nuclei at the limits of stability still relies on theoretical extrapolations. In this context, a solid theoretical ground, with great independence from parameterizations and with the capability to interpret the data and guide the search for new emitters, is crucial. Theoretical studies of proton radioactivity were able to interpret the experimental half-lives and branching ratios measured in the observation of the decay, and to predict the properties of the decaying state and the shape of the emitter. Non-relativistic calculations using simple semi-classical WKB methods were used for spherical emitters, while deformed proton emitters were studied more rigorously by identifying that the decay proceeds from a single-particle resonance [9, 10].
Microscopic studies of deformed proton emitters make it possible to test the details of the nuclear wave function, to which the half-lives are quite sensitive. The most consistent non-relativistic theoretical approach to proton emission is the non-adiabatic quasiparticle approach, which has been very successful in bringing out several interesting features of deformed odd-even proton emitters, including the triaxially deformed ones. Its extension to odd-odd nuclei suggested that it is possible to test the residual neutron-proton interaction and to determine the neutron Nilsson level even if the neutron does not participate actively in the decay.
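The semi-classical WKB treatment mentioned above can be sketched for a spherical emitter: the Gamow penetrability through the Coulomb barrier times an assault frequency gives a half-life estimate. All inputs below (daughter charge, mass number, Q value) are illustrative, not data from the text:

```python
import math

hbar = 6.582e-22                 # MeV s
c = 2.998e23                     # fm/s
mu = 938.27 * 0.99               # reduced proton mass, MeV/c^2 (illustrative)
Z_d = 70                         # daughter proton number (illustrative)
A = 170                          # daughter mass number (illustrative)
Qp = 1.2                         # proton decay energy, MeV (illustrative)
e2 = 1.44                        # e^2/(4 pi eps0), MeV fm
R = 1.2 * A ** (1.0 / 3.0)       # nuclear radius, fm

def V(r):                        # Coulomb barrier (l = 0, deformation ignored)
    return Z_d * e2 / r

r_out = Z_d * e2 / Qp            # outer classical turning point, fm

# Gamow exponent G = (2/hbar) * integral_R^{r_out} sqrt(2 mu (V - Qp))/c dr,
# evaluated with the midpoint rule.
n = 4000
h = (r_out - R) / n
G = 0.0
for i in range(n):
    r = R + (i + 0.5) * h
    G += math.sqrt(2 * mu * max(V(r) - Qp, 0.0)) / c
G *= 2 * h / hbar

v = c * math.sqrt(2 * Qp / mu)          # proton speed inside the nucleus, fm/s
rate = v / (2 * R) * math.exp(-G)       # assault frequency times penetrability
t_half = math.log(2) / rate
print(t_half)                            # seconds
```

With these illustrative numbers the half-life lands in the millisecond range, the regime where the known ground-state proton emitters sit; the extreme sensitivity of exp(-G) to Qp is what makes such estimates depend strongly on structure details.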
including well placement in the sweet spots of shale gas reservoirs and geo-engineering approaches to optimizing the design of frac-stage breakdown and perforation cluster placement [6-9]. However, these studies ignore that the traditional perforation system with deep-penetration charges is not optimized for hydraulic fracturing in horizontal well completions, and inconsistent perforation hole sizes have sometimes caused higher initiation pressures and failure to stimulate the formation. Studies of production logging data have shown that many perforation clusters do not produce fluids after stimulation [10-11].
State-of-the-art calculations [1–5] suggest that the electroweak vacuum of the Standard Model suffers an instability at a scale of around 10^11 GeV, with the lifetime of the electroweak vacuum lying in the so-called metastable region and being longer than the current age of the Universe (for a recent overview, see Ref. ). The origin of this instability is the generation of a high-scale global minimum in the Higgs potential through radiative effects due to the renormalization-group running of the Higgs quartic self-coupling [7–10]. Specifically, when one applies the standard renormalization procedure, this coupling is driven negative by top-quark loops, with the dominant experimental uncertainty originating from the current measurement of the top mass [11, 12]. The latter effect has, however, been challenged recently in the light of contradictory observations from lattice simulations of Higgs-Yukawa models [14–16], where the full effective potential is found to remain stable so long as the ultraviolet cutoff is kept finite. Moreover, the presence of new physics at high scales has been shown to have a dramatic impact upon the tunneling rate [17–22].
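The mechanism, top loops driving the quartic coupling negative under renormalization-group running, can be sketched at one loop. The sketch below keeps only the top Yukawa and the strong coupling and drops the electroweak gauge contributions, so the zero-crossing scale it produces is only indicative and lies well below the ~10^11 GeV figure quoted above; the boundary values at the top mass are approximate:

```python
import math

# One-loop running of the Higgs quartic lambda with only the top Yukawa y
# and strong coupling g3 retained (electroweak gauge terms dropped).
lam, y, g3 = 0.126, 0.94, 1.166   # approximate values at mu = Mt
Mt = 173.0                         # GeV
dt = 1e-3                          # step in t = ln(mu / Mt)
loop = 1 / (16 * math.pi ** 2)

t, crossing = 0.0, None
while t < 30.0:
    b_lam = loop * (24 * lam**2 + 12 * lam * y**2 - 6 * y**4)
    b_y = loop * y * (4.5 * y**2 - 8 * g3**2)
    b_g3 = loop * (-7.0) * g3**3
    lam += b_lam * dt
    y += b_y * dt
    g3 += b_g3 * dt
    t += dt
    if crossing is None and lam < 0:
        crossing = Mt * math.exp(t)   # scale where lambda turns negative
print(crossing, lam)
```

The dominant driver is the -6 y^4 term: as long as it outweighs the positive lambda terms, the quartic coupling decreases monotonically and eventually crosses zero, generating the high-scale second minimum described above.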
In this paper, we derive the Grammian determinant solutions to the modified two-dimensional Toda lattice, and then we construct the modified two-dimensional Toda lattice with self-consistent sources via the source generation procedure. We show the integrability of the modified two-dimensional Toda lattice with self-consistent sources.
(2.12) for each field line using integration along the field line instead of a one-dimensional integration. The field line corresponding to the highest tunneling probability is selected. The grid point on the sphere surrounding the particular atom where the optimal field line originates is considered the emission site. In Fig. 2.9 the emission site is marked by point A and the optimal field line AD is indicated. Point D corresponds to the potential equal to the Fermi energy, i.e. electron motion along the field line beyond point D corresponds to a positive electron energy. Note that the field line has a sharp bend towards the hydrogenated surface near the emission site. This is similar to the one shown in Fig. 2.5(c) and is considered an IEM hallmark. Point A is not well defined; it is chosen here to lie on the sphere with a 1.0 Å radius. Plotted in Fig. 2.9(d) is the radial density distribution for the surface atom where the optimal field line originates. The density is plotted along the line connecting point A and the nucleus of the emission atom (in Fig. 2.9(d) it has coordinate –1.0). As seen from the plot, the density at A is one third of the peak density, which is assumed to be sufficient for emission. Applying Eq.
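The selection step can be sketched generically: estimate a WKB transmission along each sampled field line and keep the line with the highest value. The field-line barriers below are synthetic stand-ins, not Eq. (2.12) or data from the calculation:

```python
import numpy as np

hbar = 1.0546e-34     # J s
m_e = 9.109e-31       # kg
eV = 1.602e-19        # J

def transmission(s, V_line):
    """WKB transmission along one field line: arc length s (m), barrier V_line (eV)."""
    kappa = np.sqrt(2 * m_e * np.clip(V_line, 0.0, None) * eV) / hbar
    integral = np.sum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(s))  # trapezoid rule
    return np.exp(-2 * integral)

s = np.linspace(0.0, 1e-9, 400)                  # 1 nm of arc length
lines = {                                        # synthetic barriers (eV) for three lines
    "A": 4.0 * np.exp(-s / 4e-10) * (s < 6e-10),
    "B": 5.0 * np.exp(-s / 4e-10) * (s < 6e-10),
    "C": 4.5 * np.exp(-s / 3e-10) * (s < 5e-10),
}
best = max(lines, key=lambda k: transmission(s, lines[k]))
print(best)  # the grid point where this line originates is the emission site
```

Because the transmission is exponential in the barrier integral, a modestly lower or narrower barrier along one line dominates the choice of emission site.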
the thermodynamic quantities, including the specific heat and the upper critical field, were calculated, and the temperature dependence of the order parameters is in good agreement with spectroscopic tunneling experiments. Our results provide a solid experimental basis for using a sufficiently simple, yet self-consistent γ-model to describe multigap superconductivity.
We begin by reviewing the semi-classical calculation of the tunneling rate, before discussing the calculation of the first quantum corrections. We then apply the Green’s function method to two toy models. In the first model (reported from Ref. ), the vacuum instability is present at tree level, and we calculate the quantum corrections analytically in the inhomogeneous background field configuration. In the second model (reported from Ref. ), the vacuum instability emerges only at the one-loop level, and we apply a self-consistent numerical procedure to calculate the quantum-corrected tunneling rate. We find that the modifications to the one-loop results from fully accounting for the background inhomogeneity can compete with the two-loop (homogeneous) corrections.
As a result of the rapid development of experimental techniques in nuclear physics, the bulk data on nuclear static moments have become very extensive and comprehensive, creating a challenge for nuclear theory. First of all, this concerns the nuclei distant from the β-decay stability valley, which are often close to the drip lines and are of great interest to nuclear astrophysics. For this reason, a theoretical approach used to describe such nuclei should have high predictive power. The self-consistent Theory of Finite Fermi Systems (TFFS) based on the EDF by Fayans et al. is one such approach.
Interestingly, complex 2 appears silent in its powder X-band EPR spectrum at 300, 30, and 5 K, which may reflect 3p–5f mixing giving rise to efficient relaxation mechanisms. The nature of the five α-spin frontier orbitals of 2 is consistent with the overall occupation of four 5f and one cyclo-P5 e2
The evaluation of the capacitance of arbitrary shapes is important in Computational Electromagnetics (CEM), which deals with the modeling of the interaction between electromagnetic fields and physical objects. Compared to the Finite Difference Method (FDM) and the Boundary Element Method (BEM), the Finite Element Method (FEM) provides additional flexibility for local mesh refinement, more rigorous convergence analysis, a wider selection of effective iterative solvers for the resulting linear systems, and more flexibility in handling nonlinear equations. The FEM is a standard tool for solving the differential equations of electromagnetics. It is also one of the most preferred methods in engineering owing to its ability to deal with complex geometries. This paper presents an FEM-based analysis for the computation of the capacitance and charge distribution of a three-dimensional spacecraft circuit model. The charge density on the satellite body, represented by a metallic cube, and on a solar panel, consisting of two rectangular plates, is solved for given the potential on the surfaces of these bodies. In order to apply the FEM, the surfaces of the metallic bodies are discretized using uniform triangular subsections. The validity of this analysis has been established by comparing the capacitance data available in the literature for a metallic cube with metallic plates against the capacitance computed by the present method for similar structures.
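For a single square plate, the charge-density-to-potential problem described above can be sketched with a subarea (moment-method-style) collocation using square subsections rather than the paper's FEM with triangular subsections; the plate size and grid below are illustrative:

```python
import numpy as np

eps0 = 8.854e-12          # F/m
L = 1.0                   # plate side (m), illustrative
N = 16                    # subsections per side
b = L / N                 # sub-square side
xs = (np.arange(N) + 0.5) * b
X, Y = np.meshgrid(xs, xs)
pts = np.column_stack([X.ravel(), Y.ravel()])

# Potential coefficients: off-diagonal entries treat each patch as a point
# charge at its center; the diagonal uses the exact potential at the center
# of a uniformly charged square of side b.
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
with np.errstate(divide="ignore"):
    P = b * b / (4 * np.pi * eps0 * D)
np.fill_diagonal(P, b * np.log(1 + np.sqrt(2)) / (np.pi * eps0))

sigma = np.linalg.solve(P, np.ones(len(pts)))  # plate held at 1 V
C = sigma.sum() * b * b                        # total charge = C * 1 V
print(C)   # farads; refining the grid drives this toward the literature value
```

The solved charge density also shows the expected crowding toward the plate corners and edges, which is exactly the kind of distribution the paper compares against literature capacitance data.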
skilled workers who receive both labour and capital income. The Gini coefficient is suitably formulated to express personal income inequality as a function of the labour share (and various other factors). In their static framework, a rise in the labour share has opposing effects on inequality: on the one hand, it tends to reduce it by narrowing the income differential between capital and non-capital owners; on the other hand, it tends to increase it by raising the income gap between employed and unemployed agents. Palley (2015) develops a neo-Kaleckian-Goodwin model with three classes: workers, a middle-management middle class and a top-management capitalist class. Personal income distribution is captured by the distribution of wage income between workers and middle managers. Functional and personal income distributions are linked, since the latter is postulated to depend on the employment rate and the capacity utilisation rate, both of which rely on functional income distribution.
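The Gini coefficient underlying these formulations can be computed directly from an income vector; the stylized three-class economy below is hypothetical and only illustrates the labour-share effect:

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the sorted-income (mean absolute difference) formula."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n * x.sum())

# Hypothetical economy: 10 unemployed on a benefit, 70 workers on a wage,
# 20 capital owners. Raising the labour share (wage up, capital income down,
# total income unchanged) narrows the worker-owner gap, even though it
# simultaneously widens the gap to the unemployed.
low_ls = [5] * 10 + [30] * 70 + [120] * 20
high_ls = [5] * 10 + [40] * 70 + [85] * 20
print(gini(low_ls), gini(high_ls))
```

In this particular example the gap-narrowing effect dominates and the Gini falls with the labour share; in the static framework described above, the net outcome depends on the relative sizes of the two opposing effects.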
Industrial Revolution 4.0 has brought unprecedented growth, taking place not just across all sectors of a country but on a global scale. Industrial Revolution 4.0 has the potential to bring many benefits to workers by: (i) increasing labor productivity and thereby the level of income; (ii) improving the quality of life through new entertainment and services; and (iii) opening up the labor market by cutting travel and transportation costs and creating new jobs. In addition to the above benefits, Industrial Revolution 4.0 also poses many risks for workers, such as: (i) job loss due to replacement by machines; (ii) loss of protection of workers' interests due to a change in the nature of the relationship between the worker and the owner of the means of production brought about by the application of modern technology; and (iii) social inequality between highly skilled and low-skilled workers, and between the owners of machinery and workers. Thus, equality in labor distribution, grounded in Karl Marx's notion, carries significant meaning for the 4.0 revolution in Vietnam today.
Rogers believes that the neurotic feels frightened to go forward; it is clear that the very expression of this fear is an inevitable part of becoming what he is. He writes, “instead of simply being a façade, he is coming closer to being himself, namely, a frightened person hiding behind because he regards himself as too awful to be seen” (Rogers 1961, pp. 167-68). This transformation is what Stephen experiences: fear of turning into a “façade.” As an adult, Stephen takes another step towards self-actualization; he finds himself constantly in the process of becoming. This sense of becoming leads to illusion and a kind of narcissism. He loses faith in himself and consequently in his potential as an artist; he is extremely disappointed in his self-image and keeps suffering under the burden of the guilt he feels. Indeed, “his eyes were dimmed with tears and, looking humbly up to heaven, he wept for the innocence he had lost” (Portrait, p. 121). This state is exactly what happens to a person whose ideal self overcomes his real self, leaving him unable to achieve self-actualization. To support Rogers’ idea, it is proper to consider Horney's attitude towards the discrepancy between the real and ideal self. In Neurosis and Human Growth: The Struggle toward Self-Realization (1950), Horney points out that when an individual turns into his idealized self, he “not only exalts himself but also is bound to look at his actual self” from a wrong perspective. The glorified self becomes not only “a phantom to be pursued” but also a measuring rod with which to measure his actual being. This actual being is an embarrassing sight when viewed from the perspective of “a godlike perfection,” and he cannot but despise it (p. 110). Such self-hate leads him to endless fear and keeps him away from self-actualization.
The disturbed individual is bound up with thoughts of self-comparison; the “dissatisfaction with self and envy towards others has formed the core of a neurotic personality” (Hitchcock 2005, p. 63). It is at this stage that he feels inferior not only to others but also to his real self. The individual realizes that “self-hate now is not so much directed against the limitations and shortcomings of the actual self as against the emerging constructive forces of the real self” (Horney 1950, p. 112). This conflict between the real and ideal self first appears in the mind of Stephen in the second chapter of the novel when he goes to school. As a young boy, disappointed with his surroundings, he realizes the fallacies of the real world and decides to take refuge in a fantastic world of dark masses. He feels sick and powerless.
In this article we explore more fully an approximation previously used by Sasai and Wolynes (2003) for the variational treatment of the problem: the self-consistent proteomic field (SCPF) approximation. Within this approximation one assumes that the probability of finding the switch in a given state is a product of the probabilities of the states of the individual genes. One can then solve the steady-state master equation for the probability distribution of many regulatory systems exactly. We discuss the approximation and present a detailed study of different classes of genetic switches, some of which have never been considered theoretically. We consider separately several particular features of such systems that are found in known switches, in order to characterize their contributions to the behavior of the whole system. To be specific, starting from a symmetric toggle switch, we go on to compare the effects of multimer binding, and of the production of proteins in bursts, on the stability of the switch. The stochastic effects prove to be modest for symmetric switches without bursts, especially if the genes have a basal production rate. We find that the deterministic and stochastic SCPF solutions give similar probabilities for particular genes to be on, and similar mean numbers of proteins of a given species in the cell. However, in the nonadiabatic limit, when the unbinding rate from the DNA is smaller than the death rate of proteins, the probability distributions have two well-defined peaks, unlike in the deterministic approximation or the adiabatic limit of the stochastic SCPF solution.
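For orientation, the deterministic limit of a symmetric toggle switch can be sketched with two mutually repressing genes, a Hill coefficient standing in for multimer binding, and a basal production rate; all parameter values below are illustrative:

```python
import numpy as np

# Deterministic rate equations for a symmetric toggle switch: each gene's
# protein represses the other (Hill coefficient n mimics multimer binding),
# with a small basal rate g0 and first-order protein degradation k.
g, g0, K, n, k = 100.0, 2.0, 20.0, 2.0, 1.0

def rhs(p):
    p1, p2 = p
    return np.array([
        g0 + g / (1 + (p2 / K) ** n) - k * p1,
        g0 + g / (1 + (p1 / K) ** n) - k * p2,
    ])

def integrate(p, dt=0.01, steps=20000):
    """Forward-Euler integration to the steady state."""
    for _ in range(steps):
        p = p + dt * rhs(p)
    return p

state_a = integrate(np.array([80.0, 0.0]))   # gene 1 on, gene 2 off
state_b = integrate(np.array([0.0, 80.0]))   # gene 2 on, gene 1 off
print(state_a, state_b)
```

Two initial conditions relax to two distinct stable states, which is the bistability that the stochastic SCPF treatment then probes; the nonadiabatic bimodality discussed above has no counterpart in this deterministic picture.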