that has weak k dependence and a transfer function that arises from the growth of fluctuations at late times after they reenter the horizon [8, 9, 10, 11, 12, 13, 14, 15, 16]. Inflation occurs at an early time when the energy density of the universe is large compared to energy scales that can be probed by laboratory experiments. It is possible that paradigm shifts in our understanding of the laws of nature, as radical as the shift from classical physics to quantum physics, are needed to understand physics at the energy scale associated with the inflationary era. Motivated by the lack of direct probes of physics at the inflationary scale, Ackerman et al. [17] wrote down the general form that Eq. (3.1) would take if rotational invariance were broken by a small amount during the inflationary era (but not today) by a preferred direction, and computed its impact on the microwave background anisotropy (see also [18, 19, 20, 21, 23, 24, 64]). They also wrote down a simple field theory model that realizes this form for the density perturbations, in which the preferred direction is associated with spontaneous breaking of rotational invariance by the expectation value of a vector field. This model serves as a nice pedagogical example; however, it cannot be realistic because of instabilities [31]. Evidence in the WMAP data for the violation of rotational invariance was found in Refs. [32, 33, 35]. Another anomaly in the microwave background anisotropy data is the "hemisphere effect" [57, 59], which cannot be explained by the model of Ackerman et al. Erickcek et al. proposed an explanation based on the presence of a very long wavelength (superhorizon) perturbation [61]. This long wavelength mode picks out a preferred wave-number and can give rise to a hemisphere effect. It violates translational invariance and there
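
The "general form" referred to above can be sketched as follows (a hedged reconstruction in the notation commonly used for this model; the amplitude g(k) and the preferred unit vector n̂ are assumptions of this sketch, not defined in the excerpt):

```latex
P'(\vec{k}) = P(k)\left[\, 1 + g(k)\,(\hat{k}\cdot\hat{n})^{2} \,\right]
```

Here P(k) is the usual rotationally invariant power spectrum, and the small quadrupolar term proportional to (k̂·n̂)² encodes the violation of rotational invariance by the preferred direction n̂.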


One of the problems within the standard inflationary model is in quantifying cosmic entropy. Entropy is typically defined in terms of the total number of possible microstates and the probability of a given set of conditions with respect to that number of microstates. These values are impossible to quantify in an infinite-sized inflationary universe or multiverse. FSC, on the other hand, is a finite model with a spherical horizon surface area. Furthermore, since the Bekenstein-Hawking definition of black hole entropy would appear to apply to the FSC model, very precise values for cosmic entropy can be calculated for any time, temperature or radius of the FSC model. Thus, the "entropic arrow of time" is clearly defined and quantified in the FSC model [Tatum et al. (2018)]. The quantifiable entropy of the FSC model allows for model correlations with cosmic entropy theories, such as those of Roger Penrose [32] and Erik Verlinde [33]. Thus, the entropy rules of FSC allow for falsifiability, whereas standard inflationary cosmology is not able to clearly define or quantify entropy or to establish falsifiability in terms of entropy. This feature favors the FSC model, particularly with respect to Verlinde's "emergent gravity" theory described in the next section.
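
For reference, the Bekenstein-Hawking entropy invoked above has the standard form (its applicability to the FSC cosmological horizon is the model's own assumption, not a result established in this excerpt):

```latex
S_{BH} = \frac{k_{B}\, c^{3} A}{4 G \hbar}, \qquad A = 4\pi R^{2}
```

With a finite spherical horizon of radius R, the entropy is therefore a definite, calculable number at any epoch of the model.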


In [115], Aguirre stressed that life might be possible in universes for which some of the cosmological parameters are orders of magnitude different from those of our own universe. The point is that large changes in one parameter can be compensated by changes in another in such a way that life remains possible. Anthropic predictions for a particular parameter value will therefore be weakened if other parameters are allowed to vary between universes. One cosmological parameter that may significantly affect the anthropic argument is Q, the standard deviation of the amplitude of primordial cosmological density perturbations. Rees in [116] and Tegmark and Rees in [117] have pointed out that if the anthropic argument is applied to universes where Q is not fixed but randomly distributed, then our own universe becomes less likely, because universes with both Λ and Q larger than our own are anthropically allowed. The purpose of the work in this section is to quantify this expectation within a broad class of inflationary models. Restrictions on the a priori probability distribution for Q necessary for obtaining a successful anthropic prediction for Λ were considered in [118, 119].


Measurements are common in science and engineering to determine the value of parameters such as length, angle, temperature, etc. Normally, in cases requiring a high degree of reliability, a single measurement is not enough. Therefore, in addition to several observations, a check measurement may even be conducted by a different team. As such, the unknown quantity is of a stochastic nature: a random variable that can take on any value within a range.
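
A minimal sketch of treating such repeated measurements as samples of a random variable; the length readings below are purely illustrative, not data from the source:

```python
import statistics

# Hypothetical repeated measurements of the same length (in mm),
# including one check measurement by a second team.
team_a = [100.02, 99.98, 100.05, 100.01, 99.97]
team_b = [100.03]  # independent check measurement

readings = team_a + team_b

# The unknown quantity is modelled as a random variable; the sample
# mean estimates its value, the sample standard deviation its spread.
mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

print(f"estimate = {mean:.3f} mm, spread = {stdev:.3f} mm")
```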

We have tested the above two techniques for the removal of the divide-by-zero condition in MATLAB 7.13.0.564 with different window sizes. A detailed study is performed on a satellite image, i.e., a photograph of the Earth taken from space, as in Fig. 1. First, we counted the number of LSDs having the value zero for each window size. Table 1 shows the relevant information for different window sizes. When the size of the window is small, the number of LSDs with zero value is larger; when the size of the window is increased, the number of LSDs with zero value is smaller.
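
A sketch of the zero-LSD count described above, assuming LSD means the local standard deviation over a sliding window; the toy image, the non-overlapping window layout, and the window sizes are illustrative assumptions, not the satellite data of Fig. 1:

```python
import random
import statistics

def count_zero_lsd(image, win):
    """Count non-overlapping win x win windows whose local standard
    deviation is exactly zero (i.e., constant-valued regions)."""
    h, w = len(image), len(image[0])
    zeros = 0
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            window = [image[r][c]
                      for r in range(i, i + win)
                      for c in range(j, j + win)]
            if statistics.pstdev(window) == 0.0:
                zeros += 1
    return zeros

# Toy 8x8 "image": random pixels with a flat 4x4 patch that produces
# zero-LSD windows.
random.seed(0)
img = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]
for r in range(4):
    for c in range(4):
        img[r][c] = 128  # flat patch

for win in (2, 4):
    print(win, count_zero_lsd(img, win))
```

The flat patch yields more zero-LSD windows at the smaller window size, mirroring the trend reported for Table 1.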

Various grading methods have appeared in the literature of educational measurement and evaluation. These methods have been categorized into two major groups: relative grading methods and absolute grading methods (Terwilliger, 1989; Ebel and Frisbie, 1991; Frisbie and Waltman, 1992; Haladyna, 1999). Examples of the relative grading methods include the curve, the distribution gap, and the standard deviation. The fixed percent scale, total point, and content-based methods are all examples of the absolute grading methods. Compared with other grading systems, the SDM is regarded as the soundest and fairest in terms of assigning letter grades objectively, though it is the most complicated computationally (Frisbie and Waltman, 1992). As a relative grading method, the SDM is grounded primarily on the assumption that grading is needed to identify students who perform best against their peers and to weed out the unworthy (CTL, 2001). According to this method, student grades are based on their distance from the mean score for the class. The distance is measured using the standard deviation of the scores, which describes the variability or spread of scores around the mean score. In this sense, the standard deviation is employed for the sole purpose of determining cutoff points for each grade level.
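
A minimal sketch of SDM-style cutoffs: each grade boundary is the class mean plus a multiple of the standard deviation. The multipliers, the score list, and the grade labels are illustrative assumptions, not the specific scheme of Frisbie and Waltman:

```python
import statistics

def sdm_cutoffs(scores, widths=(1.5, 0.5, -0.5, -1.5)):
    """Cutoff for each letter grade = mean + k * standard deviation.
    The multipliers k are illustrative, not a fixed standard."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return {g: mean + k * sd for g, k in zip("ABCD", widths)}

def assign_grade(score, cutoffs):
    for grade in "ABCD":
        if score >= cutoffs[grade]:
            return grade
    return "F"

scores = [55, 62, 68, 70, 74, 77, 80, 85, 88, 93]
cuts = sdm_cutoffs(scores)
print({s: assign_grade(s, cuts) for s in scores})
```

Because the cutoffs depend only on the mean and spread of the class, the method is relative: the same raw score can earn a different grade in a class with a different distribution.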


In meta-analysis of continuous outcomes, the sample size, mean, and standard deviation are required from the included studies. This, however, can be difficult because results from different studies are often presented in different and inconsistent forms. Specifically, in medical research, instead of reporting the sample mean and standard deviation of the trials, some trial studies only report the median, the minimum and maximum values, and/or the first and third quartiles. Therefore, we need to estimate the sample mean and standard deviation from these quantities so that we can pool results in a consistent format. Hozo et al. [3] were the first to address this estimation problem. They proposed a simple method for estimating the sample mean and the sample variance (or equivalently the sample standard deviation) from the median, the range, and the size of the sample. Their method is now widely accepted in the literature of systematic reviews and meta-analysis. For instance, a search of Google Scholar on November 12, 2014 showed that the article of Hozo et al. has been cited 722 times, 426 of those citations made recently in 2013 and 2014.
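
A sketch of the Hozo et al. estimators as they are commonly quoted (minimum a, median m, maximum b, sample size n); the exact size thresholds should be checked against [3]:

```python
import math

def hozo_mean(a, m, b, n):
    """Hozo et al.'s mean estimate. For larger samples the median
    itself is used; n > 25 is the commonly quoted threshold."""
    return m if n > 25 else (a + 2 * m + b) / 4.0

def hozo_sd(a, m, b, n):
    """Hozo et al.'s standard deviation estimate, as commonly quoted:
    the full formula for small n, range/4 for moderate n, range/6
    for large n."""
    if n <= 15:
        var = ((a - 2 * m + b) ** 2 / 4.0 + (b - a) ** 2) / 12.0
        return math.sqrt(var)
    if n <= 70:
        return (b - a) / 4.0
    return (b - a) / 6.0

# Example: a trial reporting median 10, range [4, 20], n = 12.
print(hozo_mean(4, 10, 20, 12), hozo_sd(4, 10, 20, 12))
```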


In the proposed SD scheduling, a queue is maintained in which processes are ordered on the basis of how close their burst time is to the standard deviation of the burst times. The process whose burst time is closest to the standard deviation value is executed first, and so on.
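
The ordering rule above can be sketched as follows (a minimal sketch; the choice of population standard deviation and the example burst times are assumptions, not taken from the source):

```python
import statistics

def sd_schedule(processes):
    """Order processes by how close their burst time is to the
    standard deviation of all burst times.
    `processes` is a list of (name, burst_time) pairs."""
    sd = statistics.pstdev([burst for _, burst in processes])
    return sorted(processes, key=lambda p: abs(p[1] - sd)), sd

procs = [("P1", 6), ("P2", 8), ("P3", 2), ("P4", 12)]
order, sd = sd_schedule(procs)
print(f"sd = {sd:.2f}, order = {[name for name, _ in order]}")
```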

In 1965 the radio astronomers Robert Wilson and Arno Penzias observed a faint background glow, almost exactly the same in all directions. This glow was identified as the CMB radiation, and the two scientists were awarded the 1978 Nobel Prize for their discovery. Ground-based experiments during the 1980s put limits on the anisotropy that can exist in the CMB radiation, but it was not until 1992 that the NASA mission COBE (COsmic Background Explorer) observed the theoretically predicted 10^-5 deviations from isotropy. The COBE team was awarded the 2006 Nobel Prize in Physics for their work on the precision measurement of the CMB radiation. The COBE team observed a nearly uniform temperature across the sky and measured the spectrum of the CMB radiation, showing that it has a thermal black body spectrum at a temperature of 2.73 K, as shown in Figure 1.2. COBE was also the first to observe the theoretically predicted deviations from isotropy.


The weighted mean standard deviation distribution and related parameters have been determined following a procedure in which the geometrical framework is clearly shown, using a typical formulation generalized to n-spaces. After three different attempts, the integration has been performed via a change of reference frame, in which the integration domain turns out to be an infinitely thin, homothetic n-cylindrical corona that is (n − 1)-axially symmetric with respect to a coordinate axis.


For a given set of ingredients in concrete mixes, the cement content increases in higher grades of concrete. Due to the increase in heat of hydration, the increased risk of cracking due to drying and shrinkage in thin sections, early thermal cracking, and the increased risk of damage due to alkali-silica reactions, the maximum use of ordinary Portland cement (OPC) in concrete is limited to 450 kg/m^3 (sub-section 8.2.4.2 of IS 456:2000). Accordingly, in higher strength concrete the cement content is to be maintained within the permissible limit by adopting appropriate measures without compromising strength and durability parameters. Initially, for designing a concrete mix of a given characteristic strength, the target strength is determined by adopting an assumed standard deviation. By observing good quality control measures for producing a homogeneous concrete mix, the established standard deviation has a lower value. Based on established SD values, the target strength reduces, in turn resulting in an increase in the w/c ratio and a decrease in cement content for the same grade of concrete.
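
The target-strength step above can be sketched numerically. The formula (target mean strength = characteristic strength + 1.65 × standard deviation) is the one used with IS 456 / IS 10262; the SD values of 5 and 4 N/mm^2 below are illustrative assumptions:

```python
def target_strength(fck, sd, k=1.65):
    """Target mean strength = characteristic strength + k * SD.
    k = 1.65 corresponds to a 5% defective level; fck and sd
    are in N/mm^2."""
    return fck + k * sd

# M30 concrete: assumed SD of 5 N/mm^2 versus an established,
# quality-controlled SD of 4 N/mm^2 (illustrative values).
assumed = target_strength(30, 5.0)
established = target_strength(30, 4.0)
print(assumed, established)
```

The lower established SD gives a lower target strength, which is what permits the higher w/c ratio and lower cement content mentioned above.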

Contemporary geometry deals with manifolds, spaces that are considerably more abstract than the familiar Euclidean space, which they only approximately resemble at small scales. Modern geometry has multiple strong bonds not only with physics, exemplified by the ties between pseudo-Riemannian geometry and general relativity (e.g., [1]), but also with cosmology (e.g., [2]), dynamical systems (e.g., [3]), crystallography (e.g., [4]), and music (e.g., [5]).


In the field of hydrogeology it is important to know how easily water can move through porous media. Hydraulic conductivity is one of the most important physical properties of soil, governing the quantitative evaluation of groundwater resources. It is highly dependent upon aquifer properties and the flow regime. It is desirable to predict the hydraulic conductivity of same-shape-group particles of different sizes mixed randomly, given knowledge of their statistical size distribution. The present study has been conducted to investigate the variation of hydraulic conductivity with respect to the standard deviation. The effect of the standard deviation on hydraulic conductivity has been studied analytically as well as experimentally. The residuals and deviation of the measured and estimated values of hydraulic conductivity have been measured.


If we insist on using the Higgs field to realize inflation, then we need to introduce the nonminimal coupling... Obviously, this inflation model has been ruled out by the observations.

For comparison purposes, in addition to the fBm-based estimator (14), let us also consider a DCT-based estimator at the STD estimation stage of our algorithm. STD estimators based on DCT have proven quite accurate for high-complexity images [14, 33], which is why we expect this approach to perform well within our framework. The additionally proposed estimator performs as follows. For each NI SW, we apply a 7 × 7 DCT and consider only the six highest-frequency coefficients, with indices (7, 7), (6, 7), (7, 6), (6, 6), (5, 7), and (7, 5). These DCT coefficients are almost insensitive to image content for two reasons. First, they are calculated for NI SWs. Second, high-frequency components of orthogonal transforms are known to be less influenced by image content than low-frequency ones [15]. These DCT coefficients are collected from all NI scanning windows (SW type_i(t0, s0) = "NI") and the final noise STD
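
A sketch of the coefficient extraction step: a 2-D DCT-II on a 7 × 7 window, keeping the six highest-frequency coefficients named above. The final pooling rule (here, the RMS of the pooled coefficients) is an assumption of this sketch; the paper's actual estimator may combine them differently:

```python
import math
import random

def dct2_coeff(block, u, v):
    """One coefficient of the orthonormal 2-D DCT-II of an N x N block."""
    n = len(block)
    cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
    cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
    return cu * cv * sum(
        block[x][y]
        * math.cos((2 * x + 1) * u * math.pi / (2 * n))
        * math.cos((2 * y + 1) * v * math.pi / (2 * n))
        for x in range(n) for y in range(n))

# Indices (7,7), (6,7), (7,6), (6,6), (5,7), (7,5) from the text,
# shifted to 0-based indexing.
HIGH_FREQ = [(6, 6), (5, 6), (6, 5), (5, 5), (4, 6), (6, 4)]

def noise_std_estimate(blocks):
    """Pool the six highest-frequency DCT coefficients over all NI
    scanning windows; estimate noise STD as their RMS (assumed rule)."""
    coeffs = [dct2_coeff(b, u, v) for b in blocks for u, v in HIGH_FREQ]
    return math.sqrt(sum(c * c for c in coeffs) / len(coeffs))

# Sanity check: 7x7 windows of pure Gaussian noise with sigma = 10.
# Since the DCT is orthonormal, the coefficients are also N(0, 10).
random.seed(1)
blocks = [[[random.gauss(0, 10) for _ in range(7)] for _ in range(7)]
          for _ in range(200)]
print(round(noise_std_estimate(blocks), 2))
```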


If one aims to demonstrate adverse effects on driving caused by a drug, the greatest sensitivity appears to come from a sample of healthy adult volunteers. To enhance generalizability, both sexes should be studied. Participants should have sufficient driving experience to minimize learning effects during the study. Generally, a lower limit of 5 years of driving experience, and driving at least 5000 km per year, is recommended to reach this goal. The results obtained with the healthy-volunteer study can be the basis of further investigation of driving performance comparing specific groups, such as elderly versus young/adult drivers, novice versus experienced drivers, or professional versus regular drivers. After healthy-volunteer data are obtained, a second subject sample should reflect the target population that will use the drugs under investigation clinically. A disadvantage of including these patients is that there are many confounding variables that make interpretation of the study results difficult. For example, if depressed patients are included to test the effects of a new antidepressant on driving ability, improved driving may be seen as a result of the therapeutic efficacy of the drug. At the same time, adverse effects of the medication may impair driving. These two effects then cancel each other out and make the study results hard to interpret. There are a number of potential subject/drug interactions which require collecting


E with the results of the first measurement. But even according to standard quantum theory all this is not totally logical, because really existing vacuum fluctuations may intervene, and they are able to change the result. Here we have an evident violation of a conservation law due to vacuum fluctuations, although the integrals of motion exist (contrary to UQT). Standard quantum theory carefully avoids the question of conservation laws for single events at small energies. Usually that question is either not discussed at all, or it is said that quantum theory does not describe single events at all. But this is wrong, because standard quantum theory does in fact describe single events; it is only able to foresee the probability of one result or another. It is evident that in that case there are no conservation laws for single events at all. These laws appear only after averaging over a large ensemble of events. As a matter of fact, it can easily be shown that classical mechanics is obtained from quantum mechanics after summation over a large number of particles. For a sufficiently large mass, the de Broglie wavelength becomes many times smaller than the body's dimensions, and then we cannot speak of any quantum-wave characteristics any more. It is well known that the local laws of energy and momentum conservation for individual quantum processes are confirmed only within experiments at high energies. We cannot say so in the cases of law


ABSTRACT: Image fusion is a data fusion technology that takes images as its main research content. It refers to techniques that integrate multiple images of the same scene from different image sensors, or that integrate multiple images of the same scene taken at different times from one image sensor. The image fusion algorithm based on the Wavelet Transform, which developed rapidly, was a multi-resolution-analysis image fusion method of the last decade. The Wavelet Transform has good time-frequency characteristics and has been applied successfully in the image processing field. However, its excellent characteristics in one dimension cannot simply be extended to two or more dimensions. The discrete wavelet, which is spanned by a one-dimensional wavelet, has limited directivity. This paper introduces the Second Generation Curvelet Transform and uses it to fuse images; various kinds of fusion methods are compared at the end. The experiments show that the method can extract useful information from the source images into the fused images, so that clear images are obtained. This paper analyzes the characteristics of the Second Generation Curvelet Transform and puts forward an image fusion algorithm based on the Wavelet Transform and the Second Generation Curvelet Transform. We examined the selection criteria for low- and high-frequency coefficients according to the different frequency domains after the wavelet transform [2]. In selecting the low-frequency coefficients, the concept of the local neighborhood was chosen as the measuring criterion. In selecting the high-frequency coefficients, the window property and local characteristics of pixels were analyzed. Finally, the proposed algorithm was applied to experiments on multi-focus image fusion and complementary image fusion.

The raw PD measurement data acquired using a computer usually contain noise that can lead to misinterpretation. To obtain an accurate diagnosis, this noise must be separated from the raw data. In this study, the electrodes used were a needle-plane electrode pair, and the samples tested were polymer films placed on the plane electrode, with an air gap located between the needle and the polymer films. There are two types of electrode arrangement: in the first, the needle electrode is wrapped with the polymer film; in the second, it is not wrapped. Separating the PD data from the raw data requires an algorithm that can automatically select the threshold value from the standard deviation. To obtain the optimum value, the threshold is selected based on the standard deviation of each segment. The results show that this method can distinguish between the PD patterns of electrodes wrapped and not wrapped with polymer. Keywords: ceramic insulator, leakage current, partial discharge, infrared images, pollution.
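
A sketch of per-segment SD thresholding: within each segment, samples whose magnitude exceeds a multiple of that segment's standard deviation are kept as candidate PD events. The multiplier k, the segmentation, and the toy signal are assumptions of this sketch, not the paper's method:

```python
import statistics

def pd_threshold(raw, seg_len, k=2.0):
    """Return indices of samples whose magnitude exceeds k times the
    standard deviation of their own segment (threshold selected from
    each segment's SD, as described in the text; k is assumed)."""
    events = []
    for start in range(0, len(raw), seg_len):
        seg = raw[start:start + seg_len]
        sd = statistics.pstdev(seg)
        events.extend(start + i for i, v in enumerate(seg)
                      if abs(v) > k * sd)
    return events

# Toy signal: small noise with two injected PD-like spikes.
signal = [0.1, -0.2, 0.15, -0.1, 5.0, 0.05, -0.15, 0.2, -4.0, 0.1]
print(pd_threshold(signal, seg_len=5))
```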

We introduce an affine permutation at the beginning, at the end, inside the Feistel scheme, or both at the beginning and at the end. Thus, by symmetry, we will obtain results for Known Ciphertext Attacks (KCA) and non-adaptive Chosen Ciphertext Attacks (CCA-1). The affine permutations and the functions used in the Feistel schemes are key-dependent. With our attacks we want to distinguish a random permutation from a random permutation produced by a scheme. For some of our attacks, we will make a precise analysis of the standard deviation. The paper is organized as follows. In Section 2, we define the schemes, which we name A-Feistel schemes. In Section 3, we describe our best KPA and CPA-1 on schemes with one affine permutation. We show that it is possible to attack up to 3 rounds after the affine permutation with a number of messages less than 2^{2n}, and we then describe attacks against generators of permutations. We performed simulations of our attacks; the results of these simulations are given in Section 3.4. In Section 4, we present attacks on schemes for which we first apply an affine permutation, then a Feistel scheme with several rounds, and again an affine permutation. Appendices A and B are devoted to the computation of standard deviations. In Appendix C, it is shown that A-Feistel permutations have an even signature, which allows attacks by the signature when all the cleartext/ciphertext pairs are known.
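
A minimal structural sketch of the "affine permutation, then Feistel rounds" composition: one keyed affine permutation on the 2n-bit block followed by balanced Feistel rounds. The round function and affine map below are illustrative stand-ins, not the keyed functions analyzed in the paper:

```python
def feistel_round(left, right, f):
    """One balanced Feistel round on n-bit halves: (L, R) -> (R, L XOR f(R))."""
    return right, left ^ f(right)

def affine(x, a, b, n_bits):
    """A keyed affine permutation sketch on 2n-bit blocks: x -> a*x + b
    modulo 2^(2n), with a odd so the map is invertible."""
    mask = (1 << (2 * n_bits)) - 1
    return (a * x + b) & mask

def a_feistel_encrypt(block, round_keys, a, b, n_bits=8):
    """A-Feistel-style sketch: an affine permutation, then several
    Feistel rounds (illustrative keyed round function)."""
    mask = (1 << n_bits) - 1
    x = affine(block, a, b, n_bits)
    left, right = (x >> n_bits) & mask, x & mask
    for k in round_keys:
        left, right = feistel_round(left, right,
                                    lambda r: (r * 31 + k) & mask)
    return (left << n_bits) | right

c = a_feistel_encrypt(0xBEEF, round_keys=[0x17, 0x2A, 0x3C], a=5, b=9)
print(hex(c))
```

Since the affine map (with a odd) and every Feistel round are invertible, the composition is itself a permutation of the 2n-bit block space, which is the structural property the attacks above try to distinguish from a uniformly random permutation.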
