non-Gaussian probability density functions

Top PDF non-Gaussian probability density functions:

Time-Dependent Probability Density Functions and Attractor Structure in Self-Organised Shear Flows

The extension of refs. [17,18] solved a stochastic differential equation with a fourth-order stochastic Runge–Kutta method for Gaussian coloured noise in 1D and showed the transition from a unimodal stationary Probability Density Function (PDF) to a bimodal stationary PDF when the correlation time of a random forcing exceeds a critical value. The mean shear gradient is zero for a unimodal PDF, while its non-zero value represents the critical shear gradient around which a shear gradient continuously grows and damps through the interaction with fluctuations. The transition from a unimodal to a bimodal PDF represents the formation of a non-zero mean shear gradient, or the formation of jets. Interestingly, in ref. [18], we found similar results in a 0D model and 2D hydrodynamic turbulence. In particular, the 2D results showed that a shear flow evolves through the competition between its growth and damping due to a localized instability, maintaining a stationary PDF, and that the bimodal PDF results from a self-organising shear flow with a linear profile.
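The setup in this excerpt, an SDE driven by Gaussian coloured noise, can be sketched numerically. The snippet below is a hypothetical illustration rather than the authors' model or their fourth-order scheme: it uses the exact one-step update for Ornstein–Uhlenbeck (coloured) noise and a plain Euler step for a generic bistable drift x − x³, only to show the machinery of generating coloured forcing and accumulating a stationary histogram.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_noise(n_steps, dt, tau, amp):
    """Gaussian coloured noise: an Ornstein-Uhlenbeck process with
    correlation time tau and stationary standard deviation amp,
    generated with its exact one-step update."""
    xi = np.zeros(n_steps)
    a = np.exp(-dt / tau)
    s = amp * np.sqrt(1.0 - a * a)
    for i in range(1, n_steps):
        xi[i] = a * xi[i - 1] + s * rng.standard_normal()
    return xi

def integrate(tau, n_steps=50_000, dt=0.01):
    """Euler step for dx/dt = x - x**3 + xi(t): a generic bistable toy
    model (not the shear-flow model of the paper) under OU forcing."""
    xi = ou_noise(n_steps, dt, tau, amp=0.7)
    x = np.zeros(n_steps)
    for i in range(1, n_steps):
        x[i] = x[i - 1] + dt * (x[i - 1] - x[i - 1] ** 3 + xi[i - 1])
    return x

x = integrate(tau=2.0)
# Discard a transient, then estimate the stationary PDF by histogram.
pdf, edges = np.histogram(x[5000:], bins=60, range=(-2.0, 2.0), density=True)
```

Varying `tau` and re-plotting `pdf` is the coloured-noise experiment in miniature; the critical-correlation-time transition itself belongs to the specific shear-flow model of the paper.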

STUDY OF SPI FRAMEWORK FOR CMMI CONTINUOUS MODEL BASED ON QFD

filtering for multivariate stochastic systems with non-Gaussian noises, fault detection and diagnosis for general stochastic systems using B-spline expansions and nonlinear filters [10], entropy optimization filtering for fault detection and diagnosis [11], optimal probability density function control for NARMAX stochastic systems [12], and an online estimation algorithm for the unknown probability density functions of random parameters in stochastic ARMAX systems [13].

A refined statistical cloud closure using double-Gaussian probability density functions

Considering the distribution of s from each model level in the LES data over the whole domain, we find that the PDF of s can be highly skewed in the cloud layer, with positive skewness for shallow cumulus and negative skewness for stratocumulus (Fig. 1). For shallow cumulus, cloud formation is driven by surface heat fluxes that initiate few but strong updrafts in a slowly descending environment. Therefore the PDF of s is positively skewed, with the moist tail representing the (cloudy) updrafts. In contrast, stratocumulus is driven by radiative and evaporative cooling at cloud top. Hence non-cloudy downdrafts emerge in a dry tail of the PDF of s and the PDF tends to be skewed negatively (Helfand and Kalnay, 1983; Moeng and Rotunno, 1990). Consequently, for both the shallow cumulus regime and the stratocumulus regime, the success of a scheme diagnosing the cloud fraction and the average liquid water depends crucially on its ability to quantify the tail of the distribution.
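The sign conventions described in this excerpt (positively skewed s for cumulus updrafts, negatively skewed for stratocumulus downdrafts) are easy to check numerically. The mixture parameters below are invented stand-ins, not LES data:

```python
import numpy as np

rng = np.random.default_rng(1)

def skewness(s):
    """Third standardised moment of a sample of s."""
    s = np.asarray(s, dtype=float)
    d = s - s.mean()
    return np.mean(d**3) / np.mean(d**2) ** 1.5

# Cumulus-like: few strong moist updrafts in a slowly descending
# environment -> a moist (positive) tail in s.
environment = rng.normal(-0.1, 0.3, size=9000)
updrafts = rng.normal(1.5, 0.4, size=1000)
s_cu = np.concatenate([environment, updrafts])

# Stratocumulus-like: the mirror image, a dry (negative) tail.
s_sc = -s_cu
```

A single-Gaussian closure has zero skewness by construction, which is why the tail (and hence a double-Gaussian form) matters for diagnosing cloud fraction.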

Cosmic happenstance: 24-μm selected, multicomponent Herschel sources are line-of-sight projections

To compare to the COSMOS sample, we randomly draw 5000 galaxies from the quality-controlled COSMOS photometric catalogue, and compute their cumulative probability distributions in the same way. The only change to equation (1) is that c now ranges from 0 to 5000. The cumulative PDF for the random sampling is shown as the red dashed line in Fig. 3. It is immediately clear that while both samples cover a broad range of redshift space, the FIR-bright sample is more tightly clustered around a redshift of 1.0 than the randomly selected sample. The randomly selected photometric sources, by contrast, have a stronger tail out to both lower and higher redshifts. This peak at z = 1.0 in the FIR-bright sample is not surprising, as it has been found that 24-μm selected, 250-μm Herschel sources (similar to the selection in this work) have a peak in their redshift distribution around this value, with previous work reporting the peak between z = 0.85 (Casey et al. 2012) and z ∼ 1.0 (Béthermin et al. 2012). Surveys selected at longer wavelengths, for instance an 850-μm selection, typically peak at a redshift of z ≈ 2.5 (e.g. Chapman et al. 2005; Casey et al. 2012).
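A minimal sketch of the comparison this excerpt describes: empirical cumulative distributions of a clustered versus a broad redshift sample. The two toy samples are hypothetical stand-ins, not the COSMOS catalogue:

```python
import numpy as np

rng = np.random.default_rng(2)

def ecdf(z, grid):
    """Empirical cumulative distribution of redshifts z on a grid."""
    z = np.sort(np.asarray(z))
    return np.searchsorted(z, grid, side="right") / z.size

# Invented stand-ins: a FIR-bright-like sample clustered near z = 1.0
# versus a broader, randomly drawn photometric sample.
z_fir = np.clip(rng.normal(1.0, 0.25, size=2000), 0.0, 5.0)
z_rand = np.clip(rng.gamma(shape=2.0, scale=0.6, size=5000), 0.0, 5.0)

grid = np.linspace(0.0, 5.0, 501)
cdf_fir, cdf_rand = ecdf(z_fir, grid), ecdf(z_rand, grid)

# The clustered sample puts a larger fraction within |z - 1| < 0.5.
frac_fir = np.mean(np.abs(z_fir - 1.0) < 0.5)
frac_rand = np.mean(np.abs(z_rand - 1.0) < 0.5)
```

Plotting `cdf_fir` against `cdf_rand` reproduces the qualitative shape of the paper's Fig. 3: a steep rise near the clustered redshift versus a gradual, long-tailed curve.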

Gaussian approximations for probability measures on R^d

by the famous Bernstein–Von Mises (BvM) theorem [28] in asymptotic statistics. Roughly speaking, the BvM theorem states that under mild conditions on the prior, the posterior distribution of a Bayesian procedure converges to a Gaussian distribution centered at any consistent estimator (for instance, the maximum likelihood estimator (MLE)) in the limit of large data (or, relatedly, small noise [5]). The BvM theorem is of great importance in Bayesian statistics for at least two reasons. First, it gives a quantitative description of how the posterior contracts to the underlying truth. Second, it implies that the Bayesian credible sets are asymptotically equivalent to frequentist confidence intervals and hence the estimation of the latter can be realized by making use of the computational power of Markov chain Monte Carlo algorithms. We interpret the BvM phenomenon in the abstract theoretical framework of best Gaussian approximations with respect to a Kullback–Leibler measure of divergence.
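The closing idea of this excerpt, a best Gaussian approximation under Kullback–Leibler divergence, has a concrete finite-sample analogue: minimising KL(p‖q) over Gaussians q is equivalent to minimising the cross-entropy −E_p[log q], whose empirical minimiser is the moment-matched Gaussian. A sketch with an invented bimodal "posterior":

```python
import numpy as np

rng = np.random.default_rng(3)

# A hypothetical non-Gaussian "posterior": samples from a two-component
# mixture standing in for a finite-data Bayesian posterior.
p_samples = np.concatenate([rng.normal(-1.0, 0.5, 5000),
                            rng.normal(1.5, 0.5, 5000)])

def gauss_logpdf(x, mu, var):
    return -0.5 * ((x - mu) ** 2 / var + np.log(2.0 * np.pi * var))

def cross_entropy(mu, var):
    """Monte Carlo estimate of -E_p[log q]. Minimising this over (mu, var)
    minimises KL(p || q), because the entropy of p is a fixed constant."""
    return -np.mean(gauss_logpdf(p_samples, mu, var))

# The best Gaussian under KL(p || q) is the moment-matched one.
mu_hat, var_hat = p_samples.mean(), p_samples.var()
ce_opt = cross_entropy(mu_hat, var_hat)
ce_ref = cross_entropy(0.0, 1.0)   # an arbitrary competitor Gaussian
```

In the large-data limit the BvM theorem says the posterior itself contracts to a Gaussian, so this moment-matched approximation becomes exact.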

A Multivariate Student’s t Distribution

If the support is [µ − bβ, µ + bβ], then the general expressions need to be multiplied by functions that depend on b and ν. Truncation or effective truncation keeps the moments finite and defined for all ν ≥ 1 [3]-[5]. The general expression for the covariance, Equation (24), yields, when i = j, the general expression for the variance, Equation (23). The general expression for the variance, Equation (23), is given to emphasize the
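The claim that truncation keeps the moments finite for all ν ≥ 1 can be illustrated at the extreme case ν = 1 (the Cauchy distribution), whose untruncated variance is infinite. The bound b = 10 is an arbitrary choice; the closed form follows from ∫ x²/(1+x²) dx = x − arctan x:

```python
import numpy as np

rng = np.random.default_rng(8)

# Student's t with nu = 1 is the Cauchy distribution: no finite variance.
# Truncating the support to [mu - b, mu + b] makes the moments finite.
mu, b = 0.0, 10.0
u = rng.random(500_000)
x = mu + np.tan(np.pi * (u - 0.5))          # Cauchy via inverse CDF
x_t = x[np.abs(x - mu) <= b]                # truncated sample

var_mc = x_t.var()
# Closed form for a standard Cauchy truncated at +/- b:
# E[X^2] = (b - arctan b) / arctan b, i.e. b/arctan(b) - 1.
var_exact = b / np.arctan(b) - 1.0
```

The Monte Carlo variance of the truncated sample agrees with the closed form, while the variance of the untruncated sample would not converge at all as the sample grows.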

Wireless Sensor Network Factor Information Control Based on Genetic Algorithm

The main task of multi-objective optimization is to improve the quality of the solutions while maintaining a broad and uniform distribution of solutions; the weighted-sum genetic algorithm is a straightforward, practical, and strong multi-objective genetic algorithm. This paper focuses on the weighted-sum genetic algorithm: it creates the initial population by uniform design, standardizes the individual objective functions, establishes a new fitness function, proposes a dynamic weight-allocation scheme, and thereby designs a new weight-allocation-based multi-objective genetic algorithm for the multi-objective optimization problem. Second, it designs a uniform-design-based multi-objective optimization genetic algorithm, gives a proof of convergence of the algorithm, and verifies the algorithm's effectiveness by simulation. The Gaussian normal density function and the coverage probability function of the coverage area are adopted to optimize the set of sensor nodes into a minimal subset and determine the maximum target set; through state transitions, sensor nodes enter different states and work in turn, so network energy consumption is saved, network lifetime is extended, the ratio of network resources to quality of service is improved, redundant nodes are reduced, and, finally, network performance is optimized. Finally, simulations verify the effectiveness and stability of the algorithm; given the presence of vast network throughput and constraints from external factors, adaptation to very large sensor networks is the next focus of study.
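The core of the weighted-sum approach, normalising each objective and collapsing them into one fitness value, can be sketched as follows. The population, objective names, and fixed weights are hypothetical (the paper's scheme allocates weights dynamically):

```python
import numpy as np

rng = np.random.default_rng(9)

def weighted_sum_fitness(objectives, weights):
    """Scalarise multiple (minimisation) objectives: min-max normalise
    each objective across the population, then take a weighted sum."""
    obj = np.asarray(objectives, dtype=float)            # (pop, n_obj)
    lo, hi = obj.min(axis=0), obj.max(axis=0)
    norm = (obj - lo) / np.where(hi > lo, hi - lo, 1.0)  # each in [0, 1]
    return norm @ np.asarray(weights, dtype=float)

# Hypothetical population: objective 1 = coverage gap, 2 = energy use.
pop = rng.random((20, 2))
fitness = weighted_sum_fitness(pop, weights=[0.7, 0.3])
best = pop[np.argmin(fitness)]
```

Normalising first prevents an objective with larger numeric range from dominating the sum, which is the standardization step the abstract emphasises.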

A Novel Framework to Produce Statistically Accurate GRNs by Using CLT

ABSTRACT: Gaussian random numbers (GRNs) generated via the Central Limit Theorem (CLT) are afflicted by errors due to deviation from ideal Gaussian behavior for any finite number of addends. In this paper, we show that it is possible to compensate for the error in CLT, thereby correcting the resulting probability density function, particularly in the tail regions. We provide a detailed mathematical analysis to quantify the error in CLT. This yields a design space with more than four degrees of freedom to build a variety of GRN generators (GRNGs). A framework uses this design space to generate customized hardware architectures. We demonstrate designs of five different GRNG architectures, which vary in terms of consumed memory, logic slices, and multipliers on a field-programmable gate array. Depending upon the application, these architectures exhibit statistical accuracy from low (4σ) to extremely high (12σ). A comparison with previously published designs clearly shows the benefits of this approach in terms of both consumed hardware resources and accuracy. We also provide synthesis results of the same designs in an application-specific integrated circuit using a 65-nm standard cell library.
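The tail deviation this abstract corrects can be seen in the most common CLT construction, a sum of k uniform variates (k = 12 is the classical choice because the variance scaling drops out). A sketch of the uncorrected generator, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(4)

def clt_grn(n_samples, k=12):
    """CLT-based Gaussian random numbers: sum k uniforms, then centre
    and scale to zero mean and unit variance (for k = 12 the scale is 1)."""
    u = rng.random((n_samples, k))
    return (u.sum(axis=1) - k / 2.0) / np.sqrt(k / 12.0)

x = clt_grn(200_000)

# The body of the distribution looks Gaussian, but the support is
# hard-limited to +/- sqrt(3k) (= 6 sigma for k = 12): the tails beyond
# that are exactly empty, which is the deviation to be compensated.
tail_bound = np.sqrt(3 * 12)
```

A true Gaussian places small but non-zero mass beyond any bound, so hardware GRNGs targeting, say, 12σ accuracy cannot rely on the raw CLT sum alone.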

Gaussian Probability Density Functions: Properties and Error Characterization

The fact that (1.1) is completely characterized by two parameters, the first- and second-order moments of the pdf, renders its use very common in characterizing uncertainty in various domains of application. For example, in robotics, it is common to use Gaussian pdfs to statistically characterize sensor measurements, robot locations, and map representations.
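A minimal sketch of the point being made: a (here bivariate) Gaussian pdf is fully determined by its mean vector and covariance matrix. The sensor covariance values are invented:

```python
import numpy as np

def gauss2d(x, mu, Sigma):
    """Bivariate Gaussian pdf from its two characterising parameters:
    the mean vector mu and the covariance matrix Sigma."""
    d = x - mu
    Sinv = np.linalg.inv(Sigma)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(Sigma)))
    return norm * np.exp(-0.5 * np.einsum("...i,ij,...j->...", d, Sinv, d))

# Invented measurement uncertainty for a hypothetical 2-D sensor.
mu = np.array([2.0, 1.0])
Sigma = np.array([[0.10, 0.03],
                  [0.03, 0.05]])

# Numerical sanity check: the pdf integrates to 1 over a wide grid.
g = np.linspace(-3.0, 6.0, 301)
X, Y = np.meshgrid(g, g)
pts = np.stack([X, Y], axis=-1)
dx = g[1] - g[0]
total = gauss2d(pts, mu, Sigma).sum() * dx * dx
```

Everything about the uncertainty, including confidence ellipses, is recoverable from `mu` and `Sigma` alone, which is exactly why the two-parameter characterization is so convenient.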

Gaussian mixture probability hypothesis density filter for multipath multitarget tracking in over the horizon radar

Most conventional multitarget-tracking algorithms, such as the multiple hypothesis tracker (MHT) [12], joint probabilistic data association (JPDA) [13, 14], and the probability hypothesis density (PHD) filter [15], assume the following measurement model: (1) every target produces at most one measurement, and (2) any measurement is produced by a target or clutter. In this paper, we refer to a measurement model satisfying these assumptions as the standard measurement model. However, many measurement models in real-life target-tracking scenarios do not satisfy these prerequisites and are treated as nonstandard measurement models. Recently, the multiple detection joint probabilistic data association (MD-JPDA) filter [16], based on the JPDA framework, was proposed to deal with multiple-detection targets, and it can be applied to OTHR.

Statistical Models of the Troposphere Refractive Index

Using the results of systematic measurements of meteorological parameters (temperature, pressure, and humidity) by regular weather stations in Ukraine (for some cities 8 times a day, for others 4 times), values of the refractive index were assessed [11, 12] and a database covering all regions of Ukraine was formed. In total, 100 cities were covered for the period from 01.01.2010 to 01.01.2012. In winter, regardless of where a city is located (on the sea coast or deep in the interior), the refractive index is about 310 N-units. In summer, depending on whether the city is on the sea coast or in the interior of the country, it ranges from 380 N-units down to 350 N-units. Significant non-stationarity of the daily and seasonal behavior is manifested in the non-Gaussianity of its distribution, most notable at values of the refractive index substantially larger or smaller than the average. Investigation of the distribution within each season showed that it is, to a first approximation, satisfactorily described by the standard Gaussian model [6]. The final probabilities of each of the phase states P_i were determined, and it was shown that this allows, in approximately 70% of cases, a significant reduction of the approximation error compared to using a uniform probability model P_i = 0.25 for the seasons.

Reconstruction of one-dimensional chaotic maps from sequences of probability density functions

problem for a very restrictive class of piecewise constant density functions, using graph-theoretical methods. Ershov and Malinetskii [16] proposed a numerical algorithm for constructing a one-dimensional unimodal transformation which has a given invariant density. The results were generalized in Góra and Boyarsky [17], who introduced a matrix method for constructing a 3-band transformation such that an arbitrary given piecewise constant density is invariant under the transformation. Diakonos and Schmelcher [18] considered the inverse problem for a class of symmetric maps that have invariant symmetric Beta density functions. For the given symmetry constraints, they show that this problem has a unique solution. A generalization of this approach, which deals with a broader class of continuous unimodal maps for which each branch of the map covers the complete interval and considers asymmetric beta density functions, is proposed in [19]. Huang presented approaches to constructing smooth chaotic transformations in closed form [20,21] and multi-branch complete chaotic maps [22] with given invariant densities. Boyarsky and Góra [23] studied the problem of representing the dynamics of chaotic maps, which are irreversible, by a reversible deterministic process. Baranovsky and Daems [24] considered the problem of synthesizing one-dimensional piecewise linear Markov maps with a prescribed autocorrelation function. The desired invariant density is then obtained by performing a suitable coordinate transformation. An alternative stochastic optimization approach is proposed in [25] to synthesize smooth unimodal maps with given invariant density and autocorrelation function. An analytical approach to solving the IFPP for two specific types of one-dimensional symmetric maps, given an analytic form of the invariant density, was introduced in [26].
A method for constructing chaotic maps with arbitrary piecewise constant invariant densities and arbitrary mixing properties using positive matrix theory was proposed in [5]. The approach has been exploited to synthesize dynamical systems with desired characteristics, i.e. Lyapunov exponent and mixing properties, that share the same invariant density [27], and to analyse and design communication networks based on TCP-like congestion control mechanisms [28]. An extension of this work to randomly switched chaotic maps is studied in [29]. It is also shown how the method can be extended to higher dimensions and how the approach can be used to encode images. In [30], the inverse problem is formulated as the problem of stabilizing a target
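The forward direction of the inverse Frobenius–Perron problem that these works invert can be checked numerically: given a map and a candidate invariant density, pushing samples of that density through the map should leave their distribution unchanged. The classic closed-form example is the logistic map T(x) = 4x(1−x) with ρ(x) = 1/(π√(x(1−x))):

```python
import numpy as np

rng = np.random.default_rng(10)

# Draw samples with the known invariant density of the logistic map:
# if U is uniform on (0,1), then sin(pi*U/2)**2 has density
# rho(x) = 1 / (pi * sqrt(x * (1 - x))).
u = rng.random(200_000)
x = np.sin(np.pi * u / 2.0) ** 2

# Invariance: applying the map once must leave the distribution fixed.
y = 4.0 * x * (1.0 - x)

hist, edges = np.histogram(y, bins=40, range=(0.0, 1.0), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
rho = 1.0 / (np.pi * np.sqrt(centres * (1.0 - centres)))

# Compare away from the endpoint singularities of rho.
mask = (centres > 0.1) & (centres < 0.9)
max_rel_err = np.max(np.abs(hist[mask] - rho[mask]) / rho[mask])
```

The inverse problem surveyed above runs this in the other direction: given a target ρ, construct a map T under which ρ is invariant.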

BAYESIAN CLASSIFICATION USING GAUSSIAN MIXTURE MODEL AND EM ESTIMATION: IMPLEMENTATIONS AND COMPARISONS

The above M-steps are not suitable for the basic EM algorithm, though. When the initial C is high, it can happen that all weights become zero because none of the components has enough support from the data. Therefore a component-wise EM algorithm (CEM) is adopted. CEM updates the components one by one, computing the E-step (updating W) after each component update, whereas the basic EM updates all components "simultaneously". When a component is annihilated, its probability mass is immediately redistributed, strengthening the remaining components. [7]
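A sketch of just the annihilation-and-redistribution step described above (not a full CEM implementation); the weights, support counts, and threshold are invented:

```python
import numpy as np

def annihilate_and_renormalise(weights, support, threshold):
    """Zero out components whose data support falls below threshold and
    immediately redistribute their mass over the surviving components."""
    w = np.asarray(weights, dtype=float).copy()
    w[np.asarray(support) < threshold] = 0.0
    total = w.sum()
    return w / total if total > 0 else w

# Invented numbers: the third component has almost no support.
w = annihilate_and_renormalise([0.5, 0.3, 0.2], support=[120, 80, 3],
                               threshold=10)
```

Because CEM interleaves this redistribution with per-component updates, surviving components immediately absorb the freed mass instead of all weights collapsing at once.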

Exact and Approximate Weighted Model Integration with Probability Density Functions Using Knowledge Compilation

In the initial work on weighted model integration (Belle, Passerini, and Van den Broeck 2015), the authors perform weighted model integration on piecewise polynomials by iteratively generating models, adding the negation of the model from the previous iteration to the formula. In subsequent work by Morettin, Passerini, and Sebastiani (2017), the number of generated models is substantially reduced by deploying SMT-based predicate abstraction (Graf and Saïdi 1997). In this line of work, Belle et al. (2016) also investigated component caching while performing a DPLL search when calculating a weighted model integral. Their approach is indeed related to knowledge compilation. However, it is not applicable in cases when algebraic constraints exist between variables and couple them. The methods proposed on WMI are strictly limited to piecewise polynomials. We completely lift this restriction and are able to perform WMI via knowledge compilation on SMT(RA) and SMT(NRA) formulas using probability density functions instead of piecewise polynomials on SMT(LRA).

Audio Query by Example Using Similarity Measures between Probability Density Functions of Features

This paper proposes a query by example system for generic audio. We estimate the similarity of the example signal and the samples in the queried database by calculating the distance between the probability density functions (pdfs) of their frame-wise acoustic features. Since the features are continuous valued, we propose to model them using Gaussian mixture models (GMMs) or hidden Markov models (HMMs). The models parametrize each sample efficiently and retain sufficient information for similarity measurement. To measure the distance between the models, we apply a novel Euclidean distance, approximations of Kullback–Leibler divergence, and a cross-likelihood ratio test. The performance of the measures was tested in simulations where audio samples are automatically retrieved from a general audio database, based on the estimated similarity to a user-provided example. The simulations show that the distance between probability density functions is an accurate measure for similarity. Measures based on GMMs or HMMs are shown to produce better results than those of existing methods based on simpler statistics or histograms of the features. A good performance with low computational cost is obtained with the proposed Euclidean distance.
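The Kullback–Leibler divergence between two GMMs has no closed form, which is why approximations are needed; a simple Monte Carlo approximation samples from one model and averages the log-ratio. The 1-D mixture parameters below are invented, and a real system would first fit the GMMs to frame-wise features:

```python
import numpy as np

rng = np.random.default_rng(5)

def gmm_logpdf(x, weights, means, stds):
    """Log-density of a 1-D Gaussian mixture at the points x."""
    comp = (np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2)
            / (stds * np.sqrt(2.0 * np.pi)))
    return np.log(comp @ weights)

def kl_mc(p, q, n=50_000):
    """Monte Carlo KL(p || q): sample from p, average the log-ratio."""
    w, m, s = p
    k = rng.choice(len(w), size=n, p=w)
    x = rng.normal(m[k], s[k])
    return np.mean(gmm_logpdf(x, *p) - gmm_logpdf(x, *q))

p = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5]))
q = (np.array([0.5, 0.5]), np.array([-1.2, 1.2]), np.array([0.5, 0.5]))

d_pq = kl_mc(p, q)   # positive: the models differ
d_pp = kl_mc(p, p)   # zero log-ratio sample by sample
```

KL is asymmetric, so retrieval systems often symmetrise it (e.g. averaging KL(p‖q) and KL(q‖p)) before ranking database entries.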

Diameter distribution of Cerrado – Caatinga forest formations using different Weibull functions forms

The vegetation of Piauí State lacks information on the behavior and dynamics of arboreal specimens. Aspects such as growth and mortality are fundamental to implementing management plans aimed at using natural resources in a sustainable manner. Besides, the existence of transition areas must be highlighted, since it indicates the interaction between climatic, geomorphologic, and edaphic factors, resulting in elevated levels of biodiversity and species distribution (Emperaire, 1989). This study aimed to evaluate the application of different Weibull distribution methods, with two and three coefficients, and to describe the diameter distribution in the Brazilian biomes Cerrado and Caatinga in Piauí. The functions with two and three parameters, and the right-truncated function, were adjusted for 12 plots allocated in a 29-ha fragment located in Canto das Macambiras, in Batalha County, Piauí, Brazil. The Weibull p.d.f. was adjusted using different function forms and adjustment methods (the two-parameter function adjusted by maximum likelihood and by linear approximation, the right-truncated function, and the three-parameter function adjusted by the maximum likelihood method). Adherence to the data was evaluated by the Kolmogorov-Smirnov test (α = 0.01). The four function forms tested showed adherence according to the Kolmogorov-Smirnov test when analyzed for all species. On the other hand, for U. tomentosa, none of the four functions tested adhered to the observed data. The two-parameter Weibull distribution was not efficient in estimating the diameter distribution of the species. The three-parameter Weibull distribution and the right-truncated function estimated the frequency of specimens per diameter class for the C. xanthocarpa and B. ungulata species; therefore, it is recommended to carry out subsequent surveys to verify the distribution behavior.
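The "linear approximation" adjustment mentioned above is commonly a regression on the Weibull plot: with plotting positions F_i, ln(−ln(1−F)) is linear in ln(d) with slope equal to the shape parameter. A sketch on synthetic diameters (the true parameters and median-rank positions are illustrative choices, not the inventory data):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic diameters (cm) by inverse transform: if U ~ Uniform(0,1),
# then scale * (-ln U)**(1/shape) is two-parameter Weibull.
shape_true, scale_true = 1.8, 12.0
d = scale_true * (-np.log(rng.random(500))) ** (1.0 / shape_true)

# Weibull-plot regression: with median-rank plotting positions F_i,
# ln(-ln(1 - F)) is linear in ln(d), with slope = shape.
d_sorted = np.sort(d)
n = d_sorted.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median ranks
yy = np.log(-np.log(1.0 - F))
xx = np.log(d_sorted)
shape_hat, intercept = np.polyfit(xx, yy, 1)
scale_hat = np.exp(-intercept / shape_hat)

# Kolmogorov-Smirnov statistic against the fitted distribution.
F_fit = 1.0 - np.exp(-(d_sorted / scale_hat) ** shape_hat)
i = np.arange(1, n + 1)
ks_stat = max(np.max(i / n - F_fit), np.max(F_fit - (i - 1) / n))
```

A three-parameter (location-shifted) fit would add a minimum-diameter offset before taking logs, which is what makes it better suited to stands whose smallest measured class starts well above zero.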

Nonparametric estimation and testing on discontinuity of positive supported densities: A kernel truncation approach

The remainder of this paper is organized as follows. Section 2 presents estimation and testing procedures of the density at a known discontinuity point c (> 0). As an important practical problem, a smoothing parameter selection method is also developed. Our particular focus is on the choice method for power optimality. In Section 3, we discuss how to estimate the entire density when the density has a discontinuity point. Convergence properties of density estimates are also explored. Section 4 conducts Monte Carlo simulations to evaluate finite-sample properties of the proposed jump-size estimator and test statistic. An empirical application on the validity of RDD is presented in Section 5. Section 6 summarizes the main results of the paper. Proofs are provided in the Appendix.

B0s lifetime measurement in the CP-odd decay channel B0s → J/ψ f0(980)

the fit is performed for each new mass window selection. This results in a systematic uncertainty of 8 µm. We test the modeling and fitting method used to estimate the lifetime using data generated in pseudoexperiments with a range of lifetimes from 300 to 800 µm. A bias arises due to imperfect separation of signal and background. Since the background has a shorter lifetime than the signal, the result is a slight underestimate of the signal lifetime. The bias has a value of -4.4 µm for an input lifetime of 500 µm and 500 signal events. We have corrected the lifetime for this bias and a 100% uncertainty on the correction has been applied to the result. We estimate the systematic uncertainty due to the models for the λ and mass distributions by varying the parameterizations of the different components: (i) the cross-feed contamination is modeled by two Gaussian functions instead of one, (ii) the exponential mass distribution for the combinatorial background model is replaced by a first-order polynomial, (iii) the smoothing of the non-parametric function that models the B± contamination is varied, and (iv) the exponential functions modelling the background λ distributions are smeared with a Gaussian resolution similar to the signal. To take into account correlations between the effects of the different models, a fit that combines all different model changes is performed. We quote the difference between the result of this fit and the nominal fit as the systematic uncertainty.

GaussianProcesses.jl: A Nonparametric Bayes package for the Julia Language

When the likelihood p(y | f, θ) is non-Gaussian, the posterior distribution of the latent function, conditional on observed data p(f | D, θ), does not have a closed-form solution. A popular approach for addressing this problem is to replace the posterior with an analytic approximation, such as a Gaussian distribution derived from a Laplace approximation (Williams and Barber, 1998) or an expectation-propagation algorithm (Minka, 2001). These approximations are simple to employ and can work well in practice on specific problems (Nickisch and Rasmussen, 2008); however, in general these methods struggle if the posterior is significantly non-Gaussian. Alternatively, rather than trying to find a tractable approximation to the posterior, one could sample from it, use the samples as a stochastic approximation, and evaluate integrals of interest through Monte Carlo integration (Ripley, 2009).
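Both alternatives described here can be seen in a deliberately tiny 1-D case: a posterior proportional to sigmoid(f)·N(f; 0, 1), a hypothetical single-observation stand-in for GP classification. The Laplace approximation is a Gaussian at the Newton-found mode; the sampling route weights prior draws by the likelihood:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Log-posterior (up to a constant): log sigmoid(f) - f**2 / 2.
# Laplace approximation: Newton iterations to the mode, then a Gaussian
# with variance = -1 / (second derivative at the mode).
f = 0.0
for _ in range(50):
    grad = sigmoid(-f) - f                  # d/df of the log-posterior
    hess = -sigmoid(f) * sigmoid(-f) - 1.0  # always negative here
    f -= grad / hess
f_hat, var_hat = f, -1.0 / hess

# Monte Carlo alternative: importance-weight prior draws by the
# likelihood and estimate posterior integrals from the samples.
rng = np.random.default_rng(7)
fs = rng.standard_normal(200_000)
w = sigmoid(fs)
mc_mean = np.sum(w * fs) / np.sum(w)
```

For this log-concave posterior the two answers nearly coincide; the gap between them grows exactly in the "significantly non-Gaussian" regimes where the excerpt says analytic approximations struggle.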
