Kernel Estimator

Asymptotic Confidence Bands for Copulas Based on the Local Linear Kernel Estimator

In this paper, we establish asymptotically optimal simultaneous confidence bands for the copula function based on the local linear kernel estimator proposed by Chen and Huang [1]. To this end, we prove, under smoothness conditions on the derivatives of the copula, a uniform-in-bandwidth law of the iterated logarithm for the maximal deviation of this estimator from its expectation. We also show that the bias term converges uniformly to zero with a precise rate. The performance of these bands is illustrated by a simulation study. An application based on pseudo-panel data is also provided for modeling the dependence structure of Senegalese households' expense data in 2001 and 2006.
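
To make the object being estimated concrete, here is a minimal sketch of a kernel-smoothed empirical copula built from pseudo-observations (normalized ranks). It is not the Chen and Huang local linear estimator itself: the Gaussian kernel, the bandwidth value, and the function names are illustrative assumptions, and the boundary bias near the edges of the unit square, which the local linear estimator is designed to remove, is left uncorrected.

```python
import numpy as np
from scipy.stats import norm, rankdata

def kernel_copula(u, v, x, y, h=0.05):
    """Kernel-smoothed empirical copula estimate at (u, v), obtained by
    replacing the indicator functions of the empirical copula with
    Gaussian kernel CDFs applied to the pseudo-observations."""
    n = len(x)
    uh = rankdata(x) / (n + 1)   # pseudo-observations U_i
    vh = rankdata(y) / (n + 1)   # pseudo-observations V_i
    return np.mean(norm.cdf((u - uh) / h) * norm.cdf((v - vh) / h))

# example on a Gaussian-dependent sample
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=500)
print(kernel_copula(0.5, 0.5, z[:, 0], z[:, 1]))
```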

On the convergence rates of kernel estimator and hazard estimator for widely dependent samples

We can further refer to some large-sample properties of nonparametric estimation based on WOD samples. For instance, Wang et al. ([11], 2013) studied the strong consistency of the estimator in a fixed-design regression model for WOD samples. Shi and Wu ([12], 2014) discussed the strong consistency of the kernel density estimator for identically distributed WOD samples. Li et al. ([13], 2015) studied the pointwise strong consistency of a kind of recursive kernel estimator based on WOD samples.
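
The recursive kernel estimator referenced above updates the density estimate one observation at a time. The sketch below uses the classical Wolverton-Wagner form with bandwidths h_i = c*i^(-1/5); the exact recursive variant studied in [13] may differ, so treat this form as an illustrative assumption.

```python
import numpy as np

def gauss(t):
    return np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)

def recursive_kde(data, grid, c=1.0):
    """Recursive (Wolverton-Wagner style) kernel density estimator:
    f_n(x) = (1/n) sum_i K((x - X_i) / h_i) / h_i,  h_i = c * i^(-1/5).
    Each new observation updates the estimate without revisiting old data."""
    f = np.zeros_like(grid)
    for i, xi in enumerate(data, start=1):
        h = c * i ** (-0.2)
        # online update: f_i = ((i-1)/i) f_{i-1} + (1/i) K_h(x - X_i)
        f = ((i - 1) * f + gauss((grid - xi) / h) / h) / i
    return f
```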

Regularized Data-Based Nonparametric Filtration of Stochastic Signals

For strongly stationary sequences, a nonparametric counterpart of the optimal filtering equation was constructed in the theory of nonparametric signal processing. This approach was developed for the case where the state equation and the probability distribution of the unobservable signal are unknown, while the stochastic observation equation is known completely. The estimation equation includes the kernel estimator of the logarithmic density derivative, which depends on the bandwidths of the density estimate and its derivatives.
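
As a concrete illustration of the key ingredient, the sketch below estimates the logarithmic density derivative (log f)'(x) = f'(x)/f(x) with a Gaussian kernel. For simplicity it uses one common bandwidth for the density and its derivative, whereas the approach described above allows separate bandwidths; the regularization floor is also an illustrative choice.

```python
import numpy as np

def log_density_derivative(x, data, h):
    """Kernel estimate of (log f)'(x) = f'(x) / f(x), using a Gaussian
    kernel for f and its analytic derivative for f'."""
    u = (x[None, :] - data[:, None]) / h    # shape (n, m)
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    f = k.mean(axis=0) / h                  # density estimate
    fp = -(u * k).mean(axis=0) / h ** 2     # derivative estimate
    return fp / np.maximum(f, 1e-12)        # guard against tiny densities
```

For standard normal data the output should be close to -x, the true logarithmic density derivative.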

Some Improvement on Convergence Rates of Kernel Density Estimator

… of MSE for orthogonal kernel estimators. Article [2] introduced geometric extrapolation of nonnegative kernels, while [3] discussed the number of vanishing moments of the kernel order using the Fourier transform. Variable kernel estimation in [4] successfully reduced the bias by employing larger smoothing parameters in low-density regions, while [5] introduced the idea of inadmissible kernels, which also results in reduced bias. On the other hand, [6] proposed an estimator, based on probabilistic arguments, which achieves the goal of bias reduction. Article [7] suggested a locally parametric density estimator, a semiparametric technique, which effectively reduces the order of the bias. Article [8] proposed algorithms based on a quadratic polynomial and the β cumulative distribution function (c.d.f.) which accommodate possible poles at the boundaries and consequently reduce the bias there. Article [9] introduced a bias reduction method using an estimated c.d.f. via smoothed kernel transformations. Article [10] introduced a two-stage multiplicative bias-corrected estimator. Article [11] developed a skewing method that reduces the bias while the variance is only increased by a moderate constant factor. In addition, some recent works discussed approaches for obtaining a smaller bias of the estimator via several other methods. Article [12] worked out a kernel with reduced bias relative to the classical kernel estimator via a Lipschitz condition. Article [13] introduced an adjusted kernel density estimator in which the kernel is adapted to the data rather than fixed. This method naturally leads to an adaptive choice of the smoothing parameters, which can reduce the bias.
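
As one concrete example from this list, a two-stage multiplicative bias correction of the kind attributed to [10] can be sketched as follows. The exact estimator in [10] is not reproduced here; this is the familiar form in which each kernel is reweighted by the inverse of a pilot estimate, an assumption on our part.

```python
import numpy as np

def kde(x, data, h):
    """Classical Gaussian kernel density estimator."""
    u = (x[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * u ** 2), axis=1) / (h * np.sqrt(2 * np.pi))

def mbc_kde(x, data, h):
    """Two-stage multiplicative bias-corrected KDE: multiply the pilot
    estimate by a kernel-weighted average of 1 / f_pilot(X_i)."""
    f_pilot = kde(data, data, h)
    u = (x[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / (h * np.sqrt(2 * np.pi))
    correction = np.mean(k / f_pilot[None, :], axis=1)
    return kde(x, data, h) * correction
```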

Nonparametric circular quantile regression

We should stress that in any practical application the origin ("cut-point") should be chosen dependent on u so as to minimise the width of the interval to be estimated. This adjustment is important for obtaining meaningful interpretations, especially if the conditional mean can take values close to π. We note that the double-kernel estimator has two tuning parameters (λ and κ), whereas the circular check function estimator has only one. It may be for this reason that, in some simulation experiments with small samples, we have observed a slightly better performance of the double-kernel estimator in most settings. However, this comes at the cost of more computational effort, as well as the need to make good smoothing choices. This effort can be reduced by first selecting a good value of λ in the check function, and then using this value, together with an appropriate κ, in the double-kernel estimator. Cross-validation can be used in this process, though we have found a joint selection of both κ and λ to be problematic.
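
A hedged sketch of the single-parameter check function estimator may help fix ideas. We assume the smoothing parameter enters as a von Mises concentration on the circular predictor and that the cut-point is supplied by the user; the function names and the way the parameter is used here are illustrative, not the paper's exact construction.

```python
import numpy as np

def circ_check_quantile(theta0, theta, y, tau=0.5, lam=4.0, cut=np.pi):
    """Check-function circular quantile sketch: minimize
    sum_i w_i * rho_tau(y_i - q) with von Mises weights
    w_i = exp(lam * cos(theta_i - theta0)), after unwrapping the circular
    responses into (cut - 2*pi, cut].  The minimizer of the weighted check
    loss is the weighted tau-quantile, read off from cumulative weights."""
    w = np.exp(lam * np.cos(theta - theta0))
    yu = np.mod(y - cut, 2 * np.pi) + cut - 2 * np.pi  # unwrap at the cut
    order = np.argsort(yu)
    cw = np.cumsum(w[order]) / np.sum(w)
    q = yu[order][min(np.searchsorted(cw, tau), len(cw) - 1)]
    return np.mod(q + np.pi, 2 * np.pi) - np.pi        # back to (-pi, pi]
```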

LQ-moments for statistical analysis of extreme events

Statistical analysis of extremes is conducted for predicting events with large return periods. LQ-moments, which are based on linear combinations of quantiles, are reviewed for characterizing the upper quantiles of distributions and the larger events in data. The LQ-moments method is presented based on a new quick estimator, using five-point quantiles and the weighted kernel estimator, to estimate the parameters of the generalized extreme value (GEV) distribution. Monte Carlo methods illustrate the performance of LQ-moments in fitting the GEV distribution to both GEV and non-GEV samples. The proposed estimators of the GEV distribution were compared with conventional L-moments and with LQ-moments based on linear interpolation quantiles for various sample sizes and return periods. The results indicate that the new method has generally good performance, making it an attractive option for estimating quantiles of the GEV distribution.
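
The weighted kernel quantile estimator that underlies this construction can be sketched as follows. We use the standard Sheather-Marron form with a Gaussian kernel; the exact weighting in the paper's quick estimator may differ, so treat the kernel choice and bandwidth as assumptions.

```python
import numpy as np
from scipy.stats import norm

def kernel_quantile(p, data, h=0.05):
    """Weighted kernel quantile estimator:
    Q(p) = sum_i X_(i) * integral over ((i-1)/n, i/n] of K_h(t - p) dt,
    a smoothed alternative to linear-interpolation sample quantiles."""
    xs = np.sort(data)
    n = len(xs)
    i = np.arange(1, n + 1)
    w = norm.cdf((i / n - p) / h) - norm.cdf(((i - 1) / n - p) / h)
    return np.sum(w * xs) / np.sum(w)   # renormalize the truncated weights
```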

A note on the asymptotic normality of the kernel deconvolution density estimator with logarithmic chi square noise

Fan [3] studies the quadratic mean convergence rate of the kernel deconvolution estimator; it turns out that the convergence rate of the estimator depends heavily on the type of error distribution. In particular, it is determined by the tail behaviour of the modulus of the characteristic function of the error distribution: the faster the modulus goes to zero in the tail, the slower the convergence rate. The following lemma, which is from Van Es, Spreij, and Van Zanten [10], gives the tail behaviour of |φ_k(t)|.
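
A minimal sketch of the kernel deconvolution estimator may clarify the role of the error characteristic function. For an explicit formula we assume Laplace(0,1) noise, whose characteristic function 1/(1+t^2) has a closed form (the logarithmic chi-square noise of the paper would simply replace phi_eps); the kernel characteristic function (1-t^2)^3 on [-1, 1] is a common compactly supported choice, also an assumption.

```python
import numpy as np

def deconv_kde(x, data, h):
    """Deconvolution KDE for observations Y = X + eps:
    f_hat(x) = (1/2pi) int e^{-itx} phi_K(ht) phi_emp(t) / phi_eps(t) dt.
    phi_K is compactly supported, so dividing by phi_eps cannot blow up."""
    t = np.linspace(-1 / h, 1 / h, 2001)
    dt = t[1] - t[0]
    phi_K = np.clip(1 - (h * t) ** 2, 0, None) ** 3
    phi_emp = np.mean(np.exp(1j * t[None, :] * data[:, None]), axis=0)
    phi_eps = 1.0 / (1.0 + t ** 2)                    # Laplace(0,1) noise
    integrand = np.exp(-1j * np.outer(x, t)) * phi_K * phi_emp / phi_eps
    return np.real(integrand.sum(axis=1)) * dt / (2 * np.pi)
```

The faster |phi_eps| decays in the tails, the more the division amplifies sampling noise in phi_emp, which is exactly the mechanism behind the rate results described above.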

Minimum Density Hyperplanes

This paper introduces a novel approach to clustering and semi-supervised classification which directly identifies low-density hyperplanes in the finite sample setting. In this approach the density on a hyperplane criterion proposed by Ben-David et al. (2009) is directly minimised with respect to a kernel density estimator that employs isotropic Gaussian kernels. The density on a hyperplane provides a uniform upper bound on the value of the empirical density at points that belong to the hyperplane. This bound is tight and proportional to the density on the hyperplane. Therefore, the smallest upper bound on the value of the empirical density on a hyperplane is achieved by hyperplanes that minimise the density on a hyperplane criterion. An important feature of the proposed approach is that the density on a hyperplane can be evaluated exactly through a one-dimensional kernel density estimator, constructed from the projections of the data sample onto the vector normal to the hyperplane. This renders the computation of minimum density hyperplanes tractable even in high dimensional applications.
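
The exact one-dimensional reduction mentioned in the abstract is easy to state in code: with isotropic Gaussian kernels, integrating the d-dimensional kernel density estimator over the hyperplane {x : v'x = b} gives precisely a one-dimensional Gaussian KDE of the projections v'X_i evaluated at b. The sketch below is a minimal rendering of this identity, not the full minimum-density-hyperplane optimizer.

```python
import numpy as np

def density_on_hyperplane(v, b, X, h):
    """Integral of the isotropic-Gaussian KDE over {x : v'x = b}: because
    the Gaussian factorizes along v and its orthogonal complement (whose
    integral is 1), this equals a 1-D Gaussian KDE of the projections."""
    v = v / np.linalg.norm(v)
    proj = X @ v
    u = (b - proj) / h
    return np.mean(np.exp(-0.5 * u ** 2)) / (h * np.sqrt(2 * np.pi))
```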

Nonparametric Confidence Interval for Quantiles

One way of achieving a confidence interval for quantiles is the direct use of a central limit theorem. In this approach, we require a good estimator of the quantile density function. In this paper, we consider the nonparametric estimator of the quantile density function from Soni et al. (2012) and obtain a confidence interval for quantiles. Using simulation, the coverage probability and mean square error of this confidence interval are then calculated. We also compare our proposed approach with alternative approaches such as sectioning and the jackknife.
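
The CLT-based interval takes the form xi_hat(p) +/- z_{1-alpha/2} * sqrt(p(1-p)/n) * q_hat(p), where q_hat estimates the quantile density Q'(p). The sketch below uses a generic kernel quantile-density estimator obtained by smoothing the spacings of the order statistics; it stands in for, but is not necessarily identical to, the Soni et al. (2012) estimator.

```python
import numpy as np
from scipy.stats import norm

def quantile_ci(data, p, alpha=0.05, h=0.1):
    """CLT-based confidence interval for the p-th quantile:
    xi_hat +/- z_{1-alpha/2} * sqrt(p(1-p)/n) * q_hat(p),
    with q_hat a kernel smoothing of order-statistic spacings."""
    xs = np.sort(data)
    n = len(xs)
    grid = np.arange(1, n) / n
    q_hat = np.sum(norm.pdf((p - grid) / h) / h * np.diff(xs))
    xi_hat = np.quantile(data, p)
    half = norm.ppf(1 - alpha / 2) * np.sqrt(p * (1 - p) / n) * q_hat
    return xi_hat - half, xi_hat + half
```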

A Berry-Esseen Type Bound for the Kernel Density Estimator of Length-Biased Data

… in which K(⋅) is a kernel function; one estimator is defined for positive arguments, and the other for arguments greater than an arbitrary positive constant. These estimators were originally proposed and investigated by Jones [8] and Bhattacharyya et al. [2], respectively. In order to achieve the desired result, we need to present other versions of the Jones and Bhattacharyya estimators. These estimators are defined as below.
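
For orientation, a Jones-type estimator for length-biased data can be sketched as follows; the harmonic-mean normalization and the Gaussian kernel are the textbook choices and may differ in detail from the versions introduced above.

```python
import numpy as np

def jones_lb_kde(x, y, h):
    """Jones-type density estimator for length-biased data Y ~ y f(y)/mu:
    f_hat(x) = (mu_hat / n) * sum_i K_h(x - Y_i) / Y_i,
    with mu_hat = (mean of 1/Y_i)^(-1) the harmonic-mean estimator.
    Requires strictly positive observations."""
    mu_hat = 1.0 / np.mean(1.0 / y)
    u = (x[:, None] - y[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return mu_hat * np.mean(k / y[None, :], axis=1) / h
```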

Asymptotic Behaviors of Nearest Neighbor Kernel Density Estimator in Left-truncated Data

We then kept the data (Y_i, T_i) such that Y_i ≥ T_i. Using this scheme, m = 10 independent samples of size n were generated. For each sample, plug-in estimates of the unknown quantities were used. The following figures represent the average of the m density estimates and their confidence bounds. The kernel function …
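
The nearest neighbor kernel density estimator at the heart of this simulation replaces a fixed bandwidth with the distance to the k-th nearest sample point. The sketch below shows this idea alone; the truncation correction (the product-limit weighting needed for left-truncated data) is deliberately omitted and would be required in the setting above.

```python
import numpy as np

def nn_kde(x, data, k=20):
    """Nearest-neighbor kernel density estimator: the bandwidth at each
    evaluation point x_j is the distance R_k(x_j) to the k-th nearest
    observation, so the estimator adapts to the local sample density."""
    f = np.empty_like(x, dtype=float)
    for j, xj in enumerate(x):
        d = np.sort(np.abs(data - xj))
        h = d[k - 1]                    # k-th nearest-neighbor distance
        u = (xj - data) / h
        f[j] = np.mean(np.exp(-0.5 * u ** 2)) / (h * np.sqrt(2 * np.pi))
    return f
```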

Testing explosive bubbles with time varying volatility

This paper considers the problem of testing for an explosive bubble in financial data in the presence of time-varying volatility. We propose a weighted least squares-based variant of the Phillips, Wu and Yu (2011) test for explosive autoregressive behaviour. We find that such an approach has appealing asymptotic power properties, with the potential to deliver substantially greater power than the established OLS-based approach for many volatility and bubble settings. Given that the OLS-based test can outperform the weighted least squares-based test for other volatility and bubble specifications, we also suggest a union of rejections procedure that succeeds in capturing the better power available from the two constituent tests for a given alternative. Our approach involves a nonparametric kernel-based volatility function estimator for computation of the weighted least squares-based statistic, together with the use of a wild bootstrap procedure applied jointly to both individual tests, delivering a powerful testing procedure that is asymptotically size-robust to a wide range of time-varying volatility specifications.
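
The kernel-based volatility function estimator that feeds the weighted least squares statistic can be sketched generically as a Nadaraya-Watson smooth of squared residuals over rescaled time; the Gaussian kernel and bandwidth below are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np

def kernel_volatility(u, h=0.1):
    """Nadaraya-Watson estimate of a time-varying variance profile:
    sigma2(s) = sum_t K_h(s - t/T) u_t^2 / sum_t K_h(s - t/T),
    evaluated at each rescaled time point s = t/T for residuals u_t."""
    T = len(u)
    s = np.arange(1, T + 1) / T
    w = np.exp(-0.5 * ((s[:, None] - s[None, :]) / h) ** 2)
    return (w @ (u ** 2)) / w.sum(axis=1)
```

The square roots of these fitted variances would then be used to weight the observations in the regression underlying the test statistic.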

A note on asymptotic normality of kernel deconvolution density estimator with logarithmic Chi-square noise: with application in volatility density estimation

… financial econometrics to describe the evolution of financial returns. Model (23) incorporates, as special cases, popular discrete-time SV models (e.g. Taylor (1982)) and discretized continuous-time SV models which assume the volatility process to be stationary (see e.g. Shephard (2005) for a review). Van Es, Spreij, and Van Zanten (2003) and Van Es, Spreij, and Van Zanten (2005) considered estimating the volatility density in this model using a kernel deconvolution estimator.

Low Complexity Algorithm for Probability Density Estimation Applied in Big Data Analysis

A. Assenza et al. (2008) summarize, in [3], some probability density estimation methods and affirm that density estimation gives better results than traditional tools of data analysis such as Principal Component Analysis. In the same vein, Adriano Z. Zambom et al. (2013) add, in [4], that kernel density estimation with smoothing is the most widely used approach. All of those approaches treat the mathematical aspects but do not discuss implementation or computational complexity. A. Sinha et al. (2008) discussed in [5] the time complexity of those methods and optimized the computational complexity of the kernel density estimator using clustering. In principle, the estimation of time complexity must include the costs of all functions in the global expression; the cost of the exponential function must be included in the kernel complexity estimate, whatever its complexity class.
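
The complexity point is easiest to see in the naive evaluation loop: for n data points and m evaluation points the cost is O(n*m) kernel evaluations, each of which includes one call to the exponential function whose cost, as argued above, belongs in the overall complexity estimate.

```python
import numpy as np

def naive_kde(grid, data, h):
    """Naive Gaussian KDE evaluation: O(n * m) kernel evaluations for
    n data points and m grid points, each involving one exp() call."""
    out = np.zeros(len(grid))
    for xi in data:                                    # n iterations
        out += np.exp(-0.5 * ((grid - xi) / h) ** 2)   # m exp calls each
    return out / (len(data) * h * np.sqrt(2 * np.pi))
```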

Essays on the econometric theory of rank regressions

… distance estimator of Ai and Chen [2003] also has a hidden curse of dimensionality, since it requires progressively stronger smoothness properties of the unknown functions as the dimension of the vector X grows. Other methods, such as the estimator of Ichimura [1993], may not have this problem. Unfortunately, the second-order asymptotic properties of Ichimura's estimator are not known. It is likely, though, that strong smoothness assumptions will be needed for it to have a rate of convergence of order O(n^{-1/2}) for the error between the finite-sample distribution of the estimator and the asymptotic normal …

Choice of Spectral Density Estimator in Ng Perron Test: Comparative Analysis

The output presented in Table 1 shows that, for the autoregressive estimator of the spectral density, the Ng-Perron test statistic is below the critical value and the unit root hypothesis should be considered rejected. On the other hand, for the kernel-based estimator of the spectral density, the Ng-Perron test statistic is far above the critical value and the null of a unit root cannot be rejected even at a loose significance level. A practitioner applying the unit root test may therefore be confused about which result to choose. This study is thus designed to compare the size and power properties of the Ng-Perron test so that a practitioner may get some guidance on the right choice of spectral density estimator.
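
For readers unfamiliar with the two options, the autoregressive spectral density estimator at frequency zero used by Ng-Perron can be sketched as below; the lag length k is fixed here for simplicity, whereas in practice it is chosen by an information criterion, and deterministic terms (constant, trend) are omitted.

```python
import numpy as np

def ar_spectral_density_zero(y, k=4):
    """Autoregressive estimate of the spectral density at frequency zero:
    regress dy_t on y_{t-1} and k lagged differences, then
    s2_AR = sigma2_e / (1 - sum of lagged-difference coefficients)^2."""
    dy = np.diff(y)
    T = len(dy)
    X = np.column_stack([y[k:T]] +                       # y_{t-1} term
                        [dy[k - j:T - j] for j in range(1, k + 1)])
    Y = dy[k:T]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = Y - X @ beta
    sigma2 = e @ e / (len(Y) - X.shape[1])
    return sigma2 / (1.0 - beta[1:].sum()) ** 2
```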

In-sample forecasting: structured models and reserving

… the one-dimensional components of a multiplicatively separable hazard. Given a local linear pilot estimator of the d-dimensional hazard, the backfitting algorithm is motivated from a least squares criterion and converges to the closest multiplicative function. We show that the one-dimensional components are estimated with a one-dimensional convergence rate, and hence do not suffer from the curse of dimensionality. The setting is very similar to Linton, Nielsen, and Van de Geer (2003), but has two significant improvements. First, our approach works without the use of higher-order kernels. With them, one can theoretically derive nearly n^{-1/2}-consistency (with growing order), but they often fail to show good performance in practice. Second, the support of the multivariate hazard does not need to be rectangular. In the provided in-sample forecasting application of reserving in general insurance, the support is indeed triangular.
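
The least squares projection onto multiplicative functions can be illustrated on a grid: given a pilot surface, alternately solving the two one-dimensional least squares problems converges to the closest rank-one (multiplicative) approximation. This discrete sketch ignores the exposure and support restrictions that a hazard application would carry.

```python
import numpy as np

def closest_multiplicative(H, iters=50):
    """Backfit a pilot surface H[i, j] ~ h(x_i, y_j) to the closest
    multiplicative function f1(x_i) * f2(y_j) in a least squares sense,
    by alternating the two exact one-dimensional minimizations."""
    f2 = np.ones(H.shape[1])
    for _ in range(iters):
        f1 = H @ f2 / (f2 @ f2)        # minimize over f1 with f2 fixed
        f2 = H.T @ f1 / (f1 @ f1)      # minimize over f2 with f1 fixed
    return f1, f2
```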

A Simple Method for Predicting Distributions by Means of Covariates with Examples from Poverty and Health Economics

… target distribution by an m-fold mixture of models whose parameter functions are fitted using regression techniques. As a special case, we can even regard the estimator as a nonparametric kernel density in which the assumed conditional distribution is the kernel, and the scedasticity function determines the data-adaptive local bandwidth. From that perspective, assuming homoscedasticity corresponds to using a common global bandwidth; the use of asymmetric conditional distributions corresponds to applying special kernels of the kind typically used for boundary correction or asymmetric information (like the knn estimators). In each of these three interpretations, the choice of the conditional density plays a relatively minor role, the scedasticity function is more important, and the choice of model for the mean regression mainly impacts the variability of the final estimate.
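
In its simplest Gaussian version, the predicted distribution is an n-fold mixture of fitted conditional models, which is what the kernel density interpretation refers to. The sketch below assumes the mean and scedasticity functions have already been fitted by regression (mu_hat and sigma_hat are their values at the observed covariates) and uses a Gaussian conditional density as the "kernel"; all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def predict_density(y_grid, mu_hat, sigma_hat):
    """Predicted marginal density of y as an n-fold mixture:
    f_hat(y) = (1/n) sum_i phi(y; mu_hat(x_i), sigma_hat(x_i)).
    The conditional density acts as the kernel and the scedasticity
    function sigma_hat as the data-adaptive local bandwidth."""
    return norm.pdf(y_grid[:, None], loc=mu_hat[None, :],
                    scale=sigma_hat[None, :]).mean(axis=1)
```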

Some Asymptotic Results of Kernel Density Estimator in Length-Biased Sampling

… is used to construct a density estimator. It should be noted that the applied Epanechnikov kernel (4.1) satisfies assumption (1). The bandwidth parameter is chosen based on the least squares cross-validation method discussed in [1]. According to Figure 1, the Jones estimator can estimate the density function f(·) properly.
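
The two ingredients named here, the Epanechnikov kernel and least squares cross-validation, can be sketched as follows; the evaluation grid and the candidate bandwidths are illustrative choices.

```python
import numpy as np

def epan(u):
    """Epanechnikov kernel K(u) = 0.75 * (1 - u^2) on [-1, 1]."""
    return 0.75 * np.clip(1 - u ** 2, 0, None)

def lscv(h, data):
    """Least squares cross-validation criterion
    CV(h) = int f_hat^2 - (2/n) sum_i f_hat_{-i}(X_i);
    the integral is approximated by a Riemann sum on a fixed grid."""
    n = len(data)
    grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, 400)
    f = epan((grid[:, None] - data[None, :]) / h).mean(axis=1) / h
    int_f2 = np.sum(f ** 2) * (grid[1] - grid[0])
    K = epan((data[:, None] - data[None, :]) / h) / h
    loo = (K.sum(axis=1) - np.diag(K)) / (n - 1)   # leave-one-out terms
    return int_f2 - 2 * np.mean(loo)

# bandwidth chosen by minimizing CV over a grid of candidates
data = np.random.default_rng(2).normal(size=200)
hs = np.linspace(0.1, 1.0, 19)
h_star = hs[np.argmin([lscv(h, data) for h in hs])]
```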

Nonparametric estimate remarks

As mentioned above, the bandwidth and the kernel (in particular, the order of the kernel) have a big influence on the quality of an estimate. It cannot be excluded that using a different type of kernel function would bring better results (see Poměnková, 2004b; Poměnková, 2005; Koláček, 2005).
