least-mean-squares approach

Top PDF least-mean-squares approach:

Two stage weighted least squares estimator of the conditional mean of observation driven time series models

We proposed a class of WLS estimators for the conditional mean of a time series, which do not require full knowledge of the cdf of the observations. The asymptotic and finite-sample properties of these estimators have been studied. Compared to the QMLEs, the WLSE has the advantages of: 1) being more efficient in some situations; 2) being asymptotically efficient when the cdf belongs to the linear exponential family; 3) having a standard asymptotic normal distribution even when one or several coefficients of the conditional mean are equal to zero; 4) being explicit and requiring no optimisation routine in INARCH models. We applied our general results to standard count and duration models. We studied methods for selecting the optimal WLSE based on the MSE and QLIK loss functions, and demonstrated the theoretical and empirical superiority of the QLIK-based approach.
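
As a concrete illustration of an explicit two-stage WLSE in the INARCH setting, here is a minimal sketch assuming an INARCH(1) conditional mean lambda_t = omega + alpha*y_{t-1} and Poisson-type weights (conditional variance equal to the conditional mean); the function and parameter names are illustrative, and this is not the paper's exact estimator.

```python
import numpy as np

def two_stage_wls_inarch1(y):
    """Illustrative two-stage WLS for the INARCH(1) mean lambda_t = omega + alpha*y_{t-1}."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])  # regressors (1, y_{t-1})
    z = y[1:]                                            # response y_t
    # stage 1: ordinary least squares gives preliminary estimates of (omega, alpha)
    beta_ols, *_ = np.linalg.lstsq(X, z, rcond=None)
    lam_hat = np.clip(X @ beta_ols, 1e-6, None)          # first-stage fitted conditional means
    # stage 2: weighted least squares with weights 1 / lambda_hat
    # (assumes Poisson-type behaviour: conditional variance = conditional mean)
    w = 1.0 / lam_hat
    XtWX = X.T @ (X * w[:, None])
    XtWz = X.T @ (w * z)
    return np.linalg.solve(XtWX, XtWz)

# usage on a simulated INARCH(1) series with omega = 1.0, alpha = 0.4
rng = np.random.default_rng(1)
y = np.zeros(2000)
for t in range(1, len(y)):
    y[t] = rng.poisson(1.0 + 0.4 * y[t - 1])
print(two_stage_wls_inarch1(y))  # estimates should be close to [1.0, 0.4]
```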

TO IMPLEMENT LEAST MEAN SQUARES USING INCREASING THE SECURITY OF BIG DATA

R. Islam, R. Tian, L. M. Batten, and S. Versteeg [4]: Collection of dynamic information requires that malware be executed in a controlled environment, during which the malware unpacks itself as a preliminary to the execution process. On the other hand, while execution of malware is not needed in order to collect static information, the file must first be unpacked manually. In this paper, we present the first classification method integrating static and dynamic features into a single test. Our approach improves on previous results based on individual features and reduces by half the time needed to test such features separately.

Harmonic Mean Iteratively Reweighted Least Squares for Low-Rank Matrix Recovery

The algorithm recovers a low-rank matrix by solving a sequence of low-complexity linear problems. The easily implementable algorithm, which we call harmonic mean iteratively reweighted least squares (HM-IRLS), optimizes a non-convex Schatten-p quasi-norm penalization to promote low-rankness and carries three major strengths, in particular for the matrix completion setting. First, we observe a remarkable global convergence behavior of the algorithm's iterates to the low-rank matrix for relevant, interesting cases, for which any other state-of-the-art optimization approach fails the recovery. Secondly, HM-IRLS exhibits an empirical recovery probability close to 1 even for a number of measurements very close to the theoretical lower bound r(d1 + d2 − r).
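
Since the excerpt stops before the algorithmic details, the following is a minimal one-sided IRLS sketch for matrix completion under a smoothed Schatten-p surrogate: an SVD-based weight update alternates with a column-wise weighted least-squares fill-in of the missing entries. It only illustrates the iteratively reweighted least squares idea; it is not the harmonic-mean (HM-IRLS) weighting of the paper, and all names and parameter choices are assumptions.

```python
import numpy as np

def irls_low_rank_completion(M, mask, p=1.0, n_iter=30, eps=1.0):
    """Simplified IRLS for matrix completion (illustrative, not HM-IRLS)."""
    d1, d2 = M.shape
    X = np.where(mask, M, 0.0)                       # keep observed entries, zero elsewhere
    for _ in range(n_iter):
        # weight matrix W = (X X^T + eps^2 I)^(p/2 - 1), via the SVD of the iterate
        U, s, _ = np.linalg.svd(X, full_matrices=True)
        spec = np.concatenate([s, np.zeros(d1 - len(s))])
        W = (U * (spec**2 + eps**2) ** (p / 2 - 1)) @ U.T
        # column-wise weighted LS: minimize x^T W x subject to matching the observed entries
        X_new = X.copy()
        for j in range(d2):
            obs, free = mask[:, j], ~mask[:, j]
            if obs.any() and free.any():
                X_new[free, j] = -np.linalg.solve(
                    W[np.ix_(free, free)], W[np.ix_(free, obs)] @ M[obs, j]
                )
        X = X_new
        eps = max(0.9 * eps, 1e-6)                   # gradually sharpen the smoothing
    return X
```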

A Simulation-based Portfolio Optimization Approach with Least Squares Learning

We have applied the method to managing an equity portfolio invested across five global equity markets. For the case study shown in this paper, the views of an investor on future market returns are modelled and calibrated by a multi-factor mean-reverting process with eight risk factors, and auto- and cross-asset correlation structures are also considered. Four investment styles are chosen in the test case, and a Least Squares Monte Carlo approximation method has been developed to calibrate the dynamic portfolio model. Through the test case, we have shown that the three dynamic investment styles outperform the benchmark portfolio in out-of-sample tests. Viewed on a mean-variance plane, the performance of the dynamic portfolios lies on a new efficient frontier, whereas the benchmark static portfolio is less efficient with a higher risk premium. Some computational issues with the LSM model have also been discussed.
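
The core LSM ingredient mentioned above is a regression-based approximation of a conditional expectation. A minimal sketch of that step, assuming a single simulated state variable and a polynomial basis (both illustrative choices, not the paper's eight-factor specification), is:

```python
import numpy as np

def lsm_continuation_values(state, future_value, degree=2):
    """Least Squares Monte Carlo step: regress simulated future values on a
    polynomial basis of the current state to approximate E[V_{t+1} | state_t]."""
    basis = np.vander(state, degree + 1, increasing=True)   # 1, s, s^2, ...
    coef, *_ = np.linalg.lstsq(basis, future_value, rcond=None)
    return basis @ coef                                      # fitted continuation values

# usage: one state variable, many Monte Carlo paths (synthetic data)
rng = np.random.default_rng(0)
s_t = rng.normal(size=10_000)                    # simulated state at time t
v_next = 0.5 * s_t + rng.normal(size=10_000)     # simulated portfolio value at t+1
continuation = lsm_continuation_values(s_t, v_next)
```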

Partial Least Squares in Constructing Candidates Model Averaging Muhammad Arna Ramadhan, Bagus Sartono, Anang Kurnia

Model averaging has been developed as an alternative method for regression analysis when the number of observations is smaller than the number of explanatory variables (also known as high-dimensional regression). The main idea of this method is to take a weighted average of several candidate models in order to improve prediction accuracy. There are two steps in model averaging: constructing several candidate models and determining the weights of those candidate models. Our research proposes partial least squares model averaging (PLSMA) as an approach to constructing candidate models, in which the partial least squares (PLS) method is applied to reduce and transform the original explanatory variables into new variables called components. PLSMA is evaluated by the Root Mean Squared Error of Prediction (RMSEP) on simulated data. Compared to other methods, PLSMA gives the smallest RMSEP, indicating that it yields more accurate predictions than existing methods.
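
A hedged sketch of the two model-averaging steps described above, with PLS fits of different numbers of components as the candidate models; the inverse cross-validated MSE weighting used here is an illustrative assumption rather than the weighting scheme of the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def pls_model_averaging(X, y, X_new, component_grid=(1, 2, 3, 4)):
    """Candidate models = PLS regressions with different numbers of components;
    predictions are averaged with weights based on cross-validated errors."""
    preds, cv_mse = [], []
    for k in component_grid:
        model = PLSRegression(n_components=k).fit(X, y)
        cv_pred = cross_val_predict(PLSRegression(n_components=k), X, y, cv=5)
        cv_mse.append(np.mean((y - cv_pred.ravel()) ** 2))   # cross-validated MSE
        preds.append(model.predict(X_new).ravel())
    weights = 1.0 / np.array(cv_mse)
    weights /= weights.sum()                                  # normalize weights to sum to one
    return np.average(np.vstack(preds), axis=0, weights=weights)
```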

An alternative approach to approximating the moments of least squares estimators

A recent summary of the work on asymptotic approximation of moments in econometrics can be found in Ullah (2005). Two papers of interest include Phillips (2000), which presents new approximations for the bias and mean squared error in 2SLS estimation of a static simultaneous equation model […]

A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above-described measurement cases. The advantages of CWLS include performance optimality and the capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
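
As a simple illustration of the least-squares side of the problem, the sketch below solves the linearized TOA equations by plain weighted least squares: squaring the range equations gives a system that is linear in the position and an auxiliary variable R = ||x||². The constraint tying R to the position is dropped here, so this is the unconstrained relaxation, not the paper's CWLS estimator; all names are illustrative.

```python
import numpy as np

def toa_wls_position(anchors, ranges, weights=None):
    """Linearized TOA positioning: 2*a_i^T x - R = ||a_i||^2 - r_i^2, solved by WLS."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    if weights is None:
        weights = np.ones(len(ranges))
    A = np.column_stack([2.0 * anchors, -np.ones(len(ranges))])   # unknowns (x, y, R)
    b = np.sum(anchors**2, axis=1) - ranges**2
    W = np.diag(weights)
    theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return theta[:-1]                  # position estimate; theta[-1] approximates ||x||^2

# usage: four base stations, noisy ranges to a terminal at (3, 4)
rng = np.random.default_rng(2)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + 0.05 * rng.standard_normal(4)
print(toa_wls_position(anchors, ranges))
```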

FPGA Implementation of Adaptive Weight Calculation Using QRD RLS Algorithm

Adaptive weight calculation (AWC) is required in many communication applications including adaptive beamforming, equalization, predistortion and multiple-input multiple-output (MIMO) systems. These applications often involve solving over-determined systems of equations. In general, a least squares approach, e.g. Least Mean Squares (LMS), Normalized LMS (NLMS) and Recursive Least Squares (RLS), is used to find an approximate solution to these kinds of systems of equations. Among them, RLS is most commonly used due to its good numerical properties and fast convergence rate. Applying QR decomposition (QRD) to perform adaptive weight calculation based on RLS avoids the numerical instability of the conventional inverse-based RLS update and leads to more accurate results and efficient architectures. QR decomposition is a method for solving a set of simultaneous equations for the unknown weights, which define the beam shape. The QRD technique for adaptive weight calculation is particularly suited to implementation in FPGAs, and FPGA cores are now available that reduce system development time.
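
For intuition, here is a minimal batch sketch of QR-decomposition-based least-squares weight calculation: factor the data matrix as A = QR and back-substitute Rw = Qᵀd. The paper's QRD-RLS updates the factorization recursively sample by sample (typically with Givens rotations in hardware); this batch version only illustrates why QRD avoids forming and inverting the correlation matrix. All names and sizes are illustrative.

```python
import numpy as np

def qrd_ls_weights(A, d):
    """Solve the over-determined system A w ~= d via QR decomposition."""
    Q, R = np.linalg.qr(A)               # A (m x n, m >= n) -> Q (m x n), R (n x n)
    return np.linalg.solve(R, Q.T @ d)   # back-substitution R w = Q^T d

# usage: fit 4 adaptive weights from 200 snapshots of a desired response
rng = np.random.default_rng(3)
A = rng.standard_normal((200, 4))
w_true = np.array([0.5, -1.0, 0.25, 2.0])
d = A @ w_true + 0.01 * rng.standard_normal(200)
print(qrd_ls_weights(A, d))              # close to w_true
```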

A Least-squares Approach to Direct Importance Estimation

Table 2: Mean test error averaged over 100 trials for covariate shift adaptation in regression and classification. The numbers in brackets are the standard deviations. All the error values are normalized by that of 'Uniform' (uniform weighting, or equivalently no importance weighting). For each data set, the best method in terms of the mean error, and methods comparable to it according to the Wilcoxon signed rank test at the 1% significance level, are shown in bold face. The upper half corresponds to regression data sets taken from DELVE (Rasmussen et al., 1996), while the lower half corresponds to classification data sets taken from IDA (Rätsch et al., 2001). All the methods are implemented in the MATLAB environment.

Overview of total least squares methods

A lot of common problems in system identification and signal processing can be reduced to special types of block-Hankel and block-Toeplitz structured total least squares problems. In the field of signal processing, in particular in-vivo magnetic resonance spectroscopy and audio coding, new state-space based methods have been derived by making use of the total least squares approach for spectral estimation, with extensions to decimation and multichannel data quantification [35, 36]. In addition, it has been shown how to extend the least mean squares algorithm to the errors-in-variables context for use in adaptive signal processing and various noise environments. Finally, total least squares applications also emerge in other fields, including information retrieval [21], shape from moments [69], and computer algebra [96, 47].

Iterative Least Squares Estimator of Binary Choice Models: a Semi Parametric Approach

The simulation results indicate that the estimator is: (1) easy to compute and fast; (2) insensitive to initial estimates; (3) apparently √n-consistent and asymptotically normal; and (4) better […]

On weighted structured total least squares

Total least squares is a solution technique for an overdetermined system of equations AX ≈ B, A ∈ ℝ^(m×n), B ∈ ℝ^(m×d). It is a natural generalization of the least squares approximation method when the data in both A and B are perturbed. The method has been generalized in two directions: by allowing weights and by imposing structure on the data matrices.
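
For reference, a minimal sketch of the classical (unweighted, unstructured) TLS solution that the passage generalizes, computed from the SVD of the augmented matrix [A B]; names are illustrative, and the weighted and structured variants discussed above require different algorithms.

```python
import numpy as np

def total_least_squares(A, B):
    """Classical TLS solution of A X ~= B via the SVD of the augmented matrix [A B]."""
    m, n = A.shape
    d = B.shape[1] if B.ndim > 1 else 1
    C = np.hstack([A, B.reshape(m, d)])
    _, _, Vt = np.linalg.svd(C, full_matrices=True)
    V = Vt.T
    V12 = V[:n, n:]                       # top-right block of the right singular vectors
    V22 = V[n:, n:]                       # bottom-right block (assumed nonsingular)
    X = -V12 @ np.linalg.inv(V22)
    return X if B.ndim > 1 else X.ravel()
```

Unlike ordinary least squares, this solution implicitly corrects both A and B, which is what characterizes the errors-in-variables setting.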

Consistent least squares fitting of ellipsoids

We point out several papers in which the ellipsoid fitting problem is considered. Gander et al. [GGS94] consider algebraic and geometric fitting methods for circles and ellipses and note the inadequacy of the algebraic fit on some specific examples. Later on, the given examples are used as benchmarks for the algebraic fitting methods. Fitting methods specific to ellipsoids, as opposed to the more general conic sections, are first proposed in [FPF99]. The methods incorporate the ellipticity constraint into the normalizing condition and thus give better results when an elliptic fit is desired. In [Nie01] a new algebraic fitting method is proposed that does not have the special case of a hyperplane fit as a singularity; if the best fitting manifold is affine, the method coincides with the total least squares method. Numerical methods for the orthogonal fitting problem are developed in [Spä97a].

Performance Analysis of Adaptive Filters for Denoising of ECG Signals

The LMS filter minimizes the mean square error E[e²(n)], where e(n) is the error signal. In Fig. 2, the primary input is an ECG signal corrupted by noise, the reference signal contains noise alone, y(n) is the filtered output, d(n) is the desired ECG signal, and e(n) is the error signal. The filtered reference noise signal is subtracted from the primary input to produce the system output, which is considered to be the best least-squares estimate of the primary signal. Based on the error signal, the LMS adaptive filter updates the filter weights to obtain the denoised ECG [20]. The LMS algorithm minimizes the mean square of this error signal e(n) by adjusting the filter tap weights w(n). The iterative nature of the LMS coefficient update smooths the response to the instantaneous gradient and thereby gives a more reasonable estimate of the true gradient [21]. For each iteration, the three basic operations of the LMS algorithm are: computing the filter output, computing the error, and updating the tap weights.
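
A minimal sketch of the noise-cancelling configuration described above, showing the three operations in order; the step size, filter length, and names are illustrative choices, not those of the paper.

```python
import numpy as np

def lms_denoise(primary, reference, n_taps=16, mu=0.01):
    """LMS adaptive noise canceller: primary = signal + noise, reference = noise only."""
    w = np.zeros(n_taps)                        # adaptive filter tap weights w(n)
    denoised = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]       # most recent reference samples
        y = w @ x                               # 1) filter output y(n)
        e = primary[n] - y                      # 2) error e(n) = d(n) - y(n)
        w = w + 2 * mu * e * x                  # 3) weight update along the instantaneous gradient
        denoised[n] = e                         # the error is the denoised signal estimate
    return denoised
```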

Adaptive Filter using FPGA For Audio Enhancement

Figure 2 shows our implementation flow. The inputs are an audio signal and the same audio corrupted with noise. The input audio signal, a WAV file, is passed from Matlab to the filtering process, which runs on a Spartan-6 evaluation kit. The input is fed sample by sample to the Least Mean Square algorithm, where the weights are calculated and the filtering is performed based on those weights. Before this process, random noise is added to the signal. The filtered signal is then sent back to Matlab, yielding an intelligible audio signal.

Floating point error analysis of recursive least squares and least mean squares adaptive filters

This sequence is a zero-mean white independent random process whose variance is related to the signal statistics, the weight vector covariance, and the floating point errors.

M-Government Adoption Factors in the UAE: a Partial Least Squares Approach

With the widespread adoption of the Internet and mobile devices, more and more governments are using mobile government (mGov) technology to supply services electronically to the public and other significant partners. Governments are moving to mobile-based technology to facilitate better interaction with citizens and to enhance the quality of services (Abdelghafar & Magdy, 2012). The mGov approach provides access to several areas of government services, for example education, climate gauging, medical services, payment services, and metro services, among others. Using smartphones, citizens can access mGov from anywhere at any time, thus saving them time and expense. The use of mGov has the further advantage of improving transparency between government and its citizens by cutting through bureaucratic structures and procedures (Alsenaidy & Ahmad, 2012).

Deformation analysis with Total Least Squares

This study focuses on the use of the TLS approach for geodetic deformation analysis. For comparison, a traditional approach, namely a similarity transformation, has also been applied to the same data set. The big difference in the parameters estimated using LS and TLS (see Table 5) comes from the errors and the covariance of the coordinates of the points which were involved in the design matrix. However, the main part of the difference is due to the different covariances of the two coordinate systems, which were employed as a relative scaling between the observation vector and the erroneous columns of the design matrix. Thus, we can conclude that the transformation parameters of a Helmert transformation problem are strongly sensitive to the accuracy of the coordinates of the identical points.

RLScore: Regularized Least-Squares Learners

The RankRLS method implements efficient algorithms both for minimizing pairwise ranking losses and for computing cross-validation estimates for ranking. The method has been shown to be highly competitive with ranking support vector machines (Pahikkala et al., 2009). Unsupervised variants of RLS classification inspired by the maximum margin clustering approach have also been developed (Pahikkala et al., 2012a).

Theory of Errors and Least Squares Adjustment

Both the absolute error and the relative error defined above describe an individual measurement error. Most measurement errors are random errors that behave in a random way. Therefore, in practice it is very difficult or impossible to describe each individual error which occurs under a specific measurement condition (i.e. under a specific physical environment, at a specific time, with a specific surveyor, using a specific instrument, etc.). However, for simplicity we often prefer to use one or several simple index numbers to judge the quality of the obtained measurements (how good or how bad the measurements are). An old and well known approach is to define and use statistical error indices.
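
A small worked example of such indices, using made-up repeated measurements of a distance whose true value is taken to be 100.000 m (all values are illustrative assumptions):

```python
import numpy as np

true_value = 100.000                                   # assumed true value (illustrative)
measurements = np.array([100.012, 99.994, 100.021, 99.987, 100.005])

absolute_errors = measurements - true_value            # individual absolute errors
relative_errors = absolute_errors / true_value         # individual relative errors
rms_error = np.sqrt(np.mean(absolute_errors**2))       # one simple statistical error index
print(absolute_errors, relative_errors, rms_error)
```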
