least-squares optimization techniques

Top PDF least-squares optimization techniques:

New Intelligent Classification Techniques for Diagnosis of Diabetes Mellitus based on Modified PSO

ABSTRACT: This paper surveys classification techniques for diabetes diagnosis and treatment. The hybrid optimization techniques take the form of artificial neural networks, genetic algorithms, fuzzy classifiers, support vector machines, least squares support vector machines, and particle swarm optimization. All techniques are analyzed using the Pima Indians Diabetes data set from the UCI repository of machine learning databases. This review aims to identify the best optimization technique for diabetes mellitus diagnosis.
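For orientation, a minimal sketch of this kind of benchmark, assuming a local copy of the Pima data (the filename "pima.csv" is hypothetical) and using scikit-learn stand-ins rather than the paper's modified-PSO hybrids:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Hypothetical local copy: 8 feature columns followed by a 0/1 outcome column.
data = np.loadtxt("pima.csv", delimiter=",")
X, y = data[:, :8], data[:, 8]

models = {
    "SVM (RBF)": SVC(kernel="rbf", C=1.0),
    "Neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
}
for name, clf in models.items():
    # 10-fold cross-validated accuracy, with feature standardization
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=10)
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```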

Least Median of Squares Estimation by Optimization Heuristics with an Application to the CAPM and Multi Factor Models

Heuristic optimization techniques have been successfully applied to a variety of problems in statistics and economics for well over a decade (see Gilli et al. (2008) and Gilli and Winker (2008) for recent overviews). However, applications to estimation problems are still rare. Fitzenberger and Winker (2007) consider Threshold Accepting (TA) for censored quantile regression, a problem similar to the LMS estimator. Maringer and Meyer (2008) and Yang et al. (2007) also use TA for model selection and estimation of smooth transition autoregressive models. In contrast, several optimization heuristics have been used in other fields of research in finance, e.g., portfolio optimization (Dueck and Winker (1992), Maringer (2005), Winker and Maringer (2007a), Specht and Winker (2008)) or credit risk bucketing (Krink et al. 2007).
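The LMS objective, the median of the squared residuals, is non-smooth and multimodal, which is why heuristics are needed at all. A minimal sketch of a random elemental-subset search for it, far simpler than Threshold Accepting but minimizing the same criterion:

```python
import numpy as np

def lms_fit(X, y, n_trials=5000, rng=np.random.default_rng(0)):
    """Approximate the Least Median of Squares estimator: repeatedly fit
    exact solutions on random p-point subsets and keep the candidate with
    the smallest median squared residual."""
    n, p = X.shape
    best_beta, best_obj = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(n, size=p, replace=False)
        try:
            beta = np.linalg.solve(X[idx], y[idx])   # elemental fit
        except np.linalg.LinAlgError:
            continue                                  # singular subset, skip
        obj = np.median((y - X @ beta) ** 2)          # LMS objective
        if obj < best_obj:
            best_beta, best_obj = beta, obj
    return best_beta, best_obj
```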

Traffic Flow Forecasting by a Least Squares Support Vector Machine with a Fruit Fly Optimization Algorithm

In order to improve the accuracy of traffic flow prediction, scholars and practitioners have proposed many methods over the past several decades, such as time series techniques and regression models [2–6]. In recent years, many artificial intelligence forecasting techniques have been applied to traffic flow forecasting to improve accuracy. Yan Chen et al. [7] proposed an intelligent forecasting method based on the particle swarm optimization algorithm, which improves the stability and reliability of the forecast. Wei-Chiang Hong et al. [8] proposed a hybrid model combining support vector regression with a differential evolution algorithm for traffic flow, which was shown to outperform support vector regression with default parameters, a regression forecasting model, and a back-propagation artificial neural network (BPNN). Li et al. [9] implemented a method for chaotic time series using a BP neural network optimized by particle swarm optimization (PSO), and demonstrated its practical application. Xu et al. [10] presented a short-term traffic forecasting model that combines a support vector regression (SVR) model with continuous ant colony optimization (SVRCACO) to forecast traffic flow. These methods improve the accuracy of traffic flow forecasting to a certain extent.
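For reference, the least squares SVM at the core of such hybrids reduces training to a single linear system in the dual variables. A minimal sketch with an RBF kernel and fixed hyperparameters; in the paper, the fruit fly optimization algorithm would tune the regularization and kernel width instead of these assumed values:

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM regressor by solving
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    d2 = np.sum((X[:, None] - X[None, :]) ** 2, axis=2)   # pairwise sq. distances
    K = np.exp(-d2 / (2 * sigma**2))                      # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = 1.0, 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                                # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=1.0):
    d2 = np.sum((Xte[:, None] - Xtr[None, :]) ** 2, axis=2)
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b
```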

On the computation of the structured total least squares estimator

Unlike the methods mentioned above, which solve the STLS problem in its original formulation (4), the proposed methods solve an equivalent optimization problem, derived by analytically minimizing (4) over p, for a fixed X. A similar approach, using a different parameterization of the structure, is taken in the derivation of the so-called constrained total least squares (CTLS) problem [17]. However, Reference [17] is restricted to univariate problems and does not use the best optimization techniques in terms of computational efficiency and robustness (very good initial estimates are needed). Another STLS problem formulation is based on the Riemannian singular value decomposition [18], where the derived equivalent problem is interpreted as a non-linear singular value decomposition problem.
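As a point of contrast with these structured methods, the classic unstructured TLS estimate has a closed form via the SVD of the augmented data matrix. A minimal sketch:

```python
import numpy as np

def tls(A, b):
    """Unstructured total least squares: x comes from the right singular
    vector of [A | b] associated with the smallest singular value."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                 # direction of smallest singular value
    return -v[:n] / v[n]       # assumes v[n] != 0 (the generic case)
```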

Optimization Method of Power Allocation in OFDM Using Least Squares Method

Next-generation wireless networks are expected to support broadband transmission services such as voice, internet browsing, and video conferencing, with numerous Quality of Service (QoS) requirements. Multicast service over wireless networks is an important and difficult goal for many transmission applications such as audio/video clips, mobile TV, and interactive games. There are two key traffic types in wireless transmission communications: unicast traffic and multicast traffic. Current studies principally target unicast traffic. In particular, dynamic resource allocation has become known as one of the most efficient techniques to achieve better QoS and higher system spectral efficiency in unicast wireless networks. Moreover, much attention is paid to unicast OFDM systems. Orthogonal Frequency …

Optimal Dictionary for Least Squares Representation

From (20) we can conclude that the optimal value, if it exists, of problem (7) is bounded below by the optimal value, if it exists, of the one given in (21). Our strategy is to demonstrate that optimization problem (21) admits a solution, and to furnish a feasible solution of (7) whose objective value equals the optimal value of problem (21). This will solve (7).

Distributed Learning with Regularized Least Squares

Remark 12 Corollary 11 shows that distributed learning with the least squares regularization scheme in an RKHS can achieve the optimal learning rates in expectation, provided that m satisfies the restriction (13). It should be pointed out that our error analysis is carried out under regularity condition (9) with 1/2 < r ≤ 1, while the work in (Zhang et al., 2015) focused on the case with r = 1/2. When r approaches 1/2, the number m of local processors under the restriction (13) reduces to 1, which corresponds to the non-distributed case. In follow-up work, we will consider relaxing the restriction (13) in a semi-supervised learning framework by using additional unlabelled data, as done in (Caponnetto and Yao, 2010). The main contribution of our analysis for distributed learning in this paper is to remove an eigenfunction assumption in (Zhang et al., 2015) by using a novel second order decomposition for a difference of operator inverses.
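The scheme analyzed here is, in essence, divide-and-conquer kernel ridge regression: each of the m local processors solves a regularized least squares problem on its own data shard, and the global estimator averages the local ones. A minimal sketch using scikit-learn's KernelRidge as the local learner:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def distributed_krr(X, y, m=4, lam=1e-2, gamma=1.0):
    """Fit kernel ridge regression on m disjoint shards; predict by averaging
    the m local estimators (the distributed regularized LS scheme)."""
    shards = zip(np.array_split(X, m), np.array_split(y, m))
    models = [KernelRidge(alpha=lam, kernel="rbf", gamma=gamma).fit(Xs, ys)
              for Xs, ys in shards]
    return lambda Xnew: np.mean([f.predict(Xnew) for f in models], axis=0)
```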

Sparse least trimmed squares regression.

Even though the resulting estimates are not sparse, prediction accuracy is improved by shrinking the coefficients, and the computational issues with high-dimensional robust estimators are overcome due to the regularization. Another possible choice for the penalty function is the smoothly clipped absolute deviation (SCAD) penalty proposed by Fan and Li (2001). It satisfies the mathematical conditions for sparsity but results in a more difficult optimization problem than the lasso. Still, a robust version of SCAD can be obtained by optimizing the associated objective function over trimmed samples instead of over the full sample.
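A minimal sketch of the sparse LTS idea with the lasso penalty: alternate between fitting the lasso on the h observations with the smallest residuals and re-selecting that subset (concentration steps), assuming scikit-learn's Lasso as the penalized fitter:

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_lts(X, y, h=None, lam=0.1, n_steps=20, rng=np.random.default_rng(0)):
    """Sparse least trimmed squares via concentration steps over an h-subset."""
    n = len(y)
    h = h or int(0.75 * n)                       # keep 75% of the sample
    subset = rng.choice(n, size=h, replace=False)
    model = Lasso(alpha=lam)
    for _ in range(n_steps):
        model.fit(X[subset], y[subset])          # lasso on trimmed sample
        resid2 = (y - model.predict(X)) ** 2
        new_subset = np.argsort(resid2)[:h]      # h best-fitting observations
        if set(new_subset) == set(subset):
            break                                 # subset stabilized
        subset = new_subset
    return model
```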

An Introduction to Partial Least Squares Regression

… the first ten PLS factors, for both the factors and the responses. Notice that the first five PLS factors account for almost all of the variation in the responses, with the fifth factor accounting for a sizable proportion. This gives a strong indication that five PLS factors are appropriate for modeling the five component amounts. The cross-validation analysis confirms this: although the model with nine PLS factors achieves the absolute minimum predicted residual sum of squares (PRESS), it is insignificantly better than the model with only five factors.
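A minimal sketch of this kind of PRESS-based choice of the number of PLS factors, using scikit-learn's PLSRegression and synthetic stand-in data rather than the example's actual spectra:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                                   # predictors
Y = X[:, :5] @ rng.normal(size=(5, 3)) + 0.1 * rng.normal(size=(100, 3))

for k in range(1, 11):
    Y_hat = cross_val_predict(PLSRegression(n_components=k), X, Y, cv=10)
    press = np.sum((Y - Y_hat) ** 2)   # predicted residual sum of squares
    print(f"{k} factors: PRESS = {press:.2f}")
```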

Theory of Errors and Least Squares Adjustment

Theory of errors and least squares adjustment is an important subject within the geomatics programme offered at KTH. This is due to the fact that surveying and mapping (or production of spatial data) often requires mathematical processing of measurement data. Furthermore, the general methodology of spatial data processing is essentially the same as that for data processing in other science and engineering fields, even though data collection procedures and data types can be different. Theory of errors is related to and comparable with what is called estimation theory, used in automatic control and signal processing. Therefore, studying theory of errors can be helpful for solving general data processing problems in other scientific and engineering fields.

Teaching Least Squares in Matrix Notation

The least squares method, a fundamental piece of knowledge for students of all scientific tracks, is often introduced via simple linear regression with only two parameters to be determined. However, the availability of ever larger data sets prompts even undergraduate students toward a sounder and wider knowledge of linear regression. Here, we have used the linear algebra formalism to compact the main results of the least squares method, encompassing ordinary and weighted least squares, goodness-of-fit indicators, and finally a basic equation of re-sampling, which could be used to stimulate interested students toward an even broader knowledge of data analysis. The compactness of the equations reported above allows their introduction at the undergraduate level, provided that basic linear algebra has been previously introduced.
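In that compact matrix notation, ordinary and weighted least squares are one-liners: beta_ols = (X'X)^(-1) X'y and beta_wls = (X'WX)^(-1) X'Wy. A minimal NumPy sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])  # design with intercept
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=50)
w = rng.uniform(0.5, 2.0, size=50)                            # observation weights

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)              # stable OLS solve
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)          # weighted LS
print(beta_ols, beta_wls)
```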

RLScore: Regularized Least-Squares Learners

RLScore is implemented as a Python module that depends on NumPy (van der Walt et al., 2011) for basic data structures and linear algebra, SciPy (Jones et al., 2001–) for sparse matrices and optimization methods, and Cython (Behnel et al., 2011) for implementing low-level routines in C. The aim of the software is to provide high-quality implementations of algorithms developed by the authors that combine efficient training with automated performance evaluation and model selection methods.
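This is not RLScore's actual API, but the core primitive such a library implements, regularized least squares in closed form, fits in a few NumPy lines:

```python
import numpy as np

def rls_fit(X, y, lam=1.0):
    """Regularized (ridge) least squares: w = (X'X + lam*I)^(-1) X'y.
    A sketch of the underlying primitive, not RLScore's interface."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```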

Some aspects of progeny testing Southdown rams : a thesis presented in part fulfilment of the requirements for the degree of Master of Agricultural Science in Massey University of Manawatu, New Zealand

Least squares means, least squares deviations of sire groups from the means, least squares differences due to birth rank and sex, and partial regression coefficients for the characteristics …

Least squares approximations of power series

In this paper we obtain analogs to (1.4) and (1.6) for power series f defined on the open interval (−1, 1). Such functions f (especially without closed forms) arise, for example, in solutions to differential equations. It will be necessary to first extend the above least squares polynomial. This is accomplished in Section 2 by replacing the integral in (1.3) by a sum in terms of Maclaurin coefficients of f and inversion coefficients of expansions of monomials as linear combinations of ultraspherical polynomials. After proving key properties of the latter coefficients in Section 3, we then derive uniform or pointwise estimates to f with these least squares extensions.
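For comparison, the classical continuous least squares polynomial on (−1, 1) is the truncated expansion in Legendre polynomials (the ultraspherical family with parameter 1/2). A minimal sketch that computes its coefficients by Gauss–Legendre quadrature, whereas the paper instead replaces the integral with sums over Maclaurin and inversion coefficients:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_ls(f, deg, quad_pts=64):
    """Coefficients c_k = (2k+1)/2 * integral of f * P_k over (-1, 1),
    evaluated by Gauss-Legendre quadrature."""
    x, w = legendre.leggauss(quad_pts)
    fx = f(x)
    return np.array([
        (2 * k + 1) / 2 * np.sum(w * fx * legendre.legval(x, np.eye(deg + 1)[k]))
        for k in range(deg + 1)
    ])

c = legendre_ls(np.exp, deg=6)
x = np.linspace(-0.99, 0.99, 5)
print(np.max(np.abs(legendre.legval(x, c) - np.exp(x))))   # small uniform error
```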

A novel interpretation of least squares solution

We show that the well-known least squares (LS) solution of an overdetermined system of linear equations is a convex combination of all the non-trivial solutions, weighted by the squares of …
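The snippet is truncated, but assuming the weights are the squared determinants of the square subsystems, as in the classical representation this paper appears to revisit, the claim is easy to check numerically:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
A, b = rng.normal(size=(5, 2)), rng.normal(size=5)   # overdetermined 5x2 system

num, den = np.zeros(2), 0.0
for rows in combinations(range(5), 2):               # every 2x2 subsystem
    d = np.linalg.det(A[list(rows)])
    if abs(d) > 1e-12:                               # skip singular subsystems
        x_sub = np.linalg.solve(A[list(rows)], b[list(rows)])
        num += d**2 * x_sub                          # weight by squared determinant
        den += d**2

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(num / den, x_ls))                  # True
```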

Image magnification by least squares surfaces

This paper continues as follows. In the second part, quadratic surfaces and the theory of least squares are discussed. In the third part, least squares planes, the suggested algorithms, and the evaluation parameters are proposed. The fourth section compares the results of the implementation with other methods. In the last section, conclusions and recommendations are presented.
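A minimal sketch of the underlying operation, fitting a least squares quadratic surface to a 3×3 pixel neighbourhood and sampling it at sub-pixel positions; the paper's specific algorithms and evaluation parameters differ:

```python
import numpy as np

def quad_surface_sample(patch3x3, u, v):
    """Fit z = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2 to a 3x3
    neighbourhood (x, y in {-1, 0, 1}) and evaluate at sub-pixel (u, v)."""
    y, x = np.mgrid[-1:2, -1:2]
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    D = np.column_stack([np.ones(9), x, y, x**2, x * y, y**2])
    c, *_ = np.linalg.lstsq(D, patch3x3.ravel().astype(float), rcond=None)
    return c @ np.array([1.0, u, v, u**2, u * v, v**2])

patch = np.arange(9).reshape(3, 3)
print(quad_surface_sample(patch, 0.5, 0.5))   # interpolated half-pixel value
```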

05_Linear_Regression_1.pdf

Maximum Likelihood and Least Squares; Geometry of Least Squares; Sequential Learning; Regularized Least Squares; Multiple Outputs; Loss Function for Regression; The Bias-Variance Decomposition …

3D Deformation Using Moving Least Squares

In our work, we explore an alternative "local" approach towards feature-preserving deformation. Our work mainly builds upon the 2D image deformation technique introduced by Schaefer et al. [Schaefer et al. 2006], which solves for the optimal rigid transformation that maps a set of weighted point handles to their deformed positions, at each image domain point. The image domain point is then transformed by the optimal rigid transformation at that point. Because an optimal transformation is calculated for every point to be deformed, the method is called Moving Least Squares. The workflow of deforming a model using our method is as follows. The user first defines a set of point handles around the model that they want to deform. They can then drag the point handles around, and the model will deform in a smooth and realistic manner, such that the region close to a handle moves to the new position of the handle.
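A minimal sketch of the moving least squares machinery in 2D for one evaluation point, using the simpler affine closed form from Schaefer et al. rather than the rigid variant the paragraph describes:

```python
import numpy as np

def mls_affine(v, p, q, alpha=1.0):
    """Affine MLS deformation: map point v given handles p -> q.
    Weights fall off with distance to v, so every point gets its own
    best-fit transform (hence "moving" least squares)."""
    w = 1.0 / np.maximum(np.sum((p - v) ** 2, axis=1), 1e-12) ** alpha
    p_star = w @ p / w.sum()                     # weighted centroids
    q_star = w @ q / w.sum()
    ph, qh = p - p_star, q - q_star
    M = np.linalg.solve((ph * w[:, None]).T @ ph,    # weighted normal equations
                        (ph * w[:, None]).T @ qh)
    return (v - p_star) @ M + q_star

p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # original handles
q = np.array([[0.0, 0.0], [1.2, 0.1], [0.0, 1.0]])   # dragged handles
print(mls_affine(np.array([0.4, 0.4]), p, q))
```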

Extremiles: a new perspective on asymmetric least squares

Despite their statistical virtues and all their nice axiomatic properties as spectral risk measures and concave distortion risk measures, extremiles have an apparent limitation when applied to distributions with infinite mean. This should not be considered a serious disadvantage, however, at least in financial and actuarial applications, since the definition of a coherent risk measure for distributions with an infinite first moment is not clear; see the discussion in Section 3 of Nešlehová et al. (2006). Whether operational risk models in which losses have an infinite mean make sense in the first place has also recently been questioned by Cirillo and Taleb (2016).

Extremiles: a new perspective on asymmetric least squares

Finally, apply Theorem 2.3.9 in de Haan and Ferreira (2006, p. 48) to construct by induction an increasing sequence (t_p) tending to infinity such that, for any positive integer p, …
