Abstract—Ground penetrating radar is an effective nondestructive method for exploring subsurface object information by exploiting differences in electromagnetic characteristics. However, this task is negatively affected by ground clutter and noise, especially if the object is weak and/or shallowly buried. Therefore, this paper proposes a novel method for suppressing the clutter and background noise simultaneously for both flat and rough surfaces. First, the ground clutter is removed mainly by applying a simplified least squares fitting background method, which leaves a residual random noise signal. The remaining signal is then decomposed by singular value decomposition, under the assumption that the decomposed signal contains four main components: strong target, weak target, very weak target, and accumulated noise signals. The powered singular values and their differences are clustered by K-means to extract the target signal components. The simulation results indicate that the proposed method is able to enhance the target signal with satisfactory results under both flat and rough surfaces as well as under high-level background noise. Moreover, the method shows its superiority over the latest existing methods.
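The SVD-plus-clustering step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the ground clutter has already been removed, it splits the powered singular values two ways rather than into the paper's four component classes, and the tiny 1-D 2-means is written inline so the example stays self-contained.

```python
import numpy as np

def svd_target_extraction(bscan, power=2.0):
    """Sketch of SVD-based target/noise separation for a (clutter-removed)
    GPR B-scan. The powered singular values are split into two clusters
    with a simple 1-D 2-means; components in the larger-valued cluster
    are assumed to carry target energy."""
    U, s, Vt = np.linalg.svd(bscan, full_matrices=False)
    feats = s ** power
    # Minimal 1-D 2-means on the powered singular values.
    c = np.array([feats.max(), feats.min()])      # initial centroids
    for _ in range(50):
        labels = np.abs(feats[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = feats[labels == k].mean()
    kept = np.where(labels == np.argmax(c))[0]    # high-energy components
    s_target = np.zeros_like(s)
    s_target[kept] = s[kept]
    return U @ np.diag(s_target) @ Vt, kept
```

On a synthetic low-rank "target" buried in random noise, keeping only the high-energy cluster reconstructs the target while discarding most of the noise components.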


There are many disadvantages affecting the accuracy of surface topography measurement and analysis. One of them is the error introduced during data processing. Usually, surface topographies of car engine parts are studied after form removal. Many algorithms and procedures have been developed and suggested. However, the selection of the reference plane for surface topography measurements of cylindrical elements has not been fully recognized. In this paper, least squares fitting methods (cylinder, polynomial) and commercial filters (Gaussian filter, Gaussian regression filter and robust Gaussian regression filter) for areal form removal were compared and proposed. Three types of surfaces were analyzed: cylinder liners after plateau honing, plateau-honed cylinder liners with oil pockets created by burnishing techniques, and turned piston skirts. Distortion of surface topography parameters (from the ISO 25178 standard) due to improper selection of the reference plane was also taken into consideration. It was assumed that the least squares fitted cylinder plane gave better results for both types of cylinder liners than the commonly used algorithm. However, for piston skirt surfaces the obtained results were very similar. For least squares polynomial fittings, it was found that the applied method usually gave better robustness to the occurrence of scratches, valleys and dimples for cylinder liners. For piston skirt surfaces, better edge-filtering results were obtained. It was also recommended to analyse the Sk parameters for proper selection of the reference plane in surface topography measurements.


From Figure 5, an obvious diffusion distance-dependence of the p.d.f. curve shape is observed: at small X*, the width of the p.d.f. curve increases with increasing X* and the shape tends to be flatter; at large X*, the width of the p.d.f. curve decreases with increasing X* and the shape tends to be sharper. It is inferred that there is a possible turning point, perhaps between X* = 25 and 40. The reason for this tendency is that, when the dye is injected into the flow, it is largely connected, and thus the orientation found by least squares fitting tends to lie around zero or at small angles. Downstream, the dye spreads out and the dye patch becomes much more separated, forming a spot-like distribution, so the orientation angle preference becomes less obvious. After the turning point, the influence of the side walls can no longer be neglected, and the dye spreads widely across almost the whole between-wall distance. The dye patch then covers a large area, which gives the orientation angle a much larger probability of being around zero degrees. This result also reflects the complexity of the scalar field, and the inverse estimation of the source location of a dye release in turbulent flow has many limitations.


The creep crack growth rate (CCGR) behavior of type 316LN stainless steel (SS) is analyzed statistically. The CCGR data are obtained from CCGR tests conducted with various applied loads of 5500 N to 7000 N at 600 °C. The CCGR is characterized as a function of the C* fracture parameter as da/dt = B[C*]^q. In order to obtain the B and q values logically, three methods are applied: the least squares fitting method (LSFM), a mean value method (MVM) and a probabilistic distribution method (PDM). The CCGR lines of all three methods are in good accord with the experimental data. The PDM is most useful in predicting the CCGR equation because its CCGR line lies between the other two and can be treated with a probabilistic analysis. Both the B and q coefficients follow a lognormal distribution, even though the B values are spread over a broad range. The probability variables P(B, q) are discussed for the case of a standard deviation of 1σ. Fracture morphologies of the intergranular crack with microvoids are examined to verify the creep crack.
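The power-law characterization da/dt = B[C*]^q can be fitted by ordinary least squares after a log transform, since log(da/dt) = log B + q·log C* is linear in the unknowns. A minimal sketch with synthetic values (the data here are illustrative, not from the tests):

```python
import numpy as np

def fit_power_law(c_star, dadt):
    """Fit da/dt = B * (C*)**q by linear least squares in log-log space:
    the slope of the fitted line is q and exp(intercept) is B."""
    q, logB = np.polyfit(np.log(c_star), np.log(dadt), 1)
    return np.exp(logB), q
```

This is the standard LSFM-style fit of a straight line on a log-log plot; the paper's MVM and PDM variants would post-process the same (B, q) samples differently.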

Abstract—The least squares method has been widely applied in many fields. However, when the approach is used for antenna array pattern synthesis, its performance is unsatisfactory. In this paper, the least squares method is used to synthesize an antenna array pattern, and its performance is reviewed. Then, to address the shortcomings of the least squares method, a new steerable least squares (SLS) method is put forward. For an antenna array whose manifold matrix has been determined, the projection matrix equation can be derived easily from the array manifold matrix. In order to obtain an optimal solution for the array element excitations, a novel projection matrix equation with adjustable matrices is adopted. The simulation results show that the pattern synthesized by the traditional least squares method fits the targeted pattern badly and is worse than the targeted pattern in the key performance indicators of peak side-lobe level and null beam level; by contrast, the pattern synthesized by the new SLS method fits the targeted pattern well in null positions and local peak distribution and is better than the targeted pattern in those same indicators.
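The baseline that the SLS method improves on can be sketched directly: given the array manifold matrix A and a desired pattern d, the traditional least squares excitation is the w minimizing ||Aw − d||. The sketch below assumes a uniform linear array with half-wavelength spacing and a simple flat-top broadside beam as the target; the SLS refinement with adjustable projection matrices is not shown.

```python
import numpy as np

def ls_pattern_synthesis(n_elem=16, n_angles=181, spacing=0.5):
    """Traditional least squares array pattern synthesis.

    A[m, n] = exp(j*2*pi*spacing*n*sin(theta_m)) is the manifold of a
    uniform linear array; the excitations w solve min ||A w - d||_2
    for a desired flat-top beam d at broadside."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_angles)
    n = np.arange(n_elem)
    A = np.exp(1j * 2 * np.pi * spacing * np.outer(np.sin(theta), n))
    d = (np.abs(theta) < np.deg2rad(10)).astype(float)  # desired pattern
    w, *_ = np.linalg.lstsq(A, d, rcond=None)
    return w, np.abs(A @ w)
```

The synthesized magnitude pattern peaks near broadside and falls off toward the sidelobe region, which is exactly the behavior whose sidelobe/null levels the paper criticizes and then improves.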


A smart antenna system is capable of efficiently utilizing the radio spectrum and promises an effective solution to present wireless system problems, achieving reliable and robust high-speed, high-data-rate transmission [3]. The LMS and NLMS are two adaptive beamforming algorithms that use an estimate of the gradient vector from the available data. Adaptive beamforming is a technique in which an array of antennas is exploited to achieve maximum reception in a specified direction by estimating the signal arrival from a desired direction (in the presence of noise) while signals of the same frequency from other directions are rejected. This is achieved by varying the weights of each of the sensors (antennas) used in the array. The term 'adaptive array' was first coined by Van Atta in 1959 to describe a self-phased array. Adaptive algorithms form the heart of the array processing network [5]. These algorithms make successive corrections to the weight vector in the direction of the negative of the gradient vector, which ultimately converges to the minimum mean square error (MMSE) [11-13].

Channel equalization is the process of reducing amplitude, frequency and phase distortion in a channel with the intent of improving transmission performance [11]. Adaptive equalization is a technique that automatically adapts to the time-varying properties of the communication channel [8]. LMS, NLMS and RLS are popular techniques that can be used for adaptive channel equalization. Every tone at each receiver antenna is associated with multiple channel parameters, which makes channel estimation difficult [9].

2.4 Least Mean Square (LMS) Adaptive Filter Algorithm

The LMS algorithm has become one of the most widely used algorithms in adaptive filtering [12]. This type of algorithm
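The LMS update itself is compact: at each sample, the filter output is compared with the desired signal and the tap weights are nudged along the negative of the instantaneous gradient. A minimal sketch (the tap count and step size are illustrative choices, not values from the text):

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.02):
    """Basic LMS adaptive filter: w <- w + mu * e[k] * x_window.

    x is the filter input, d the desired signal; returns the final tap
    weights and the per-sample error history."""
    w = np.zeros(n_taps)
    err = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        xk = x[k - n_taps + 1:k + 1][::-1]  # most recent sample first
        y = w @ xk                          # filter output
        err[k] = d[k] - y                   # instantaneous error
        w += mu * err[k] * xk               # stochastic-gradient update
    return w, err
```

When the desired signal is the input passed through an unknown FIR channel, the taps converge to the channel coefficients and the error decays toward zero, which is the system-identification view of adaptive equalization.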

The objective of this study is to examine local resources that the Community Learning Center (Pusat Kegiatan Belajar Masyarakat, or PKBM) can benefit from, in order to improve the competitiveness of such resources. This study was conducted in Gorontalo Province, Indonesia. The explanatory survey method was employed to explore the issue. The study employed the partial least squares method to examine the interrelation among resources, strategy, and the superiority of the institution. The results reveal that the local resources, measured by natural resources, human resources, and cultural resources, are scarce, irreplaceable, satisfy the needs of community education, and are easy to organize. On the other hand, the level of excellence of the institution is considerably high based on the absorption of graduates. The result of the verification test shows that the local resources within the environment are able to improve the institution's excellence through the strategy of the community learning center.

In fitting a curve by the method of least squares, the parameters of the curve are estimated by solving the normal equations, which are obtained by applying the principle of least squares with respect to all the parameters associated with the curve jointly (simultaneously). However, for a curve of higher degree polynomial and/or for a curve having many parameters, the calculation involved in the solution of the normal equations becomes more complicated as the number of normal equations becomes larger. Moreover, in many situations it is not possible to obtain normal equations by applying the principle of least squares with respect to all the parameters jointly. This leads one to search for some other method of estimating the parameters. For this reason, a new method of fitting a curve has been framed, which is based on applying the principle of least squares separately for each of the parameters associated with the curve. This paper presents this method with respect to the fitting of a polynomial curve.
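The classical joint approach that the paper's separate-parameter method is an alternative to can be written down in a few lines: build the Vandermonde design matrix and solve the full system of normal equations for all coefficients at once. A minimal sketch:

```python
import numpy as np

def polyfit_normal_equations(x, y, degree):
    """Classical joint least squares polynomial fit: build the
    Vandermonde matrix X with columns [1, x, x**2, ...] and solve the
    normal equations (X^T X) b = X^T y for all coefficients
    simultaneously."""
    X = np.vander(x, degree + 1, increasing=True)
    return np.linalg.solve(X.T @ X, X.T @ y)
```

For high degrees this is exactly where the difficulty the paper describes appears: the normal-equation system grows with the number of parameters and becomes increasingly ill-conditioned.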


This paper compares two adaptive least squares approximation algorithms for fitting scattered data. The first part presents the Knot Insertion and Knot Removal algorithms. The second part consists of three experiments. For the experiments we consider Franke's test functions. All the fits are evaluated on a grid of 40 × 40 equally spaced points in the unit square. We use Matlab for the evaluation of each approximation fit, with Gaussian basis functions with a shape parameter ε = 5.5. The data sites used in this paper can be downloaded at http://www.math.unipd.it/~demarchi/TAA2010/.
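The core fit used in these experiments, a least squares approximation with Gaussian basis functions exp(−(εr)²) and ε = 5.5, can be sketched as follows (in Python/NumPy rather than the paper's Matlab; the random data sites and center choice here are illustrative, not the paper's downloadable sites):

```python
import numpy as np

def franke(x, y):
    """Franke's bivariate test function, a standard scattered-data benchmark."""
    return (0.75 * np.exp(-((9*x - 2)**2 + (9*y - 2)**2) / 4)
            + 0.75 * np.exp(-(9*x + 1)**2 / 49 - (9*y + 1) / 10)
            + 0.5  * np.exp(-((9*x - 7)**2 + (9*y - 3)**2) / 4)
            - 0.2  * np.exp(-(9*x - 4)**2 - (9*y - 7)**2))

def gaussian_rbf_lsq(sites, centers, f_vals, eps=5.5):
    """Least squares RBF fit: A[i, j] = exp(-(eps * ||x_i - z_j||)**2),
    coefficients c solve min ||A c - f||_2."""
    r2 = ((sites[:, None, :] - centers[None, :, :])**2).sum(-1)
    A = np.exp(-(eps**2) * r2)
    c, *_ = np.linalg.lstsq(A, f_vals, rcond=None)
    return c, A
```

Using more data sites than centers makes the system overdetermined, which is what distinguishes this least squares approximation from plain RBF interpolation.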

MIMO-OFDM is a commonly used communication system due to its high transmission rate and robustness against multipath fading. In MIMO-OFDM, channel estimation plays a major role: it refers to estimating the transmitted signal bits from the corresponding received signal bits. Among the different channel estimation methods, Least Square (LS), Least Square-Modified (LS-Mod) and Minimum Mean Square Error (MMSE) are commonly used. In this project, the LS, LS-Modified and MMSE algorithms are implemented using 16-QAM modulation and an AWGN channel model. The LS estimation procedure is simple, but it has a high mean square error. At low SNR, MMSE is better than LS, but its main problem is its high computational complexity; LS-Modified is considered the best among the three channel estimation methods. The system is simulated in MATLAB and analysed in terms of bit error rate versus signal-to-noise ratio.

The results obtained with the Least Square (LS) method for circularity at each z level are tabulated in Table 1, and the values are compared with those obtained with the Power Inspect software. The graphs for the L1 norm and the Least Square method are plotted at z levels of 20 mm, 40 mm and 50 mm, and the normal distribution curve for the Least Square method at the same z levels is plotted. The straightness of the centers of the circles at all z levels is calculated. Circularity values at different diameters calculated using the Least Square method are tabulated in Table 2 and are likewise compared with the Power Inspect software values.
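A least squares circle fit of the kind used for these circularity evaluations can be sketched with the algebraic (Kåsa) formulation, which reduces the problem to one linear solve; the source does not say which LS formulation it used, so this is an assumed stand-in.

```python
import numpy as np

def kasa_circle_fit(x, y):
    """Algebraic (Kasa) least squares circle fit.

    From (x-a)^2 + (y-b)^2 = r^2 we get the linear model
    x^2 + y^2 = 2a*x + 2b*y + (r^2 - a^2 - b^2), solved by lstsq.
    Circularity is reported as max - min radial deviation."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = x**2 + y**2
    (ca, cb, c0), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    xc, yc = ca / 2, cb / 2
    r = np.sqrt(c0 + xc**2 + yc**2)
    dev = np.hypot(x - xc, y - yc)
    return (xc, yc), r, dev.max() - dev.min()
```

Applied to measured points at one z level, the max-minus-min radial deviation is the circularity value that would be tabulated against the Power Inspect result.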

Now we continue in the same way as we did for the previous level of the tree. There are three possible outcomes at each node. Outcome one is that the interval of possible exceptions is empty (the 3 ∤ p − 1 node from above), so there is no further branching from the node. Outcome two is that the number of possible exceptions in the interval is too large to search exhaustively (the 3 | p − 1 node from above), so the node branches into two nodes. Outcome three is that the number of possible exceptions in the interval is small enough to search exhaustively. In this case we take all the integers of the correct form (with the right divisors and non-divisors for that node) from the interval and eliminate those that are not prime and those that do not have ω(p − 1) = 13. Then, for the remaining list of possible exceptions, we verify that each prime p in the list has a square-free primitive root less than p^0.9.


This paper presents a non-linear governing differential equation for a confined seepage problem under non-homogeneous and anisotropic conditions. The non-linearity is introduced into the governing equation based on actual material behavior, and the resulting non-linear differential equation is solved numerically using the least squares finite element formulation. This method was used to solve several seepage problems to examine the accuracy of the results. The solutions show good accuracy and convergence. The advantage of this method is its capability to solve non-linear problems, compared with routine methods with constant coefficients, in order to increase the accuracy of the solution; eight-node (isoparametric) elements were used. Some clear conclusions can be drawn from this study. a) Generally, the results for head changes (i.e., flow net analyses), whether under the first or second conditions for variable permeability, compare favorably with the case when the permeability is assumed to be constant, and very little difference is observed.


Example 4.2 We consider the shape design of a car, for example the hood. A set of data used to construct the hood is given in Fig. 3. The data can be divided into two groups. One group is marked with triangles, which means the shape must pass through these data (for instance, the edge of the hood); that is, we need to interpolate them. The other group is marked with circles; these need to be least squares fitted. We first use the spline space
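Mixing must-interpolate points (triangles) with least-squares points (circles), as in the example above, amounts to an equality-constrained least squares problem, which can be solved through the KKT system. A generic sketch under that assumption (the matrices stand in for the spline collocation equations; none of the names come from the source):

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Minimize ||A x - b||_2 subject to C x = d via the KKT system

        [A^T A  C^T] [x]   [A^T b]
        [C       0 ] [l] = [  d  ]

    Rows of C encode the must-interpolate conditions; rows of A encode
    the least squares conditions."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]
```

The constrained coefficients reproduce the triangle data exactly while fitting the circle data as closely as the remaining degrees of freedom allow.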

In this paper, we compare adaptive filters including NLMS (Normalised Least Mean Square), a special case of the LMS algorithm, RLS (Recursive Least Squares) and Kalman filters, and identify the algorithm with the minimum error. However, this minimum-error algorithm has problems of complexity and stability, so further research is needed for better performance.

We present a flexible construction method that can tune the mean vector and diagonal covariance matrix of each individual regressor by incrementally minimizing the training mean square error in an orthogonal forward selection procedure. To append regressors incrementally one by one, a weighted optimization search algorithm is developed, based on ideas from boosting [9]–[11]. Because the kernel means are not restricted to the training input data and each regressor has an individually tuned diagonal covariance matrix, our method can produce very sparse models that generalize well.

The relevant entries in Table 4.3 show that the regression estimates from the robust regression fitting method (Least Median of Squares) have uniformly higher residuals and standard errors than the regression estimates from ordinary least squares. This is an indication that ordinary least squares provides optimal estimates when the data set is free from outliers and is normally distributed; this is supported by Gauss, who introduced the normal (or Gaussian) distribution as the error distribution for which ordinary least squares is optimal (see the citations in Huber 1972 and Le Cam 1986).
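The comparison above concerns clean data; the reason Least Median of Squares is used at all is its behavior when outliers are present. A minimal sketch of an LMedS line fit via random two-point sampling (the usual way it is computed; the sample counts here are illustrative):

```python
import numpy as np

def lmeds_line_fit(x, y, n_trials=500, seed=0):
    """Least Median of Squares line fit.

    Repeatedly fit a line through two random points and keep the
    candidate minimizing the MEDIAN squared residual; unlike OLS,
    this tolerates up to ~50% gross outliers."""
    rng = np.random.default_rng(seed)
    best, best_med = (0.0, 0.0), np.inf
    for _ in range(n_trials):
        i, j = rng.choice(len(x), 2, replace=False)
        if x[i] == x[j]:
            continue                          # vertical pair: skip
        m = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - m * x[i]
        med = np.median((y - (m * x + b))**2)
        if med < best_med:
            best, best_med = (m, b), med
    return best
```

With 30% of the points grossly corrupted, the LMedS line still recovers the true slope and intercept, whereas an OLS fit would be pulled toward the outliers.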


Summary. A parameter estimation problem for ellipsoid fitting in the presence of measurement errors is considered. The ordinary least squares estimator is inconsistent, and due to the nonlinearity of the model, the orthogonal regression estimator is inconsistent as well; i.e., these estimators do not converge to the true value of the parameters as the sample size tends to infinity. A consistent estimator is proposed, based on a proper correction of the ordinary least squares estimator. The correction is given explicitly in terms of the true value of the noise variance.


The regression equations and regression coefficients are shown in the table above. The obtained complex dielectric permittivity values were correlated with the physical properties of the agricultural soil under study. A least squares fitting technique was used to draw a line through the data that covers the maximum number of points.