ECE 3040
Lecture 18: Curve Fitting by Least-Squares-Error Regression
© Prof. Mohamad Hassoun

This lecture covers the following topics:
• Introduction
• Linear least-squares-error (LSE) regression: the straight-line model
• Linearization of nonlinear models
• General linear LSE regression and the polynomial model
• Polynomial regression with Matlab: polyfit
• Non-linear LSE regression
• Numerical solution of the non-linear LSE optimization problem: gradient search and Matlab's fminsearch function
• Solution of differential equations based on LSE minimization
• Appendix: Explicit matrix formulation for the quadratic regression problem
Introduction
In the previous lecture, polynomial and cubic spline interpolation methods were introduced for estimating a value between a given set of precise data points. The idea was to interpolate, that is, to fit a function that passes exactly through all of the data points. Many engineering and scientific observations are made by conducting experiments in which physical quantities are measured and recorded as inexact (noisy) data points. In this case, the objective is to find the best-fit analytic curve (model) that approximates the underlying functional relationship present in the data set. Here, the best-fit curve is not required to pass through the data points, but it is required to capture the shape (general trend) of the data. This curve-fitting problem is referred to as regression. The following sections present formulations for the regression problem and provide solutions.
The following figure compares two polynomials that attempt to fit the shown data points. The blue curve is the solution to the interpolation problem. The green curve is the solution we seek here: the solution to the linear regression problem.
Linear Least-Squares-Error (LSE) Regression: The Straight-Line Model
The regression problem will first be illustrated by fitting the linear model (straight line), y(x) = a_1 x + a_0, to a set of n paired experimental observations: (x_1, y_1), (x_2, y_2), …, (x_n, y_n). The idea is to position the straight line (i.e., to determine the regression coefficients a_0 and a_1) so that some error measure of fit is minimized. A common error measure is the sum of the squares (SSE) of the residual errors e_i = y_i − y(x_i):

E(a_0, a_1) = Σ_{i=1}^{n} e_i^2 = Σ_{i=1}^{n} [y_i − y(x_i)]^2 = Σ_{i=1}^{n} [y_i − (a_1 x_i + a_0)]^2
The residual error e_i is the discrepancy between the measured value, y_i, and the approximate value, y(x_i) = a_0 + a_1 x_i, predicted by the straight-line regression model. The residual error for the ith data point is depicted in the following figure.
A solution can be obtained for the regression coefficients, {a_0, a_1}, that minimizes E(a_0, a_1). This criterion, E, called the least-squares-error (LSE) criterion, has a number of advantages, including that it yields a unique line for a given data set. Differentiating E(a_0, a_1) with respect to each of the unknown regression coefficients and setting the results to zero leads to a system of two linear equations,
∂E(a_0, a_1)/∂a_0 = 2 Σ_{i=1}^{n} (y_i − a_1 x_i − a_0)(−1) = 0
∂E(a_0, a_1)/∂a_1 = 2 Σ_{i=1}^{n} (y_i − a_1 x_i − a_0)(−x_i) = 0
After expanding the sums, we obtain
−Σ_{i=1}^{n} y_i + Σ_{i=1}^{n} a_0 + Σ_{i=1}^{n} a_1 x_i = 0
−Σ_{i=1}^{n} x_i y_i + Σ_{i=1}^{n} a_0 x_i + Σ_{i=1}^{n} a_1 x_i^2 = 0
Now, realizing that Σ_{i=1}^{n} a_0 = n a_0, and that multiplicative quantities that do not depend on the summation index i can be brought outside the summation (i.e., Σ_{i=1}^{n} c x_i = c Σ_{i=1}^{n} x_i), we may rewrite the above equations as

n a_0 + (Σ_{i=1}^{n} x_i) a_1 = Σ_{i=1}^{n} y_i
(Σ_{i=1}^{n} x_i) a_0 + (Σ_{i=1}^{n} x_i^2) a_1 = Σ_{i=1}^{n} x_i y_i
These are called the normal equations. We can solve for a_1 using Cramer's rule and for a_0 by substitution (your turn: perform the algebra) to arrive at the following LSE solution:

a_1* = [n Σ_{i=1}^{n} x_i y_i − (Σ_{i=1}^{n} x_i)(Σ_{i=1}^{n} y_i)] / [n Σ_{i=1}^{n} x_i^2 − (Σ_{i=1}^{n} x_i)^2]

a_0* = (Σ_{i=1}^{n} y_i)/n − a_1* (Σ_{i=1}^{n} x_i)/n
The value E(a_0*, a_1*) is the least squared error; it will be referred to as E_LSE and expressed as

E_LSE = Σ_{i=1}^{n} (y_i − a_1* x_i − a_0*)^2

Any other straight line will lead to an error E(a_0, a_1) > E_LSE.
Let E_t be the sum of the squares of the differences between the y_i values and their average value, ȳ = (Σ_{i=1}^{n} y_i)/n:

E_t = Σ_{i=1}^{n} (y_i − ȳ)^2

Then the (positive) difference E_t − E_LSE represents the improvement (the smaller E_LSE, the better) due to describing the data in terms of a straight line, rather than as an average value (a straight line with zero slope and y-intercept equal to ȳ). The coefficient of determination, r^2, is defined as the relative error between E_t and E_LSE,
r^2 = (E_t − E_LSE)/E_t = 1 − E_LSE/E_t
For a perfect fit, where the regression line goes through all data points, E_LSE = 0 and r^2 = 1, signifying that the line explains 100% of the variability in the data. On the other hand, for E_t = E_LSE we obtain r^2 = 0, and the fit represents no improvement over a simple average. A value of r^2 between 0 and 1 represents the extent of improvement. So, r^2 = 0.8 indicates that 80% of the original uncertainty has been explained by the linear model. Using the above expressions for E_LSE, E_t, a_0* and a_1*, one may derive the following formula for the correlation coefficient, r (your turn: perform the algebra),
r = √[(E_t − E_LSE)/E_t] = [n Σ x_i y_i − (Σ x_i)(Σ y_i)] / { √[n Σ x_i^2 − (Σ x_i)^2] · √[n Σ y_i^2 − (Σ y_i)^2] }
Example. Fit a straight line to the data provided in the following table. Find r^2.
x 1 2 3 4 5 6 7
y 2.5 7 38 55 61 122 110
Solution. The following Matlab script computes the linear regression coefficients, a_0* and a_1*, for a straight line employing the LSE solution.
x=[1 2 3 4 5 6 7];
y=[2.5 7 38 55 61 122 110]; n=length(x);
a1=(n*sum(x.*y)-sum(x)*sum(y))/(n*sum(x.^2)-(sum(x)).^2)
a0=sum(y)/n-a1*sum(x)/n
The solution is a_1* = 20.5536 and a_0* = −25.7143. The following plot displays the data and the regression model, y(x) = 20.5536x − 25.7143.
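The plot can be generated with a few additional commands; the following is a minimal sketch (the grid vector xf and the plot styling are illustrative choices, assuming the variables from the script above are still in the workspace):

% Plot the data points together with the fitted straight line
xf = linspace(1,7,100);                 % fine grid for drawing the model
plot(x, y, 'o', xf, a1*xf + a0, '-')    % a0 and a1 were computed above
xlabel('x'), ylabel('y')
legend('data','LSE straight-line fit','Location','northwest')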
The following script computes the correlation coefficient, r.

x=[1 2 3 4 5 6 7];
y=[2.5 7 38 55 61 122 110]; n=length(x);
r=(n*sum(x.*y)-sum(x)*sum(y))/((sqrt(n*sum(x.^2)- ...
   (sum(x))^2))*(sqrt(n*sum(y.^2)-(sum(y))^2)))
The script returns r = 0.9582 (so, r^2 = 0.9181). These results indicate that about 92% of the variability in the data has been explained by the linear model.
A word of caution: Although the coefficient of determination provides a convenient measure of the quality of fit, you should be careful not to rely on it completely, for it is possible to construct data sets that yield similar r^2 values even though the regression line is poorly positioned for some of them. A good practice is to visually inspect the plot of the data along with the regression curve. The following example illustrates these ideas.
Example. Anscombe's quartet comprises four data sets that have r^2 ≈ 0.666, yet appear very different when graphed. Each data set consists of eleven (x_i, y_i) points. They were constructed in 1973 by the statistician Francis Anscombe to demonstrate both the importance of graphing data before analyzing it and the effect of outliers on statistical properties. Notice that if we ignore the outlier point in the third data set, then the regression line would be perfect, with r^2 = 1.
Your turn: Employ linear regression to generate the above plots and determine r^2 for each of Anscombe's data sets.
Linearization of Nonlinear Models
The straight-line regression model is not always suitable for curve fitting. The choice of regression model is often guided by the plot of the available data, or can be guided by the knowledge of the physical behavior of the system that generated the data. In general, polynomial or other nonlinear models are more suitable. A nonlinear regression technique (introduced later) is available to fit complicated
nonlinear equations to data. However, some basic nonlinear functions can be readily transformed into linear functions in their regression coefficients (we will refer to such functions as transformable or linearizable). Here, we can take advantage of the LSE regression formulas, which we have just derived, to fit the transformed equations to the data.
One example of a linearizable nonlinear model is the exponential model, y(x) = αe^{βx}, where α and β are constants. This equation is very common in engineering (e.g., capacitor transient voltage) and science (e.g., population growth or radioactive decay). We can linearize this equation by simply taking its natural logarithm to yield ln(y) = ln(α) + βx. Thus, if we transform the y_i values in our data by taking their natural logarithms, and define a_0 = ln(α) and a_1 = β, we arrive at the equation of a straight line (of the form Y = a_0 + a_1 x). Then, we can readily use the formulas for the LSE solution, (a_0*, a_1*), derived earlier. The final step is to set α = e^{a_0*} and β = a_1* to arrive at the regression solution,

y(x) = αe^{βx} = (e^{a_0*}) e^{a_1* x}
A second common linearizable nonlinear regression model is the power model, y(x) = αx^β. We can linearize this equation by taking its natural logarithm to yield ln(y) = ln(α) + β ln(x) (which is a linear model of the form Y = a_0 + a_1 X). In this case, we need to first transform the y_i and x_i values into ln(y_i) and ln(x_i), respectively, and then apply the LSE solution to the transformed data.
Other useful linearizable models include the logarithmic function, y(x) = α ln(x) + β; the reciprocal function, y(x) = 1/(αx + β); and the saturation-growth-rate function, y(x) = αx/(β + x). Each of these models is linearizable by an appropriate change of variables. For example, the reciprocal model is linear in 1/y versus x, since 1/y = αx + β, and the saturation-growth-rate model is linear in 1/y versus 1/x, since 1/y = (β/α)(1/x) + 1/α.
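As an illustration of such a change of variables, the following sketch (not part of the original notes) fits the saturation-growth-rate model by regressing 1/y against 1/x; the data vectors x and y are assumed to be defined and to contain no zeros.

% Linearized fit of the saturation-growth-rate model y = alpha*x/(beta + x)
% using the transformation 1/y = (beta/alpha)*(1/x) + 1/alpha
u = 1./x;  v = 1./y;  n = length(x);
a1 = (n*sum(u.*v)-sum(u)*sum(v))/(n*sum(u.^2)-(sum(u))^2);   % slope = beta/alpha
a0 = sum(v)/n - a1*sum(u)/n;                                 % intercept = 1/alpha
alpha = 1/a0, beta = a1/a0                                   % recover the model coefficients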
Example. Fit the exponential model and the power model to the data in the following table. Compare the fit quality to that of the straight-line model.
x 1 2 3 4 5 6 7 8
y 2.5 7 38 55 61 122 83 143
Solution. Matlab script (linear.m) for the linear model, y = a_0 + a_1 x:

x=[1 2 3 4 5 6 7 8]; y=[2.5 7 38 55 61 122 83 143]; n=length(x);
a1=(n*sum(x.*y)-sum(x)*sum(y))/(n*sum(x.^2)-(sum(x)).^2);
a0=sum(y)/n-a1*sum(x)/n;
r=(n*sum(x.*y)-sum(x)*sum(y))/((sqrt(n*sum(x.^2)-(sum(x))^2))*( ...
   sqrt(n*sum(y.^2)-(sum(y))^2)));
a1, a0, r^2
Result 1. Linear model solution: y = 19.3036x − 22.9286, r^2 = 0.8811.

Matlab script (exponential.m) for the exponential model, y = αe^{βx}:
x=[1 2 3 4 5 6 7 8]; y=[2.5 7 38 55 61 122 83 143];
ye=log(y); n=length(x);
a1=(n*sum(x.*ye)-sum(x)*sum(ye))/(n*sum(x.^2)-(sum(x)).^2);
a0=sum(ye)/n-a1*sum(x)/n;
r=(n*sum(x.*ye)-sum(x)*sum(ye))/((sqrt(n*sum(x.^2)-(sum(x))^2)) ...
   *(sqrt(n*sum(ye.^2)-(sum(ye))^2)));
alpha=exp(a0), beta=a1, r^2
Result 2. Exponential model solution: y = 3.4130e^{0.5273x}, r^2 = 0.8141.

Matlab script (power_eq.m) for the power model, y = αx^β:
x=[1 2 3 4 5 6 7 8]; y=[2.5 7 38 55 61 122 83 143];
xe=log(x); ye=log(y); n=length(x);
a1=(n*sum(xe.*ye)-sum(xe)*sum(ye))/(n*sum(xe.^2)-(sum(xe)).^2);
a0=sum(ye)/n-a1*sum(xe)/n;
r=(n*sum(xe.*ye)-sum(xe)*sum(ye))/((sqrt(n*sum(xe.^2)- ...
   (sum(xe))^2))*(sqrt(n*sum(ye.^2)-(sum(ye))^2)));
alpha=exp(a0), beta=a1, r^2

Result 3. Power model solution: y = 2.6493x^{1.9812}, r^2 = 0.9477.
From the results for r^2, the power model has the best fit. The following graph compares the three models. By visually inspecting the plot we see that, indeed, the power model (red; r^2 = 0.9477) is a better fit than the linear model (blue; r^2 = 0.8811) and the exponential model (green; r^2 = 0.8141). Also, note that the straight line fits the data better than the exponential model.
Your turn: Repeat the above regression problem employing: (a) the logarithmic function, y = α ln(x) + β; (b) the reciprocal function, y = 1/(αx + β); (c) the saturation-growth-rate function, y(x) = αx/(β + x).
General Linear LSE Regression and the Polynomial Model
For some data sets, the underlying model cannot be captured accurately with a straight-line, exponential, logarithmic, or power model. A model with a higher degree of nonlinearity (i.e., with added flexibility) is required. There are a number of higher-order functions that can be used as regression models. One important regression model is the polynomial. A general LSE formulation is presented next. It extends the earlier linear regression analysis to a wider class of nonlinear functions, including polynomials. (Note: when we say linear regression, we are referring to a model that is linear in its regression parameters, a_i, not in x.)
Consider the general function in z,

y = a_m z_m + a_{m−1} z_{m−1} + ⋯ + a_1 z_1 + a_0        (1)

where z_i represents a basis function in x. It can easily be shown that if the basis functions are chosen as z_i = x^i, then the above model is that of an m-degree polynomial,

y = a_m x^m + a_{m−1} x^{m−1} + ⋯ + a_1 x + a_0
There are many classes of functions that can be described by the above general function in Eqn. (1). Examples include:
y = a_0 + a_1 x,   y = a_0 + a_1 cos(x) + a_2 sin(2x),   and   y = a_0 + a_1 x + a_2 e^{−x^2}
One example of a function that can't be represented by the above general form is the radial-basis-function (RBF) model

y = a_0 + a_1 e^{a_2 (x − a_3)^2}

In other words, this latter function is not transformable into a linear regression model, as was the case (say) for the exponential function, y = αe^{βx}. Regression with such non-transformable functions is known as nonlinear regression and is considered later in this lecture.
In the following formulation of the LSE regression problem we restrict the regression function to the polynomial model

y = a_m x^m + a_{m−1} x^{m−1} + ⋯ + a_1 x + a_0
Given n data points {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)}, we want to determine the regression coefficients by solving the system of n equations in m + 1 unknowns:

a_m x_1^m + a_{m−1} x_1^{m−1} + ⋯ + a_1 x_1 + a_0 = y_1
a_m x_2^m + a_{m−1} x_2^{m−1} + ⋯ + a_1 x_2 + a_0 = y_2
⋮
a_m x_n^m + a_{m−1} x_n^{m−1} + ⋯ + a_1 x_n + a_0 = y_n
The above system can be written in matrix form as Za = y,

[ x_1^m      x_1^{m−1}      ⋯  x_1      1 ] [ a_m     ]   [ y_1     ]
[ x_2^m      x_2^{m−1}      ⋯  x_2      1 ] [ a_{m−1} ]   [ y_2     ]
[ ⋮          ⋮              ⋱  ⋮        ⋮ ] [ ⋮       ] = [ ⋮       ]
[ x_{n−1}^m  x_{n−1}^{m−1}  ⋯  x_{n−1}  1 ] [ a_1     ]   [ y_{n−1} ]
[ x_n^m      x_n^{m−1}      ⋯  x_n      1 ] [ a_0     ]   [ y_n     ]
where Z is an n×(m+1) rectangular matrix formed from the x_i data values, with z_ij = x_i^{m−j+1} (i = 1, 2, …, n; j = 1, 2, …, m+1), a is an (m+1)-element column vector of unknown regression coefficients, and y is an n-element column vector of the y_i data values. For regression problems, the above system is over-determined (n > m+1) and, therefore, there is generally no solution a that satisfies Za = y exactly. So, we seek the LSE solution, a*: the solution which minimizes the sum-of-squared-error (SSE) criterion
E(a) = ||y − Za||^2 = Σ_{i=1}^{n} (y_i − Σ_{j=1}^{m+1} z_ij a_{m−j+1})^2

where ||·|| denotes the vector norm. As we did earlier in deriving the straight-line regression coefficients a_0 and a_1, we set all partial derivatives ∂E(a)/∂a_k to zero and solve the resulting system of (m+1) equations:
∂E(a_0, a_1, …, a_m)/∂a_0 = −2 Σ_{i=1}^{n} (y_i − Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) x_i^0 = 0
∂E(a_0, a_1, …, a_m)/∂a_1 = −2 Σ_{i=1}^{n} (y_i − Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) x_i^1 = 0
∂E(a_0, a_1, …, a_m)/∂a_2 = −2 Σ_{i=1}^{n} (y_i − Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) x_i^2 = 0
⋮
∂E(a_0, a_1, …, a_m)/∂a_m = −2 Σ_{i=1}^{n} (y_i − Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) x_i^m = 0
This system can be rearranged as (note: x_i^0 = 1)
Σ_{i=1}^{n} (y_i − Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = 0
Σ_{i=1}^{n} (x_i y_i − x_i Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = 0
Σ_{i=1}^{n} (x_i^2 y_i − x_i^2 Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = 0
⋮
Σ_{i=1}^{n} (x_i^m y_i − x_i^m Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = 0
or,

Σ_{i=1}^{n} (Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = Σ_{i=1}^{n} y_i
Σ_{i=1}^{n} x_i (Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = Σ_{i=1}^{n} x_i y_i
Σ_{i=1}^{n} x_i^2 (Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = Σ_{i=1}^{n} x_i^2 y_i
⋮
Σ_{i=1}^{n} x_i^m (Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = Σ_{i=1}^{n} x_i^m y_i
which, by setting z_ij = x_i^{m−j+1} (i = 1, 2, …, n; j = 1, 2, …, m+1), can then be expressed in matrix form as (your turn: derive it)

Z^T(Za) = Z^T y    or    (Z^T Z)a = Z^T y
(Refer to the Appendix for an explicit representation of the above equation for the case of a quadratic regression polynomial.)
The matrix Z^T Z is an (m+1)×(m+1) square matrix (recall that m is the degree of the polynomial model being used). Generally speaking, the inverse of Z^T Z exists for the above regression formulation. Multiplying both sides of the equation by (Z^T Z)^{−1} leads to the LSE solution for the regression coefficient vector a,

I a* = a* = [(Z^T Z)^{−1} Z^T] y

where I is the identity matrix. Matlab offers two ways of solving the above system of linear equations: (1) using the left-division operator, a = (Z'*Z)\(Z'*y), where ' is the Matlab transpose operator, or (2) using a = pinv(Z)*y, where pinv is the built-in pseudo-inverse function.
The coefficient of determination, r^2, for the above polynomial regression formulation is given by (for n ≫ m)

r^2 = 1 − [Σ_{i=1}^{n} (y_i − ŷ_i)^2] / [Σ_{i=1}^{n} (y_i − ȳ)^2]

where ŷ_i is the ith component of the prediction vector Za*, and ȳ is the mean of the y_i values. Matlab can conveniently compute r^2 as
1-sum((y-Z*a).^2)/sum((y-mean(y)).^2)
Example. Employ the polynomial LSE regression formulation to solve for a cubic curve fit for the following data set. Also, compute r^2.
x 1 2 3 4 5 6 7 8
y 2.5 7 38 55 61 122 83 145
Solution. A cubic function is a third-order polynomial (m = 3) with the four coefficients a_0, a_1, a_2 and a_3. The number of data points is 8; therefore, n = 8 and the Z matrix is n×(m+1) = 8×4. The matrix formulation (Za = y) for this linear regression problem is
[ 1    1   1  1 ] [ a_3 ]   [ 2.5 ]
[ 8    4   2  1 ] [ a_2 ]   [ 7   ]
[ 27   9   3  1 ] [ a_1 ] = [ 38  ]
[ 64   16  4  1 ] [ a_0 ]   [ 55  ]
[ 125  25  5  1 ]           [ 61  ]
[ 216  36  6  1 ]           [ 122 ]
[ 343  49  7  1 ]           [ 83  ]
[ 512  64  8  1 ]           [ 145 ]
Solving using the pinv function, a = pinv(Z)*y (y must be a column vector):
Alternatively, we may use the left-division operator and obtain the same result:
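The interactive sessions are not reproduced in this copy; a sketch of equivalent commands is:

% Build Z for the cubic model and solve for the LSE coefficients
x = [1 2 3 4 5 6 7 8]';  y = [2.5 7 38 55 61 122 83 145]';
Z = [x.^3  x.^2  x  ones(size(x))];     % 8x4 matrix; columns ordered [a3 a2 a1 a0]
a = pinv(Z)*y                           % pseudo-inverse solution
a = (Z'*Z)\(Z'*y)                       % left-division solution (same result)
r2 = 1 - sum((y-Z*a).^2)/sum((y-mean(y)).^2)   % coefficient of determination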
Therefore, the cubic fit solution is y = 0.029x^3 − 0.02x^2 + 17.6176x − 19.2857, whose plot is shown below. Note that the contribution of the cubic and quadratic terms is very small compared to the linear part of the solution for 0 ≤ x ≤ 8. That is why the plot of the cubic fit model is close to linear.
The computed coefficient of determination is r^2 ≈ 0.88, which indicates that the cubic regression model explains 88% of the variability in the data. This result is of similar quality to that of the straight-line regression model (computed in an earlier example).
The following is a snapshot of a session with the "Basic Fitting tool" (introduced in the previous lecture) applied to the data in the above example. It computes and compares a cubic fit to 5th-degree and 6th-degree polynomial fits.
Your turn. Employ the linear LSE regression formulation to fit the following data set employing the model y(x) = a_0 + a_1 cos(x) + a_2 sin(2x). Also, determine r^2 and plot the data along with y(x). Hint: First determine the 10×3 matrix Z required for the Za = y formulation.

x 1 2 3 4 5 6 7 8 9 10
Polynomial Regression with Matlab: polyfit
The Matlab polyfit function was introduced in the previous lecture for solving polynomial interpolation problems (m + 1 = n, the same number of equations as unknowns). This function can also be used for solving m-degree polynomial regression given n data points (m + 1 < n, more equations than unknowns). The syntax of the polyfit call is p=polyfit(x,y,m), where x and y are the vectors of the independent and dependent variables, respectively, and m is the degree of the regression polynomial. The function returns a row vector, p, that contains the polynomial coefficients.
Example. Here is a solution to the straight-line regression problem (first example encountered in this lecture):
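A minimal sketch of the corresponding commands (the session itself is not reproduced in this copy):

% Straight-line regression with polyfit (degree m = 1)
x = [1 2 3 4 5 6 7];  y = [2.5 7 38 55 61 122 110];
p = polyfit(x, y, 1)      % returns p = [a1 a0], i.e., [20.5536 -25.7143]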
Example. Use polyfit to solve for the cubic regression model encountered in the example from the previous section.
Solution:
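A minimal sketch of the corresponding commands:

% Cubic regression with polyfit (degree m = 3)
x = [1 2 3 4 5 6 7 8];  y = [2.5 7 38 55 61 122 83 145];
p = polyfit(x, y, 3)      % returns p = [a3 a2 a1 a0]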
Note that this solution is identical to the one obtained using the pseudo-inverse-based solution.
Non-Linear LSE Regression
In some engineering applications, nonlinear models must be used to fit a given data set. The above general linear regression formulation can handle such regression problems as long as the nonlinear model is transformable into an
equivalent linear function in the unknown coefficients. However, in some cases, the models are not transformable. In this case, we have to come up with an appropriate set of equations whose solution leads to the LSE solution.
As an example, consider the nonlinear model y(x) = a_0(1 − e^{a_1 x}). This equation can't be manipulated into a linear regression formulation in the a_0 and a_1 coefficients. The LSE formulation (for this model with n data points) takes the form

E(a_0, a_1) = Σ_{i=1}^{n} [y_i − y(x_i)]^2 = Σ_{i=1}^{n} [y_i − a_0(1 − e^{a_1 x_i})]^2

∂E(a_0, a_1)/∂a_0 = 2 Σ_{i=1}^{n} [y_i − a_0(1 − e^{a_1 x_i})](−1 + e^{a_1 x_i}) = 0
∂E(a_0, a_1)/∂a_1 = 2 Σ_{i=1}^{n} [y_i − a_0(1 − e^{a_1 x_i})](a_0 x_i e^{a_1 x_i}) = 0
This set of two nonlinear equations needs to be solved for the two coefficients, a_0 and a_1. Numerical algorithms such as Newton's iterative method for solving a set of two nonlinear equations, or Matlab's built-in fsolve and solve functions, can be used to solve this system of equations, as shown in the next example.
Example. Employ the regression function y(x) = a_0(1 − e^{a_1 x}) to fit the following data.

x −2 0 2 4
Here, we have n = 4, and the system of nonlinear equations to be solved is given by

Σ_{i=1}^{4} [y_i − a_0(1 − e^{a_1 x_i})](−1 + e^{a_1 x_i}) = 0
Σ_{i=1}^{4} [y_i − a_0(1 − e^{a_1 x_i})](a_0 x_i e^{a_1 x_i}) = 0
Substituting the data point values into the above equations leads to

(1 − a_0(1 − e^{−2a_1}))(−1 + e^{−2a_1}) + (−4 − a_0(1 − e^{2a_1}))(−1 + e^{2a_1}) + (−12 − a_0(1 − e^{4a_1}))(−1 + e^{4a_1}) = 0

(1 − a_0(1 − e^{−2a_1}))(−2a_0 e^{−2a_1}) + (−4 − a_0(1 − e^{2a_1}))(2a_0 e^{2a_1}) + (−12 − a_0(1 − e^{4a_1}))(4a_0 e^{4a_1}) = 0
After expansion and combining terms, we get

15 + (e^{−2a_1} − 4e^{2a_1} − 12e^{4a_1}) + a_0(3 + e^{−4a_1} − 2e^{−2a_1} − 2e^{2a_1} − e^{4a_1} + e^{8a_1}) = 0

2a_0[−a_0 e^{−4a_1} + (a_0 − 1)e^{−2a_1} − (a_0 + 4)e^{2a_1} − (a_0 + 24)e^{4a_1} + 2a_0 e^{8a_1}] = 0
Matlab solution using the function solve:

syms a0 a1
f1=15+(exp(-2*a1)-4*exp(2*a1)-12*exp(4*a1))+a0*(3+exp(-4*a1)...
   -2*exp(-2*a1)-2*exp(2*a1)-exp(4*a1)+exp(8*a1));
f2=2*a0*(-a0*exp(-4*a1)+(a0-1)*exp(-2*a1)-(a0+4)*exp(2*a1)...
   -(a0+24)*exp(4*a1)+2*a0*exp(8*a1));
S=solve(f1==0, f2==0, a0, a1);   % solve the two equations for a0 and a1
[S.a0 S.a1]
Matlab returns a set of four solutions to the above minimization problem. The first thing we notice is that for nonlinear regression, minimizing LSE may lead to multiple solutions (multiple minima). The solutions for this particular problem are:
1. a_0 = a_1 = 0, which leads to y = 0(1 − e^0) = 0, or y = 0 (the x-axis).
2. a_0 = 0 and a_1 ≈ −1.3610 + 1.5708i, which leads to y = 0.
3. a_0 = 0 and a_1 ≈ 0.1186 + 1.5708i, which leads to y = 0.
4. a_0 ≈ 2.4979 and a_1 ≈ 0.4410, which leads to y(x) = 2.4979(1 − e^{0.441x}).

The solutions y(x) = 0 and y(x) = 2.4979(1 − e^{0.441x}) are plotted below. It is obvious that the optimal solution (in the LSE sense) is y(x) = 2.4979(1 − e^{0.441x}).
Your turn: Solve the above system of two nonlinear equations employing Matlab's fsolve.
Your turn: Fit the exponential model, y = αe^{βx}, to the data in the following table employing nonlinear least-squares regression. Then, linearize the model and determine the model coefficients by employing linear least-squares regression (i.e., use the formulas derived in the first section, or polyfit). Plot the solutions.

x 0 1 2 3 4
y 1.5 2.5 3.5 5.0 7.5

Ans. Nonlinear least-squares fit: y = 1.61087e^{0.38358x}
Numerical Solution of the Non-Linear LSE Optimization Problem: Gradient Search and Matlab's fminsearch Function
In the above example, we were lucky in the sense that the (symbolic-based) solve function returned the optimal solution for the optimization problem at hand. In more general non-linear LSE regression problems the models employed are complex and normally have more than two unknown coefficients. Here, solving (symbolically) for the partial derivatives of the error function becomes tedious and impractical.
Therefore, one would use numerically-based multivariable optimization algorithms to minimize E(a) = E(a_0, a_1, a_2, …); these are extensions of the methods considered in Lectures 13 and 14.
One method is to extend the gradient-search minimization function grad_optm2 to handle a function of two variables. Recall that this version of the function approximates the gradient numerically; therefore, there is no need to determine analytical expressions for the derivatives. For the case of two variables a_0 and a_1, the gradient-descent equations are
a_0(k+1) = a_0(k) + λ ∂E(a_0, a_1)/∂a_0
a_1(k+1) = a_1(k) + λ ∂E(a_0, a_1)/∂a_1
where −1 < λ < 0. Upon using the simple backward finite-difference approximation for the derivatives, we obtain
a_0(k+1) = a_0(k) + λ {E[a_0(k), a_1(k)] − E[a_0(k−1), a_1(k)]} / [a_0(k) − a_0(k−1)]
a_1(k+1) = a_1(k) + λ {E[a_0(k), a_1(k)] − E[a_0(k), a_1(k−1)]} / [a_1(k) − a_1(k−1)]
The following is a Matlab implementation (function grad_optm2d) of these iterative formulas.
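The listing below is a reconstruction sketch of such a function: the update rule follows the formulas above, but the argument list, stopping rule, and the initial perturbation used to seed the backward differences are assumptions rather than the original code.

function [a0, a1, Emin] = grad_optm2d(E, a0, a1, lambda, tol, maxit)
% Two-variable gradient search with numerically approximated derivatives.
% E: handle to the error function E(a0,a1); lambda: (negative) step size;
% tol: stop when the change in E falls below tol; maxit: iteration limit.
a0prev = a0 + 0.01;  a1prev = a1 + 0.01;    % perturbed previous points for the
for k = 1:maxit                             % first backward-difference estimates
    Ec = E(a0, a1);                         % current error value
    d0 = a0 - a0prev;  if d0 == 0, d0 = eps; end
    d1 = a1 - a1prev;  if d1 == 0, d1 = eps; end
    g0 = (Ec - E(a0prev, a1))/d0;           % approximate dE/da0
    g1 = (Ec - E(a0, a1prev))/d1;           % approximate dE/da1
    a0prev = a0;  a1prev = a1;
    a0 = a0 + lambda*g0;                    % gradient-descent updates
    a1 = a1 + lambda*g1;
    if abs(E(a0, a1) - Ec) < tol, break, end
end
Emin = E(a0, a1);
end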
The above function [with a_0(0) = 2, a_1(0) = 0.5 and λ = −10^{−4}] returns the same solution that was obtained above with solve, with a minimum error value of E(2.4976, 0.4410) = 0.4364.
Matlab has an important built-in function for numerical minimization of nonlinear multivariable functions. The function name is fminsearch. The (basic) function call syntax is [a,fa] = fminsearch(f,a0), where f is an anonymous function, a0 is a vector of initial values. The function returns a solution vector โaโ and the value of the
function at that solution, fa. Here is an application of fminsearch to solve the above nonlinear regression problem [note how the unknown coefficients are represented as the elements of the vector a, with a_0 = a(1) and a_1 = a(2), and are initialized at [0 0] for this problem].
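A sketch of the corresponding commands (the data vectors x and y are assumed to hold the values from the table in the previous example):

% Nonlinear LSE fit of y = a0*(1 - exp(a1*x)) using fminsearch
E = @(a) sum((y - a(1)*(1 - exp(a(2)*x))).^2);   % SSE criterion; a(1) = a0, a(2) = a1
[a, fa] = fminsearch(E, [0 0])                   % reported solution: a0 = 2.4979, a1 = 0.4410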
A more proper way to select the initial search vector a = [a_0 a_1] for the above optimization problem is to solve the set of p nonlinear equations obtained by forcing the model to pass through p points (selected randomly from the data set). Here, p is the number of unknown model parameters. For example, for the above problem, we solve the set of two nonlinear equations

y_i − a_0(1 − e^{a_1 x_i}) = 0
y_j − a_0(1 − e^{a_1 x_j}) = 0
where (x_i, y_i) and (x_j, y_j) are two distinct points selected randomly from the set of points being fitted. A numerical nonlinear equation solver can be used, say Matlab's fsolve, as shown below [here, the end points (−2, 1) and (4, −12) were selected].
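A sketch of such a session (the starting point [1 1] passed to fsolve is an illustrative choice; fsolve requires the Optimization Toolbox):

% Initial guess by forcing the model through the points (-2, 1) and (4, -12)
g = @(a) [  1 - a(1)*(1 - exp(-2*a(2)));
          -12 - a(1)*(1 - exp( 4*a(2)))];
a_init = fsolve(g, [1 1])     % use a_init as the starting vector for fminsearch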
Your turn: The height of a person at different ages is reported in the following table.
x (age) 0 5 8 12 16 18
y (in) 20 36.2 52 60 69.2 70
Determine the parameters a, b and c so that the following regression model is optimal in the LSE sense.

y(x) = a / (1 + b e^{−cx})

Ans. y(x) = 74.321 / (1 + 2.823e^{−0.217x})
Your turn: Employ nonlinear LSE regression to fit the function

y(x) = γ / √(x^4 + (c^2 − 2k)x^2 + k^2)

to the data

x 0 0.5 1 2 3
y 0.95 1.139 0.94 0.298 0.087

Plot the data points and your solution for x ∈ [0, 6].

Ans. y(x) = 0.888/√(…)
As mentioned earlier, different initial conditions may lead to different local minima of the nonlinear function being minimized. For example, consider the function of two variables that exhibits multiple minima (refer to the plot):
f(x, y) = −0.02 sin(x + 4y) − 0.2 cos(2x + 3y) − 0.3 sin(2x − y) + 0.4 cos(x − 2y)
A contour plot can be generated as follows (the local minima are located at the center of the blue contour lines):
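A sketch of commands that produce such a contour plot (the plotting range is an assumption):

% Contour plot of f(x,y); local minima appear at the centers of the blue contours
f = @(x,y) -0.02*sin(x + 4*y) - 0.2*cos(2*x + 3*y) ...
           - 0.3*sin(2*x - y) + 0.4*cos(x - 2*y);
[X, Y] = meshgrid(-2:0.05:2, -2:0.05:2);
contour(X, Y, f(X, Y), 30), xlabel('x'), ylabel('y')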
The following are the local minima discovered by function grad_optm2d for the indicated initial conditions:
The same local minima are discovered by fminsearch when starting from the same initial conditions:
Note that for this limited set of searches, the solution with the smallest SSE value is (x*, y*) = (0.0441, −1.7618), which is the best of the solutions found.
Your turn (Email your solution to your instructor one day before Test 3). Fit the following data
x 1 2 3 4 5 6 7 8 9 10
y 1.17 0.93 −0.71 −1.31 2.01 3.42 1.53 1.02 −0.08 −1.51

employing the model

y(x) = a_0 + a_1 cos(x + φ_1) + a_2 cos(2x + φ_2)

This problem can be solved employing nonlinear regression (think: solution via fminsearch), or it can be linearized, which allows you to use linear regression (think: solution via the pseudo-inverse). Hint: cos(x + φ) = cos(x)cos(φ) − sin(x)sin(φ). Plot the data points and y(x) on the same graph.
In practice, a nonlinear regression model can have hundreds or thousands of coefficients. Examples of such models are neural networks and radial-basis-function models, which often involve fitting multidimensional data sets where each y value depends on many variables, y(x_1, x_2, x_3, …). Numerical methods such as gradient-based optimization methods are often used to solve for the regression coefficients associated with those high-dimensional models. For a reference, check Chapters 5 and 6 of the following textbook:
Solution of Differential Equations Based on LSE Minimization
Consider the second-order, time-varying-coefficient differential equation

y''(x) + (1/5)y'(x) + 9x^2 y(x) = 0,   with y(0) = 1 and y'(0) = 2,

defined over the interval x ∈ [0, 1].
We seek a polynomial ŷ(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 that approximates the solution, y(x). In general, ŷ(x) does not have to be a polynomial. By applying the initial conditions to ŷ(x) we can solve for a_0 and a_1:

ŷ(0) = a_0 = y(0) = 1   and   dŷ(0)/dx = a_1 = y'(0) = 2
Now we are left with the problem of estimating the remaining polynomial coefficients a_2, a_3, a_4 such that the residual

R(x, a_2, a_3, a_4) = d²ŷ(x)/dx² + (1/5) dŷ(x)/dx + 9x^2 ŷ(x)
is as close to zero as possible for all x ∈ [0, 1]. We will choose to minimize the integral of the squared residual,

I(a_2, a_3, a_4) = ∫_0^1 [R(x, a_2, a_3, a_4)]^2 dx
First, we compute the derivatives dŷ(x)/dx and d²ŷ(x)/dx²:

dŷ(x)/dx = 2 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3
d²ŷ(x)/dx² = 2a_2 + 6a_3 x + 12a_4 x^2

so that

R(x, a_2, a_3, a_4) = d²ŷ(x)/dx² + (1/5) dŷ(x)/dx + 9x^2 ŷ(x)
= 2a_2 + 6a_3 x + 12a_4 x^2 + (1/5)(2 + 2a_2 x + 3a_3 x^2 + 4a_4 x^3) + 9x^2(1 + 2x + a_2 x^2 + a_3 x^3 + a_4 x^4)

or,

R(x, a_2, a_3, a_4) = 2/5 + 9x^2 + 18x^3 + a_2(2 + (2/5)x + 9x^4) + a_3(6x + (3/5)x^2 + 9x^5) + a_4(12x^2 + (4/5)x^3 + 9x^6)
The following Matlab session shows the results of using fminsearch to solve for the coefficients a_2, a_3, a_4 that minimize the error function

I(a_2, a_3, a_4) = ∫_0^1 [R(x, a_2, a_3, a_4)]^2 dx
Note: the first component a(1) in the solution vector a is redundant; it is not used in the function I.
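A sketch of the corresponding commands is given below. The residual R(x) derived above is squared and integrated numerically with integral; the vector a is laid out as [a1 a2 a3 a4] so that, as noted, a(1) is unused (a_0 = 1 and a_1 = 2 are fixed by the initial conditions). The zero initial guess is an assumption.

% Minimize the integral of the squared residual with fminsearch
R = @(x,a) 2/5 + 9*x.^2 + 18*x.^3 + a(2)*(2 + (2/5)*x + 9*x.^4) ...
         + a(3)*(6*x + (3/5)*x.^2 + 9*x.^5) + a(4)*(12*x.^2 + (4/5)*x.^3 + 9*x.^6);
I = @(a) integral(@(x) R(x,a).^2, 0, 1);    % squared-residual integral over [0,1]
a = fminsearch(I, zeros(1,4));              % a(2), a(3), a(4) are the sought coefficients
a2 = a(2), a3 = a(3), a4 = a(4)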
Therefore, the optimal solution is
The following plot compares the "direct" numerical solution (red trace) to the minimum-residual solution (blue trace). We will study the very important topic of the numerical solution of differential equations in Lecture 22 (e.g., employing ode45).
Your turn: Consider the first-order, nonlinear, homogeneous differential equation with varying coefficient y'(x) + (2x − 1)y^2(x) = 0, with y(0) = 1 and x ∈ [0, 1]. Employ the method of minimizing the squared residual to solve for the approximate solution ŷ(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 over the interval x ∈ [0, 1]. Plot ŷ(x) and the exact solution y(x), given by

y(x) = 1/(x^2 − x + 1)

Ans. ŷ(x) = 1 + 0.964x + 0.487x^2 − 2.903x^3 + 1.452x^4
Your turn: Determine the parabola y(x) = ax^2 + bx + c that approximates the cubic f(x) = 2x^3 − x^2 + x + 1 (over the interval x ∈ [0, 2]) in the LSE sense. In other words, determine the coefficients a, b and c such that the following error function is minimized,

E(a, b, c) = ∫_0^2 [f(x) − y(x)]^2 dx

Solve the problem in two ways: (1) analytically; and (2) employing fminsearch after evaluating the integral. Plot f(x) and y(x) on the same set of axes.
Appendix: Explicit Matrix Formulation for the Quadratic Regression Problem
Earlier in this lecture we derived the m-degree polynomial LSE regression formulation:

Σ_{i=1}^{n} (Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = Σ_{i=1}^{n} y_i
Σ_{i=1}^{n} x_i (Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = Σ_{i=1}^{n} x_i y_i
Σ_{i=1}^{n} x_i^2 (Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = Σ_{i=1}^{n} x_i^2 y_i
⋮
Σ_{i=1}^{n} x_i^m (Σ_{j=1}^{m+1} x_i^{m−j+1} a_{m−j+1}) = Σ_{i=1}^{n} x_i^m y_i
Now, setting m = 2 (designating a quadratic regression model) leads to three equations:

Σ_{i=1}^{n} (Σ_{j=1}^{3} x_i^{3−j} a_{3−j}) = Σ_{i=1}^{n} y_i
Σ_{i=1}^{n} x_i (Σ_{j=1}^{3} x_i^{3−j} a_{3−j}) = Σ_{i=1}^{n} x_i y_i
Σ_{i=1}^{n} x_i^2 (Σ_{j=1}^{3} x_i^{3−j} a_{3−j}) = Σ_{i=1}^{n} x_i^2 y_i
It can be shown (your turn) that the above equations can be cast in matrix form as

[ n         Σ x_i      Σ x_i^2 ] [ a_0 ]   [ Σ y_i       ]
[ Σ x_i     Σ x_i^2    Σ x_i^3 ] [ a_1 ] = [ Σ x_i y_i   ]
[ Σ x_i^2   Σ x_i^3    Σ x_i^4 ] [ a_2 ]   [ Σ x_i^2 y_i ]

where all sums run from i = 1 to n.
This is a 3×3 linear system that can be solved using the methods of Lectures 15 and 16. With this type of formulation, care must be taken to employ numerical solution methods that can handle ill-conditioned coefficient matrices; note the dominance of the (all positive) coefficients in the last row of the matrix.
The following two-part video (Part 1, Part 2) derives the above result directly from basic principles. A worked example of quadratic regression is also available: Part 1, Part 2.
Example. Employ the above formulation to fit a parabola to the following data.
x 0 5 8 12 16 18
y 20 36.2 52 60 69.2 70
The code that generated the above result is shown below.
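The code itself is not reproduced in this copy; a sketch of equivalent commands is:

% Build and solve the 3x3 normal-equation system for the parabola y = a0 + a1*x + a2*x^2
x = [0 5 8 12 16 18];  y = [20 36.2 52 60 69.2 70];  n = length(x);
A = [ n          sum(x)      sum(x.^2);
      sum(x)     sum(x.^2)   sum(x.^3);
      sum(x.^2)  sum(x.^3)   sum(x.^4) ];
b = [ sum(y); sum(x.*y); sum((x.^2).*y) ];
a = A\b          % a(1) = a0, a(2) = a1, a(3) = a2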
Your turn: Verify the above solution employing polyfit. Repeat employing the pseudo-inverse solution a = pinv(Z)*y applied to the formulation Za = y,

[ x_1^m      x_1^{m−1}      ⋯  x_1      1 ] [ a_m     ]   [ y_1     ]
[ x_2^m      x_2^{m−1}      ⋯  x_2      1 ] [ a_{m−1} ]   [ y_2     ]
[ ⋮          ⋮              ⋱  ⋮        ⋮ ] [ ⋮       ] = [ ⋮       ]
[ x_{n−1}^m  x_{n−1}^{m−1}  ⋯  x_{n−1}  1 ] [ a_1     ]   [ y_{n−1} ]
[ x_n^m      x_n^{m−1}      ⋯  x_n      1 ] [ a_0     ]   [ y_n     ]
Your turn: Extend the explicit matrix formulation of this appendix to a cubic function f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 and use it to determine the polynomial coefficients for the data set from the last example. Compare (by plotting) your solution to the following solution that was obtained using nonlinear regression,

y(x) = 74.321 / (1 + 2.823e^{−0.217x})

Verify your solution employing polyfit.