## Sum of the residuals for the linear regression model is zero.

Prove that the sum of the residuals for the linear regression model is zero.
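Here is a sketch of the standard argument for the straight-line model $y = a_0 + a_1 x$ fit by least squares. The sum of the squares of the residuals is

$S_r = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2$

Setting the partial derivative with respect to the intercept $a_0$ to zero gives the first normal equation

$\frac{\partial S_r}{\partial a_0} = -2 \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i) = 0$

Since the residual at each data point is $e_i = y_i - a_0 - a_1 x_i$, this normal equation says exactly that $\sum_{i=1}^{n} e_i = 0$. Note that the argument hinges on the model containing the intercept term $a_0$; without an intercept, the sum of the residuals need not vanish.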


## Effect of Significant Digits: Example 2: Regression Formatting in Excel

In a series bringing pragmatic examples of the effect of significant digits, we discuss the influence of using the default and scientific formats in the trendline function of Microsoft Excel. This is the second example in the series (the first example was on a beam deflection problem). The issue, in short, is that the default format displays only a few significant digits of the regression coefficients, and using those truncated coefficients for prediction can introduce appreciable error; the scientific format exposes more digits.

________________________________________________

This post is brought to you by Holistic Numerical Methods: Numerical Methods for the STEM undergraduate at http://numericalmethods.eng.usf.edu, the textbook on Numerical Methods with Applications available from the lulu storefront, the textbook on Introduction to Programming Concepts Using MATLAB, and the YouTube video lectures available at http://numericalmethods.eng.usf.edu/videos.  Subscribe to the blog via a reader or email to stay updated with this blog. Let the information follow you.

## Does the solve command in MATLAB not give you an answer?

Recently, I had assigned a project to my class where they needed to regress n x-y data points to the nonlinear regression model y=exp(b*x).  However, they were NOT allowed to transform the data, that is, to transform the data so that linear regression formulas could be used to find the constant of regression b.  They had to do it the new-fashioned way: find the sum of the squares of the residuals and then minimize the sum with respect to the constant of regression b.

To do this, they conducted the following steps:

1. set up the problem by declaring b as a syms variable,
2. calculate the sum of the squares of the residuals using a loop,
3. use the diff command to differentiate the sum with respect to b and set the derivative equal to zero,
4. use the solve command on the resulting equation.

However, the solve command gave an odd answer like log(z1)/5 + (2*pi*k*i)/5, that is, the general family of complex solutions.  The students knew that the equation has only one real solution; this was deduced from the physics of the problem.

We did not want to set up a separate function mfile just to use a numerical solver such as fsolve.  To circumvent setting up a separate function mfile, we approached it as follows.  If dbsr=0 is the equation you want to solve, use

F = vectorize(inline(char(dbsr)))
fsolve(F, -2.0)

What the char command does is convert the symbolic expression dbsr to a string, inline constructs an inline function from that string, and the vectorize command rewrites the formula with element-wise operators (I do not fully understand this last part myself, or whether it is needed).
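Putting the whole workflow together, here is a minimal sketch with made-up data. The data values and the initial guess are my own assumptions, matlabFunction (available in newer MATLAB releases) is used as an alternative to the inline/vectorize route, and fsolve requires the Optimization Toolbox.

% Sketch: regress hypothetical (x,y) data to y = exp(b*x) without transforming
x = [0.5 1.0 1.5 2.0 2.5];         % made-up data
y = [1.65 2.70 4.48 7.40 12.18];   % roughly exp(1.0*x)
syms b
sr = sum((y - exp(b*x)).^2);       % sum of the squares of the residuals
dbsr = diff(sr, b);                % d(sr)/db; its root minimizes sr
F = matlabFunction(dbsr);          % numeric function handle in b
bsol = fsolve(F, 1.0)              % real root near the initial guess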


## Does it make a large difference if we transform data for nonlinear regression models?


## To prove that the regression model corresponds to a minimum of the sum of the square of the residuals

Many regression models, when derived in books, show only the first derivative test to find the formulas for the constants of the regression model.  Here we take a simple example to go through the complete derivation.
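For instance (my choice of the simplest such model for illustration), take the one-parameter model $y = a x$. The sum of the squares of the residuals is

$S_r = \sum_{i=1}^{n} (y_i - a x_i)^2$

The first derivative test sets

$\frac{dS_r}{da} = -2 \sum_{i=1}^{n} x_i (y_i - a x_i) = 0$

which gives $a = \sum_{i=1}^{n} x_i y_i \Big/ \sum_{i=1}^{n} x_i^2$. By itself, this only locates a stationary point. The second derivative,

$\frac{d^2 S_r}{da^2} = 2 \sum_{i=1}^{n} x_i^2 > 0$

(provided at least one $x_i$ is nonzero), confirms that the stationary point is indeed a minimum of the sum of the squares of the residuals.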


## How do I do polynomial regression in MATLAB?

Many students ask me how to do this or that in MATLAB.  So I thought, why not have a small series of my next few blogs do just that?  In this blog, I show you how to do polynomial regression.

• The MATLAB program link is here.
• The HTML version of the MATLAB program is here.
• DO NOT COPY AND PASTE THE PROGRAM BELOW BECAUSE THE SINGLE QUOTES DO NOT TRANSLATE TO THE CORRECT SINGLE QUOTES IN MATLAB EDITOR.  DOWNLOAD THE MATLAB PROGRAM INSTEAD

%% HOW DO I DO THAT IN MATLAB SERIES?
% In this series, I am answering questions that students have asked
% me about MATLAB.  Most of the questions relate to a mathematical
% procedure.

%% TOPIC
% How do I do polynomial regression?

%% SUMMARY

% Language : Matlab 2008a;
% Authors : Autar Kaw;
% Mfile available at
% http://numericalmethods.eng.usf.edu/blog/regression_polynomial.m;
% Last Revised : August 3, 2009;
% Abstract: This program shows you how to do polynomial regression.
clc
clear all
clf

%% INTRODUCTION

disp('ABSTRACT')
disp('   This program shows you how to do polynomial regression')
disp(' ')
disp('AUTHOR')
disp('   Autar K Kaw of https://autarkaw.wordpress.com')
disp(' ')
disp('MFILE SOURCE')
disp('   http://numericalmethods.eng.usf.edu/blog/regression_polynomial.m')
disp(' ')
disp('LAST REVISED')
disp('   August 3, 2009')
disp(' ')

%% INPUTS
% y vs x data to regress
% x data
x=[-340  -280  -200  -120  -40  40  80];
% ydata
y=[2.45  3.33  4.30   5.09  5.72  6.24  6.47];
% Where do you want to find the values at
xin=[-300 -100 20  125];
%% DISPLAYING INPUTS
disp('  ')
disp('INPUTS')
disp('________________________')
disp('     x         y  ')
disp('________________________')
dataval=[x;y]';
disp(dataval)
disp('________________________')
disp('   ')
disp('The x values where you want to predict the y values')
dataval=[xin]';
disp(dataval)
disp('________________________')
disp('  ')

%% THE CODE
% Using polyfit to conduct polynomial regression to a polynomial of order 1
pp=polyfit(x,y,1);
% Predicting values at given x values
yin=polyval(pp,xin);
% This is only for plotting the regression model
% Find the number of data points
n=length(x);
xplot=x(1):(x(n)-x(1))/10000:x(n);
yplot=polyval(pp,xplot);
%% DISPLAYING OUTPUTS
disp('  ')
disp('OUTPUTS')
disp('________________________')
disp('________________________')
dataval=[xin;yin]';
disp(dataval)
disp('________________________')

% Plot first, then set the labels, title, and legend; the plot command
% resets the axes, which would erase labels set before it.
plot(x,y,'o','MarkerSize',5,'MarkerEdgeColor','b','MarkerFaceColor','b')
hold on
plot(xin,yin,'o','MarkerSize',5,'MarkerEdgeColor','r','MarkerFaceColor','r')
plot(xplot,yplot,'LineWidth',2)
hold off
xlabel('x');
ylabel('y');
title('y vs x');
legend('Points given','Points found','Regression Curve','Location','East')
disp('  ')


## Finding the optimum polynomial order to use for regression

Many a time, you may not have the privilege or knowledge of the physics of the problem to dictate the type of regression model. You may want to fit the data to a polynomial. But then how do you choose what order of polynomial to use?

Do you choose the polynomial order for which the sum of the squares of the residuals, $S_r$, is a minimum? If that were the criterion, we could always get $S_r=0$ by choosing the polynomial order to be one less than the number of data points. In fact, the polynomial would then match the data exactly, interpolating rather than regressing.

So what do we do? We choose the degree of polynomial for which the variance, computed as

$\frac{S_r(m)}{n-m-1}$

is a minimum, or beyond which there is no significant decrease in its value as the degree of polynomial is increased. In the above formula,

$S_r(m)$ = sum of the squares of the residuals for the $m$th order polynomial

$n$ = number of data points

$m$ = order of polynomial (so $m+1$ is the number of constants of the model)

Let’s look at an example where the coefficient of thermal expansion is given for a typical steel as a function of temperature. We want to relate the two using polynomial regression.

| Temperature (°F) | Instantaneous Thermal Expansion (1E-06 in/(in °F)) |
|------------------|-----------------------------------------------------|
| 80   | 6.47 |
| 40   | 6.24 |
| 0    | 6.00 |
| -40  | 5.72 |
| -80  | 5.43 |
| -120 | 5.09 |
| -160 | 4.72 |
| -200 | 4.30 |
| -240 | 3.83 |
| -280 | 3.33 |
| -320 | 2.76 |

If a first order polynomial is chosen, we get

$\alpha=0.009147T+5.999$, with $S_r=0.3138$.

If a second order polynomial is chosen, we get

$\alpha=-0.00001189T^2+0.006292T+6.015$, with $S_r=0.003047$.

Below is the table of the order of polynomial $m$, the $S_r(m)$ value, and the variance $S_r(m)/(n-m-1)$:

| Order of polynomial, $m$ | $S_r(m)$ | $S_r(m)/(n-m-1)$ |
|--------------------------|-----------|------------------|
| 1 | 0.3138    | 0.03486      |
| 2 | 0.003047  | 0.0003808    |
| 3 | 0.0001916 | 0.000027371  |
| 4 | 0.0001566 | 0.0000261    |
| 5 | 0.0001541 | 0.00003082   |
| 6 | 0.0001300 | 0.0000325    |

So what order of polynomial would you choose?

From the above table, it looks like the second or third order polynomial would be a good choice, as very little change takes place in the value of the variance after $m=2$.
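For completeness, here is a minimal MATLAB sketch of how such a table can be generated from the data above. The variable names are my own, and polyfit may warn that the higher-order fits are badly conditioned (centering and scaling the temperature data would address that); treat it as a sketch, not the program behind the table.

% Sketch: variance Sr(m)/(n-m-1) for polynomial orders m = 1 to 6
T = [80 40 0 -40 -80 -120 -160 -200 -240 -280 -320];
alpha = [6.47 6.24 6.00 5.72 5.43 5.09 4.72 4.30 3.83 3.33 2.76];
n = length(T);
for m = 1:6
    p = polyfit(T, alpha, m);              % mth order least-squares fit
    Sr = sum((alpha - polyval(p, T)).^2);  % sum of the squares of the residuals
    fprintf('m = %d, Sr = %10.4e, variance = %10.4e\n', m, Sr, Sr/(n-m-1))
end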
