Finding the optimum polynomial order to use for regression

Many times, you may not have the privilege of knowing the physics of the problem well enough to dictate the type of regression model. You may want to fit the data to a polynomial. But then how do you choose what order of polynomial to use?

Do you choose the polynomial order for which the sum of the squares of the residuals, Sr, is a minimum? If that were the criterion, we could always get Sr = 0 by choosing a polynomial order one less than the number of data points. In fact, it would be an exact match.

So what do we do? We choose the degree of polynomial for which the variance, as computed by

Sr(m)/(n-m-1),

is a minimum, or for which there is no significant decrease in its value as the degree of polynomial is increased. In the above formula,

Sr(m) = sum of the squares of the residuals for the mth-order polynomial

n = number of data points

m = order of polynomial (so m+1 is the number of constants of the model)
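The criterion above is straightforward to compute. Here is a minimal sketch using NumPy's least-squares polynomial fit; the function name `variance_criterion` is mine, not from the post:

```python
import numpy as np

def variance_criterion(x, y, m):
    """Return Sr(m) / (n - m - 1) for an mth-order polynomial fit.

    m must be less than n - 1, otherwise the denominator is zero
    (or negative) and the criterion is meaningless.
    """
    n = len(x)
    coeffs = np.polyfit(x, y, m)            # least-squares fit of order m
    residuals = y - np.polyval(coeffs, x)   # residuals at the data points
    sr = np.sum(residuals ** 2)             # Sr(m): sum of squares of residuals
    return sr / (n - m - 1)
```

Sweeping `m` over a range of orders and tabulating the result reproduces the kind of variance column discussed below.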

Let’s look at an example where the coefficient of thermal expansion of a typical steel is given as a function of temperature. We want to relate the two using polynomial regression.


[Figure: instantaneous coefficient of thermal expansion (1E-06 in/(in °F)) vs. temperature for a typical steel]

If a first-order polynomial is chosen, we get

alpha = 0.009147T + 5.999, with Sr = 0.3138.

If a second-order polynomial is chosen, we get

alpha = -0.00001189T^2 + 0.006292T + 6.015, with Sr = 0.003047.

Below is a table of the order of polynomial m, the corresponding Sr value, and the variance, Sr(m)/(n-m-1).

[Table: order of polynomial m vs. Sr(m) and the variance Sr(m)/(n-m-1)]

So what order of polynomial would you choose?

From the above table, and the figure below, the second- or third-order polynomial looks like a good choice, as very little change takes place in the value of the variance after m = 2.
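One way to automate this “no significant decrease” judgment is a relative-drop threshold: keep raising the order until the variance stops falling by more than some fraction. The 10% cutoff and the function name `pick_order` below are my own assumptions; the post deliberately leaves the threshold to the reader’s judgment:

```python
import numpy as np

def pick_order(x, y, max_order, rel_drop=0.10):
    """Pick the lowest order beyond which the variance criterion
    Sr(m)/(n-m-1) stops improving by more than rel_drop (a fraction).

    rel_drop = 0.10 is an arbitrary 10% threshold, not a rule.
    """
    n = len(x)
    var = []
    for m in range(1, max_order + 1):
        c = np.polyfit(x, y, m)
        sr = np.sum((y - np.polyval(c, x)) ** 2)
        var.append(sr / (n - m - 1))        # var[i] corresponds to m = i + 1
    for i in range(1, len(var)):
        # stop once order i + 1 buys less than rel_drop improvement over order i
        if var[i] > var[i - 1] * (1.0 - rel_drop):
            return i
    return max_order
```

For clearly quadratic data this returns an order of at least 2, since the straight-line fit is never within 10% of the parabola’s variance.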

Optimum order of polynomial for regression

This post is brought to you by Holistic Numerical Methods: Numerical Methods for the STEM undergraduate at


Author: Autar Kaw

Autar Kaw is a Professor of Mechanical Engineering at the University of South Florida. He has been at USF since 1987, the same year in which he received his Ph.D. in Engineering Mechanics from Clemson University. He is a recipient of the 2012 U.S. Professor of the Year Award. With major funding from NSF, he is the principal and managing contributor in developing the multiple award-winning online open courseware for an undergraduate course in Numerical Methods. The OpenCourseWare annually receives 1,000,000+ page views, 1,000,000+ views of the YouTube audiovisual lectures, and 150,000+ page views at the NumericalMethodsGuy blog. His current research interests include engineering education research methods, adaptive learning, open courseware, massive open online courses, flipped classrooms, and learning strategies. He has written four textbooks and 80 refereed technical papers, and his opinion editorials have appeared in the St. Petersburg Times and Tampa Tribune.

18 thoughts on “Finding the optimum polynomial order to use for regression”

  1. Hi, how exactly do you calculate Sr? I read your definition of it several times but still have no clue. Please let me know. Thanks.


  2. Very nice article, but I cannot seem to interpret the following sentence:

    We choose the degree of polynomial for which the variance as computed by
    is a minimum or when there is no significant decrease in its value as the degree of polynomial is increased

    So, what does “significant decrease” mean? Is it statistical significance test? Or you just set a threshold, say 0.001, and choose the order when the change in the Sr(m)/(n-m-1) drops below this threshold?
    Could you help me out on this issue?


    1. There is no rule of thumb that I know of. If there is a minimum at a low order of polynomial, that is an indication of the optimum polynomial as well. If there is any threshold, it should be a relative number.


      1. Thanks for your answer!

        I am working to find an algorithmic/automatic way of predicting the order of the polynomial.
        But with not much luck!
        However I found ways to predict the order of the polynomial and accept that it can overestimate the true order… it can do that with 83% accuracy.

        I’m still crunching numbers!


  3. This is a very nice technique, but I cannot understand the meaning of the denominator ‘n-m-1’.
    Is it a weighting?


  4. Hi, this is good article.

    However I cannot understand the meaning of (n-m-1).
    Is this a weighting?

    Also, is there a reference for this article?
    If so, please share it.

    Thank you.


  5. Hi, thank you for this informative article. However, I don’t understand why n-m-1 is used as the denominator instead of n-1.


    1. Rewrite n-m-1 as n-(m+1). The numerator is Sr for a polynomial of order m. So when n=m+1, then Sr=0. So as m increases, Sr, and n-(m+1) both decrease. And since we want something that is optimum, we give them equal weight. Plot n-m-1 vs m and Sr vs m separately to see what happens.
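A quick numerical check of this reply’s exact-fit case (a sketch; the four data points are made up for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])       # n = 4 data points
y = np.array([2.0, 3.0, 5.0, 4.0])
c = np.polyfit(x, y, len(x) - 1)         # order m = n - 1 = 3
sr = float(np.sum((y - np.polyval(c, x)) ** 2))
# sr vanishes (up to round-off): the polynomial passes through every point,
# which is exactly when n - (m + 1) = 0
```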


    1. If Sr is given all the weight, then the optimum order of polynomial m is n-1. This is the case where the polynomial goes through all the data points and Sr=0. But regression is all about finding a simplified curve to represent the data.


  6. That’s a great note. I just want to ask on the following:
    Can the optimal degree be determined by just looking at the Peaks and Troughs?
    I’m asking because what if I don’t have software to do so, and I’m just running out of time?

    I tried this method but the variance behaviour is so confusing. From order one to order ten, the variance just goes up and down and up and down. Can you advise me on this?


  7. Hello Autar Kaw!
    I have one question about using the “sum of squared residuals” as a criterion for finding the optimal order. When we increase the order of the polynomial, we always decrease the sum of the squares of the residuals until we get an overtrained model (the polynomial will cross all the training points). Is this overtraining? And how can I tell when I need to stop?


    1. That is why n-(m+1) is used to compensate for the decreasing residual. Keep in mind, this is just a criterion that makes sense. Its validity is not proven.


  8. What about this situation:

    There is a 5% fall in variance from X^1 to X^2, but there is a 43% fall in variance from second to third order.

    I’m using 10 observations to find the right order to explain the data.

    Should I just focus on the first half (first to fifth order) and analyze the behaviour of their variances, from your formula SSR/(n-m-1)?

    Please advise.



    1. Unless you have a better criterion, SSR/(n-m-1) seems to be a good objective function. How much fall you look for in this number is mostly in the eye of the beholder as we are not looking for a minimum.

