How to Tell Apart Two Regression Models with Statistical Significance

-

Diving into the F-test for nested models with algorithms, examples and code

When analyzing data, one often needs to compare two regression models to determine which one best fits a given dataset. Often, one model is a simpler version of a more complex model that includes additional parameters. However, more parameters do not always guarantee that the more complex model is actually better, as they may simply overfit the data.

To determine whether the added complexity is statistically significant, we can use what is called the F-test for nested models. This statistical technique evaluates whether the reduction in the Residual Sum of Squares (RSS) achieved by the extra parameters is meaningful or simply due to chance.
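The article's full implementation is in Matlab, but the core computation is short enough to sketch directly. Here is a minimal Python illustration of the F-test for nested models (the function name, variable names, and the line-vs-quadratic example are my own, not from the article): the F-statistic is the RSS reduction per extra parameter, divided by the complex model's residual variance, compared against an F distribution.

```python
import numpy as np
from scipy import stats

def f_test_nested(rss_simple, p_simple, rss_complex, p_complex, n):
    """F-test for nested models: is the RSS reduction gained by the extra
    parameters of the complex model larger than chance alone would give?

    rss_simple, rss_complex : residual sums of squares of the two fits
    p_simple, p_complex     : number of parameters of each model (p_complex > p_simple)
    n                       : number of data points
    """
    # Improvement in fit per extra parameter, relative to the
    # residual variance of the complex model
    f_stat = ((rss_simple - rss_complex) / (p_complex - p_simple)) \
             / (rss_complex / (n - p_complex))
    # Right-tail probability under the F distribution with
    # (p_complex - p_simple, n - p_complex) degrees of freedom
    p_value = stats.f.sf(f_stat, p_complex - p_simple, n - p_complex)
    return f_stat, p_value

# Example: line (2 parameters) vs. quadratic (3 parameters) fitted to
# synthetic data that contains a genuine quadratic trend
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0.0, 1.0, x.size)
rss1 = np.sum((y - np.polyval(np.polyfit(x, y, 1), x)) ** 2)
rss2 = np.sum((y - np.polyval(np.polyfit(x, y, 2), x)) ** 2)
f_stat, p_value = f_test_nested(rss1, 2, rss2, 3, x.size)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")  # a small p favors the quadratic
```

A small p-value means the extra parameters reduce the RSS by more than chance would explain, so the complex model is justified; a large p-value says the simpler model is adequate.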

In this article I explain the F-test for nested models, then present a step-by-step algorithm, demonstrate its implementation using pseudocode, and provide Matlab code that you can run right away or re-implement in your favorite system (I chose Matlab because it gave me quick access to statistics and fitting functions that I did not want to spend time writing myself). Throughout the article we will see the F-test for nested models at work in a few settings, including some examples built into the example Matlab code.
