
Friday, November 8, 2013

Significant Model Improvement - F change method

Previously I have written about the AICc, a means of testing which model offers the best trade-off between complexity and explanatory power. While the AICc can rank models from most efficient to least efficient, it cannot tell you whether the improvement between models is statistically significant. If you want to make a claim about the statistical significance of changes to your model you need an alternative approach. One of the most straightforward is the F change statistic.

The F change statistic operates in much the same way as a standard F statistic. Rather than taking the ratio of explained variance to unexplained variance, however, you take the ratio of the change in explanatory power between the two models to the variance left unexplained by the more complex model.

In order to calculate the F change statistic you will need to know the residual sum of squares (RSS) for each model, the number of parameters (K) in each model (models 1 and 2 here), and the number of observations (n) you have.

F = ((RSS1 - RSS2) / (K2 - K1)) / (RSS2 / (n - K2))

The degrees of freedom for the resulting F statistic are K2 - K1 (numerator) and n - K2 (denominator).
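
If you would rather not do this by hand, the calculation is simple to script. Below is a minimal sketch in Python (the function name f_change and the numbers in the example are my own, hypothetical choices) that computes the statistic from the formula above and looks up a p-value from the F distribution using scipy:

from scipy import stats

def f_change(rss1, rss2, k1, k2, n):
    # F change statistic comparing a simpler model (1)
    # against a more complex model (2)
    df1 = k2 - k1          # numerator degrees of freedom
    df2 = n - k2           # denominator degrees of freedom
    f = ((rss1 - rss2) / df1) / (rss2 / df2)
    # upper-tail p-value; abs() lets the negative-F case discussed
    # in the bonus note below still be tested
    p = stats.f.sf(abs(f), df1, df2)
    return f, p

# hypothetical example: 100 observations, model 1 with 3 parameters
# (RSS = 240), model 2 adding 2 parameters (RSS = 210)
f, p = f_change(rss1=240.0, rss2=210.0, k1=3, k2=5, n=100)
print(f)   # about 6.79 on 2 and 95 degrees of freedom
print(p)   # about 0.002, so the improvement is significant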

BONUS: By inspecting the formula given here you should be able to see how a more complicated model with a higher RSS would produce a negative F statistic. Because the F distribution is built from squared (and therefore non-negative) quantities, F statistics can only be positive. In such a case you would consider the absolute value of the F statistic (i.e. -3 becomes 3). If such a result were found to be significant, it would tell you that the more complicated model was significantly worse than the less complicated model, rather than the other way around. This also helps to show that the F distribution can be used for both one-tailed and two-tailed tests despite its asymmetry.
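
To see the negative F case described above in action, you could feed the same hypothetical f_change function a pair of models where the more complicated one actually fits worse (i.e. its RSS is higher):

# more complicated model fits worse: RSS rises from 210 to 240
f, p = f_change(rss1=210.0, rss2=240.0, k1=3, k2=5, n=100)
print(f)   # about -5.94; compare |F| = 5.94 against F(2, 95)
print(p)   # about 0.004, so model 2 is significantly worse here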
