The F change statistic operates in much the same way as a standard F statistic. In this case, rather than taking a ratio of the explained variance to the unexplained variance, you are taking a ratio of the change in explanatory power between the two models to the unexplained variance.
In order to calculate the F change statistic you will need the residual sum of squares (RSS) for each model, the number of parameters (K) in each model (models 1 and 2 here), and the number of observations (n).
F = ((RSS1 - RSS2) / (K2 - K1)) / (RSS2 / (n - K2))
The degrees of freedom for the resulting F statistic are K2 - K1 and n - K2.
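As a minimal sketch, here is how the calculation might look in Python. The helper function f_change and the RSS values in the example are hypothetical placeholders; it assumes you have already fit two nested models and extracted their residual sums of squares.

import numpy as np
from scipy import stats

def f_change(rss1, rss2, k1, k2, n):
    """F change statistic for comparing two nested models.

    rss1, rss2 : residual sums of squares for the simpler (1) and
                 more complicated (2) model
    k1, k2     : number of parameters in each model
    n          : number of observations
    """
    df1 = k2 - k1          # numerator degrees of freedom
    df2 = n - k2           # denominator degrees of freedom
    f = ((rss1 - rss2) / df1) / (rss2 / df2)
    p = stats.f.sf(f, df1, df2)   # upper-tail p-value
    return f, df1, df2, p

# Hypothetical example: model 2 adds one predictor to model 1
f, df1, df2, p = f_change(rss1=120.0, rss2=95.0, k1=2, k2=3, n=50)
print(f"F({df1}, {df2}) = {f:.2f}, p = {p:.4f}")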
BONUS: By inspecting the formula given here you should be able to see how a more complicated model with a higher RSS would produce a negative F statistic. Because the F distribution is a squared distribution, F statistics can only be positive, so in such a case you would consider the absolute value of the F statistic (i.e. -3 becomes 3). If that result were found to be significant, it would tell you that the more complicated model was significantly worse than the less complicated one, rather than the other way around. This also helps to show that the F distribution can be used for both one-tailed and two-tailed tests despite its asymmetry.
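Continuing the hypothetical sketch above, you can see this happen by swapping the RSS values so the more complicated model fits worse:

# If the more complicated model fits worse (rss2 > rss1), the numerator
# of the F change formula goes negative and so does F.
f, df1, df2, p = f_change(rss1=95.0, rss2=120.0, k1=2, k2=3, n=50)
print(f)                               # negative value
print(stats.f.sf(abs(f), df1, df2))    # p-value for the absolute value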