How do we know what size of F value to deem as significant? We can look it up in the table (as we will anyhow), but what does this mean?

The F test is one example of a statistical test that determines how unlikely your result would have been if the two values you compared really weren’t different. Think of it this way: even if the variances of dorsal and anal fin-ray counts *were not* different in the whole population of fluffy sculpins, there is not a very good chance that in our *sample* of 30 fish the F ratio would turn out to be exactly 1.0 (that is, that the two variances equal each other exactly). Just by chance, through the luck of the draw, we might get different variances in our samples. But common sense tells us that small differences (and F values not very different from 1.0) would come up quite often if the variances were the same, while big differences (and F values quite different from 1.0) would come up less commonly. So how big a difference in variances (how big a value of F) do we deem as indicating that the two variances are different, beyond a reasonable doubt? Here we depend on the mathematical wizardry of statisticians, who have worked out the chances of getting particular values of F if the two variances really weren’t different. This is what we get from the F table.
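This chance variation can be seen directly with a small simulation. The sketch below is hypothetical code (not from this manual), and it assumes a normal population purely for illustration: it repeatedly draws two samples of 30 from the *same* population and computes the F ratio of their sample variances. The ratios scatter around 1.0 rather than equaling it exactly.

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is repeatable

def simulated_f_ratios(n=30, trials=5):
    """Draw pairs of samples from the SAME normal population and
    return the F ratio of their sample variances (larger / smaller)."""
    ratios = []
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        v_a = statistics.variance(a)
        v_b = statistics.variance(b)
        ratios.append(max(v_a, v_b) / min(v_a, v_b))
    return ratios

# Even though the two population variances are identical,
# the sample F ratios differ from 1.0 by the luck of the draw:
print([round(r, 2) for r in simulated_f_ratios()])
```

None of the printed ratios will be exactly 1.0, yet most will be fairly close to it, which is exactly the intuition the F table formalizes.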

The F table is organized according to the degrees of freedom in the numerator and denominator of the F ratio (the degrees of freedom depend on the sample sizes). Find the column of tabulated F values for *v*_{1} = 30 (the closest value to 29) and *v*_{2} = 29. The tabulated values of F in this column range from smaller numbers at the top of the column to larger numbers at the bottom of the column. At the margin of the table is a corresponding column of decimal fractions, ranging from 0.75 to 0.001. These values are probabilities, figured out by the mathematical wizards. They represent the chance of getting a calculated F value that large or larger, if the real (population) variances were the same. Note that there would be a 50:50 chance of getting a calculated F value of 1.0 or larger, if the real variances are equal to each other. This makes sense, since if the variances are equal, you should get F ratios around 1.0. As you move down the column, you see that larger F values would be less common: a calculated F value of 1.62 or larger would happen only 10% of the time (if the underlying variances were equal), a calculated F value of 1.85 or larger would happen only 5% of the time, and a calculated F value of 2.41 or larger would happen only 1% of the time. We use these probabilities to decide whether our results would happen very often if the real variances were equal. If our calculated F ratio turned out to be uncommonly large, we could make the decision that maybe the variances really aren’t equal. In many areas of biology, we arbitrarily use the 5% level of probability: if the F ratio we calculated would happen only 5% of the time when the variances really were equal, we proceed with the assumption that the variances really are different, and accept a 5% chance of being wrong.
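The same probabilities that the table prints can be pulled from the F distribution directly. A sketch, assuming SciPy is available (`scipy.stats.f.ppf` returns the F value at a given cumulative probability; the degrees of freedom 30 and 29 match the column used above):

```python
from scipy.stats import f

df1, df2 = 30, 29  # degrees of freedom: numerator, denominator

# Critical F values: a calculated F would exceed these only 10%,
# 5%, and 1% of the time if the true variances were equal.
# They come out close to the tabulated 1.62, 1.85, and 2.41.
for tail in (0.10, 0.05, 0.01):
    print(f"{int(tail * 100)}% level: F = {f.ppf(1 - tail, df1, df2):.2f}")
```

Printed F tables are just these quantiles evaluated over a grid of degrees of freedom, which is why the computed values land on (or very near) the numbers read from the table.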
Therefore, we might use the tabulated F value of 1.85 as our **critical value**, and if our *calculated* F value is larger, we would reject the notion (known as the **null hypothesis**, or **H_{0}**) that the variances are equal. In short:

**If F_{calculated} > F_{critical}, H_{0} is rejected.**

**If F_{calculated} ≤ F_{critical}, H_{0} cannot be rejected.**
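The decision rule reduces to a single comparison. A minimal sketch (a hypothetical helper, not part of this manual), using the 5% critical value from the table:

```python
def f_test_decision(f_calculated, f_critical):
    """Apply the decision rule: reject H0 (equal variances)
    only if the calculated F exceeds the critical F."""
    if f_calculated > f_critical:
        return "H0 is rejected"
    return "H0 cannot be rejected"

print(f_test_decision(2.10, 1.85))  # prints "H0 is rejected"
print(f_test_decision(1.30, 1.85))  # prints "H0 cannot be rejected"
```

Note that "cannot be rejected" is not the same as "H0 is true": a small F ratio only means the data give us no grounds, at our chosen 5% level, to conclude the variances differ.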