I asked my father, a retired statistician and lecturer, to look at the latest school performance results (available here) and at the apparent significant drop (65% to 54%) in Key Stage 2 pupils achieving Level 4 or above in reading, writing and maths. This is his report:
Statistics can be a very useful tool, but all too often they can be misleading or misinterpreted. The School Results case is not as straightforward as it may appear.
The use of percentage results gives a false impression, and masks the fact that only small numbers of pupils are involved.
For a class size of 24, the 2013 result of 54% translates to 13 pupils. For the same class size, the 2012 figure of 65% gives 16 pupils (or 15 if the class had only 23).
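The conversion from headline percentages to pupil counts can be checked directly. A minimal sketch, assuming (as the report does) illustrative class sizes of 23 and 24, which are not published figures:

```python
# Convert the headline percentages (65% in 2012, 54% in 2013) into
# pupil counts for small, hypothetical class sizes.
for year, pct in [(2012, 65), (2013, 54)]:
    for size in (23, 24):
        pupils = round(pct / 100 * size)
        print(f"{year}: {pct}% of a class of {size} is about {pupils} pupils")
```

Rounding 65% of 24 gives 16 pupils (or 15 for a class of 23), and 54% of 24 gives 13 pupils, matching the counts above.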
Since this is only a difference of 2 or 3 pupils it is easy to suggest that this could simply be due to the presence of more ‘underachievers’ in 2013 – or conversely extra ‘high-fliers’ the year before.
This could be true – but it is not the only possible explanation. Even in exactly uniform situations, numerical measurements on samples can (and do) vary by chance. This is easy to demonstrate if need be.
Small samples can show differences that appear surprisingly large, while larger samples still show sizeable variations, but ones that represent smaller percentages.
The school case in question involves small samples (13 or 16 successes out of 24), so we need to consider whether the difference is meaningful, or possibly due to chance variation.
There is a routine statistical method for analysing such a situation. Applied to the case in question, it shows a substantial (more than 10%) probability of the observed difference occurring by chance.
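The report does not name the method; one standard choice for comparing two small proportions is Fisher's exact test. A sketch from first principles, using the illustrative counts above (16 of 24 in 2012 versus 13 of 24 in 2013, with the class sizes assumed, not published):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table:
           success  failure
    2012:     a        b
    2013:     c        d
    """
    row1, col1, n = a + b, a + c, a + b + c + d
    denom = comb(n, row1)

    def prob(k):
        # P(k successes in the 2012 row), hypergeometric under the null
        return comb(col1, k) * comb(n - col1, row1 - k) / denom

    p_obs = prob(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    # Sum the probabilities of all tables at least as extreme as the observed one
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-12))

p_value = fisher_exact_two_sided(16, 8, 13, 11)
print(f"p = {p_value:.3f}")
```

The resulting p-value is comfortably above 0.10, consistent with the report's conclusion that chance alone is a perfectly plausible explanation.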
Hence it is not statistically justifiable to say that the difference has a ‘cause and effect’ explanation. It might have, but the statistics do not provide evidence.
JOD 11 January 2013
In summary – the statistics cannot prove that the drop in pupils reaching the target had any cause at all.