All I knew about the next session of the interview was that it was called “a discussion with the headteacher on a topic of his choosing”. It was my first or second interview for the post of deputy headteacher (vice principal) in London, so I was not sure whether this was normal.
The headteacher was seated at a desk and I was invited to sit opposite him. A number of school governors observed. The headteacher asked me what I would do if a head of subject requested to use ‘ability grouping’, i.e. placing students into different maths, science or history classes based upon their prior level of attainment. I gave a pragmatic answer. At that time, I was not well versed in education research, but I did know that the research on ability grouping was inconclusive: whether you group by ability or not, it doesn’t make much difference. I suggested that I would probe and question to understand the head of subject’s position and ensure that it had been thought through, but that I would ultimately be guided by their preference as the subject expert.
This was the wrong answer. The headteacher replied with something of an ideological rant on the evils of ability grouping. I held my pragmatic line.
At the end of the session, I sought out the chair of governors, withdrew from the interview process and went back to school to teach my classes.
The evidence available then is much as it is now. Ability grouping versus mixed ability teaching really doesn’t seem to make much difference. If anything, there may be small gains for the most able students and small losses for the least able. If so, never has such an inconsequential position been held with such furious passion by so many.
However, this evidence is not exactly gold standard and is the subject of debate. Most of it has been correlational or quasi-experimental, with very little coming from randomised controlled trials. That makes it hard to disentangle the effect of grouping from other factors. For instance, in the high-profile Kulik and Kulik (1982) meta-analysis, only 13 of the 51 studies used random assignment. In the wild, it is also possible that the effect of ability grouping could be confounded by practices such as assigning a disproportionate number of new and inexperienced teachers to lower ability groups. That is why I welcomed the initiation, in 2014, of a large-scale project to evaluate the effect of ability grouping using randomised controlled trials. The project, ‘Best practice in grouping students’, was headed by Professor Becky Francis of the UCL Institute of Education, funded by the UK’s Education Endowment Foundation, and aimed to test the effectiveness of the best possible versions of ability grouping and mixed ability teaching.
My enthusiasm dimmed when I realised that mixed ability teaching and ability grouping were not going to be compared directly with each other. Instead, the ‘best’ versions of each were to be compared with a control.
My enthusiasm was finally extinguished when, prior to the release of the effectiveness data, the research team published one of those ubiquitous French-philosopher-inspired papers labelling ability grouping as ‘symbolic violence’. How could any results now be regarded as impartial?
When the results finally did arrive, they consisted of two nulls: there was no effect of the ‘best’ version of ability grouping compared to the control, and no effect of the ‘best’ version of mixed ability grouping compared to the control. So we are back to square one, provided you accept the data. The total cost of the project was £1,184,349.
In a truly extraordinary move, the Education Endowment Foundation have committed £850,000 to having another go. Becky Francis is still involved, although she is not the project lead, and this time it looks like they will eschew randomised controlled trials in favour of a quasi-experimental study, an odd choice for an organisation with a stated commitment to randomised controlled trials.
This is a reversal of the normal situation. Usually, a quasi-experimental design of this kind might be conducted as a pilot study prior to going to a randomised controlled trial. It would be small-scale and presumably cost far less than £850,000. To progress from a randomised controlled trial to a quasi-experimental study seems eccentric. Mind you, the weaker trial design is perhaps more likely to lead to a result one way or the other.
I am wondering whether, this time, the result will favour mixed ability teaching. What are the chances?