Few educational interventions have generated as many studies as Reading Recovery.
Reading Recovery involves giving struggling readers one-to-one tuition. I have no doubt that it has an effect, as any one-to-one intervention would. Some people claim that no other intervention has as much evidence of effectiveness at scale. That may be true, but it is likely to be because no other intervention has been tested so much at scale.
When I suggest that it has an effect, I need to be careful about what I mean. Most studies compare Reading Recovery to doing nothing, and the students in the ‘do nothing’ control group have usually had a diet of so-called ‘balanced literacy’ – the whole-language wolf hiding in the Grandma’s bed of phonics. I have written before that we can’t be sure why Reading Recovery works. It could be the proprietary techniques taught to Reading Recovery teachers or it could just be the effect of one-to-one tuition. Indeed, other forms of one-to-one tuition seem to be more effective than Reading Recovery.
It would therefore be really useful if an organisation such as the Education Endowment Foundation used its extensive taxpayer-funded resources to run a three-armed randomised controlled trial testing Reading Recovery against a one-to-one systematic synthetic phonics programme and against a control. My prediction would be that both interventions would outperform the control but the phonics intervention would outperform Reading Recovery. Unfortunately, the Education Endowment Foundation seems more interested in kooky Philosophy for Children and in trying to prove the evils of ability grouping.
In the meantime, we can expect to see more studies like a new one conducted on behalf of the KPMG Foundation. On the surface, the results of Reading Recovery seem extraordinary. For instance:
“49% of the Reading Recovery group achieved the nationally expected level of qualification for educational progression (5 or more GCSEs at the former A* to C grades, including English and Maths, equivalent to grades 8 to 4 in the current system), compared to a national average of 54% for all pupils in the same year. Only 23% of the comparison group reached this level.”
Although extraordinary, it is plausible that an early reading intervention could have such a profound effect. After all, academic learning relies heavily on reading. It is a foundational skill. Unfortunately, the study does not provide evidence to justify such a conclusion because of the way it was designed. It was not a randomised controlled trial and it was not even a good example of a quasi-experimental study.
Researchers identified 148 struggling readers in schools that did not offer Reading Recovery and 145 struggling readers in schools that did offer it. Teachers then selected just under two thirds of the students (91) in the Reading Recovery schools to receive the intervention. They then compared the results of these 91 students with the 148 in the comparison schools. Can you see the problem?
You either need to compare the results of all 145 of the initially identified students with the 148 in the control, or you need to select roughly two thirds of the control using the same criteria with which you selected the Reading Recovery students and compare this cohort with the 91 students who had the intervention.
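To see why comparing a selected subgroup with a full comparison group is misleading, here is a minimal sketch with invented numbers. It assumes, purely for illustration, that the intervention has no effect at all and that later GCSE success depends only on a baseline ‘ability’ score; any difference in pass rates is then a pure selection artefact:

```python
import random

random.seed(42)

# Hypothetical baseline 'ability' scores (invented distributions).
# Assume the intervention has NO effect whatsoever.
comparison = [random.gauss(0, 1) for _ in range(148)]         # all 148 kept
intervention_pool = [random.gauss(0, 1) for _ in range(145)]  # 145 identified

# Suppose teachers select the 91 strongest of the 145 (the pattern some
# critics allege), and the 54 weakest are dropped from the analysis.
selected = sorted(intervention_pool, reverse=True)[:91]

# Outcome: a pupil 'passes' if baseline ability clears an arbitrary cutoff.
pass_rate = lambda group: sum(a > -0.2 for a in group) / len(group)

print(f"comparison (all 148): {pass_rate(comparison):.0%}")
print(f"selected (91 of 145): {pass_rate(selected):.0%}")
print(f"full pool (all 145):  {pass_rate(intervention_pool):.0%}")
# The selected 91 look far better than the comparison group even though
# nothing was done to them. Comparing all 145 with all 148 removes the
# artefact.
```

The point is not that this is what happened, only that the design cannot distinguish a genuine effect from this kind of artefact.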
The authors claim that the 91 students who were selected were those who were the most in need of the intervention:
“It was not possible to offer Reading Recovery to all the children in Reading Recovery schools. Of the 145 children in Reading Recovery schools, 91 received Reading Recovery (though not all were successfully discontinued), 54 did not. The selection of children to receive Reading Recovery is made by the teacher and teacher leader, informed by children’s performance on the assessments and on age (the lowest achieving children are prioritised, and older children often taken first).”
If this is true then including the additional 54 students would likely only improve the GCSE results for this cohort. On the other hand, if the addition of these extra 54 students washed out any gains, we might conclude that Reading Recovery has no net effect. After all, there is nothing wrong with an intervention that only works for a proportion of the students, but we need to know whether this is the case or whether the net effect is actually zero. And there is reason to believe that it might be. There are those who suggest, for instance, that contrary to the claims made about selection in this report, it is the more able students who often end up in a Reading Recovery intervention.
None of this speculation would be relevant if we could see data for all 145 students rather than 91, but we cannot.
A fundamental principle of science is that we compare like with like. There is a hint in the data that the 91 Reading Recovery students were also more affluent than the 148 in the comparison group. Only 43% qualified for free school meals, compared with 62% in the comparison group. The researchers tried to control for this by running separate statistical tests on the free-school-meal and non-free-school-meal populations of each sample, i.e. comparing free-school-meal Reading Recovery students with free-school-meal control students. I’m not sure that quite solves the problem of the mismatched groups because wealth is a continuous variable whereas eligibility for free school meals is binary (i.e. the free-school-meal students in the comparison group could still be less affluent, on average, than the free-school-meal students in the Reading Recovery group). There were also slightly more boys in the control group (65% versus 60%). Clearly, socioeconomic status and gender impact on reading outcomes and so they may have been a factor here. These problems would have been avoided by randomisation.
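The binary-cut problem can be illustrated with a small sketch. Everything here is invented: free-school-meal (FSM) status is modelled as a cut on an underlying continuous ‘affluence’ scale, and the two groups are drawn from slightly different populations. Large samples are used so the population-level point shows clearly:

```python
import random

random.seed(0)

# Hypothetical continuous 'affluence' scores (invented distributions).
# Suppose the Reading Recovery group is drawn from a slightly richer
# population than the comparison group.
rr_affluence = [random.gauss(0.5, 1) for _ in range(10_000)]    # richer
comp_affluence = [random.gauss(-0.5, 1) for _ in range(10_000)]  # poorer

# FSM status is a binary cut on the continuous scale.
FSM_CUTOFF = 0.0
rr_fsm = [a for a in rr_affluence if a < FSM_CUTOFF]
comp_fsm = [a for a in comp_affluence if a < FSM_CUTOFF]

mean = lambda xs: sum(xs) / len(xs)
print(f"mean affluence, FSM pupils in RR group:         {mean(rr_fsm):.2f}")
print(f"mean affluence, FSM pupils in comparison group: {mean(comp_fsm):.2f}")
# Even after restricting the comparison to FSM pupils only, the comparison
# group's FSM pupils remain poorer on average: conditioning on the binary
# indicator does not equalise the underlying continuous variable.
```

This is why matching on a coarse binary proxy is a weak substitute for randomisation.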
There is also some interesting data on special educational needs presented as part of the results of the study. For instance, at age 14, “There were significantly fewer Reading Recovery pupils with a SEN status (35%) than comparison group pupils (52%).” We are invited to believe that this was an effect of the Reading Recovery intervention, but it may actually represent an underlying difference between the populations in the intervention and control groups. Many special educational needs take time to be identified, and these may well have been present but unidentified, or latent, at the time of the initial allocation of students to groups.
In short, this new study demonstrates nothing much, even if we are inclined to believe that Reading Recovery has some effect.
The reason it is necessary to critique studies of this kind is that there are so many of them. As they pile up, commentators claim that no other reading intervention has generated such a wealth of positive evidence, and the individual studies get buried behind Hattie- or Education Endowment Foundation-style ‘effect sizes’ that teachers and school leaders take as evidence of effectiveness.
But it is not evidence. It is a house of cards.