The effect of Reading Recovery

Earlier this year, Horatio Speaks wrote a blog post about Reading Recovery and its derivative, ‘Switch-On Reading’. I didn’t pick this up at the time but it has come to my attention due to the subsequent discussion. Stephen Gorard, prior to making points about anonymous bloggers that I would reject, made a valid argument about effect size. This is something that keeps coming up and so I’d like to address it.

Basically, most education research is badly designed: controls are poor, attrition rates are high, samples are rarely random, and so on. Rather than reject pretty much all of the research on these grounds – the ‘What Works Clearinghouse’ strategy – John Hattie made the case in his 2009 book for taking it into account while setting a reasonably high bar for the magnitude of any effects. From quantitative studies it is usually possible to calculate an ‘effect size’. A size of 0.0 means no effect and a size above 1.0 would be most extraordinary. Hattie sets the bar at 0.4 for effects worth considering.
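To make the idea concrete, here is a minimal sketch of how an effect size is typically calculated from a quantitative study: the difference between the treatment and control group means, divided by the pooled standard deviation (Cohen’s d, one common formulation). The test scores below are made up purely for illustration.

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference between two groups (Cohen's d)."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pooled standard deviation across both groups
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical reading-test scores for an intervention group and its controls
intervention = [52, 55, 58, 61, 49, 57, 60, 54]
control = [54, 57, 52, 49, 58, 55, 50, 53]
print(round(cohens_d(intervention, control), 2))
```

The point of the standardisation is that effects measured on different tests become comparable on a common scale – which is precisely why it matters whether the studies feeding that scale were well controlled.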

The problem is that Hattie treats poorly controlled studies and well-designed studies in the same way. This means that the effect size of worked examples at 0.57 is not really comparable with other effect sizes, because the worked-example experiments were proper randomised controlled trials (RCTs). So the ranking of effects that Hattie generates is dubious (I won’t get into the more general debate about the usefulness and validity of effect sizes here).

When the Education Endowment Foundation conduct proper RCTs of various interventions, it is therefore a little unfair to insist that the effect sizes should be above 0.4. Anything above 0.0 is worth considering in this case, a point that Gorard makes. An additional four months of progress for students in a reading intervention compared to their peers is worth having. This was the finding of the Switch-On Reading study which generated an effect size of 0.24.

However, before you rush out and sign your school up for Switch-On Reading, you might want to consider that the study that was conducted was a complete waste of time.

Reading Recovery style interventions have been evaluated many times in a broadly similar way and so the results could quite easily have been predicted. Yet this does not mean that Reading Recovery is effective. Far from it.

The problem is the control group. We virtually always see Reading Recovery compared with no intervention at all. It seems plausible to me that any series of 20 minute one-to-one reading sessions with a capable other would have some effect on reading. And such sessions could be quite cheap and easy for schools to arrange.

When I was at primary school I was involved in a type of intervention like this. I can’t recall the name of it and so I’m unable to search the literature for the evidence. I was in about Year 4 or 5 and a group of us gave tuition to students who must have been in Year 1 or 2. The little ones would read to us. When they got stuck we simply told them the word; we were absolutely forbidden from helping them sound it out. The horror! Imagine that, we could have killed the love of reading that these struggling, disengaged readers had.

I wouldn’t be surprised if this intervention had an effect size of about 0.24. However, it would be much cheaper than Reading Recovery with its requirement for specially-trained teachers.

What we really need to know about Reading Recovery is whether it has any effect over and above that of any other kind of one-to-one tuition. If not, we can dispense with it and just go for the tuition.


5 thoughts on “The effect of Reading Recovery”

  2. There are two kinds of effect size. A relative one, where you calculate the amount learned by one teaching method *compared* to the next best teaching method, and an absolute one, where you calculate the amount learned by one teaching method compared to no teaching at all. There are very different “good” results for both. A good result for the relative one would be anything better than zero, because that would show the teaching method was better than the next best thing. A good result for the absolute method would be anything better than the average rate of learning, which Hattie has put at 0.40. Hattie mixes up both with no regard in his book because he doesn’t know what he’s doing and doesn’t realise they give different answers. The EEF generally uses the relative one, where anything better than zero is “good”, and this may be why Stephen Gorard is getting confused.
    More here – https://ollieorange2.wordpress.com/2014/08/10/the-two-kinds-of-effect-size/

  3. A very clear piece, Greg. As I said in my recent post, I am happy to accept that I didn’t make a clear distinction between different approaches to effect sizes. However, I don’t agree that four additional months’ progress is acceptable as an outcome for daily intervention lessons over four months. Very simple arithmetic shows that some students may never catch up.

    I remain alarmed that education “experts” think that the rate of progress cited in the study is “normal”. It may be common, but I would hate to think it was seen as acceptable. Skilled reading teachers know that far more can be achieved in the same time frame. See for example, John Walker’s comment below “Vested Interests”: https://horatiospeaks.wordpress.com/2015/10/17/vested-interests/comment-page-1/#comment-289
