I’ve had an idea.
In education, we are surrounded by pseudoscience: spurious diagrams of the brain, eccentric research approaches, and all of it wrapped up in politics. Nobody would claim that it’s somehow ‘right-wing’ to dismiss homeopathy – indeed, alternative medicine is often associated with the privileged classes. Yet if you challenge alternative education, you can expect to attract exactly that label.
Medicine is not perfect, but the reason it has made more progress than education is that it has a sounder evidence base. So that’s what we need. Unfortunately, this is where we hit a major problem: Everything works.
It was John Hattie who made this claim in his 2009 book, Visible Learning. All education interventions appear to work because of inherent problems with study design: it is very hard to construct an educational experiment in which the participants are blind to the fact that they are receiving an intervention, and this affects expectations. The teacher or students might try a little harder, or simply think about the subject content a little more.
It was also Hattie who proposed a way around this. If everything works, then let’s look at the size of the effect. By comparing effect sizes, we can see what works best: the interventions whose effect sizes are large enough that they are unlikely to have arisen from the subjects’ expectations alone. Hattie set an arbitrary cut-off of 0.4 of a standard deviation.
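For concreteness, the effect sizes in question are standardised mean differences (Cohen’s d): the gap between the intervention and control group means, divided by the pooled standard deviation. Hattie aggregates many different meta-analyses, so take this as the textbook definition rather than a description of every study he cites:

```latex
d = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

On Hattie’s reading, anything above d = 0.4 deserves attention; below that, the apparent gain may be nothing more than expectation and ordinary maturation.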
The trouble is that you can’t really do this. Effect sizes from different experiments aren’t directly comparable. For instance, effect sizes tend to be larger in studies of young children or of selective cohorts of students.
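The reason is in the denominator. An effect size divides the raw gain by the spread of scores, so any sample with a narrower spread – young children, or a cohort selected into a tight ability band – inflates the result. A toy example with invented numbers: suppose two trials each produce the same raw gain of 5 test points, but one samples a full-range population (standard deviation of 20) while the other samples a selective cohort (standard deviation of 10):

```latex
d_{\text{full range}} = \frac{5}{20} = 0.25,
\qquad
d_{\text{selective}} = \frac{5}{10} = 0.50
```

Same intervention, same raw gain, double the effect size, purely because of who happened to be in the sample. One clears Hattie’s 0.4 bar and the other doesn’t.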
So here’s the idea. Let’s mobilise the resources of groups like the Education Endowment Foundation in the U.K. and Evidence for Learning in Australia to run a different kind of trial: a trial that follows the model of one of my favourite papers.
Instead of having a control group and an intervention group – an AB design – we should run trials with one control group and two competing intervention groups – an ABC design. Both interventions would need to be championed by researchers committed to them, and both would need equal resources. We could then see which of the two works best. The comparison would be fair because it would be made within a single experiment, against the same control group.
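To make the design concrete, here is a minimal simulation sketch in Python – all names and parameters invented for illustration – showing an ABC trial in which each intervention’s effect size is computed against the same shared control arm:

```python
import numpy as np

rng = np.random.default_rng(42)

def pooled_sd(a, b):
    """Pooled standard deviation of two samples."""
    na, nb = len(a), len(b)
    return np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                   / (na + nb - 2))

def cohens_d(treatment, control):
    """Standardised mean difference between two arms."""
    return (treatment.mean() - control.mean()) / pooled_sd(treatment, control)

# Hypothetical post-test scores; the 'true' gains of 3 and 6 points are invented.
n = 200
control = rng.normal(loc=50, scale=10, size=n)         # arm A: business as usual
intervention_b = rng.normal(loc=53, scale=10, size=n)  # arm B: first intervention
intervention_c = rng.normal(loc=56, scale=10, size=n)  # arm C: rival intervention

print(f"Effect size, B vs control: {cohens_d(intervention_b, control):.2f}")
print(f"Effect size, C vs control: {cohens_d(intervention_c, control):.2f}")
print(f"Head-to-head, C vs B:      {cohens_d(intervention_c, intervention_b):.2f}")
```

Because both interventions face the same control group, the same recruitment, and the same expectancy effects, the head-to-head comparison is meaningful in a way that comparing effect sizes across two separately published studies is not.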
Good candidates might include running Reading Recovery against a systematic synthetic phonics programme, or running ‘productive pedagogies’ against a programme rooted in teacher effectiveness research.
None of this would completely fix the problem of pseudoscience. You’d still see eccentric articles in The Guardian and the proponents of alternative education would rant and rave about ‘positivism’ and politics. But we would start to build an evidence base that could be drawn upon by reasonable teachers and policy makers who haven’t yet hitched themselves to the wagon of woo. Slowly and quietly, we could edge towards a more evidence-based profession.