I have a job for you.
Recent years have seen the advent of organisations like the U.K.’s Education Endowment Foundation (EEF). Spurred by a mission to make education more evidence-based, they have gone about conducting large-scale randomised controlled trials (RCTs). This has been largely positive, although the most striking results have often come when trials have failed, and there is a reason for this. If we take a number of schools and randomise them into two groups, the first receiving a packaged intervention and associated training and the second sitting on a waiting list, what can we conclude if we see an improvement for the first group? Not a lot. The effect could be anything from a placebo effect, to an effect of simply completing lots of subject-specific training, to an effect of any or all of the elements that make up the intervention package.
For instance, under these conditions, Reading Recovery often shows up as being effective, but Reading Recovery involves one-to-one tuition, and one-to-one tuition is Benjamin Bloom’s archetype of the perfect teaching method. If the conditions it is being compared with do not involve one-to-one tuition, then any effect could be a result of the form of teaching rather than the content. Moreover, there is some evidence that alternatives to Reading Recovery do even better in such trials.
Unfortunately, we rarely see Reading Recovery compared in a single trial to one of the most viable alternatives. This is why I have called for the use of ABC designs where two competing interventions are compared with a control group. We can then see which one has the largest effect. So far, this call has gone unheeded.
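To make the ABC idea concrete, here is a minimal simulated sketch of how such a trial might be analysed: two rival interventions, A and B, are each compared against the same control group, C. The pupil numbers and effect sizes are invented assumptions for illustration, not figures from any real trial.

```python
# A minimal sketch of analysing an ABC trial: two rival interventions
# (A, B) each compared against a shared control arm (C).
# All data are simulated under assumed effect sizes, purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N = 200                               # pupils per arm (assumption)
control = rng.normal(0.0, 1.0, N)     # arm C: waiting-list control
arm_a = rng.normal(0.3, 1.0, N)       # arm A: assumed true effect of 0.3 SD
arm_b = rng.normal(0.1, 1.0, N)       # arm B: assumed true effect of 0.1 SD

def effect_size(treated, ctrl):
    """Cohen's d for a treatment arm against the shared control."""
    pooled_sd = np.sqrt((treated.var(ddof=1) + ctrl.var(ddof=1)) / 2)
    return (treated.mean() - ctrl.mean()) / pooled_sd

# Because both interventions face the same control under the same
# conditions, their effect sizes can be compared directly.
for name, arm in [("A", arm_a), ("B", arm_b)]:
    d = effect_size(arm, control)
    _, p = stats.ttest_ind(arm, control)
    print(f"Intervention {name}: d = {d:.2f}, p = {p:.3f}")
```

The point of the shared control is the final loop: the two effect sizes are estimated against the same yardstick, so the question “which one has the largest effect?” has a direct answer within a single trial.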
And this might be due to the expense. Large trials are expensive. Adding in an additional trial condition while preserving the statistical power of the study would involve recruiting more schools, training more teachers and so on.
So why don’t we look at this the other way around?
The two main problems with these large RCTs are their scale and the fact that they tend to vary more than one thing at a time, i.e. a whole package of things versus none of those things. Randomising students rather than schools can alleviate the first problem while retaining statistical power. The second problem can be mitigated by being less ambitious in what we seek to investigate. Let’s focus on really small changes that we can make one at a time.
This is what I am doing with my PhD research. I am manipulating the order in which instructional events happen. The only thing that changes between the two groups is the order of events. It’s not even clear to the students, or me for that matter, which group is the control and which is the intervention.
Cognitive scientists have been doing work like this for decades, usually with undergraduate psychology students. It’s this kind of research that has led to our understanding of the value of retrieval practice and the merits of interleaving and spaced practice. One criticism that has been levelled against such research is that we now know an awful lot about undergraduate psychology students, but can we really extrapolate all of that to school children?
That’s where you come in. Think of something small, really small. Think of how you could vary that one thing and nothing else, then contact a decent university about pursuing an experimental Masters or PhD programme involving school-age children. These small things may seem inconsequential, but they can help build our understanding of learning from the ground up. And that’s really big.