All the small things


I have a job for you.

Recent years have seen the advent of organisations like the U.K.’s Education Endowment Foundation (EEF). Spurred by a mission to make education more evidence-based, they have gone about conducting large-scale randomised controlled trials (RCTs). This has been largely positive, although the most striking results have often come when trials have failed, and there is a reason for this. If we take a number of schools and randomise them into two groups, the first receiving a packaged intervention and associated training and the second sitting on a waiting list, what can we conclude if we see an improvement for the first group? Not a lot. The effect could have been anything from a placebo effect, to an effect of simply completing lots of subject-specific training, to an effect of any or all of the elements that make up the intervention package.

For instance, in these conditions, Reading Recovery often shows up as being effective, but Reading Recovery involves one-to-one tuition, and this is Benjamin Bloom’s archetype of the perfect teaching method. If the conditions it is being compared with do not involve one-to-one tuition, then any effect could be a result of the form of teaching rather than the content. Moreover, there is some evidence that alternatives to Reading Recovery do even better in such trials.

Unfortunately, we rarely see Reading Recovery compared in a single trial to one of the most viable alternatives. This is why I have called for the use of ABC designs where two competing interventions are compared with a control group. We can then see which one has the largest effect. So far, this call has gone unheeded.

And this might be due to the expense. Large trials are expensive. Adding in an additional trial condition while preserving the statistical power of the study would involve recruiting more schools, training more teachers and so on.

So why don’t we look at this the other way around?

The two main problems with these large RCTs are their scale and the fact that they tend to vary more than one thing at a time, i.e. a whole package of things versus none of those things. Randomising students rather than schools can alleviate the first problem while retaining statistical power. The second problem can be mitigated by being less ambitious in what we seek to investigate. Let’s focus on really small changes that we can make one at a time.
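The claim that randomising students preserves more statistical power than randomising whole schools can be illustrated with a quick simulation. This sketch is my own, not from the post, and every number in it (the effect size, the intra-class correlation, the counts of schools and pupils) is an invented assumption for illustration only:

```python
# Illustrative sketch: why randomising pupils retains more power than
# randomising whole schools. All parameter values below are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

EFFECT = 0.2       # assumed intervention effect, in standard-deviation units
ICC = 0.15         # assumed intra-class correlation: pupils in a school resemble each other
N_SCHOOLS = 20     # schools available to the (hypothetical) trial
PUPILS = 30        # pupils per school
SIMS = 2000        # simulated trials per design

def simulate_pupils():
    """Pupil scores: a shared school effect plus individual noise (total variance 1)."""
    school = rng.normal(0.0, np.sqrt(ICC), (N_SCHOOLS, 1))
    pupil = rng.normal(0.0, np.sqrt(1.0 - ICC), (N_SCHOOLS, PUPILS))
    return school + pupil

def school_randomised_p():
    """Whole schools assigned to arms; analysed honestly at the school level."""
    scores = simulate_pupils()
    scores[: N_SCHOOLS // 2] += EFFECT            # first half of schools treated
    means = scores.mean(axis=1)                   # one data point per school
    return stats.ttest_ind(means[: N_SCHOOLS // 2], means[N_SCHOOLS // 2:]).pvalue

def pupil_randomised_p():
    """Pupils randomised individually within every school."""
    scores = simulate_pupils()
    treated = rng.random(scores.shape) < 0.5
    scores[treated] += EFFECT
    return stats.ttest_ind(scores[treated], scores[~treated]).pvalue

power_school = np.mean([school_randomised_p() < 0.05 for _ in range(SIMS)])
power_pupil = np.mean([pupil_randomised_p() < 0.05 for _ in range(SIMS)])
print(f"power, schools randomised: {power_school:.2f}")
print(f"power, pupils randomised:  {power_pupil:.2f}")
```

Under these assumed numbers, the pupil-randomised design detects the same effect far more often, because school-level randomisation leaves the between-school noise tangled up with the treatment.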

This is what I am doing with my PhD research. I am manipulating the order in which instructional events happen. The only thing that changes between the two groups is the order of events. It’s not even clear to the students, or to me for that matter, which group is the control and which is the intervention.

Cognitive scientists have been doing work like this for decades, usually with undergraduate psychology students. It’s this kind of research that has led to our understanding of the value of retrieval practice and the merits of interleaving and spaced practice. One criticism that has been levelled against such research is that we now know an awful lot about undergraduate psychology students, but can we really extrapolate all of that to school children?

That’s where you come in. Think of something small, really small. Think of how you could vary that one thing and nothing else, then contact a decent university about pursuing an experimental Masters or PhD programme involving school-age children. These small things may seem inconsequential, but they can help build our understanding of learning from the ground up. And that’s really big.


3 thoughts on “All the small things”

  1. Michael Pye says:

Sorry to be a pain, but could you link to or describe an example of this type of trial (how the different stages are broken down and mixed up)?

  2. Tom Burkard says:

    RCT trials in education have a further disadvantage: if the experimental method is not implemented faithfully–as will almost always happen if it runs contrary to teachers’ training–results will be meaningless. In an analysis of a Michigan reading initiative, Standerford found that such changes as teachers actually implemented were seldom inconsistent with their prior preferences or beliefs about the best way to teach reading. This suggests that your approach is more likely to produce meaningful results.

In regard to Reading Recovery, you are too kind by half. Using small group interventions with direct instruction in a secondary school, we achieved excellent results at a tiny fraction of the cost (RR cost £6,000 for each ‘successful’ result)–and unlike RR, which uses its own assessments, our pupils’ progress was monitored by standardised reading and spelling tests. In any event, wherever synthetic phonics has been implemented with any rigour, there is no need for such interventions. Despite the enormous hype, RR has largely disappeared in England since New Labour’s ECAR (Every Child a Reader) subsidies ended.

  3. Pingback: Is written feedback more effective than whole-class feedback? – Filling the pail
