Hang around social media debates about education for long enough and you will inevitably be drawn into a discussion of evidence.
These arguments follow a familiar path. The role of randomised controlled trials – or other quantitative studies – will be questioned. The analogy between education and medicine will be challenged. On this basis, the contention will be advanced that we need to accept a wide range of different types of evidence. This will, conveniently, allow us to include evidence for approaches that previously lacked support.
Not so different
First of all, the fields of medicine and education are not so different, particularly when you include the arguments around alternative medicine. Granted, the aims of medicine are much less contentious: people get better or they don’t, whereas an educational approach might fail to improve students’ understanding of maths but might be advanced with arguments that it will make them more motivated or develop some other quality.
However, giving a drug to a group of people is similar to giving a group of students a certain type of instruction. The pill will have differential effects on individuals due to body mass, genetics, diet, lifestyle, psychology and many other factors, just as the instruction will affect individuals differently. And so we look for average effects and try to uncover broad principles.
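The logic of looking for average effects despite individual variation can be sketched in a few lines of code. This is a toy illustration with invented numbers, not data from any real study:

```python
import random

random.seed(0)

# Toy sketch with invented numbers: every student responds differently to the
# same instruction, but the average effect across a large group is still
# estimable, and that average is what a trial tries to measure.
TRUE_AVERAGE_GAIN = 5.0   # hypothetical mean gain in test score
INDIVIDUAL_SPREAD = 8.0   # how much individual responses vary around that mean

def simulate_students(n):
    # Each student's gain = shared average effect + individual variation
    return [random.gauss(TRUE_AVERAGE_GAIN, INDIVIDUAL_SPREAD) for _ in range(n)]

gains = simulate_students(10_000)
average_gain = sum(gains) / len(gains)
print(f"average gain across {len(gains)} students: {average_gain:.2f}")
```

Individual gains here range widely – some students even go backwards – yet the group average sits close to the underlying effect, which is the broad principle a trial is designed to uncover.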
In fact, alternative medical practitioners complain about evidence-based medicine on much the same grounds that some people complain about evidence-based education: it fails to sufficiently recognise individual differences and contexts and it doesn’t address the whole patient. Still, if I were ill, I know where I would turn.
And medicine has already rehearsed the arguments of education, seriously and in some detail. Medicine is the tragedy to our farce. Faced with calls for alternatives to randomised trials in medicine, the medical blogger David Colquhoun notes that most scientists are unaware that there is even a debate to be had because the role of randomisation has long been resolved:
“Despite this, there is a body of philosophers who dispute it. And of course it is disputed by almost all practitioners of alternative medicine (because their treatments usually fail the tests).”
You might be tempted to draw the conclusion that I consider randomised controlled trials as constituting the best kind of evidence with other forms of evidence being somehow less good. But this would be to commit a category error. Types of evidence are neither good nor bad. We need to focus instead on the quality of the inferences that we draw from them.
For instance, if a politician were to claim that, “no children are subject to restraint techniques in government-run care homes,” then just one well-documented case study could refute this. Conversely, a randomised trial would be an impractical and wholly unsuitable way of trying to address the question. The type of evidence needs to suit the question you are asking.
Well-designed randomised trials are particularly good at investigating causal claims: if I do x, then y is more likely to happen. Examples might include the hypothesis that giving a patient an antiviral will reduce the length of a bout of flu, or that giving students problem-based learning will improve their higher-order thinking skills.
And this is the bind. If you make causal claims then we should be able to test them with randomised trials. If you really think all children are individual, that all teaching interactions are about relationships and are entirely mediated by context, then that’s fine, but it doesn’t fit with going into a school and encouraging greater use of drama activities or iPads or teaching like a highwayman or whatever. If you are making claims about these teaching approaches then you are doing so because you believe that they will have desirable effects: you are making causal claims, whether you state these explicitly or not. The burden is then on you to supply evidence for those claims. And the form of evidence best suited to evaluating causal claims is the randomised trial.
Nobody is imposing such trials on you. Nobody is suggesting that all children are the same. It is you who are implying a positive effect, and it is this that is testable. Without this evidence, your views are speculative.
Maybe you have evidence from experience, testimonials or case studies. But these are nowhere near as strong for establishing causal links as randomised trials, because they are subject to so much potential bias. If the effect is real then a well-run randomised trial will demonstrate it. Let me turn the point around and rephrase it as a question: if the advantages of an approach are so ephemeral that they don’t show up in a trial, then can we have any confidence at all in harnessing them in our classrooms?
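A minimal simulation, again with invented numbers, shows why testimonials and other self-selected evidence can overstate an effect while random assignment does not:

```python
import random

random.seed(1)

# Toy sketch, all numbers invented: suppose the approach truly adds 2 points
# on average, but motivated students both improve more anyway AND are more
# likely to volunteer for it (and to send in glowing testimonials).
TRUE_EFFECT = 2.0
N = 50_000

def outcome(motivation, treated):
    # Improvement driven mostly by motivation, plus the real treatment effect
    return 10 * motivation + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1)

students = [random.random() for _ in range(N)]   # motivation scores in [0, 1)

# Testimonial-style evidence: the motivated half opts in to the approach
opt_in = [outcome(m, True) for m in students if m > 0.5]
opt_out = [outcome(m, False) for m in students if m <= 0.5]
naive_estimate = sum(opt_in) / len(opt_in) - sum(opt_out) / len(opt_out)

# Randomised trial: a coin flip, not motivation, decides who gets the approach
treated, control = [], []
for m in students:
    if random.random() < 0.5:
        treated.append(outcome(m, True))
    else:
        control.append(outcome(m, False))
rct_estimate = sum(treated) / len(treated) - sum(control) / len(control)

print(f"self-selected comparison: {naive_estimate:.1f}")
print(f"randomised comparison:    {rct_estimate:.1f}")
```

The self-selected comparison attributes the motivated students’ progress to the approach and inflates the effect several-fold; randomisation spreads motivation evenly across both groups and recovers something close to the true effect. That, in miniature, is why randomisation matters.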
Of course, there is more to life than simply running trials. They can be poorly designed – and many are. They need to be replicated. And I would always look for triangulation. Longer correlational studies that triangulate with the findings of smaller trials can give us confidence in the mechanisms we have uncovered. We also need a plausible theoretical mechanism for the causal claim or it becomes hard to interpret data or make suggestions about how we might apply or extend the findings. But there is no escape from the fact that causal claims are testable.
Sleight of hand
The education consultant who sells us an approach to instruction on the basis that it is superior but who, when asked for evidence, seeks to muddy the waters, is performing a sleight of hand. It is the same trick attempted by homeopaths and chiropractors. We have not imposed randomised trials on them. After all, we have not asked them to make causal claims. They have done that all by themselves. So it is they who should supply good evidence to support these claims.