Does it all depend on context?

Following my post on positivism and how the term has become corrupted and weaponised, one commenter suggested that a reason education academics often seek to call out positivism is the role of context in education. Context is clearly a key issue for everyone involved in trying to improve educational outcomes, and it is probably best summarised by Dylan Wiliam:

“Everything works somewhere, and nothing works everywhere.”

The first part of this is similar to a claim made by John Hattie in Visible Learning. He notes that in education, you can generate positive evidence for pretty much any intervention. He uses this to argue that we should therefore look for interventions whose effect sizes cross a particular threshold. I am no longer convinced by this part of Hattie’s argument, but the idea that you can find positive evidence for pretty much anything is fairly accurate and poses a problem for those who want to use evidence to inform teaching approaches.

Wiliam adds the idea that ‘nothing works everywhere’ and, although I doubt Wiliam means it in this way, this claim is something of a refrain among those who wish to keep any kind of scientific evidence out of education. It is the kind of argument adopted by those who reject systematic phonics programmes on the basis that teachers need to adapt to the needs of their specific class of students, despite never explaining exactly what evidence they would collect on these needs or exactly how it would affect what and how they teach.

When context is used in this way, we essentially have an appeal to relativism – that there is no one truth about the most effective way to teach a particular concept.

This is a challenge for evidence-informed teachers because cognitive science would seem to imply some broad principles, such as that novices learning new academic concepts tend to do better if exposed to interactive explicit teaching rather than a facilitative approach like inquiry learning. And cognitive scientists Dan Willingham and David Daniel have argued that students have more in common with each other in how they learn than they have differences. How can ‘everything works somewhere, and nothing works everywhere’ be consistent with the existence of broadly applicable principles of good teaching? Hasn’t something got to give?

First of all, it’s worth exploring why everything works somewhere. Often, this is just due to poor experimental design. Imagine running a randomised controlled trial, the gold standard method to assess the effect of an intervention, on a pill for reducing the symptoms of influenza. You randomly assign half of your subjects presenting with flu symptoms to no treatment at all and half to the pill. The patients who take the pill report fewer subsequent symptoms. Such a result is, of course, entirely consistent with a placebo effect and you may well obtain similar results if all you give patients is a sugar pill.
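
To see how a ‘pill versus nothing’ design manufactures a result, here is a minimal simulation of my own (the numbers are invented for illustration; this is not any real trial). The drug’s true effect is set to exactly zero, yet comparing it against no treatment still yields a ‘significant’ finding, while comparing it against a sugar pill does not:

```python
# A minimal sketch with invented numbers: the drug's true effect is zero,
# but a placebo effect makes 'pill vs nothing' look like a success.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200  # patients per arm

baseline_symptoms = 10.0   # mean symptom score with no treatment
placebo_effect = -1.5      # improvement from taking any pill at all
true_drug_effect = 0.0     # the drug itself does nothing

no_treatment = rng.normal(baseline_symptoms, 3, n)
sugar_pill = rng.normal(baseline_symptoms + placebo_effect, 3, n)
drug_pill = rng.normal(baseline_symptoms + placebo_effect + true_drug_effect, 3, n)

# Drug vs nothing: 'significant', despite a true drug effect of zero.
print(stats.ttest_ind(drug_pill, no_treatment))
# Drug vs sugar pill: an active control reveals there is nothing there.
print(stats.ttest_ind(drug_pill, sugar_pill))
```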

Does this prove that no medication is more effective than any other medication for alleviating symptoms of flu? No, because we haven’t tested that. And yet the vast majority of educational research looks like our influenza study, or worse, in terms of its design. A new and shiny intervention is compared with doing nothing, or business as usual, or handing out a bunch of worksheets with a grunt and, lo and behold, the intervention is found to be more effective. Comparing effect sizes is not the magic that can get you out of this mess because effect sizes from different studies are rarely directly comparable. So the only solution is to test interventions against each other in a controlled way. Few large-scale education studies do this, but smaller educational psychology and cognitive science studies often do. As it stands, the only thing we can really learn from large-scale studies comes on those rare occasions when an intervention doesn’t work. We can say, ‘Even with the advantage of this design, it still doesn’t work. Why?’
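
Why effect sizes fail to transfer between studies is easy to demonstrate. Cohen’s d divides the raw gain by the standard deviation of the sample, so the same intervention producing the same raw gain registers very different effect sizes in a narrow-ability cohort than in a broad one. A toy calculation of my own, with invented numbers:

```python
# A toy calculation with invented numbers: the same raw gain yields
# very different values of Cohen's d depending on the sample's spread.
raw_gain = 2.0  # average marks gained by the intervention group

for label, sd in [("narrow-ability cohort", 3.0), ("broad-ability cohort", 10.0)]:
    d = raw_gain / sd  # Cohen's d = mean difference / pooled standard deviation
    print(f"{label}: d = {d:.2f}")

# Prints d = 0.67 versus d = 0.20 for the identical intervention,
# so ranking studies by effect size partly ranks their contexts.
```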

There’s no reason why large-scale education studies cannot compare promising interventions directly with each other. They just tend not to.

The second issue is that variability due to context does not necessarily require us to slip into relativism, even if that is how it first appears. There is no smoking banana skin.

Imagine I conduct the light experiment I mentioned in my last post. I fire a laser at two slits and record the pattern this produces on a screen. I then try to replicate this situation in a different context and I cannot produce the same pattern. If this were education research, many would suggest this is because the laws governing the process have changed from one context to the next. Yet this is physics and we know the laws cannot change. What, then, is preventing us from replicating the original pattern? Perhaps the slits are a different width, or the laser has a different frequency, or we cannot even get hold of a laser. There are lots of factors that can vary between contexts and cause different results without us having to slip into relativism.
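
The standard textbook result for two-slit interference makes this concrete (the formula comes from physics textbooks, not from my original post). In the small-angle approximation, and ignoring the envelope set by the slit width:

```latex
% Two-slit interference, small-angle approximation (textbook result).
% Intensity on the screen, ignoring the single-slit envelope:
I(y) = I_0 \cos^2\!\left(\frac{\pi d\, y}{\lambda L}\right)
% Spacing between adjacent bright fringes:
\Delta y = \frac{\lambda L}{d}
```

Here λ is the laser’s wavelength, d the slit separation and L the distance to the screen. Change the wavelength (frequency) or the geometry and the pattern changes completely, yet the underlying law has not changed at all: the ‘context’ enters as parameters, not as new physics.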

In education, these contextual factors are critical and can help us broaden and deepen our theories. In other words, rather than prompting the rejection of the scientific method, they should be the subject of renewed investigation.
