Does it all depend on context?


Following my post on positivism and how the term has become corrupted and weaponised, one commenter suggested that a reason education academics often seek to call out positivism is because of the role of context in education. Context is clearly a key issue for everyone involved in trying to improve educational outcomes and this is probably best summarised by Dylan Wiliam as:

“Everything works somewhere, and nothing works everywhere.”

The first part of this is similar to a claim made by John Hattie in Visible Learning. He notes that in education, you can generate positive evidence for pretty much any intervention. He uses this to argue that we should therefore look for interventions that have an effect size that crosses a particular threshold. I am no longer convinced of this part of Hattie’s argument, but the idea that you can find positive evidence for pretty much anything is a fairly accurate one and poses a problem for those who want to use evidence to inform teaching approaches.

Wiliam adds the idea that ‘nothing works everywhere’ and, although I doubt Wiliam means it in this way, this claim is something of a refrain among those who wish to keep any kind of scientific evidence out of education. It is the kind of argument adopted by those who reject systematic phonics programmes on the basis that teachers need to adapt to the needs of their specific class of students, despite never explaining exactly what evidence they would collect on these needs and exactly how that would affect what and how they teach.

When context is used in this way, we essentially have an appeal to relativism – that there is no one truth about the most effective way to teach a particular concept.

This is a challenge for evidence-informed teachers because cognitive science would seem to imply some broad principles, such as that novices learning new academic concepts tend to do better when exposed to interactive explicit teaching rather than a facilitative approach like inquiry learning. And the cognitive scientists Dan Willingham and David Daniel have argued that students have more in common with each other in how they learn than they have differences. How can ‘everything works somewhere, and nothing works everywhere’ be consistent with the existence of broadly applicable principles of good teaching? Hasn’t something got to give?

First of all, it’s worth exploring why everything works somewhere. Often, this is just due to poor experimental design. Imagine running a randomised controlled trial, the gold standard method to assess the effect of an intervention, on a pill for reducing the symptoms of influenza. You randomly assign half of your subjects presenting with flu symptoms to no treatment at all and half to the pill. The patients who take the pill report fewer subsequent symptoms. Such a result is, of course, entirely consistent with a placebo effect and you may well obtain similar results if all you give patients is a sugar pill.

Does this prove that no medication is more effective than any other medication for alleviating symptoms of flu? No, because we haven’t tested that. And yet the vast majority of educational research looks like our influenza study or worse in terms of its design. A new and shiny intervention is compared with doing nothing or business as usual or handing out a bunch of worksheets with a grunt and, lo and behold, the intervention is found to be more effective. Comparing effect sizes is not the magic that can get you out of this mess because effect sizes from different studies are rarely directly comparable. So the only solution is to test interventions against each other in a controlled way. Few large-scale education studies do this, but smaller educational psychology and cognitive science studies often do. In this situation, the only thing we can really learn from large-scale studies comes on those rare occasions when they don’t work. We can say, ‘Even with the advantage of this design, it still doesn’t work. Why?’
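The point about comparison groups can be made with a toy simulation. Everything here is invented for illustration: the ‘true gain’ and ‘novelty boost’ numbers are assumptions, not estimates from any real study. The sketch simulates the same intervention twice — once against doing nothing and once against an active comparison that enjoys the same novelty and attention — and computes an effect size for each:

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

def cohens_d(treated, control):
    """Standardised mean difference, using the SD of the combined sample
    as a rough stand-in for a pooled SD (fine for a sketch)."""
    sd = statistics.pstdev(treated + control)
    return (statistics.mean(treated) - statistics.mean(control)) / sd

def simulate_study(treatment_gain, control_gain, n=2000, noise=10.0):
    """Simulate post-test scores for the two arms of a trial."""
    treated = [50 + treatment_gain + random.gauss(0, noise) for _ in range(n)]
    control = [50 + control_gain + random.gauss(0, noise) for _ in range(n)]
    return cohens_d(treated, control)

TRUE_GAIN = 2.0      # invented: the intervention's genuine benefit
NOVELTY_BOOST = 4.0  # invented: extra gain from attention/novelty alone

# Study A: intervention vs business-as-usual. The control arm gets no
# novelty boost, so the effect size bundles the boost with the true gain.
d_vs_nothing = simulate_study(TRUE_GAIN + NOVELTY_BOOST, 0.0)

# Study B: the same intervention vs an active comparison that enjoys the
# same novelty boost. Only the genuine 2-point difference remains.
d_vs_active = simulate_study(TRUE_GAIN + NOVELTY_BOOST, NOVELTY_BOOST)

print(f"d against doing nothing:     {d_vs_nothing:.2f}")
print(f"d against active comparison: {d_vs_active:.2f}")
```

The same intervention produces a much larger effect size when compared with doing nothing than when compared with an active alternative — which is why effect sizes lifted from studies with different designs are not directly comparable.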

There’s no reason why large-scale education studies cannot compare promising interventions directly with each other. They just tend not to.

The second issue is that variability due to context does not necessarily require us to slip into relativism, even if this is what it first appears to do. There is no smoking banana skin.

Imagine I conduct the light experiment I mentioned in my last post. I fire a laser at two slits and record the pattern this produces on a screen. I then try to replicate this situation in a different context and I cannot produce the same pattern. If this were education research, many would suggest this is because the laws governing the process have changed from one context to the next. Yet this is physics and we know the laws cannot change. What, then, would prevent us from replicating the original pattern? Perhaps the slits are a different width, or the laser has a different frequency, or we cannot even get hold of a laser. There are lots of factors that can vary between contexts and cause different results without us having to slip into relativism.
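We can even say exactly which contextual factors matter in the double-slit case. The bright fringes appear at angles satisfying

```latex
d \sin\theta_m = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots
```

and, for a screen far from the slits, the spacing between fringes is approximately

```latex
\Delta y \approx \frac{\lambda L}{d}
```

where $d$ is the slit separation, $\lambda$ the laser’s wavelength and $L$ the distance to the screen. Change the slit separation or the laser’s frequency and the fringe spacing changes — but the underlying law never did.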

In education, these contextual factors are critical and can help us broaden and deepen our theories. In other words, rather than prompting the rejection of the scientific method, they should be the subject of renewed investigation.


4 thoughts on “Does it all depend on context?”

  1. I think another problem with Hattie’s effect size stricture is that so many studies seem to measure the effectiveness of various interventions immediately after said intervention, which of course is notoriously unreliable when it comes to long-term effects.

    When it comes to interventions in Ed, I’m yet to be convinced that anything other than longitudinal studies featuring two competing interventions wholeheartedly supported by their proponents have any real validity. The impossibility of double-blinding is such an important (and misunderstood) barrier otherwise.

  2. Chester Draws says:

    We are all, of course, individuals, but let us compare education with medicine.

    If I have a headache then I, like most people, take a paracetamol. I don’t need to go to the doctor and seek individualised advice until after the known most effective strategy has been tried and failed. Moreover, if paracetamol doesn’t work for me, then I try other alternatives. But I try them in succession, not a heap all at the same time.

    But with education we apparently shouldn’t investigate what works best and start with that. We are advised to mix up a range of strategies and hope some of them work. That makes as little sense in education as it would in medicine.

    As you say Greg, we are actually far more alike than we are different. And effective strategies tend to work on all of us.

  3. Alice Flarend says:

    Following up on our previous conversation, I think I have found our difference on this idea of context.

    All of the studies I followed up on in the Kirschner article (which are not all of the studies in the article) are aimed at more general categories of strategies, rather than the ones I have used, which target specific content. By this I mean things like spacing out retrieval over time, versus specific interventions such as the strategies for using video in teacher education in this article (do not let the PBL in the article make you think it is about unguided instruction): http://linkinghub.elsevier.com/retrieve/pii/S0742051X10001666
    Among the context that matters in this study of teacher professional development is the voluntary nature of the participants. Would the same results happen if the teachers were required to participate? The study also comments on the specific nature of the videos to use to make their usage more productive.

    Another anecdotal example that brought the idea of context to the forefront involved a teacher workshop on Earth and Space science. Teachers were brainstorming phenomena familiar to students to help provide evidence of crustal folding. Some were discussing the advantages of using roadcuts and others were discussing the disadvantages. That last group were from a flat urban area, whereas the first group were from a hilly rural area. Their different contexts directly affected their affordances for teaching the content. The general strategy of using an anchoring phenomenon can work to increase student understanding, but the choice of the phenomenon affects the efficacy.

    So, I think our difference in the idea of context comes down to grain size of analysis.

    • Thanks. I’m not sure about that. You hint at implementation and scaling issues which are well-known problems in education and are often fatal to an intervention. However, that’s not an issue of fundamental principles.
