Mirror, mirror, on the web

Posted: October 27, 2016
Dr Jonathan Sharples’s opening gambit was pretty odd and pretty interesting. For some reason he wanted us to decide between real and fake names for shades of paint. Imagine if this was a key curriculum objective (bear with me). How would you teach children to pull this off successfully? Perhaps you might ask them to visualise the shade – does it make sense? What colour would it be? Would it appeal to someone likely to be purchasing paint? We might describe the deployment of such questions as a ‘meta-cognitive strategy’.
The first three paint names popped up on the screen: Elephant’s Breath, Churlish Green and Norwegian Blue. Unlikely as the other two sounded, I immediately knew that the fake colour was “Norwegian Blue” because that is the breed of parrot in Monty Python’s dead parrot sketch. Which, when you think about it, is a pretty nifty demonstration of the fact that direct knowledge trumps the use of meta-cognitive strategies every time.
Sharples was speaking at the Evidence for Learning (E4L) Evidence Exchange. He heads up the Education Endowment Foundation (EEF) in England and the recently founded E4L has a licence to use the EEF knowledge base in Australia. E4L also intends to run the kind of randomised controlled trials (RCTs) that the EEF runs in England.
Part of the E4L (and EEF) strategy is to produce a toolkit offering a meta-analysis of different education interventions alongside an effect size stated in months of additional progress. This has the potential to enable educators to think critically about different practices, but I wonder whether it will. At the Evidence Exchange, it became apparent that when people look into this toolkit, they see themselves. Like it’s some kind of mirror.
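To make the ‘months of progress’ metric concrete, here is a minimal sketch of the kind of conversion involved. The linear rule below is my assumption for illustration only – the toolkit uses its own lookup table – but it rests on the common rough equivalence of one standard deviation of attainment to about a year of progress:

```python
# Illustrative only: converting a standardised effect size (difference in
# means divided by the pooled standard deviation) into "months of progress".
# The months_per_sd figure is an assumption, not the toolkit's actual table.

def months_of_progress(effect_size, months_per_sd=12):
    """Map an effect size to months, assuming one standard deviation
    corresponds to roughly a year of progress."""
    return round(effect_size * months_per_sd)

for es in (0.1, 0.3, 0.65):
    print(f"effect size {es:.2f} -> ~{months_of_progress(es)} months")
```

On this rough rule, an effect size of around 0.65 comes out at about eight months – the order of magnitude quoted for the toolkit’s meta-cognition strand, discussed below.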
There were a lot of people there who are clearly doing great work and achieving some amazing things, but this hardly seems due to the toolkit. Instead, they have looked to it for verification. We heard, for instance, about the use of feedback, although this initiative pre-dated the toolkit. There was a fascinating talk by a Victorian primary school principal on mastery learning. However, he made it clear that it was a subject he had been interested in since 1970 and the work of Benjamin Bloom. So you look into the toolkit, see something you recognise, feel validated and off you go.
Unless you are from the Grattan Institute.
Grattan researcher Jordana Hunter gave a presentation on ‘targeted teaching’, or what is normally known as ‘differentiation’. The Grattan folks had apparently read some literature showing that this approach was effective (I disagree and you might want to read my analysis). They then asked around, found two schools that were implementing differentiation effectively and wrote about how these schools did it. This is an odd methodology because there is no comparison group of less effective schools: we cannot tell whether the practices these schools share are what made them successful, or whether less successful schools do the very same things. It means that we cannot make causal inferences.
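To see why the missing comparison group matters, here is a toy simulation (all numbers invented) in which a practice has no effect at all, yet hunting for successful adopters still turns up impressive-looking exemplar schools:

```python
# Toy simulation of selecting on the outcome. Assumed numbers throughout:
# the practice has zero effect and school outcomes are pure noise.
import random

random.seed(1)
schools = [{"uses_practice": random.random() < 0.5,
            "outcome": random.gauss(0, 1)}          # attainment, in SDs
           for _ in range(200)]

# The method under critique: find the best-performing adopters and
# describe what they do.
adopters = [s for s in schools if s["uses_practice"]]
exemplars = sorted(adopters, key=lambda s: s["outcome"], reverse=True)[:2]
print([round(s["outcome"], 2) for s in exemplars])  # far above average

# Without less successful schools to compare against, nothing here
# licenses a causal claim about the practice itself.
```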
And whatever the literature was that they had read that convinced them of the need for such an approach, it was obviously very different to the literature read by the people who constructed the E4L toolkit. The toolkit analyses an approach known as ‘individualised instruction’. It’s hard to pin down exactly what this is and exactly how far the instruction is personalised because it is based upon meta-analyses of differing interventions, but the E4L verdict is clear:
“Individualising instruction does not tend to be particularly beneficial for learners. One possible explanation for this is that the role of the teacher becomes too managerial in terms of organising and monitoring learning tasks and activities, without leaving time for interacting with learners or providing formative feedback to refocus effort. The average impact on learning tends overall to be low, and is even negative in some studies, appearing to delay progress by one or two months.
“Empirical research about individualised instruction as a teaching and learning intervention in Australian schools remains limited, and the few Australian-based studies on individualised instruction also tend to focus on either ‘gifted’ or ‘struggling’ students.
“The available Australian research suggests that it is not the most effective or practical intervention and shows that teachers face practical difficulties employing this intervention, such as curriculum restrictions and significant increases in their workload.”
Maybe this is something very different to the kind of differentiation being promoted by Grattan. But given that the Evidence Exchange was about E4L, exactly where is the support for the Grattan method in the toolkit? No matter. Nobody seemed to notice this odd discrepancy. After all, differentiation is in the Australian Professional Standards for Teachers so it must be effective.
When Sharples looks into the mirror he sees meta-cognition. There is an extraordinarily vast array of things that the toolkit groups together as ‘meta-cognition and self-regulation’, spanning affective strategies aimed at student motivation and resilience all the way to prosaic methods for planning. The measured effects are not restricted to cognitive ones. When you take all this into account, it is hard to interpret the extra 8 months of progress this whole mass of different things is suggested to produce.
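As a sketch of why such a pooled figure is hard to interpret (the strand names and effect sizes below are invented for illustration), very different interventions can share one average that describes none of them:

```python
# Invented effect sizes for four dissimilar "meta-cognitive" strands.
from statistics import mean

effects = {
    "motivation programme": -0.1,
    "resilience training": 0.2,
    "planning checklists": 0.5,
    "reading-strategy instruction": 0.9,
}
pooled = mean(effects.values())
print(f"pooled effect size: {pooled:.2f}")  # 0.38 - true of none of them
```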
On his own graph of effects versus costs, Sharples has reduced this to simply ‘meta-cognitive’ and he is clearly a fan.
Meta-cognitive strategies are particularly suited to the kinds of interventions that organisations like the EEF run. Dan Willingham has referred to meta-cognitive reading strategies as a ‘bag of tricks’, and with good reason. They are not skills in the sense that a sequence of deliberate practice will make you improve at them. They are useful hacks that, once known, produce a one-off hike in performance. If you take a student who can’t structure a piece of persuasive writing and teach them the ‘firstly… secondly… thirdly…’ hack then you will see an immediate and significant jump on a standardised persuasive writing test. But how much have you really improved their writing skill? The slower, much more incremental, curriculum-centred process of building vocabulary is far harder to capture in this way.
One trial that Sharples highlighted has at least two problems:
- The principal researcher on the study has a philosophical issue with conducting tests of statistical significance and so didn’t do one. I would have thought that EEF studies were public goods and so individual researchers should not be able to impose their tastes on them in this way.
- The outcome measure that they said they were going to use before the trial did not show any effect, so it is not the one they used in their report. Instead, once they had the data, they decided to do a different analysis looking at progress since KS1 results. This is the well-known problem of researcher degrees of freedom: analyse a study enough ways and eventually you will find something that looks like an effect (the sketch below puts a rough number on this).
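To put a rough number on that problem, here is a minimal simulation. It assumes, purely for illustration, ten independent analyses of null data at the conventional 0.05 threshold (real analyses are correlated, so this is only a sketch):

```python
# Researcher degrees of freedom: analyse pure-noise data several ways and
# count how often at least one analysis looks "significant".
import random

random.seed(2)
N_ANALYSES = 10   # outcome measures, subgroups, model specifications...
ALPHA = 0.05
TRIALS = 10_000

hits = 0
for _ in range(TRIALS):
    # Under the null hypothesis, p-values are uniform on [0, 1].
    p_values = [random.random() for _ in range(N_ANALYSES)]
    if min(p_values) < ALPHA:
        hits += 1

print(f"Null studies yielding at least one 'effect': {hits / TRIALS:.0%}")
# ~40% (i.e. 1 - 0.95**10) - flexibility alone manufactures findings.
```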
I asked Sharples about the lack of statistical significance and he suggested that they have rerun the numbers and the results stand up. I look forward to reading this paper.
Sharples also displayed a list of four trials that he said could broadly be categorised as meta-cognitive (although I think he said that there were six – I might be wrong). He claimed that all of these trials showed a positive result, the implication being that, whenever they had tested meta-cognition, it had worked. But isn’t the “Let’s Think Secondary Science” program a meta-cognitive intervention? And didn’t that fail?
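As a final back-of-envelope illustration (all numbers assumed): even if meta-cognitive interventions did nothing, noise alone would quite often hand you a list of positive-looking trials, which is why the omitted failures matter:

```python
# If a null trial has a 50% chance of a positive point estimate by chance,
# how likely is it that at least four of six such trials look positive?
from math import comb

p_positive, n_trials, n_shown = 0.5, 6, 4
p = sum(comb(n_trials, k) * p_positive**k * (1 - p_positive)**(n_trials - k)
        for k in range(n_shown, n_trials + 1))
print(f"P(>= {n_shown} of {n_trials} null trials positive) = {p:.2f}")  # 0.34
```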