What if John Hattie is right for the wrong reasons?

I have a lot to thank John Hattie for. Through his 2008 book, Visible Learning, I was introduced to the world of education research. It was a reference in this book that enabled me to track down the 2006 article by Kirschner, Sweller and Clark, Why Minimal Guidance During Instruction Does Not Work, an article that has had such an influence on me that I am now completing a PhD in cognitive load theory. Visible Learning has served as a springboard for a range of personal investigations, from tracking down Carl Bereiter’s views on thinking skills programmes to reading about Engelmann’s Direct Instruction. It should be on the shelf of anyone with an interest in education research.

It is also the case that many, if by no means all, of the findings in Visible Learning fit with my own views about education. When I first encountered the book, it provided respite from what was, at the time, the relentless, unchallenged march of constructivism. Hattie not only made it acceptable to explain concepts to students, a strategy that many good teachers guiltily concealed in an age when teacher-led lessons were frowned upon, but he provided evidence that this was more effective than the alternatives. Explicit teaching was back.

At the time, I tended to accept Hattie’s explanation for his methodology. Everything works, was the claim, and so we should look for practices that give an effect size above a certain threshold. That way, we may move past the inadequacies of individual studies and look at the aggregated evidence. It seemed plausible. Nevertheless, Hattie’s data threw up some odd results. What were ‘Piagetian programmes’ and what relevance did they have for my classroom? Why were tracked students who were given a different curriculum diet scythed off from the rest of the data on ability grouping? What were we to make of the findings on homework? Surely, the effectiveness of homework largely depends on what the homework is?

I am now sceptical about Hattie’s approach. As I have learnt more about education research, I have started to understand what factors increase or decrease effect sizes. Worryingly, the best quality studies tend to produce effect sizes below Hattie’s threshold and so his findings draw us away from such studies. Maybe this doesn’t matter. Maybe aggregating bad studies leads us to the same conclusions as focusing on good studies, particularly when it comes to explicit teaching. Maybe.

So perhaps we should leave Hattie alone. Perhaps we should focus on debunking the really bad stuff in education because there is plenty of that around. At least Hattie is making an attempt to filter out the noise.

I cannot accept that. As a profession, we should seek the truth about different approaches, rather than strategically highlighting some issues whilst ignoring others in an effort to support a preferred outcome. The former is science and the latter is politics. If Hattie’s approach is valid then it should withstand scrutiny. If it is flawed then this should be known. If he is right for the wrong reasons then we must be clear about that.

Even from a less principled, strategic standpoint, it makes no sense to give Hattie a free pass. If teachers are led to believe that Visible Learning is the evidence to support explicit teaching, and if they later find out that Hattie’s methods have been challenged – and they will find this out because nobody controls the internet – they may draw the conclusion that explicit teaching itself has been brought into question. We could set up a new generation of teachers to swing the pendulum back the other way.

Many teachers have rejected progressive teaching methods in recent years and this is at least in part because they have felt misled about the evidence. It is quite possible that the same could happen to explicit teaching.


8 thoughts on “What if John Hattie is right for the wrong reasons?”

  1. ijstock says:

    Many teachers have rejected progressive methods because it became (just about) permissible at long last to do so. And because they do not work, or are at best very inefficient, using time that teachers do not have. But I have yet to encounter a teacher who uses Hattie’s findings at a specific lesson-by-lesson level. It is not practicable, however useful his overall findings might be. The main problem is (still) that teaching is not the algorithmic activity that JH and all researchers have to assume. A teacher spends a lot of time being reactive to classroom situations and children’s needs, only some of which can be anticipated. That means the application of theory has to remain at best a background concern. Have you read Duncan Watts’s book Everything is Obvious (when you know the answer)?

  2. Tom Burkard says:

    If the scientific method is built upon the principle of falsification, we have every reason to be even more sceptical of findings in the social ‘sciences’. As one wag quipped, “Tell me the conclusions you’d like, and I’ll supply you with the references”. Yet most people no longer look at ‘scientific’ findings with scepticism; rather, they regard them with almost religious awe. And meta-analysis would seem to distil scientific wisdom, thereby enhancing the authority of Hattie and the EEF.

    I think we are all indebted to Greg for his exposure of the EEF’s ludicrous choice of studies for its meta-analysis of Metacognition and Self-regulation. This prompted me to look at their conclusions on teaching children to read; unsurprisingly, they looked almost exactly like the 1998 National Literacy Strategy, which was itself a political compromise between the various factions in the ‘reading wars’. It’s as if the 2006 Rose Review never happened; the ‘simple view of reading’ has been abandoned and synthetic phonics is once again merely one of many strategies used to teach decoding skills. The dreaded “searchlights” are once again shining brightly.

    This is not to dismiss research altogether: after all, my own views have been strongly shaped by any number of powerful studies which challenged the progressive orthodoxy. However, the real test is how one applies this knowledge in the classroom–in other words, does it work? In the words of Bob Dylan, I think it’s time teachers started watching their parking meters.

  3. Pingback: Are effect sizes magic? – Filling the pail

  4. Tough week to be John Hattie. Greg, I presume you have seen Robert Slavin’s blog post this week “John Hattie is wrong” https://robertslavinsblog.wordpress.com/

    All of these criticisms are valid, and much of the evidence that Hattie has gathered may be of poor quality, but he has opened the discussion about what evidence is, and that educators must become knowledgeable about the evidence if they want to be seen as true professionals.

    Lots of work to be done on bringing an understanding of evidence to educators, and it might even involve some math!

    • chrismwparsons says:

      Actually, Greg’s blog post written the day before this one was a direct response to Slavin’s post!

  5. Pingback: What if Hattie is right for the wrong reasons? – KBA Teaching and Learning
