I have a lot to thank John Hattie for. Through his 2008 book, Visible Learning, I was introduced to the world of education research. It was a reference in this book that enabled me to track down the 2006 article by Kirschner, Sweller and Clark, Why Minimal Guidance During Instruction Does Not Work, an article that has had such an influence on me that I am now completing a PhD in cognitive load theory. Visible Learning has served as a springboard for a range of personal investigations, from tracking down Carl Bereiter’s views on thinking skills programmes to reading about Engelmann’s Direct Instruction. It should be on the shelf of anyone with an interest in education research.
It is also the case that many, if by no means all, of the findings in Visible Learning fit with my own views about education. When I first encountered the book, it provided respite from what was, at the time, the relentless, unchallenged march of constructivism. Hattie not only made it acceptable to explain concepts to students, a strategy that many good teachers guiltily concealed in an age when teacher-led lessons were frowned upon, but he provided evidence that this was more effective than the alternatives. Explicit teaching was back.
At the time, I tended to accept Hattie’s explanation for his methodology. Everything works, was the claim, and so we should look for practices that give an effect size above a certain threshold. That way, we may move past the inadequacies of individual studies and look at the aggregated evidence. It seemed plausible. Nevertheless, Hattie’s data threw up some odd results. What were ‘Piagetian programmes’ and what relevance did they have for my classroom? Why were tracked students who were given a different curriculum diet scythed off from the rest of the data on ability grouping? What were we to make of the findings on homework? Surely, the effectiveness of homework largely depends on what the homework is?
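For readers unfamiliar with the metric behind these claims, an effect size here is a standardised mean difference between a treatment group and a control group, typically Cohen's d; Hattie's published threshold, the 0.40 'hinge point', is a cut-off applied to such values. A minimal sketch of the calculation (the example scores are invented for illustration):

```python
import statistics

def cohens_d(treatment, control):
    """Standardised mean difference (Cohen's d) between two groups.

    Meta-analyses of the kind aggregated in Visible Learning work with
    values like this; d = 0.40 is Hattie's proposed 'hinge point'.
    """
    m_t, m_c = statistics.mean(treatment), statistics.mean(control)
    n_t, n_c = len(treatment), len(control)
    # Pooled standard deviation, weighting each group's variance
    # by its degrees of freedom
    var_t = statistics.variance(treatment)
    var_c = statistics.variance(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c)
                 / (n_t + n_c - 2)) ** 0.5
    return (m_t - m_c) / pooled_sd

# Hypothetical post-test scores for two small classes
d = cohens_d([3, 4, 5], [2, 3, 4])  # d = 1.0 here
```

The point of the sketch is only that d depends on both the raw gain and the spread of scores, which is one reason study design (sample, outcome measure, comparison group) can move effect sizes around independently of how well an intervention actually works.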
I am now sceptical about Hattie’s approach. As I have learnt more about education research, I have started to understand what factors increase or decrease effect sizes. Worryingly, the best quality studies tend to produce effect sizes below Hattie’s threshold and so his findings draw us away from such studies. Maybe this doesn’t matter. Maybe aggregating bad studies leads us to the same conclusions as focusing on good studies, particularly when it comes to explicit teaching. Maybe.
So perhaps we should leave Hattie alone. Perhaps we should focus on debunking the really bad stuff in education because there is plenty of that around. At least Hattie is making an attempt to filter out the noise.
I cannot accept that. As a profession, we should seek the truth about different approaches, rather than strategically highlighting some issues whilst ignoring others in an effort to support a preferred outcome. The former is science and the latter is politics. If Hattie’s approach is valid then it should withstand scrutiny. If it is flawed then this should be known. If he is right for the wrong reasons then we must be clear about that.
Even from a less principled, strategic standpoint, it makes no sense to give Hattie a free pass. If teachers are led to believe that Visible Learning is the evidence to support explicit teaching, and if they later find out that Hattie’s methods have been challenged – and they will find this out because nobody controls the internet – they may draw the conclusion that explicit teaching itself has been brought into question. We could set up a new generation of teachers to swing the pendulum back the other way.
Many teachers have rejected progressive teaching methods in recent years and this is at least in part because they have felt misled about the evidence. It is quite possible that the same could happen to explicit teaching.