I am actually slightly more interested in what to teach than how to teach. However, teaching methods are more amenable to experiment than curriculum content and so I find myself discussing them more often.
The reason why the effect of our choice of content is not easy to test highlights an important flaw in many experimental designs. Think about it: what will you test students on at the end of your experiment? If the content was taught in one condition but not in the other, then I can tell you the outcome already. So any fair test of content has to involve a transfer of understanding from one context to another. This is hard to achieve and relies on an element of chance.
So, setting the question of content aside, what are the best teaching methods?
Teacher-led is better
In the words of Jeanne Chall:
“The methods with the highest positive effects on learning are those for which the teacher assumes direction, for example, letting students know what is to be learned and explaining how to learn it, concentrating on tasks, correcting errors, and rewarding of activities – characteristics found in traditional, teacher-centered education… Quite consistently, when results were analysed by socioeconomic status, it was the more traditional education that produced the better academic achievement among children from low-income families.”
There is no great mystery here. If you want a child to learn something then it is more effective to teach it to them than to try to create the conditions through which the child will come to understand that something for themselves. Any teacher who is well versed in formative assessment routines will be aware of just how difficult it is to convey the subtleties of an academic subject while avoiding key misconceptions, even with constant, minute-by-minute attention. The idea that students receiving less teacher input will somehow do better is quite far-fetched.
For instance, what would you predict to be more effective: teaching children how to write or just asking them to do lots and lots of writing? The evidence is clear that explicit writing instruction is superior.
So why is there experimental evidence for alternatives to teacher-led instruction?
The reason why sensible people stray from this fairly obvious position is perhaps related to the way much education research is conducted. If you want to show that your pet approach works then there are plenty of ways to go about this. Firstly, you can try manipulating content. For example, imagine an experiment where one group receives teacher-led instruction about the rate of chemical reactions and the other group conducts experiments. You then give students a test that is all about conducting experiments; the group that learnt through experiments does better, and so you conclude that this approach is more effective than teacher-led instruction.
You could also run your well-resourced and heavily hyped intervention against a poor-quality version of the alternative or perhaps against no alternative at all. There are plenty of experiments where doing something is compared to doing nothing. The Education Endowment Foundation (EEF) seem keen to fund such studies and it is a major reason why I have argued for more ABC designs where two competing interventions are compared against each other and a control.
I suppose the EEF studies do offer us something: if you can’t get your intervention to work under such favourable conditions then it really is a dead duck. The EEF trials of Project-Based Learning and Let’s Think Secondary Science would seem to fit this bill.
This leaves us with a landscape where, as Professor John Hattie is famous for saying, “everything works”. Hattie’s solution is to corral similar studies together using the tool of meta-analysis and then only look for interventions that have an ‘effect size’ greater than a certain value (0.40 standard deviations). I am no longer convinced by this solution – the threshold seems arbitrary and it takes no account of the quality of the studies that have been fed into the meta-analysis sausage machine.
Well-designed experiments with good controls do tend to consistently show evidence in favour of explicit, teacher-led instruction, and so do natural experiments and correlational studies (see the links here or Rosenshine’s article). The superiority of teacher-led approaches jumps out of the two most recent rounds of PISA data. Yes, these are only correlations, but they are highly suggestive and suffer far less from potential experimenter bias. They also tell us about what happens in real-world classrooms.
All explicit, all the time?
If you are going to argue that alternatives to explicit instruction are more effective then I will disagree. Similarly, if you want to argue that they are more motivating, I will still disagree. One major component in long-term motivation is the feeling of getting better at something – explicit instruction can deliver this feeling because it is effective.
However, this does not mean – in the words of one critic who dubbed me an ‘extremist’ – that I favour ‘all explicit, all the time’. All models of explicit instruction include the gradual release of responsibility to the student. Once students have a good grounding in a topic, it becomes possible for them to do more open-ended and investigative work. For students who have reached a certain level of expertise, this will be more effective than redundantly listening to explanations of concepts they already understand.
There is also an argument for variety. I don’t think explicit instruction is demotivating but I do think that doing the same thing all the time could definitely be demotivating. We might decide to trade efficiency for variety. A research project may result in less learning overall for the time invested but we might decide that we want to give students that experience. I’m fine with that provided that we do it with our eyes open.
Nevertheless, the evidence is clear. The best way to teach academic content is with explicit instruction.