It is generally acknowledged that standard classroom teaching – with the notable exception perhaps of early primary education – is usually a variant of explicit instruction. We are not necessarily talking about the most effective forms of explicit instruction here. Much of my early teaching, although explicit, didn’t make sufficient use of opportunities to collect and give feedback.
I suspect that forms of explicit instruction have persisted because they represent a balance between effectiveness and effort, both for teachers and students. It’s what our teachers did, and so it’s our default setting – indeed, this inheritance is a common complaint of those who agitate for revolutions in teaching. I have observed many student teachers over the years and, despite what they are told in college, it is instinctive for them to want to stand at the front of the room and explain things. I even suspect that this is what our ancestors did many thousands of years ago when sharing their ideas. This is why such approaches are considered ‘traditional’.
Therefore, if you propose a change to this default setting then it is you who carries the burden of proof. You will need to demonstrate that your method is superior to business-as-usual. And it’s no good just showing that your method, enacted under the most favourable possible conditions, is better than the default approach. You need to show that your method enacted on a cold Thursday afternoon by an ordinary teacher is more effective than the default approach under the same circumstances.
I reckon that Dylan Wiliam has managed to do this, just about. I am convinced by his argument that use of formative assessment strategies improves upon standard instruction and that these gains are scalable and capable of being enacted by normal teachers with full timetables. In fact, some of his proposed measures represent efficiencies – finding out now that your students don’t understand something is far better than waiting three weeks until the end-of-unit assessment and then laboriously writing the same piece of feedback on each paper; feedback with no chance of being acted upon. So, not only does Wiliam present empirical evidence, he weaves it into a story that makes logical sense. We have a theory here.
Proponents of ‘constructivist’ approaches to teaching such as inquiry learning, problem-based learning or project-based learning have conspicuously failed to do this. Note the number of names that I had to trot out there. Devotees will expand upon the differences between them but they share some essential similarities. They are broadly in the ‘progressive’ tradition of education that sees learning as a naturalistic process and that emphasises the need for students, at least in part, to find stuff out for themselves rather than simply have things explained to them.
Advocates certainly see this as a change to what is typical in classrooms. In his famous TED Talk, Dan Meyer presents a broadly constructivist position under the imperative that mathematics lessons need a ‘makeover’. This therefore places the burden of proof firmly with him and with those who are arguing for such changes.
And yet the debates that I get involved in tend to result in constructivists trying to shift the burden of proof on to me. I am somehow supposed to prove that their particular method doesn’t work under any circumstances. If not, they feel perfectly justified in promoting it far and wide. Often, there is negative evidence but, at this point, the constructivist will quibble that I haven’t shown that there are no circumstances at all in which it might work (which I obviously can’t show). Sometimes they will say that the research measured the wrong outcomes and that if it had measured the right outcomes then it would have shown a different result. They rarely present evidence of these special cases where the method does work or where the right outcomes were measured.
The constant name-changes are also problematic. The ‘maker-movement’, for instance, is clearly a constructivist-inspired pedagogy and yet it hasn’t been around long enough to have had its effectiveness researched. No doubt, if I were to offer a critique, I would in turn be criticised for not presenting appropriate evidence. All the evidence against constructivist approaches in general would, presumably, be set aside because this is completely different. Similarly, I am not aware of any studies that test the effectiveness of ‘Mantle of the Expert’ and yet it bears enough similarity to constructivist strategies that I would want to see strong evidence before advocating its adoption by schools. I, and many others, suspect that the evidence thing is one of the reasons for such fluid nomenclature.
If constructivists offer any justification at all then this is often an appeal to ‘theory’, such as by citing Piaget. However, developmental psychologists no longer accept Piaget’s ideas. Piaget’s ideas therefore do not exemplify the scientific meaning of the word ‘theory’, which requires consistency with the known evidence. I know that there are those who do not like medical analogies but imagine you were to go to the doctor’s and be offered a new therapy called ‘water treatment’ where you were required to drink a cup of water at specified times of day in order to keep your ‘humours’ in ‘balance’, based upon the ‘four humours theory’. Imagine if, when you questioned the evidence for this approach, you were asked for your evidence that it doesn’t work.
Of course, this would never happen because, unlike education, medicine takes its approach to evidence seriously.
Nevertheless, I think it worth stating some of the evidence for explicit instruction and against constructivist approaches. So, here’s my list.
2. Barak Rosenshine reviewed the evidence from process-product research and found that more effective teachers used approaches that he called ‘direct instruction’ and which I would call ‘explicit instruction’ in order to distinguish it from the more scripted Direct Instruction programmes developed by Engelmann and others (such as DISTAR). Most of this is paywalled but he did write a piece for American Educator.
3. Project Follow Through, the largest experiment in the history of education, is generally considered to have demonstrated the superiority of Engelmann’s Direct Instruction (DI) programmes to other methods, including those based upon constructivism. It is important to note that DI was not just the best on tests of basic skills but it performed at, or near, the top on problem solving, reading comprehension and for improving self-esteem.
6. A meta-analysis found a small positive effect size for ‘guided discovery learning’ over business-as-usual conditions and a negative effect size for pure discovery relative to explicit instruction. Whilst this might be seen as evidence for guided discovery learning, it is worth bearing in mind that the included studies were not generally RCTs and so the experimental conditions would have favoured the intervention (which is why Hattie sets a cut-off effect size of d=0.40). The definition of guided discovery learning also included the use of worked examples, which are generally considered to be characteristic of explicit instruction.
11. Findings on the best way to teach cognitive strategies (such as reading comprehension) also echo the findings of the process-product research i.e. that an explicit approach is more effective. (You may, as I do, still question the value of teaching such strategies or, at least, the time devoted to it). [Paywalled]
12. Classic worked-example studies show the superiority of learning from studying worked examples over learning by solving problems for novice learners. Worked examples are a feature of explicit instruction whereas problem solving (without prior instruction) is a feature of constructivist approaches.
[There are others – I’ll add to this list as I remember them]