Nothing to prove (but I will, anyway…)

It is generally acknowledged that standard classroom teaching, perhaps with the notable exception of early primary education, is usually a variant of explicit instruction. We are not necessarily talking about the most effective forms of explicit instruction here. Much of my early teaching, although explicit, didn’t make sufficient use of opportunities to collect and give feedback.

I suspect that forms of explicit instruction have persisted because they represent a balance between effectiveness and effort, both for teachers and for students. It’s what our teachers did and so it’s our default setting. Indeed, this self-perpetuation is a common complaint of those who agitate for revolutions. I have observed many student teachers over the years and, despite what they are told in college, it is instinctive for them to want to stand at the front of the room and explain things. I even suspect that this is what our ancestors did many thousands of years ago when sharing their ideas. This is why such approaches are considered ‘traditional’.

Therefore, if you propose a change to this default setting then it is you who carries the burden of proof. You will need to demonstrate that your method is superior to business-as-usual. And it’s no good just showing that your method, enacted under the most favourable possible conditions, is better than the default approach. You need to show that your method enacted on a cold Thursday afternoon by an ordinary teacher is more effective than the default approach under the same circumstances.

I reckon that Dylan Wiliam has managed to do this, just about. I am convinced by his argument that use of formative assessment strategies improves upon standard instruction and that these gains are scalable and capable of being enacted by normal teachers with full timetables. In fact, some of his proposed measures represent efficiencies – finding out now that your students don’t understand something is far better than waiting three weeks until the end-of-unit assessment and then laboriously writing the same piece of feedback on each paper; feedback with no chance of being acted upon. So, not only does Wiliam present empirical evidence, he weaves it into a story that makes logical sense. We have a theory here.

Proponents of ‘constructivist’ approaches to teaching, such as inquiry learning, problem-based learning or project-based learning, have conspicuously failed to do this. Note the number of names that I had to trot out there. Devotees will expand upon the differences between them but they share some essential similarities. They are broadly in the ‘progressive’ tradition of education that sees learning as a naturalistic process and that emphasises the need for students, at least in part, to find stuff out for themselves rather than simply have things explained to them.

Advocates certainly see this as a change to what is typical in classrooms. In his famous TED Talk, Dan Meyer presents a broadly constructivist position under the imperative that mathematics lessons need a ‘makeover’. This therefore places the burden of proof firmly with him and with those who are arguing for such changes.

And yet the debates that I get involved in tend to result in constructivists trying to shift the burden of proof on to me. I am somehow supposed to prove that their particular method doesn’t work under any circumstances. If not, they feel perfectly justified in promoting it far and wide. Often, there is negative evidence but, at this point, the constructivist will quibble that I haven’t shown that there are no circumstances at all in which it might work (which I obviously can’t show). Sometimes they will say that the research measured the wrong outcomes and that if it had measured the right outcomes then it would have shown a different result. They rarely present evidence of these special cases where the method does work or where the right outcomes were measured.

The constant name-changes are also problematic. The ‘maker-movement’, for instance, is clearly a constructivist-inspired pedagogy and yet it hasn’t been around long enough to have had its effectiveness researched. No doubt, if I were to offer a critique, I would in turn be criticised for not presenting appropriate evidence. All the evidence against constructivist approaches in general would, presumably, be set aside because this is completely different. Similarly, I am not aware of any studies that test the effectiveness of ‘Mantle of the Expert’ and yet it bears enough similarity to constructivist strategies that I would want to see strong evidence before advocating its adoption by schools. I, and many others, suspect that the evidence thing is one of the reasons for such fluid nomenclature.

If constructivists offer any justification at all then it is often an appeal to ‘theory’, such as by citing Piaget. However, developmental psychologists no longer accept Piaget’s ideas, and so these ideas do not exemplify the scientific meaning of the word ‘theory’, which stands for something consistent with known evidence. I know that there are those who do not like medical analogies, but imagine you were to go to the doctor and be offered a new therapy called ‘water treatment’ where you were required to drink a cup of water at a number of specified times of day in order to ensure that your ‘humours’ were in ‘balance’, based upon the ‘four humours theory’. Imagine if, when you question the evidence for this approach, you are asked for your evidence that it doesn’t work.

Of course, this would never happen because, unlike education, medicine takes its approach to evidence seriously.

Nevertheless, I think it worth stating some of the evidence for explicit instruction and against constructivist approaches. So, here’s my list.

1. Kirschner, Sweller and Clark reviewed a number of studies and the literature on cognitive load theory whilst critiquing constructivist approaches.

2. Barak Rosenshine reviewed the evidence from process-product research and found that more effective teachers used approaches that he called ‘direct instruction’ and which I would call ‘explicit instruction’ in order to distinguish them from the more scripted Direct Instruction programmes developed by Engelmann and others (such as DISTAR). Most of this is paywalled but he did write a piece for American Educator.

3. Project Follow Through, the largest experiment in the history of education, is generally considered to have demonstrated the superiority of Engelmann’s Direct Instruction (DI) programmes to other methods, including those based upon constructivism. It is important to note that DI was not just the best on tests of basic skills but it performed at, or near, the top on problem solving, reading comprehension and for improving self-esteem.

4. An analysis that compared students’ maths and science scores on the TIMSS international assessment showed a correlation between higher performance and the teacher adopting a ‘lecture style’.

5. An RCT from Costa Rica showed that an ‘innovative’ constructivist-based approach produced worse performance than the business-as-usual control.

6. A meta-analysis found a small effect size for ‘guided discovery learning’ over business-as-usual conditions and a negative effect size for pure discovery over explicit instruction. Whilst this might be seen as evidence for guided discovery learning, it is worth bearing in mind that the studies included were not generally RCTs and so the experimental conditions would have favoured the intervention (which is why Hattie sets a cut-off effect size of d=0.40). The definition of guided discovery learning also included the use of worked examples which are generally considered to be characteristic of explicit instruction.

7. An analysis of the introduction of a more constructivist approach to teaching mathematics in Quebec showed an association with a decline in test scores.

8. One of my favourite studies ran a constructivist maths intervention against an explicit one (as well as business-as-usual) and found the explicit intervention was superior.

9. Klahr and Nigam found that the minority of students who were able to discover a scientific principle for themselves didn’t understand it any better than students who were taught it.

10. Studies of teacher expertise are broadly consistent with the earlier findings from process-product research as described by Rosenshine.

11. Findings on the best way to teach cognitive strategies (such as reading comprehension) also echo the findings of the process-product research i.e. that an explicit approach is more effective. (You may, as I do, still question the value of teaching such strategies or, at least, the time devoted to it). [Paywalled]

12. Classic worked-example studies show the superiority of learning from studying worked examples over learning by solving problems for novice learners. Worked examples are a feature of explicit instruction whereas problem solving (without prior instruction) is a feature of constructivist approaches.

[There are others – I’ll add to this list as I remember them]


27 thoughts on “Nothing to prove (but I will, anyway…)”

  1. Hi Greg
    I wonder if your disagreement with constructivist education is related to Engelmann’s affiliation with behaviourist learning reflected in his DI work? I noticed his criticism of DI in one of his blogs and put 1+2 to get 5!
    Now your stand makes sense to me. Though I would disagree that the burden of proof lies with teachers who align practice with constructivism. The burden of proof, I believe, is with the teacher who refuses to shift practice just because it doesn’t suit their long-held beliefs based on 60s psychology.
    Please do not substitute DI for Explicit Instruction. They are certainly not the same thing. DI, as we have twittered about, is a program that you pay for (not just the manual but the $K for trainings and weekly analysis of student data by Engelmann’s crowd).

    • I am aware of the different meanings for ‘direct instruction’ and I have mentioned this a number of times in my blog. Capital letters are conventionally used to signify Engelmann’s programmes ie Direct Instruction. However, Rosenshine uses ‘direct instruction’ (lower case) to signify explicit instruction more generally. To try to avoid this confusion, I use ‘explicit instruction’.

      • Of course. Though you have combined research on DI, lecture-style teaching and explicit instruction above: “Nevertheless, I think it worth stating some of the evidence for explicit instruction and against constructivist approaches. So, here’s my list.” Some would think you are interchanging the terms as if they were the same thing.

  2. Thank you for your post Greg, and your work. I’m learning so much from your writing. From a personal perspective I’m pretty cool with individual teachers adopting the teaching style that brings out the best in them. I find it a shame that this doesn’t happen much in schools because of a general determination to define and hold teachers to the best approaches, on average, regardless of teacher personality or topic. This is why school feels so same-same for students and there’s a disrespect for teachers who aren’t being true to themselves in their teaching style. It’s always obvious to students when their teacher is being a puppet.

    When a student is ready for direct instruction, it’s hugely valued. When not, it requires the student to bend their mind to fit what’s on offer, to take it and be grateful. Yet, on average, I can completely see why direct instruction approaches produce better results. I just don’t see this as a reason to hold people to teaching that way when they have a gift for other approaches. I’d like to call for more trust in students to know when they need direct instruction and some freedom for them to work with the teacher whose natural style is direct instruction, at that point. By doing this both students and teachers would see more value, and less need to justify styles based on studies, because the benefits would be felt in real time.

    • gulluva says:

      This is a really useful discussion. Explicit instruction is not a natural way for me to think about teaching and learning and is certainly something I need to develop. But I don’t think I could be a teacher without thinking and acting upon ideas for alternative ways of teaching.
      The challenge is to make an effort to “know thy impact” and act upon it. This is of course much more difficult if methods are not as easily measured but it seems like a fair challenge.

  3. Some interesting points raised here, Greg, thanks. Clearly, an evidence-based approach to education is desirable. However, there are a couple of points I would make.

    The first is that Kirschner, Sweller & Clark’s paper was based on some dubious assumptions about what constitutes IL & PBL, notably that there is minimal guidance whereas, in fact, there is considerable scaffolding of learners in these approaches. These mistaken assumptions are outlined in Hmelo-Silver, Duncan & Chinn’s response to Kirschner et al and can be found here:

    The second is that, while RCTs are the gold standard for evidence-based medicine, are they the best or only way to develop an evidence base for education? The variables to consider in a class of students (the teacher(s) involved, other psychological and physiological factors) are considerably more complex than in an RCT set up to test, say, a specific drug among a specific group of individuals. An RCT-only approach to educational research suggests a positivist model that I find difficult to accept.

    I would be interested to hear your thoughts on this.

  5. Interestingly, test scores from the Iowa Test of Basic Skills (ITBS), a criterion-referenced test that has been given for years, show that from the mid-1940s through the mid-1960s, scores in math and other subjects increased steadily. I write about this and show the data (and give the source for such data) in this article:

    From the article: “One conclusion that can be drawn from these test scores is that the method of education in effect during that period appeared to be working. And by definition, whatever was working during that time period was not failing. That the math could have been made more challenging and covered more topics in the early grades does not negate the fact that the method was effective.”

    Criticisms of these data are generally of the “so what; those aren’t authentic tests” type. Of course, if the tests had shown a steep decline, there likely would have been no comments about their “authenticity”.
