Direct and Explicit

There was no conspiracy against Ignaz Semmelweis. It’s just that the doctors didn’t like what he had to say.

Semmelweis discovered that hand-washing by obstetricians dramatically reduced the incidence of puerperal fever. He linked this to the fact that doctors would often move straight from examining corpses to examining patients, and he reasoned that something was being transferred. In this way, Semmelweis foreshadowed the germ theory of disease.

You might think that his success in cutting mortality rates would be universally welcomed. It wasn’t. Some questioned the validity of his data, suggesting that further studies were needed. Others felt the findings were a bit of an insult to the medical profession because doctors were gentlemen and gentlemen’s hands were clean.

The main problem was that it didn’t fit with the prevailing orthodoxy that puerperal fever was complicated and developed in different ways in different patients. Doctors took what might be called a ‘patient-centered’ approach: disease was caused by an imbalance of the four humours: blood, phlegm, yellow bile and black bile. In each patient, the balance would be different. It didn’t seem at all likely that there was one single, simple cause which a prescription such as ‘wash your hands’ would eliminate.

I can’t help seeing parallels between this story and the strange tale of Project Follow Through. Every day it seems as if I come across people with senior roles in education who have not heard of Follow Through or have only heard a brief critique of it.

This little piece of forgotten history is fascinating and I have written about it before. In short, it was a massive experiment in the US based on a horse-race design. Different approaches to teaching early literacy and numeracy were trialled against each other. Due to its size and ambitions, it was impossible to control conditions in the way a conscientious statistician would wish. This has led some to claim that it demonstrates nothing – the critique that I mentioned.

However, I am rather convinced by Carl Bereiter’s argument. Yes, there are problems with analysing the data but we may still discern a significant result. “Direct Instruction” was the most successful intervention. The least effective approaches were those that Bereiter calls ‘child-centered’ such as the “Open Education Model” and “Responsive Education”.

It is important to note what Direct Instruction means in this context. The term ‘direct instruction’, i.e. in lower case, is often used to mean any form of explicit teaching that is coupled with deliberate practice in order to meet clear objectives – the sorts of techniques that Barak Rosenshine has written extensively about. However, Direct Instruction, i.e. in upper case, refers to specific educational programs originally developed by Siegfried Engelmann and colleagues for the Follow Through experiment. These involve the processes of generic direct instruction but also include a tightly structured approach to the curriculum that means that lessons are effectively scripted. Neither approach is synonymous with lecturing, because any form of direct instruction typically involves a large component of teacher-student interaction throughout.

When you read back through the Follow Through literature there are echoes of exactly the debates that are taking place on Twitter today. Critics of Follow Through wonder whether the programs that turned out to be less effective might have performed better if assessed on different outcomes. This is a familiar argument: Direct Instruction might be better for developing basic skills but perhaps ‘child-centered’ or ‘constructivist’ approaches might be better at developing other things?

For a start, this is not what Follow Through showed. Although there were tests of basic skills, there were also tests of mathematical problem solving and reading comprehension. Direct Instruction still outperformed the other models in these measures, although not by as much. And this seems like quite a robust result, replicable in different contexts and at different student ages. It is as if we have unearthed a set of general principles rather than a one-off case. Indeed, these principles arise multiple times in the process-product research of the 1970s and 1980s.

Secondly, if there is robust evidence of other approaches outperforming direct instruction on different measures then it would be very interesting to see it. I have read commentators suggest that direct instruction is not as good as other approaches for tackling scientific misconceptions or developing argumentation or critical thinking and yet I see few trials that support this. The studies that are cited usually come from higher education and show gains for ‘active learning’ over straight lectures. In this context, ‘active learning’ often means lectures where the students have clickers so that they can interact with the lecturer. I am not surprised by the results of such studies but they have little to say about direct instruction.

And we must all be careful not to retreat into unfalsifiable positions. If the claim is being made that certain teaching methods are better for producing certain outcomes but that these outcomes cannot be measured in any way then we can’t test such a claim. If you cannot test a claim then it becomes a belief. And as the Semmelweis example shows, people’s beliefs can be mistaken.

14 thoughts on “Direct and Explicit”

  1. Reblogged this on From experience to meaning… and commented:
    Found this blog post via @tombennett71 and it is a very interesting read. For the people who know the Kirschner, Sweller & Clark article, it won’t come as a surprise, but do note that this article has spurred discussion (actually, this book is next on my reading list). The most important element in this blog post is the urge for replication, but educational research does have some issues here.

  2. You seem to be saying that the DI (upper case) results from PFT can be extrapolated to older students and other material because other studies have shown di (lower case) to be more effective in some instances. Or have I misunderstood?

    • I can see why you read it that way but it’s not quite what I mean. It’s this confusion between Direct Instruction and direct instruction again. I am not actually extrapolating from PFT. I am saying that other evidence exists in support of explicit approaches from other knowledge domains and age groups e.g. the process-product research of the 1970s/80s. I suppose that I am making the case that we should not be surprised at the PFT results in this context.

  3. Hi Greg. Nice summary. I do take exception, to this statement:

    “Although there were tests of basic skills, there were also tests of mathematical problem solving and reading comprehension. Direct Instruction still outperformed the other models in these measures, although not by as much.”

    Actually on the cognitive domain measures, DI outperformed the other models just as much as on the basic skills measures. It was on the affective domain measures that some of the other models didn’t lose quite as badly to DI. In particular one cognitive-domain-oriented model, “Parent Education”, performed almost as well as DI in the affective domain (but far below DI and no better than the control in the skills and cognitive domains) and two other “skills” models performed reasonably well on affective domain outcomes.

    The second-worst performing model was “Cognitively Oriented Curriculum”, a system backed by big money and big education. Of interest is that this model is a direct ancestor of many of the big child-centered, discovery/inquiry-based systems in use today. Of course. One comes to expect such irony in education circles, where every day is upside-down day.

  4. Pingback: Minimal Guidance | Filling the pail

  5. Pingback: Student Motivation | Filling the pail

  6. Pingback: Making Lessons 85% Review: The Genius Behind Engelmann’s Teaching to Mastery – Mr. G Mpls
