There was no conspiracy against Ignaz Semmelweis. It’s just that the doctors didn’t like what he had to say.
Semmelweis discovered that when obstetricians washed their hands, the incidence of puerperal fever fell dramatically. He linked this to the fact that doctors would often move from examining corpses to examining patients and thought that something was being transferred. So Semmelweis foreshadowed the germ theory of disease.
You might think that his success in cutting mortality rates would be universally welcomed. It wasn’t. Some questioned the validity of his data, suggesting that further studies were needed. Others felt the findings were a bit of an insult to the medical profession because doctors were gentlemen and gentlemen’s hands were clean.
The main problem was that it didn’t fit with the prevailing orthodoxy that puerperal fever was complicated and developed in different ways in different patients. Doctors took what might be called a ‘patient-centered’ approach: disease was caused by an imbalance of the four humours — blood, phlegm, yellow bile and black bile — and in each patient, the balance would be different. It didn’t seem at all likely that there was one single, simple cause which a prescription such as ‘wash your hands’ would eliminate.
I can’t help seeing parallels between this story and the strange tale of Project Follow Through. Every day it seems as if I come across people with senior roles in education who have not heard of Follow Through or have only heard a brief critique of it.
This little piece of forgotten history is fascinating and I have written about it before. In short, it was a massive experiment in the US based on a horse-race design. Different approaches to teaching early literacy and numeracy were trialled against each other. Due to its size and ambitions, it was impossible to control conditions in the way a conscientious statistician would wish. This has led some to claim that it demonstrates nothing – the critique that I mentioned.
However, I am rather convinced by Carl Bereiter’s argument. Yes, there are problems with analysing the data but we may still discern a significant result. “Direct Instruction” was the most successful intervention. The least effective approaches were those that Bereiter calls ‘child-centered’ such as the “Open Education Model” and “Responsive Education”.
It is important to note what Direct Instruction means in this context. The term ‘direct instruction’, in lower case, is often used to mean any form of explicit teaching that is coupled with deliberate practice in order to meet clear objectives – the sorts of techniques that Barak Rosenshine has written extensively about. However, Direct Instruction, in upper case, refers to specific educational programs originally developed by Siegfried Engelmann and colleagues for the Follow Through experiment. These involve the processes of generic direct instruction but also include a tightly structured approach to the curriculum that means that lessons are effectively scripted. Neither approach is synonymous with lecturing, because any form of direct instruction typically involves a large component of teacher-student interaction throughout.
When you read back through the Follow Through literature there are echoes of exactly the debates that are taking place on Twitter today. Critics of Follow Through wonder whether the programs that turned out to be less effective might have performed better if assessed on different outcomes. This is a familiar argument: Direct Instruction might be better for developing basic skills but perhaps ‘child-centered’ or ‘constructivist’ approaches might be better at developing other things?
For a start, this is not what Follow Through showed. Although there were tests of basic skills, there were also tests of mathematical problem solving and reading comprehension. Direct Instruction still outperformed the other models on these measures, although not by as much. And this seems like quite a robust result, replicable in different contexts and at different student ages. It is as if we have unearthed a set of general principles rather than a one-off case. Indeed, these principles arise multiple times in the process-product research of the 1970s and 1980s.
Secondly, if there is robust evidence of other approaches outperforming direct instruction on different measures then it would be very interesting to see it. I have read commentators suggest that direct instruction is not as good as other approaches for tackling scientific misconceptions or developing argumentation or critical thinking and yet I see few trials that support this. The studies that are cited usually come from higher education and show gains for ‘active learning’ over straight lectures. In this context, ‘active learning’ often means lectures where the students have clickers so that they can interact with the lecturer. I am not surprised by the results of such studies but they have little to say about direct instruction.
And we must all be careful not to retreat into unfalsifiable positions. If the claim is being made that certain teaching methods are better for producing certain outcomes but that these outcomes cannot be measured in any way then we can’t test such a claim. If you cannot test a claim then it becomes a belief. And as the Semmelweis example shows, people’s beliefs can be mistaken.