Yesterday, Doug Holton dismissed me as a troll. I had written a blog post that was highly critical of an article that I had read in The Age. Holton effectively told one of the journalists that I was not worth bothering with.
It seems that some people define ‘troll’ to mean ‘people who say things that I don’t like’. I think this debases us as an online community. It appears to be an attempt to shut down robust debate.
Moving past the trolling issue, Holton’s complaint seems to be that I don’t pay attention to counter-evidence. The link that Holton provides is to the IES evaluation of Direct Instruction. I don’t actually advocate Direct Instruction programs, although I find them interesting and I refer to the evidence in favour of them when advancing my case for explicit instruction more generally. And so it is highly relevant to this case that IES finds pretty much no effect for these approaches.
There is a caveat.
IES has extremely high standards for the evidence that it will accept when producing its reports. It is a comment on the quality of educational research that IES often rejects more studies than it accepts. For its overall evaluation of Direct Instruction – a program that has now been developed across maths, literacy and other domains and upwards through the different grade levels – IES is able to make use of a single study; one which it accepts ‘with reservations’:
“The study was classified as “meets evidence standards with reservations” due to severe overall attrition. Based on the number of classes and children in the original study, the sample size at assignment was 368 children with disabilities [Cole et al. (1993) stated that the full sample included just 206 children]. However, the analysis sample was 164 children. Based upon the inconsistency between the figures at assignment, the study was downgraded for severe overall attrition.”
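For context, the attrition that IES is objecting to can be checked with a line of arithmetic, using only the figures given in the quote above:

```python
# Figures quoted in the IES report on Cole et al. (1993)
assigned = 368   # sample size at assignment, per the report
analysed = 164   # sample actually analysed

attrition = (assigned - analysed) / assigned
print(f"Overall attrition: {attrition:.0%}")  # roughly 55%
```

Losing over half of the assigned sample before analysis is why the study was downgraded rather than rejected outright.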
The study is interesting – it compares early years special education students who receive Direct Instruction with those who receive something called “Mediated Learning” which is based upon the work of Feuerstein. I’d not heard of this before and it is difficult to form a picture of exactly what is involved from the research paper. The result is certainly not what I would expect.
Oddly, IES has a separate report on ‘Reading Mastery’ which shows significant gains for that particular program. Reading Mastery is itself a Direct Instruction program. Again, the report ultimately rests on a single study: a paper from 2000 that meets the required standards. I’m sure that there is a logic to this, but why would this study meet the standards for evaluating Reading Mastery and yet not be included in the report evaluating Direct Instruction?
I have interacted with Doug Holton before but I would have to suggest that the communication is a little one-way. He wrote a piece claiming that people like me get our views from Wikipedia and listed a whole load of evidence to support his ideas. I commented on this and raised an important point. Much of the evidence that Holton presents comes from college-level studies that compare supposedly ‘active’ learning with traditional lectures. I’ve read a number of these papers now and it is often hard to pin down exactly what the active learning condition consists of. In some cases, we are clearly comparing straight lectures with lectures where students interact via clickers.
I would predict that the interactive lectures would be more effective than the non-interactive ones. Firstly, these lectures will be something of a novelty and are therefore likely to generate a Hawthorne effect. Secondly, I actually promote interactivity during explicit instruction because I think it helps maintain attention. I suggest that students should be regularly called upon to answer questions and that these students should not self-select. Indeed, a key feature of Direct Instruction programs is that they are highly interactive.
So I’m not sure that all of Holton’s evidence is actually relevant to the question of explicit classroom instruction versus constructivist-inspired approaches, i.e. the questions addressed in the Kirschner, Sweller and Clark paper (below) that he criticises.
Although I have now made this point a number of times, Holton has never addressed it in my interactions with him. I invite him to do so.
What is this all about? I am broadly in favour of explicit instruction in K-12 education rather than inquiry learning and the like. Below, I list some evidence that I believe supports my own position. I have copied this evidence from a previous post.
2. Barak Rosenshine reviewed the evidence from process-product research and found that more effective teachers used approaches that he called ‘direct instruction’ and which I would call ‘explicit instruction’ in order to distinguish it from the more scripted Direct Instruction programmes developed by Engelmann and others (such as DISTAR). Most of this is paywalled but he did write a piece for American Educator.
3. Project Follow Through, the largest experiment in the history of education, is generally considered to have demonstrated the superiority of Engelmann’s Direct Instruction (DI) programmes to other methods, including those based upon constructivism. It is important to note that DI was not just the best on tests of basic skills but it performed at, or near, the top on problem solving, reading comprehension and for improving self-esteem.
6. A meta-analysis found a small effect size for ‘guided discovery learning’ over business-as-usual conditions and a negative effect size for pure discovery over explicit instruction. Whilst this might be seen as evidence for guided discovery learning, it is worth bearing in mind that the studies included were not generally RCTs and so the experimental conditions would have favoured the intervention (which is why Hattie sets a cut-off effect size of d=0.40). The definition of guided discovery learning also included the use of worked examples which are generally considered to be characteristic of explicit instruction.
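For readers unfamiliar with effect sizes, Cohen’s d (the metric behind the d=0.40 cut-off mentioned above) is simply the difference between two group means divided by a pooled standard deviation. A minimal sketch, with invented sample scores purely for illustration:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference over the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1 denominator)
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical test scores: intervention versus control (not real data)
intervention = [72, 75, 78, 80, 74, 77]
control = [70, 71, 74, 73, 69, 72]

d = cohens_d(intervention, control)
print(f"d = {d:.2f}")
```

The point of a cut-off like d=0.40 is that small positive effects are routine in education studies, so an intervention needs to clear that bar before it is worth taking seriously.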
11. Findings on the best way to teach cognitive strategies (such as reading comprehension) also echo the findings of the process-product research i.e. that an explicit approach is more effective. (You may, as I do, still question the value of teaching such strategies or, at least, the time devoted to it). [Paywalled]
12. Classic worked-example studies show the superiority of learning from studying worked examples over learning by solving problems for novice learners. Worked examples are a feature of explicit instruction whereas problem solving (without prior instruction) is a feature of constructivist approaches.
[There are others – I’ll add to this list as I remember them]