Dismissed as a troll

Yesterday, Doug Holton dismissed me as a troll. I had written a blog post that was highly critical of an article that I had read in The Age. Holton effectively told one of the journalists that I was not worth bothering with.

It seems that some people define ‘troll’ to mean ‘people who say things that I don’t like.’ I think this debases us as an online community. It appears to be an attempt to shut down robust debate.

Moving past the trolling issue, Holton’s complaint seems to be that I don’t pay attention to counter-evidence. The link that Holton provides is to the IES evaluation of Direct Instruction. I don’t actually advocate Direct Instruction programs, although I find them interesting and I refer to the evidence in favour of them when advancing my case for explicit instruction more generally. And so it is highly relevant to this case that IES finds pretty much no effect for these approaches.

There is a caveat.

IES has extremely high standards for the evidence that it will accept when producing its reports. It is a comment on the quality of educational research that IES often rejects more studies than it accepts. For its overall evaluation of Direct Instruction – a program that has now been developed across maths, literacy and other domains and upwards through the different grade levels – IES is able to make use of a single study; one which it accepts ‘with reservations’:

“The study was classified as “meets evidence standards with reservations” due to severe overall attrition. Based on the number of classes and children in the original study, the sample size at assignment was 368 children with disabilities [Cole et al. (1993) stated that the full sample included just 206 children]. However, the analysis sample was 164 children. Based upon the inconsistency between the figures at assignment, the study was downgraded for severe overall attrition.”

The study is interesting – it compares early years special education students who receive Direct Instruction with those who receive something called “Mediated Learning” which is based upon the work of Feuerstein. I’d not heard of this before and it is difficult to form a picture of exactly what is involved from the research paper. The result is certainly not what I would expect.

Oddly, the IES has a separate report on ‘Reading Mastery’ which shows significant gains for that particular program. Reading Mastery is a Direct Instruction program. Again, the report ultimately rests on a single study; a paper from 2000 that meets the required standards. I’m sure that there is logic to this, but why would this report meet the standards for evaluating Reading Mastery but then not be included in the report evaluating Direct Instruction?

I have interacted with Doug Holton before but I would have to suggest that the communication is a little one-way. He wrote a piece that claims that people like me get our views from Wikipedia and listed a whole load of evidence to support his ideas. I commented on this and raised an important point. Much of the evidence that Holton presents is of college-level studies that compare supposedly ‘active’ learning with traditional lectures. I’ve read a number of these papers now and it is often hard to pin down exactly what the active learning condition consists of. In some cases, we are clearly comparing straight lectures with lectures where students interact via clickers.

I would predict that the interactive lectures would be more effective than the non-interactive ones. Firstly, you have the fact that these will be something of a novelty and are likely to generate a Hawthorne effect. Secondly, I actually promote interactivity during explicit instruction because I think it helps maintain attention. I suggest that students should be regularly called upon to answer questions and that these students should not self-select. Indeed, a key feature of Direct Instruction programs is that they are highly interactive.

So I’m not sure that all of Holton’s evidence is actually relevant to the question of explicit classroom instruction versus constructivist-inspired approaches, i.e. the questions addressed in the Kirschner, Sweller and Clark paper (below) that he criticises.

Although I have now made this point a number of times, Holton has never addressed it in my interactions with him. I invite him to do so.

What is this all about? I am broadly in favour of explicit instruction in K-12 education rather than inquiry learning and the like. Below, I list some evidence that I believe supports my own position. I have copied this evidence from a previous post.

1. Kirschner, Sweller and Clark reviewed a number of studies and the literature on cognitive load theory whilst critiquing constructivist approaches.

2. Barak Rosenshine reviewed the evidence from process-product research and found that more effective teachers used approaches that he called ‘direct instruction’ and which I would call ‘explicit instruction’ in order to distinguish it from the more scripted Direct Instruction programmes developed by Engelmann and others (such as DISTAR). Most of this is paywalled but he did write a piece for American Educator.

3. Project Follow Through, the largest experiment in the history of education, is generally considered to have demonstrated the superiority of Engelmann’s Direct Instruction (DI) programmes to other methods, including those based upon constructivism. It is important to note that DI was not just the best on tests of basic skills but it performed at, or near, the top on problem solving, reading comprehension and for improving self-esteem.

4. An analysis that compared students’ maths and science scores on the TIMSS international assessment showed a correlation between higher performance and the teacher adopting a ‘lecture style’.

5. A RCT from Costa Rica showed that an ‘innovative’ constructivist-based approach produced worse performance than the business-as-usual control.

6. A meta-analysis found a small effect size for ‘guided discovery learning’ over business-as-usual conditions and a negative effect size for pure discovery over explicit instruction. Whilst this might be seen as evidence for guided discovery learning, it is worth bearing in mind that the studies included were not generally RCTs and so the experimental conditions would have favoured the intervention (which is why Hattie sets a cut-off effect size of d=0.40). The definition of guided discovery learning also included the use of worked examples which are generally considered to be characteristic of explicit instruction.

7. An analysis of the introduction of a more constructivist approach to teaching mathematics in Quebec showed an association with a decline in test scores.

8. One of my favourite studies ran a constructivist maths intervention against an explicit one (as well as business-as-usual) and found the explicit intervention was superior.

9. Klahr and Nigam found that the minority of students who were able to discover a scientific principle for themselves didn’t understand it any better than students who were taught it.

10. Studies of teacher expertise are broadly consistent with the earlier findings from process-product research as described by Rosenshine.

11. Findings on the best way to teach cognitive strategies (such as reading comprehension) also echo the findings of the process-product research i.e. that an explicit approach is more effective. (You may, as I do, still question the value of teaching such strategies or, at least, the time devoted to it). [Paywalled]

12. Classic worked-example studies show the superiority of learning from studying worked examples over learning by solving problems for novice learners. Worked examples are a feature of explicit instruction whereas problem solving (without prior instruction) is a feature of constructivist approaches.

[There are others – I’ll add to this list as I remember them]



6 thoughts on “Dismissed as a troll”

  1. The journalist seemed incredibly thin-skinned, and the defence that the article was merely highlighting innovators was just odd. Innovation is not necessarily good or bad. Every new scam is an innovation. But maybe a title like “Overly credulous article” would have been better. But then maybe only something like “Article highlights innovation…” would have kept the peace.

    The whole discussion does raise the question: why aren’t some sides of the discussion engaging in a meaningful debate?

    PS. Great post to respond to the evidence question.

  2. David says:

    I would add:

    The Sutton Trust report: http://www.suttontrust.com/wp-content/uploads/2014/10/What-Makes-Great-Teaching-REPORT.pdf

    The Kirschner and van Merriënboer article, “Do Learners Really Know Best?” http://ocw.metu.edu.tr/pluginfile.php/3298/course/section/1174/Do%20Learners%20Really%20Know%20Best.pdf

    The Schwerdt and Wupperman study at Harvard: http://www.hks.harvard.edu/pepg/PDF/Papers/PEPG10-15_Schwerdt_Wuppermann.pdf

    Updated Meta-Analysis of Learner Control within Educational Technology
    Karich, Abbey C.; Burns, Matthew K.; Maki, Kathrin E.
    Review of Educational Research, v84 n3 p392-410 Sep 2014

  3. The Quirky Teacher says:

    You’re one of the last people I would call a ‘troll’. If anything, I always know a blog post by you will be well referenced and contain a lot of excellent further reading. It is very clear to anyone with half a brain that you know what you’re talking about.

    It always amazes me how brazen non-thinkers can be so outrageous in their dismissal of logic and good reasoning, backed up by decent research. In my previous field of work, such people wouldn’t last very long. It is one of the most peculiar aspects of education (compared to my previous experience) that people are promoted beyond their capabilities and are relatively untouchable with regards to having their opinions challenged. It’s like Education is some kind of communist state: less academic ‘generals’ are in positions of power and have no idea that their decisions might just set off some kind of terrible nuclear reaction.

  4. Ann in L.A. says:

    I like this one:

    Click to access HillParker5.pdf

    This looked at graduates from six US high schools with regards to their college math placement. When schools adopted curricula in line with NCTM recommendations (heavy on discovery learning), the level of college placement and the eventual pass rate for university-level Calculus went down significantly. The authors were able to look at individual schools which changed their math programs – thus pretty closely comparing apples to apples – and to see how the change diminished their graduates’ ability to take college-level math.
