The key difference between lecturing and explicit instruction is that explicit instruction is highly interactive. I advocate for explicit instruction, and I am prepared to accept that non-interactive lecturing is a bit rubbish. The reasons are intuitive enough: if you think you might be called upon to respond, then you are more likely to pay attention.
This is such an important idea that it is one of the basics I look for when observing lessons. I do not believe that lesson observation is valid for making many inferences, but I do think you can look for a few key conditions: students complete the tasks that teachers set, students don’t talk while the teacher is talking, and the teacher questions students frequently and unpredictably (from the students’ perspective). I don’t really mind how teachers achieve this, but I do think it is a prerequisite for effective teaching.
I am therefore prepared to accept much of the evidence for ‘active learning’ that comes out of university departments. It frustrates me when this is presented as evidence for constructivist teaching methods because I consider it nothing of the sort. Typically, a straight, non-interactive lecture is compared with a lecture that requires students to participate in some way. The most basic of these designs is for students to use electronic voting buttons or ‘clickers’ to answer questions posed by the lecturer, but there are many other variations.
Interestingly, the active learning findings make so much intuitive sense that I have never sought to question this research. I have been blinded by my own bias.
There are lots of potential problems with education research, some of which require quite a sophisticated understanding of statistics to grasp. However, a fairly basic problem known as the ‘file-drawer’ problem is pretty easy to understand: researchers are more likely to get their papers published if they find interesting results. If a researcher tests two learning conditions against each other and finds no effect – a null result – then it’s unlikely to be published.
The problem with this kind of selective publishing is that null results could be quite common, and yet you won’t know this by looking at published papers. Meta-analyses, which attempt to synthesise evidence from many studies, just compound the problem. As a PhD student, I am well aware of the issue – if I find a null result then it can certainly go into my PhD dissertation, but it’s not something that a journal is likely to pick up.
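To see why this matters, here is a toy simulation of the file-drawer problem. All of the numbers are invented for illustration – they are not drawn from any of the studies discussed here. We imagine many studies estimating the same small true effect, but only the ones that reach statistical significance make it into journals:

```python
import random
import statistics

# Toy simulation of the 'file-drawer' problem.
# All parameters below are made up for illustration only.
random.seed(42)

TRUE_EFFECT = 0.1   # assumed small true effect of the intervention
SE = 0.2            # assumed standard error of each study's estimate
N_STUDIES = 1000    # studies actually conducted

# Each study produces a noisy estimate of the true effect.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# A study gets 'published' only if it is statistically significant
# in the expected direction (z-score above 1.96).
published = [e for e in estimates if e / SE > 1.96]

print(f"True effect:               {TRUE_EFFECT:.2f}")
print(f"Mean of all studies:       {statistics.mean(estimates):.2f}")
print(f"Mean of published studies: {statistics.mean(published):.2f}")
print(f"Published: {len(published)} of {N_STUDIES}")
```

The mean across all conducted studies sits close to the true effect, while the mean across published studies alone is inflated well above it – the null and negative results sit in the file drawer, and a meta-analysis of the published literature would overstate the benefit.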
According to a recent paper by Phillip Dawson and Samantha Dawson, the evidence pitting lectures against active learning suffers from this file-drawer problem. The researchers re-analysed data from a recent meta-analysis by Freeman et al. that showed benefits for active learning, and found evidence of missing, unreported studies:
“We re-examined Freeman et al.’s data according to dissemination type (published studies vs. dissertation studies) to evaluate the influence of reporting bias… When we considered all of the studies together we found no evidence of publication or reporting bias. However, when we considered each study by dissemination type, the strength of support for active learning differed. The published studies were far more supportive of active learning than dissertations. Our analysis of the published studies found statistical evidence of four missing studies, that is, studies which are statistically likely to have been conducted but were either not published or not captured by Freeman et al.’s search strategy or inclusion criteria. All of these studies would have reached the opposite conclusion to Freeman et al.; in addition, three of these four studies would not have reached statistical significance. Given that these studies would have gone against accepted learning and teaching wisdom, and they would have been mostly non-significant, their absence in the literature is unsurprising. However it is still problematic, as it represents bias.”
So should we all start lecturing? Not quite:
“It is important to note, however, that the effect of reporting bias would not have been strong enough to contradict Freeman et al.’s findings.”
The reporting bias just makes the findings a whole lot weaker. Let’s stick with asking questions, for now.