Lin Zhang has completed an interesting study that is soon to be published in Learning and Instruction. Nine classes of fourth and fifth grade students were randomised into one of three conditions to learn about the concept of energy transfer. Each condition centred on an experiment in which two balls were rolled down a ramp from the same height, with a discussion about how far the balls would roll once they left the ramp.
Zhang described the first condition as ‘direct instruction’. Here, students were encouraged to make predictions but were then told what would happen: that the balls would roll the same distance. The teacher then demonstrated the investigation, followed by a review.
The second, ‘explicit investigation’, condition was the same as direct instruction except that the students conducted the investigation themselves.
Finally, the ‘hands-on inquiry’ condition took the explicit investigation condition further by not telling the students in advance what would happen to the balls in the experiment. However, the correct answer was discussed in the review section, which was identical for all three conditions.
Students were excluded from Zhang’s analysis if they demonstrated good prior knowledge of the concepts of energy transfer. Students with special educational needs and those who spoke English as an additional language were also omitted from the data analysis, although it looks like they still took part in all of the activities.
Students were then given an assessment with three types of question, each assessing a different aspect of what had been learnt: questions on content, reasoning questions, and questions applying the content to real-life situations.
In the content and reasoning questions, students in the direct instruction condition significantly outperformed those in the hands-on inquiry condition (with explicit investigation sitting in between the two and not differing significantly from either). This order was reversed for the questions on real-life applications, but the differences between the groups were not statistically significant.
So, direct instruction, as defined by Zhang, was clearly the better teaching method.
Zhang mentions a couple of limitations to the study but doesn’t discuss the method of randomisation. Given that randomisation took place at the class level, I think we have to assume that students were initially randomly allocated to classes in order for the statistics to work. However, my experience is that students are rarely randomly allocated to classes, and there could therefore be an unknown factor systematically varying between the groups. Nevertheless, this represents yet another study to add to the pile showing that inquiry learning not only degrades the learning of standard objectives such as content, it also has a negative impact, or no impact, on the supposedly higher-order abilities that advocates of inquiry learning claim it promotes.
No doubt, such advocates will criticise the study for not investigating their preferred version of inquiry learning or on some other grounds. However, as I have previously suggested, it is not enough to simply dismiss evidence you don’t like – advocates of inquiry have a burden of proof and they need to be able to point to evidence supporting their claims.
If education were truly evidence-informed then teachers and school principals would be tuned in to this discussion, eager to see what the latest studies showed about different teaching methods. Unfortunately, many in the sector seem to regard inquiry learning as an unqualified good.