More evidence against inquiry learning in science

Lin Zhang has completed an interesting study that is soon to be published in Learning and Instruction. Nine classes of fourth- and fifth-grade students were randomised into one of three conditions to learn about the concept of energy transfer. Each condition centred on an experiment in which two balls were rolled down a ramp from the same height, with a discussion about how far the balls would roll once they left the ramp.

Zhang described the first condition as ‘direct instruction’. In this case, students were encouraged to make predictions but were then told what would happen – that the balls would roll the same distance. The investigation was then demonstrated by the teacher, followed by a review.
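For readers wondering why the balls should travel equally far, the standard energy-conservation argument (my gloss, not necessarily how the paper presents it) is that the mass cancels:

```latex
% Potential energy at the top of the ramp converts to kinetic energy at
% the bottom (ignoring friction; accounting for rolling rotation does not
% change the conclusion for uniform balls, since mass still cancels):
mgh = \tfrac{1}{2}mv^{2} \quad\Rightarrow\quad v = \sqrt{2gh}
% Both balls leave the ramp at the same speed, so, facing the same
% resistance on the flat, they roll the same distance.
```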

The second, ‘explicit investigation’, condition was the same as direct instruction except that the students conducted the investigation themselves.

Finally, the ‘hands-on inquiry’ condition took the explicit investigation condition further by not telling the students in advance what would happen to the balls in the experiment. However, the correct answer was discussed in the review section, which was identical for all three conditions.

Students were excluded from Zhang’s analysis if they demonstrated good prior knowledge of the concepts of energy transfer. Students with special educational needs and those who spoke English as an additional language were also omitted from the data analysis, although it looks like they still took part in all of the activities.

Students were then given an assessment with three types of question, each assessing a different aspect of what had been learnt: questions on content, reasoning questions, and questions applying the content to real-life situations.

In the content and reasoning questions, students in the direct instruction condition significantly outperformed those in the hands-on inquiry condition (with explicit investigation sitting in between the two and not differing significantly from either). This order was reversed for the questions on real-life applications, but the differences between the groups were not statistically significant.

So, direct instruction, as defined by Zhang, was clearly the better teaching method.

The authors mention a couple of limitations to the study, but they don’t discuss the method of randomisation. Given that randomisation took place at the class level, I think we have to assume that students were initially randomly allocated to classes in order for the statistics to work. However, in my experience, students are rarely randomly allocated to classes, so there could be an unknown factor systematically varying between the groups. Nevertheless, this represents yet another study to add to the pile showing that, not only does inquiry learning degrade the learning of standard objectives such as content, it also has a negative impact, or no impact, on the supposedly higher-order abilities that advocates of inquiry learning claim it promotes.
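To see why class-level randomisation matters for the statistics, here is a toy simulation of my own (not part of Zhang’s analysis; all parameters are illustrative). It randomises whole classes under a null of no treatment effect, then naively runs a student-level test, and shows how shared class-level factors inflate the false-positive rate:

```python
import random
import statistics

# Hypothetical illustration (not data from Zhang's study): when whole
# classes are randomised but students are analysed as if independent,
# shared class-level factors inflate the false-positive rate.

def false_positive_rate(n_classes=9, class_size=25, class_sd=0.5,
                        trials=2000, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        # No true treatment effect, so any "significant" difference
        # between arms is a false positive.
        arms = ["a"] * (n_classes - n_classes // 2) + ["b"] * (n_classes // 2)
        random.shuffle(arms)                         # randomise whole classes
        scores = {"a": [], "b": []}
        for arm in arms:
            class_effect = random.gauss(0, class_sd)  # shared within a class
            scores[arm] += [class_effect + random.gauss(0, 1)
                            for _ in range(class_size)]
        a, b = scores["a"], scores["b"]
        # Naive student-level two-sample t statistic.
        se = (statistics.variance(a) / len(a)
              + statistics.variance(b) / len(b)) ** 0.5
        t = (statistics.mean(a) - statistics.mean(b)) / se
        if abs(t) > 1.96:                            # nominal 5% level
            hits += 1
    return hits / trials

print(false_positive_rate())  # far exceeds the nominal 0.05
```

The usual remedy is to analyse at the level of randomisation – for example, comparing class means or fitting a multilevel model – which is why the handling of clustering is worth discussing in studies like this one.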

No doubt, such advocates will criticise the study for not investigating their preferred version of inquiry learning or on some other grounds. However, as I have previously suggested, it is not enough to simply dismiss evidence you don’t like – advocates of inquiry have a burden of proof and they need to be able to point to evidence supporting their claims.

If education were truly evidence-informed then teachers and school principals would be tuned-in to this discussion, eager to see what the latest studies showed about different teaching methods. Unfortunately, inquiry learning seems to be seen by many in the sector as an unqualified good thing.


3 thoughts on “More evidence against inquiry learning in science”

  1. I have just read about the EEF’s investigation of using calculators for maths. It suggests that using them with discretion helps mathematical understanding, and it recommends estimating solutions and using a calculator as a check and for a more accurate answer (Colin Foster on TV). The EEF’s research suggests that collaborative learning works better than streaming or setting. There is also an investigation of streaming and setting on their site which suggests that high-attaining pupils gain from them but lower-ability/disadvantaged pupils do not so much. In the light of what you report here, what is your opinion on collaborative learning and the poor effectiveness of setting and streaming for lower abilities?

  2. I am all for evidence-based adjustment to techniques. I am dubious that this particular “sample” adds anything more than anecdote to any body of evidence. Indeed, I suspect it may be an example of creating a narrative to fit the story you want to tell – with which I can quite sympathise.

    One could just as well interpret “This order was reversed for the questions on real-life applications, but the differences between the groups were not statistically significant.” as a commentary that any such assessment on content is irrelevant. This, after all, is all that matters. Or is that being too obtuse?

    I am not even sure I understand the distinction between the three levels of engagement with the material in the study anyhow. If the most revelatory was the “hands-on inquiry” condition, where self-discovery was most encouraged (and the process of science is modelled most appropriately), and application-based assessment was maintained, then so be it. Context over content should arguably be the preference, for the very reason that this is what the real world will ask of the children. I reveal my bias.

    Leave the recipe making to home cookery classes where we might have a better consensus on what tastes nice?
