Another big fail for inquiry learning


The UK’s Education Endowment Foundation (EEF) has released a report on a randomised controlled trial it conducted to test the efficacy of CREST, an inquiry-based learning programme. Specifically, they offered CREST to science students and measured the impact on science performance. There wasn’t any. There was also no impact on self-efficacy in science (a key component of motivation) or on the proportion of students aspiring to a scientific career, although small positive impacts were estimated for confidence and attitude to school.

These results will come as no great shock to readers of this blog who will be aware that there is no history of inquiry-based learning proving effective in randomised controlled trials and that there are reasons to think inquiry-based learning is at odds with the science of learning (see e.g. here). This finding also aligns with correlational evidence from PISA that the more students engage in inquiry-based science, the worse their PISA science score.

However, this study has a few interesting details it is worth mentioning.

Firstly, it was an ‘intention to treat’ study that focused only on students in Year 9 (Grade 8 in the U.S. and Aus) who were prepared to opt in to the CREST initiative. This meant that in the control condition, students were still asked whether they wanted to participate in CREST but, after the randomisation process, were then told it was not available and were given a high street voucher to spend instead. The progress of these students was then tracked against the students who completed CREST, specifically on a standardised science assessment known as ‘GL’s Progress Test in Science’.

These are highly favourable conditions because the participants have self-selected into the CREST initiative and therefore presumably see some value in it, making the trial prone to expectation effects (like the placebo effect).

The trial was also run by the programme developers, as are all EEF trials at the first stage of development. Again, this leads to the most favourable possible conditions.

The actual content of CREST saw students working on science projects – there was a minimum commitment of 30 hours. So the trial was also a test of a form of project-based learning. Schools could choose how to deliver it. Some used time in science lessons and others ran it as an after school club.

There was a high attrition rate at both school and participant level and this reduces the security of the findings, particularly because it disproportionately hit the intervention arm. In addition, the proportion of students in the intervention who finally submitted projects was low and the trial report suggests this may have been due to a backwash of pressure from the GCSE exams that students in England sit in Year 11.

There is a hint that the mode of delivery mattered. Students who did CREST as an after school club may have made slight gains relative to the control and students who did it in class may have performed slightly worse than the control, presumably because this displaced the actual learning of science in favour of play-acting at being a professional scientist. However, the trial was not designed to measure such differences and so these results must be considered tentative at best.

Experience suggests that proponents of inquiry-based learning are unlikely to revise their view based on the evidence from this trial. They may point out that this trial does not prove that inquiry-based learning never works in any context. That’s true – no trial could – but where is the evidence it does work in these other contexts? They may point to aspects of the CREST programme and its implementation they don’t like. Fine. They are also likely to point to the high attrition rate.

However, I am starting to view attrition as something of an outcome measure in these trials. A while back, the EEF ran a different randomised controlled trial on project based learning. Similarly, they found no effect and high attrition. If schools and students are dropping out of inquiry-based learning or project-based learning when the conditions are as favourable as they could possibly be then this is not a good sign for implementing these approaches effectively in schools that are not part of a project and who do not have access to programme developers. There will be a reason for the attrition and this is likely to be related to the efficacy of the intervention.

I would also note that CREST is almost the archetypal STEM initiative of the kind we have come to hear so much about in recent years and that are somehow meant to deliver a new generation of scientists and engineers. Back to the drawing board on that. Here’s my plan: Let’s instead focus on building a rigorous science curriculum and then explicitly teaching it. That may just work.

Finally, to the EEF – why not start testing things with a clear mechanism of action that is consistent with cognitive science? Such programmes may be a better bet. Just a thought. In the meantime, it is still quite useful to accumulate lots of evidence for interventions that fail.


7 thoughts on “Another big fail for inquiry learning”

  1. Chester Draws says:

    although small positive impacts were estimated for confidence and attitude to school

    The proponents always tout these as positives, because “obviously” they are. But, as so often, the “obvious” is wrong.

    Most students like doing easy stuff, and “fun” stuff, but that doesn’t translate through to better learning. If we gave students only what they liked doing the results would be appalling.

    Moreover, confidence is a very poor metric for improvement. One of my bug-bears is getting confident students to do enough work — their very confidence makes them think they have done enough when they clearly haven’t. Meanwhile students who are aware of their weaknesses are much more likely to plug them.

    Sure, students who are struggling lack confidence, but it’s getting better that makes you more confident, not vice versa.

  2. Big fan of this line: “However, I am starting to view attrition as something of an outcome measure in these trials.”

    I remember Paul Bruno making this point some number of years ago as well. It’s a very good one!

  3. Joshua Avenell says:

    Given the way these folks worship “engaging pedagogy”, it is completely fair to use attrition as an indicator. Students are voting with their feet.

  4. Discovery (inquiry) learning is despised by the students most likely to pursue a career in the sciences or engineering. These are serious kids who don’t want their time wasted on obvious nonsense. So to call the inquiry approach counter-productive is an understatement, as it turns off the very students it hopes to engage and motivate.
    This raises the question: why on Earth is discovery/inquiry learning the very focal point of the Next Generation Science Standards? Students in 19 states, including California and New York, are being subjected to a full-blown K to 12 science program based on this debunked, ineffective, and failed methodology. This should be an embarrassment to our profession.

  5. Dean Cairns says:

    Interesting article, I will be incorporating this CREST research in my next paper(s).

    For what it’s worth, here are a couple of my papers (open access) that do not exactly support IBL when interpreting PISA scores. I entered into the analysis hoping to find something positive but, ultimately, in terms of PISA performance, IBL is negatively correlated, everywhere.

    Investigating the relationship between instructional practices and science achievement in an inquiry-based learning environment (2019)
    https://www.tandfonline.com/doi/full/10.1080/09500693.2019.1660927

    Exploring the Relations of Inquiry-Based Teaching to Science Achievement and Dispositions in 54 Countries (2017)
    https://link.springer.com/article/10.1007/s11165-017-9639-x

  6. Thanks for posting this, Greg. Interesting, and it seems in line with the PISA 2015 finding. Note that here in Canada, the 2010 PCAP (a Canadian assessment modelled on PISA, testing the same cohort two years earlier on the same subject domains) used a similar variable construction to compare Direct versus Indirect Instruction in math at the Grade 8 level. It found quite a strong positive effect for Direct and a negative effect for Indirect, but also found that some Direct Instruction provided compensation for Indirect. Had they parsed it more finely, I think they would have found something like the 80/20 sweet spot the PISA science variables appeared to identify.
