I’ve been aware for some time of a programme called Let’s Think in English. It has an interesting pedigree in an area of research known as cognitive acceleration.
Back in the 1980s, Philip Adey and Michael Shayer began testing a science intervention they had developed known as Cognitive Acceleration through Science Education, or CASE. It was based upon a mix of Vygotsky's and Piaget's theories of constructivism, and the aim was to accelerate students through Piaget's posited developmental stages. The results were truly extraordinary. Transfer of learning is notoriously difficult to achieve in education, and yet Adey and Shayer seemed to demonstrate that a science intervention could have positive impacts on English results, years later.
Extraordinary claims need extraordinary evidence and so it was important for other researchers to replicate these results. The replication story has been somewhat mixed. One randomised controlled trial from Finland did appear to show gains for the experimental students on cognitive, IQ-type tasks, but these washed out after three years, and the kinds of transfer effects found in the original studies seem to have been either absent or something that the researchers did not look for.
Into this scene strode the UK's Education Endowment Foundation (EEF). From 2013 to 2015, the EEF ran a trial of a programme known as Let's Think Secondary Science (LTSS). This was not exactly the same as the original CASE programme. The Let's Think Forum adapted CASE by reducing the number of lessons from 30 to 19 and by reducing the number of scientific principles to be taught. The EEF study found no effect of the programme on science attainment, and it found that the English and Maths scores of participants were slightly lower than those of the control group. There were the usual implementation problems but, other than that, the trial seems to have been pretty fair.
Let’s Think in English is a variant of cognitive acceleration that has been adapted for use in English lessons. In one lesson, students read a very short story about a woman who is shot crossing a bridge during the American Civil War. Various people share responsibility, not least the woman who crossed the bridge to visit her lover. Students are encouraged to discuss who is most to blame.
The Let’s Think in English website helpfully has a section devoted to evidence of success, and it was the first two entries on this list that drew my attention. The first entry refers to CASE and Let’s Think in Science. These represent completely different domains of learning to English, and there is no reason to suppose that gains will transfer from one subject to the other. Layer in the null results from independent randomised controlled trials and this is not encouraging.
The second entry in the evidence for success list refers to the EEF toolkit and its Metacognition and self-regulation strand. This provides clear evidence as to why this strand of the toolkit is so misleading. When you analyse the studies that the EEF groups together under this ungainly label, they are highly diverse. The one experimental study that has a relatively large impact is an English intervention, but it looks nothing like Let’s Think in English. Instead, in the Improving Writing Quality programme, ‘students are explicitly taught how to plan, draft, edit and revise their writing.’ So if you are looking for a way of improving English then this is the best ‘metacognitive’ bet the EEF can offer. Let’s Think in English is not one of the programmes analysed under this strand and there is little evidence to suggest that it would be a good solution.
The Let’s Think in English website also bizarrely claims that John Hattie’s large effect size for Piagetian Programs somehow provides evidence for their approach, presumably through the loose association of cognitive acceleration and Piaget. This is a clear red herring because the only thing the Hattie evidence demonstrates is that students who perform well on tests designed with Piaget’s ideas in mind also perform well on tests of maths and reading, i.e. clever kids tend to be clever.
The rest of the evidence presented in support of Let’s Think in English is not hugely convincing. Students aren’t randomised and some measures are based on teacher assessment, so expectation effects are likely to play a part. These are the kinds of studies most likely to generate false positive results.
None of this means that, if you have adopted Let’s Think in English and find that the programme works for you and your school, you should abandon it. There are many unknowns in education, and so much of what we do will necessarily be a judgement call. However, anyone who tries to sell you on Let’s Think in English with the claim that it is supported by evidence of effectiveness should be ushered off stage to a chorus of raspberries.