Education Endowment Foundation to throw money at Philosophy for Children

Posted: November 13, 2016
The Education Endowment Foundation (EEF) was set up in 2011 in England with a grant of £125 million from the Department for Education. They have just announced that they intend to spend £1.2 million on a large-scale, 200-school trial of “Philosophy for Children” (P4C) – a programme that replaces one literacy lesson per week with a ‘philosophy’ lesson:
“The programme consists of a one-hour session each week, facilitated by the classroom teacher, in which children discuss an interesting philosophical question. Example questions might be ‘Is it fair to have a winner?’ or ‘Is it ok for children to hit teddy bears?’”
This isn’t the worst idea in the world. Many children probably have too many literacy lessons at the moment; lessons in which they redundantly practise reading comprehension ‘tricks’. However, I would prefer that they were replaced with proper subjects like history or geography that have more of a chance of growing a child’s background knowledge.
Why have the EEF decided to spend such a large sum of public money on such an extraordinary venture? Well, they have already run a smaller trial and they claim:
“The previous EEF efficacy trial showed that children taking part in P4C made an additional two months’ progress compared to pupils receiving ‘business-as-usual’ classroom teaching in reading and maths. The trial was robust, with the results being seen on Key Stage 2 assessments and the trial receiving an evidence strength rating of three padlocks. This project will now test the programme in more schools and over a longer timeframe, providing a more robust estimate of the impact.”
Just think about that for a moment. The study found that a program where children are asked to reflect upon their moral responsibility towards cuddly toys improves not only their reading scores but their maths scores too! What astonishing news! Such far transfer effects are as rare as hens’ teeth in education, with one notable exception that I will return to later.
But perhaps it is a bit early to go reconsidering your position on the health of Elvis or whether flying saucers are about to land in the back garden. This previous trial does not yet seem to have been published in a peer-reviewed journal. A number of bloggers with expertise in the area of randomised controlled trials (RCTs) have taken a look at it and come to a quite different conclusion. For instance, Jim Thornton, a professor of obstetrics and gynaecology who is used to interpreting results from RCTs, calls the trial ‘misleading’ and concludes:
“The triallists pre-specified two primary outcomes but only reported one, which showed no difference. They pre-specified seven secondary outcomes which showed no differences either. However when they altered their analysis plan after seeing the data they noticed that two of the secondary outcomes showed a tiny shift in mean change scores favouring the intervention. The effect size was about 10% of a standard deviation, and less than half the participants had the relevant scores measured, but who cares! Without any tests of statistical significance they declared that it was unlikely to have occurred by chance!”
The point about there being no test of statistical significance is an important one. Stephen Gorard, the leader of the group commissioned to analyse the study, does not believe in them. There is an interesting argument to be had about this but the conventional view is that tests such as these tell us something useful. Surely Gorard’s personal opinions are irrelevant here – if the study is funded with public money then it is a public good and as long as there is a public interest in such a test then it should be reported. Gorard can add a footnote disavowing it if he likes.
Conventionally, if a test is not statistically significant then we assume that we haven’t proven anything. With this study, no such test was reported, so we simply don’t know. Couple this with the retrospective data-mining and it hardly seems like a good basis for spending over a million pounds.
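To give a rough sense of why the missing significance test matters: for two equal groups with scores standardised to unit standard deviation, the two-sample t statistic for a difference of d standard deviations is approximately d·√(n/2). The sketch below uses the roughly 0.10 SD effect size mentioned above, but the sample size of 500 pupils per arm is a purely hypothetical round number, not a figure from the trial.

```python
import math

def t_statistic(effect_size_sd, n_per_group):
    """Two-sample t statistic for equal-sized groups with unit SD:
    t = d / sqrt(1/n + 1/n) = d * sqrt(n / 2)."""
    return effect_size_sd * math.sqrt(n_per_group / 2)

# Hypothetical illustration: a 0.10 SD effect with 500 pupils per arm.
t = t_statistic(0.10, 500)
print(round(t, 2))  # prints 1.58, below the ~1.96 needed for p < 0.05
```

On these illustrative numbers, an effect this small would need getting on for 800 pupils per group with complete outcome data before it cleared the conventional significance threshold; with less than half the participants measured, the claim that the result was “unlikely to have occurred by chance” is doing a lot of unsupported work.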
I think I know why the EEF have found themselves in this situation. When I recently saw Jonathan Sharples, head of the EEF, speak, he was full of the benefits of ‘meta-cognitive strategies’. This is an excessively broad category of teaching interventions aimed at increasing thinking about learning. Reading comprehension tricks fall into this group and such strategies seem ripe for expectation effects (e.g. the placebo effect).
Perhaps this has led to an unconscious bias at EEF headquarters. During Sharples’ presentation he suggested that the results from trials of meta-cognitive strategies were consistently positive with a similar effect size. Yet he seems to have forgotten the recent EEF trial of cognitive acceleration, a meta-cognitive approach to science lessons, which found no effect.
And the history of cognitive acceleration is instructive. The original studies were conducted in the late 80s and early 90s. They caused quite a stir at the time because they demonstrated far transfer effects – it was claimed that a science intervention led to improvements in English Language results several years later.
Does that remind you of anything?
Correction: I asked Stephen Gorard and he said that a peer-reviewed paper had been published although he could not link me to it. I’ve now found this paper. It’s in the Journal of Philosophy of Education and is available on ResearchGate.