There is an unscientific current to education research that you don’t find in related domains such as cognitive science. It is a cultivated incuriousness – an indifference to how the thing that is being tested is actually meant to work. And the odder part of the phenomenon is that intelligent, learned researchers don’t seem to grasp the problem.
There is something perhaps pleasingly reductionist about declaring that you have no idea how arguing about teddy bears is meant to improve reading performance – you are just going to dispassionately run the numbers and see if it does. Yet several problems with this stance immediately become obvious.
On a purely statistical level, you can never prove cause and effect absolutely; you can only run tests that generate some essentially arbitrary level of probability. However, if you told me that you could move a glass of water with your mind, I would need some pretty strong proof – stronger than the standard for statistical tests. Extraordinary claims do indeed require extraordinary evidence.
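The logic of 'extraordinary claims require extraordinary evidence' can be made concrete with Bayes' theorem: the same 'significant' test result shifts belief far less when the prior is low. Here is a minimal sketch – the prior probabilities, power and false-positive rate below are invented for illustration, not taken from any real study:

```python
# Illustrative Bayesian update: how the prior affects the posterior.
# All numbers below are made up purely for illustration.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(claim is true | positive result) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# A 'significant' result from a test with 80% power and a 5% false-positive rate.
power, alpha = 0.8, 0.05

# Plausible claim (prior belief 50%): the evidence is quite persuasive.
print(posterior(0.5, power, alpha))    # ~0.94

# Extraordinary claim (prior belief 0.1%): belief barely moves.
print(posterior(0.001, power, alpha))  # ~0.016
```

The same p < 0.05 result leaves the telekinesis-style claim almost as implausible as before, which is the intuition behind demanding a mechanism before taking a surprising result seriously.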
Back to the teddy bears – unless someone can explain to me how this is meant to work then I am minded to dismiss it as a nonsense.
At a more fundamental level, what kind of scientist really does not care about how something is supposed to work? What’s that about?
Which brings me to another old Education Endowment Foundation study – one that has been rattling around in my Twitter notifications for the past few days. This was meant to be a pilot study for a larger trial, although I am not aware that any larger trial ever took place. Unusually, it tested a curriculum – Word and World Reading. This programme was developed by the now seemingly defunct Curriculum Centre and was based on the Core Knowledge curriculum developed by E D Hirsch and others in the U.S., while incorporating a few additional ideas.
It is therefore worth revisiting Hirsch’s hypothesis. In his view, children are disadvantaged by a knowledge-poor curriculum on a number of levels, but a particular effect relates to reading comprehension. Writers do not spell everything out when they write – they assume a level of background knowledge from their readers. For instance, consider this explanatory paragraph from an item currently on the BBC news website:
“Taiwan is a self-governed democracy and for all practical purposes has acted as an independent nation since 1950, when China’s nationalist government was defeated by communist forces and fled there from the mainland.”
What the authors don’t say is that China is not a democracy, nor do they explain the difference between the nationalist and communist governments, why they were fighting, or even that ‘the mainland’ means China. Moreover, the article uses ‘Taipei’ at one point without pointing out that this signifies Taiwan in much the same way that we might use ‘Washington’ to refer to the United States. Without a decent level of knowledge, some of which is highly specific to this particular context, the article would be baffling. Given that functioning citizens of a democracy need to be able to consult and digest sources of information similar to the BBC website, this is worrying.
Hirsch’s contention is that middle-class students will tend to pick up much of this knowledge through enrichment at home via dinner-table conversations, questions about the news, trips to museums and so on. It is the less advantaged children who miss out, and so it is the job of schools to teach this enabling knowledge – something they don’t systematically do at the moment.
I feel slightly foolish pointing this out, but notice that knowledge of North American trees will not help you read that passage about China and Taiwan. There may be some vocabulary words that a student might learn in the context of North American trees – ‘practical’ perhaps? – but these will only provide limited help. They certainly won’t aid the building of a ‘situation model’ – a mental picture of what is happening – in the same way as specific knowledge relevant to China, Taiwan, democracy and so on. Hirsch has spent a lot of time arguing that general purpose, transferable skills do not exist and if knowledge of North American trees really did help you comprehend passages about Taiwan then this would effectively operate as a transferable skill.
So Hirsch’s process is a slow, cumulative one. First, we need to identify which knowledge is going to be most powerful in helping students unlock the world and then we need to teach it over a sustained period of time.
How can we test if Hirsch is right? I don’t know. Perhaps we would need to do long-term quasi-experimental or regression discontinuity studies – the attrition on a long-term randomised controlled trial involving disadvantaged kids would probably be too great. I’m not sure such research is necessary because the cognitive science is already in. I suppose there could be some wicked effects of implementing such a curriculum, but if we do actually manage to grow students’ general knowledge then that has to be a good thing, right? What’s the worst that could happen?
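For what a regression discontinuity design looks like in practice, here is a minimal sketch using simulated data – the cutoff, effect size, bandwidth and all other numbers are invented for illustration and do not describe any real programme:

```python
# Minimal regression discontinuity sketch on simulated data.
# Every number here is invented; this illustrates the design, nothing more.
import random

random.seed(0)

CUTOFF = 50          # hypothetical eligibility score for a curriculum
TRUE_EFFECT = 5.0    # hypothetical jump in reading score at the cutoff

# Simulate students: outcomes rise smoothly with the running variable,
# plus a discontinuous jump of TRUE_EFFECT for those above the cutoff.
students = []
for _ in range(5000):
    score = random.uniform(0, 100)
    outcome = (0.3 * score
               + (TRUE_EFFECT if score >= CUTOFF else 0)
               + random.gauss(0, 2))
    students.append((score, outcome))

# Crude local estimate: compare mean outcomes just either side of the cutoff.
BANDWIDTH = 2
below = [y for x, y in students if CUTOFF - BANDWIDTH <= x < CUTOFF]
above = [y for x, y in students if CUTOFF <= x < CUTOFF + BANDWIDTH]
estimate = sum(above) / len(above) - sum(below) / len(below)
print(f"estimated effect at cutoff: {estimate:.2f}")
```

The appeal of the design is that students just below and just above the cutoff are near-identical, so the jump in outcomes at the threshold is attributable to the programme – no randomisation, and hence no long-term attrition from a randomised control group, is required.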
I do know how not to test it. That is to teach children knowledge about geography and history, perhaps about the Mesopotamian civilisation or something, and then test their reading comprehension in a different context entirely, because that would be like testing whether knowledge of North American trees helps students comprehend passages about Taiwan.
But that’s what the Education Endowment Foundation did. How was it supposed to work? They don’t say. I suspect nobody really cares. We are back to an incuriousness about how things work.
And I’ve noted before that the researchers essentially already accept Hirsch’s hypothesis. The original trial protocol called for students to be given a bespoke reading test set in the context that students had been learning about. However, the researchers ruled this out because they thought it would give the students who had received the experimental curriculum an advantage over control students:
“A bespoke test designed by the Curriculum Centre was suggested in the protocol, but this was not included in the analysis here because the test was deemed invalid by the evaluators as the content of the test covered materials that were taught explicitly to intervention children.”
So there you are. What a pointless waste of everybody’s time.
I don’t really know why they did this, but I suspect ideology played its part. Transferable skills are part of the dominant groupthink of the corner of education research that accepts quantitative analysis and randomised controlled trials. That’s why researchers keep looking for them and that’s why the Education Endowment Foundation looked for an effect of talking about teddy bears on reading comprehension. But this is not Hirsch’s hypothesis.
And I have also had people patiently explain to me that standardised tests provide a more rigorous measure than some bespoke test cooked up by the programme developers. I really do understand this and it is a fair point, but only if the standardised test can actually provide data relevant to the thing you are trying to test.
It’s like saying, “Yes, we know you have been studying science, but the science assessment we have available is not properly standardised so we are going to assess your performance using a maths assessment instead. Maths and science are kinda related, right? Oh, who cares how it’s supposed to work…”