You have probably heard of core knowledge. This is based upon E. D. Hirsch’s thesis that, once students have mastered decoding (turning printed letters on the page into imagined sounds), reading comprehension largely depends upon background or general knowledge. Hirsch developed the Core Knowledge Sequence in the U.S. as a way of trying to systematically teach the most widely shared knowledge in our society: the kind of background knowledge writers often assume that intelligent readers will possess. In his most recent book, Hirsch has generalised this idea as the concept of ‘core knowledge’ (lower case), which represents any systematic approach to teaching background knowledge, not just his own scheme. But does it work? Does systematically teaching background knowledge really improve reading comprehension and unlock academic achievement more generally?
A while back, the Education Endowment Foundation (EEF) released the results of a pilot randomised controlled trial (RCT) of “Word and World Reading”. This is a core knowledge approach to teaching history and geography that was developed in the UK by a charity known as “The Curriculum Centre” and that is based upon Hirsch’s own scheme. The study found zero effect.
I had a brief look at this study when it was released and concluded that it didn’t provide much evidence one way or the other. Seventeen primary schools were selected to take part and nine were randomised to receive Word and World Reading, which they delivered in a variety of ways over the course of the year. For instance, “some had geography and history every week, some had geography and history on alternate weeks, while others had geography in one term and then history the next.” Notably, many of the teachers involved in delivering Word and World Reading did not know the content themselves or learn it before teaching it to their classes:
“…where teachers attempted to initiate discussions, their lack of general knowledge and confidence in taking the discussions beyond the text was sometimes apparent. Evaluators noted some factual errors during the observation visits, such as giving pupils wrong information about the topic.”
This is an example of an implementation problem, and implementation problems bedevil large-scale trials of this kind. They are not trivial and could even sink a concept. For instance, if we don’t believe that it is possible at scale to ensure that teachers possess the required knowledge then the programme is a dead duck. Yet, given that this is the sort of knowledge that we are attempting to teach to elementary school students, such a belief would be an indictment of the quality of primary school teachers.
The second major problem is the big difference in characteristics between the students in the Word and World Reading programme and those in the control. We should not expect these groups to be identical – you can’t have both randomisation and completely equivalent sets of student characteristics. You have to choose one or the other, and the benefits of randomisation often outweigh matching the characteristics. However, randomising at the school level, with only seventeen schools, does risk very large variations.
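This imbalance risk is not surprising. As an illustrative sketch only – the range of school-level percentages below is hypothetical and not taken from the study – a quick simulation shows how far two arms can drift apart on a characteristic such as the percentage of pupils with English as an additional language when just seventeen schools are randomised nine versus eight:

```python
import random

random.seed(1)

def simulate_imbalance(n_schools=17, n_treat=9, trials=10_000):
    """Mean absolute difference (in percentage points) between arms on a
    school-level characteristic, under repeated cluster randomisation."""
    diffs = []
    for _ in range(trials):
        # Hypothetical: each school has between 5% and 60% of pupils
        # with the characteristic of interest.
        schools = [random.uniform(5, 60) for _ in range(n_schools)]
        random.shuffle(schools)
        treat, control = schools[:n_treat], schools[n_treat:]
        diffs.append(abs(sum(treat) / len(treat) - sum(control) / len(control)))
    return sum(diffs) / trials

print(round(simulate_imbalance(), 1))  # typically several percentage points
```

With so few clusters, chance differences of this size between arms are routine, which is why pupil-level randomisation or many more schools would be needed to expect well-balanced groups.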
The main measured outcome of the study was a battery of literacy tests known as “Progress in English” (PiE), consisting of a spelling assessment, a grammar assessment and two reading comprehension assessments. One of the comprehension tests was a narrative and the other was not. Based upon Hirsch’s theory, we might expect any effect of the Word and World Reading history and geography programme to show up most on the non-narrative comprehension test. The problem was that 24% of the students in the control group spoke a first language other than English, whereas this rose to a whopping 45% for the students in the schools that were implementing Word and World Reading. This makes the results on a test of English almost impossible to interpret. There also happened to be more students eligible for free school meals in the experimental group and a big difference in ethnic background (77% non-white British versus 45% in the control).
So I forgot about the study because it didn’t seem to say very much.
And then I ran into Jonathan Sharples of the EEF last week at the E4L Evidence Exchange and he mentioned it. So I thought I would take another look and I now realise that there is a much more profound issue that needs to be addressed. It hinges on this little section from the report:
“A bespoke test designed by the Curriculum Centre was suggested in the protocol, but this was not included in the analysis here because the test was deemed invalid by the evaluators as the content of the test covered materials that were taught explicitly to intervention children.”
Bespoke tests are generally avoided in favour of standardised tests in these kinds of trials, and for good reason. But this hints at something more. Essentially, the evaluators excluded the bespoke test because they assumed that Hirsch’s thesis is right.
Imagine that you teach a core knowledge course for just one year and cover Meso-American cultures and rivers. You then give students a standardised reading test where the comprehension text is an article about vitamins. Should you expect a boost from core knowledge? No. Hirsch’s whole point is that, beyond decoding text, reading comprehension skills are not transferable. That’s why he proposes the long and laborious process of building knowledge from first grade upwards. At some point, vitamins might appear, at another we might look at rivers. Knowledge gained mainly orally in first grade might then aid in understanding a Grade 4 reading comprehension exercise.
This is partly due to the fact that oral comprehension outstrips reading comprehension for much of the early grades. Reading tests at these levels are therefore heavily influenced by decoding whereas it’s from Grade 4 upwards that background knowledge comes to the fore. If you want to test core knowledge on standardised reading tests then you would need a very long-term RCT that tracked students over the first four or five years whilst they were cycled through a much more substantial core knowledge curriculum that also covered science and literature. This would be very hard to do, expensive and likely to suffer from high drop-out as students move schools.
Am I arguing that core knowledge is therefore untestable? Perhaps testing it in this way is impractical. If so, should we give up on the idea?
Let’s go back to the researchers’ argument. Why is it unfair to use a reading comprehension text set in the context of the topics that the Word and World Reading students have studied? The assumption is that such a test would advantage the Word and World Reading students because their better background knowledge will allow them to better comprehend the text. Yet this assumption is basically Hirsch’s argument. So we all sort of agree already.
And this assumption is probably right. A new study of reading comprehension tests that’s just been released found that, “better vocabulary and background knowledge were the most important reader characteristics in accounting for reading comprehension.” If the claim was that something nebulous like ‘resilience’ was the most important characteristic then we may well question whether it is something that we can teach. But we know that we can teach knowledge because we have been successfully doing this for some time. What we don’t know is what type of knowledge building approach is optimal.
So I think this is one instance where we already know part of the answer. We know that better background knowledge causes better reading comprehension, and we know that schools can teach at least some of this knowledge. It would be great if better-designed RCTs could add something to this but, given the practical issues, we might look to correlational studies or horse-race designs instead: designs that pit one type of curriculum sequence against another. We can then tease out the more effective from the less effective and set about solving some of the implementation problems that this study has brought to light.