If I had the chance to buy both Ken Boston and Michael Roberts a book, it would be Dylan Wiliam’s new offering, Creating the Schools Our Children Need. Wiliam’s tome is written with an American audience in mind, but it is highly relevant to the controversy stirred up by the Gonski 2.0 report in Australia. After dealing with a number of ideas for improving educational performance that will not work, Wiliam alights on two that he believes will: instituting a knowledge-rich curriculum and ensuring that teachers continually improve by constantly collecting and analysing evidence of learning.
The Gonski 2.0 review appears to have completely neglected the importance of a knowledge-rich curriculum, instead emphasising development along a series of ‘learning progressions’ that lack context. To the review panel, literacy, numeracy and even critical thinking are cumulative skills that can be mapped onto linear pathways.
This is not a new idea. England used to have the kind of levels that Ken Boston advocates for in today’s Sydney Morning Herald. And this is no coincidence, because Boston was a key figure in the UK’s 2007 reform of its national curriculum which, like Gonski 2.0, involved emphasising a more skills-based approach and downplaying the importance of subject knowledge. Suffice it to say that this reform did not lead to an improvement in the performance of English students on international assessments. The shortcomings in the curriculum were addressed with a sense of urgency after the 2010 election of the Conservative-Liberal Democrat coalition, and the kind of learning progression ‘levels’ that Boston is proposing for the future of Australian education were unceremoniously dumped in England in 2014.
There are a number of reasons for this. Firstly, parents couldn’t really understand them. Secondly, there is simply no linear pathway to expertise in a domain such as writing. Writing is heavily context dependent. If we look, for instance, at the new Australian National Literacy Learning Progressions, we read that at Level CrT8, a student, ‘writes for a range of learning area purposes’, whereas at CrT9 he or she, ‘writes informative texts for a broad range of learning area purposes that describe, explain and document’. This is not a clear, objective way of distinguishing between degrees of performance. Instead, it is highly subjective and arguable. Moreover, any given child may easily demonstrate a higher level in an area that he or she knows a lot about, but struggle to demonstrate a much lower level in an unfamiliar area. Assessment driven by levels of this kind therefore pushes teachers to select easier and more accessible content for students to write about – the opposite of the logic of a knowledge-rich curriculum.
Where the literacy learning progressions are more specific, there is an even worse problem to confront. For instance, if we believe that a student at CrT6, ‘writes simple and compound sentences related to a topic using conjunctions (and, but, so, because, when),’ then all we have to do is train the student to do this a few times and we can tick that level off. A measure that may once have been a good indicator of a particular quality of performance is now a hurdle to be minimally cleared.
The thought of extending this already flawed process to the creation of a ‘learning progression’ for the ‘general capability’ of critical thinking takes this fallacious thinking to a whole new level. As Wiliam explains:
“Critical thinking in mathematics requires learning, in mathematics class, what it means to think critically in math, and this ability does not transfer from one school subject to another. No amount of training students to think critically in history makes them any better at critical thinking in math.”
If we follow Gonski panel member Michael Roberts’s advice and remove most subject content from the early primary curriculum then we will be missing many opportunities to build the kind of vocabulary knowledge and world knowledge that are needed for later reading comprehension, and whose absence more advantaged children can mitigate to some extent at home. Again, the mistake is the recurrent one of seeing domains like literacy and numeracy as cumulative skills that can be developed completely independently of content. What are these students reading and writing about? What are the maths word problems about?
It is almost as if the Gonski panel were entirely unaware of these arguments. But they cannot have been, because they were present in the public submissions.
According to Ken Boston, the panel were strongly influenced by Andreas Schleicher of the OECD, the organisation that runs the Programme for International Student Assessment (PISA), one of the sets of international comparison tests that many countries participate in.
I am sure that Schleicher is a very nice man. However, it is worth pointing out that his proposals are not often supported by his own data. He talks a lot about the woes of ‘memorisation’, but the data on this are patchy at best. More significantly, perhaps, the OECD defines effective teaching as being student-oriented, and yet PISA reveals that the more student-oriented the teaching, the worse the maths scores (a similar finding applies to science and the use of inquiry learning).
The Gonski 2.0 panel members should have taken heed of all of the submissions. They should have reviewed robust evidence and drawn conclusions based upon it; conclusions that included an analysis of costs and benefits. It is a shame for Australia that they did not.
Note: when I first published this piece I confused Michael Roberts’s name with the journalist who wrote the story. That’s fixed now but it’s still reflected in the URL and Twitter and email previews.