This week, the education ministers representing Australia’s states and territories met with the federal education minister, Dan Tehan, and agreed a plan for improvement. This comes on the back of disappointing PISA results for Australia and so politicians are feeling the pressure to do something. That doesn’t necessarily mean, however, that they will do something useful or effective. Some of the reforms they have committed to are to be commended, such as boosting the teaching of phonics in teacher education, but one seems incongruous. If these reforms are a response to poor PISA performance, then where is the evidence that developing a system of ‘learning progressions’ will help boost this performance? I am not aware of any.
One of the odd aspects of this proposal is that it looks similar to an approach abandoned in the UK nearly a decade ago and yet the UK is showing tentative signs of progress on PISA.
When the national curriculum was first introduced in England and Wales in the late 1980s, it came with a set of ‘Attainment Targets’. These were rubrics describing a set of levels (1-8 plus ‘exceptional performance’) against which students were meant to be assessed at the end of each ‘key stage’ i.e. the ends of Years 2, 6 and 9 (equivalent to Years 1, 5 and 8 in Aus). The descriptions initially did not matter much in English, maths and science because there were tests at the end of each key stage and these tests were used to derive the levels. However, over time, the levels grew into a monster. The tests were removed at Year 9 and schools started to assess and report ‘sub-levels’ to try to describe progress, so a student might move from 5c to 5a in the course of a year. The validity of these assessments was highly dubious. Then the government introduced something called ‘Assessing Pupil Progress’, or APP, which doubled down on this idea. Now teachers had A3 sheets of descriptors to fill in with evidence showing when each student had demonstrated each part of every level. It was bureaucratic madness and not in any way valid. Eventually, the whole system of levels was abandoned following the election of the Conservative-Liberal Democrat coalition government in 2010.
There are two main reasons for the lack of validity of level descriptions such as the ones developed in England and the ones currently used in New South Wales. In turn, this lack of validity leads to a paradoxical inequity.
Firstly, levels are rarely derived from actual data about the kind of progress real students make, and even when they are, they end up averaging across students. This means, for instance, that if ‘decimals’ is a topic at Level 6 and we have a student who is currently working to attain Level 5, then we might choose not to teach decimals to this child. However, this particular individual may be perfectly capable of understanding decimals. We cannot know. As a teacher, it never ceases to amaze me just how bad our predictions are of what students can and cannot cope with. That is why a key component of effective teaching is to constantly check by asking questions as you teach. If some students in a year group are learning about decimals and some are not, it is impossible for the latter group ever to catch up with the former. I therefore believe firmly in having a curriculum based on the year the student is in and then intervening with those students who struggle to cope with it through a model called ‘response to intervention’*. This levels students up rather than giving them a different diet based on their starting point. Critics of a curriculum based on year level will point out that there is a huge variety of ability within any group of children who just happen to have the same chronological age. To them I ask: what are you going to do about that? Are you going to further entrench it by teaching them different things?
This is the paradox. Advocates of learning progressions think that they are taking a monolithic curriculum and tailoring it to the specific needs of individuals, but they attempt to do this by imposing an external model of progress on each child that may bear little resemblance to the actual trajectory of progress the individual child is capable of making.
Similarly, learning progressions impose a hierarchy where there may be none. Some choices of what to teach next in a sequence are arbitrary and do not represent a further development of something that has gone before. Which leads to the next point.
The second conceptual problem with learning progressions is that a single continuum intended to represent progress through a subject inevitably treats that subject as a skill you gradually develop over time, analogous to going to the gym and developing your biceps and triceps. This can make some sense for very early reading, for instance, but most school-level academic subjects rapidly branch into different, parallel aspects, becoming more complex than a linear model can allow for. Writers of learning progressions often cope with this by making their criteria vague, reducing the validity of any assessment we can make. Professor Royce Sadler, an academic from Queensland, has written extensively on the limitations of using rubrics to assess complex, multi-faceted products such as pieces of writing. Just one of the issues is that rubrics tend to pick arbitrary features from a much wider range of possible factors and then focus on those features. This can lead to teaching to the features, a problem I attempted to summarise a few years ago with this diagram:
Once you set out to write a learning progression, you end up with statements like this: “presents a position and supports it with one or two simply stated arguments.” This is meant to be a description of persuasive writing at level CrT8 of the NSW literacy learning progression. It is entirely possible to imagine applying that statement to the writing of a five-year-old or of a PhD student because context is lacking. Once we decontextualise writing and see it as a skill, we miss the critical issue of what the writing is about.
It is far easier to get students to demonstrate this skill, if that’s what it actually is, by writing about whether they should have to wear school uniform than by writing about whether it was justified to intervene militarily in Afghanistan or the role of the Chorus in Medea. The imposition of this kind of learning progression therefore incentivises teachers to select simple, banal contexts for writing. This is exactly the same problem as the one that afflicts the NAPLAN writing assessment. Because we cannot guarantee that any Australian children have been taught any specific content, NAPLAN assessments ask them to write about whether dogs or cats make better pets, or something like that. This takes us in precisely the opposite direction to a knowledge-rich curriculum. It drives inequity because privileged kids still learn powerful knowledge around the dinner table, on family trips to museums or by going to schools that still teach rich curricula. It is the schools with low-SES profiles, under accountability pressure to improve progress, that end up getting kids to write endlessly about nothing of any consequence.
In fact, injecting complex knowledge back into the curriculum is one plausible way to, over time, improve our PISA standing. And yet Dan Tehan’s plan involves somehow de-cluttering the curriculum and adding in these bureaucratic and invalid learning progressions.
*There are other alternatives to learning progressions that I would also endorse, such as mastery learning, but I won’t explore those here.