The concept of ‘Learning Progressions’ seems to be at the heart of the new Gonski 2.0 proposals for Australian schools because they appear to play an important role in the way that the personalisation envisaged by the Gonski panel will be achieved. We can even view a prototype in the system currently being piloted in New South Wales.
The idea of plotting progressions through different areas of learning reminds me of the UK’s defunct National Curriculum levels and, in particular, a project called Assessing Pupil Progress (APP) which was all the rage in about 2008. Moreover, the idea of trying to map learning progressions for ill-defined ‘general capabilities’ reminds me of the Personal Learning and Thinking Skills (PLTS) introduced in the UK National Curriculum reforms of 2007-2008; reforms that have since been abandoned. This is perhaps no coincidence because Ken Boston was a key figure in both the UK National Curriculum reforms of 2007-2008 and the Gonski 2.0 review.
Attempts to chart individual students' progress through such levels are deeply flawed, although, if properly constructed, progressions do have some uses.
A learning progression is equivalent to a rubric or a set of ‘levels’ that it is thought students will pass through as they gain proficiency in some area of learning.
The first question we must ask concerns the provenance of these progressions. The vast majority of such progressions are essentially made up. In some cases, schools attempt to construct their own. In other cases, there may be an element of drawing upon the research. The Australian National Literacy and Numeracy Progressions appear to fall into this category as they ‘were developed using evidence-based research in consultation with literacy and numeracy experts and practising teachers’. The risk in such a system is that we impose our views of how we think or would like learning to progress, rather than actually mapping a genuine pathway.
An alternative method would be to derive these progressions empirically. For instance, if we wished to develop a writing progression, we could look at lots of samples of writing that respond to the same prompt and ask English teachers to rank them, perhaps using comparative judgement software. Such a ranking would rely on a shared concept of quality that the experts possess – with the software we can even remove the judgements of teachers who make idiosyncratic selections. Once we have our ranking, we can interrogate the samples to see how they differ and derive a progression from that. This still runs the risk of these criteria becoming targets and so it raises the question of what we would use them for.
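To make the comparative judgement idea concrete, here is a minimal sketch of how pairwise ‘which piece of writing is better?’ decisions can be turned into a ranking. It fits a simple Bradley–Terry model, which is the statistical model commonly underlying comparative judgement tools; the sample labels and function name are my own invention, not any particular product's API.

```python
from collections import defaultdict

def bradley_terry(items, judgements, iterations=200):
    """Estimate a relative quality score for each writing sample from
    pairwise judgements, using the Bradley-Terry model fitted by a
    simple minorisation-maximisation (MM) iteration.

    items: list of sample identifiers
    judgements: list of (winner, loser) pairs chosen by teachers
    """
    wins = defaultdict(int)       # comparisons won by each sample
    pairs = defaultdict(int)      # comparisons between each unordered pair
    for winner, loser in judgements:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1

    score = {i: 1.0 for i in items}   # start every sample at equal quality
    for _ in range(iterations):
        new = {}
        for i in items:
            # standard MM update: wins divided by expected comparison load
            denom = sum(pairs[frozenset((i, j))] / (score[i] + score[j])
                        for j in items if j != i)
            new[i] = wins[i] / denom if denom else score[i]
        total = sum(new.values())      # normalise so the mean score is 1
        score = {i: v * len(items) / total for i, v in new.items()}
    return score
```

Real comparative judgement software also estimates how consistent each judge is with the consensus, which is what allows idiosyncratic judges to be identified and, if necessary, excluded before the final ranking is produced.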
Jumping forward and looping back
One significant flaw in all learning progressions is that they impose a linear pathway on learning when learning is rarely linear. A student may be able to do something today that he or she cannot do in two weeks’ time. Nevertheless, if you reteach the concept then the same student may pick it up very quickly the second time around. At what stage do we tick off the relevant box on the learning progression?
A linear pathway also assumes that all learning is cumulative and hierarchical and that’s not true either. Some learning objectives are simply new. Others mix the new with the old. If we teach a student about trigonometric ratios for the first time then how do we quantify this in terms of progress? The names, definitions and relationships are entirely new but they depend on some understanding of fractions and ratios. Nevertheless, a student may be able to solve problems using these ratios even if she doesn’t understand how they work.
That last point is an important example. Many Western mathematics teachers assume that understanding must come before procedural knowledge and will design their progressions accordingly. East Asian teachers, while still prioritising understanding, are more relaxed about the order. So who has this right?
Moreover, how do you show understanding? For many Western teachers this may be through discussion or the use of some invented strategy, but what of the talented maths student who struggles with articulation, perhaps because English is not his first language? What of the student who uses conventional strategies competently and efficiently? Are they somewhere further back along the progression because we deem them not to understand?
Even empirically derived progressions represent a kind of average route through a learning area. There is good reason to speculate that, at the individual level, pathways vary greatly.
Context is crucial
Another major problem for learning progressions is that they often neglect context in cases where context is a far greater factor than any difference between adjacent levels on a learning progression. For instance, at CrT6 on the National Literacy Progression students write, ‘four or more sequenced and clearly connected ideas’ whereas at CrT7 they are able to, ‘support ideas with some detail and elaboration’. Imagine a child writing about a family fishing trip versus a child writing about the role of The Senate in Australian government. It is far easier to support ideas about the fishing trip with detail and elaboration than it is to write four or more sequenced and clearly connected ideas about The Senate. This is not surprising because writing is essentially a record of thinking and so writing about ideas that are harder to think about will be more challenging than writing about easier ideas.
Any teacher who wants to demonstrate that children in his or her school are making a maximum rate of progress along the writing learning progression would therefore be well advised to choose really simple ideas for children to write about. We have an incentive to dumb down the curriculum. And that’s why, without a shared understanding of the curriculum that students will be exposed to, learning progressions are meaningless. Gonski 2.0 even militates against high quality curriculum by misguidedly seeking to decouple it from particular years of schooling, focusing instead on chasing decontextualised learning progressions and therefore making it even harder to ensure that students are exposed to important ideas.
Can we measure the progress of an individual child?
Becky Allen, a professor of education from the UK, has recently written a blog post casting doubt on whether it is even possible to measure the progress of individual students from performances on high quality, standardised tests. Whether you are ideologically inclined towards standardised tests or not, there is no doubt that the level of psychometric sophistication that goes into their construction far exceeds that of any quick-and-dirty classroom assessment. So it is hard to see that learning progressions could ever assess anything other than the illusion of progress.
As a tool for planning rather than a tool for assessment
To summarise, the main problems with learning progressions are that they don’t describe real learning pathways, that they neglect the critical role of context, that they can distort teaching and that they cannot really be used to measure progress. Does this mean they are completely useless? I don’t think so.
Instead of trying to measure progress, we can use them to try to predict and preempt progress. The kind of progression I am referring to would need to be empirical, so that it has some basis in reality, and contextualised. For example, imagine we are planning this year’s Romans unit in which students are to write an expository text on the rise of The Roman Empire, and imagine that we have last year’s responses to hand. We could rank last year’s responses using comparative judgement and use these, alongside other sources of information, to derive criteria to work on this year. The progression would help form a prediction, in advance of teaching, as to what the range of students will be able to do with the task; a starting point rather than an end point. In enacting this plan, we could then use formative assessment to vary teaching to the needs of the students; the jumping and looping.
Such a use of a learning progression avoids the trap of not describing real learning because it is empirically based and open to adjustment. It avoids the trap of decontextualisation because it is derived from one specific context. It is also temporary and a part of our planning rather than our assessment, meaning that there is less danger that demonstrating the features of the progression becomes the sole target of our teaching.