The fallacy of learning progressions


The concept of ‘Learning Progressions’ seems to be at the heart of the new Gonski 2.0 proposals for Australian schools because such progressions appear to play an important role in the way that the personalisation envisaged by the Gonski panel will be achieved. We can even view a prototype in the system currently being piloted in New South Wales.

The idea of plotting progressions through different areas of learning reminds me of the UK’s defunct National Curriculum levels and, in particular, a project called Assessing Pupil Progress (APP) which was all the rage in about 2008. Moreover, the idea of trying to map learning progressions for ill-defined ‘general capabilities’ reminds me of the Personal Learning and Thinking Skills (PLTS) introduced in the UK National Curriculum reforms of 2007-2008; reforms that have since been abandoned. This is perhaps no coincidence because Ken Boston was a key figure in both the UK National Curriculum reforms of 2007-2008 and the Gonski 2.0 review.

Attempts to chart individual students’ progress through such levels are deeply flawed, although properly constructed progressions do have some uses.

Made-up

A learning progression is equivalent to a rubric or a set of ‘levels’ that it is thought students will pass through as they gain proficiency in some area of learning.

The first question we must ask concerns the provenance of these progressions. The vast majority of such progressions are essentially made-up. In some cases, schools attempt to construct their own. In other cases, there may be an element of drawing upon the research. The Australian National Literacy and Numeracy Progressions appear to fall into this category as they ‘were developed using evidence-based research in consultation with literacy and numeracy experts and practising teachers’. The risk in such a system is that we impose our views of how we think, or would like, learning to progress, rather than actually mapping a genuine pathway.

An alternative method would be to derive these progressions empirically. For instance, if we wished to develop a writing progression, we could look at lots of samples of writing that respond to the same prompt and ask English teachers to rank them, perhaps using comparative judgement software. Such a ranking would rely on a shared concept of quality that the experts possess – with the software we can even remove the judgements of teachers who make idiosyncratic selections. Once we have our ranking, we can interrogate the samples to see how they differ and derive a progression from that. This still runs the risk of these criteria becoming targets and so it raises the question of what we would use them for.
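To make the comparative-judgement idea concrete, here is a minimal sketch of how pairwise ‘which sample is better?’ decisions can be turned into a ranking. The writing samples, the judgement counts and the fitting routine below are all hypothetical illustrations; real comparative-judgement software uses more sophisticated models, but a simple Bradley–Terry fit captures the core idea.

```python
def bradley_terry(wins, n_iters=100):
    """Estimate a quality score per sample from pairwise judgements.

    wins[(a, b)] = number of times sample `a` was judged better than `b`.
    Uses the classic minorisation-maximisation update for the
    Bradley-Terry model: p_i = W_i / sum_j n_ij / (p_i + p_j).
    """
    samples = {s for pair in wins for s in pair}
    score = {s: 1.0 for s in samples}
    for _ in range(n_iters):
        new = {}
        for i in samples:
            # Total wins for sample i across all its comparisons.
            w_i = sum(w for (a, _b), w in wins.items() if a == i)
            denom = 0.0
            for j in samples:
                if j == i:
                    continue
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                if n_ij:
                    denom += n_ij / (score[i] + score[j])
            new[i] = w_i / denom if denom else score[i]
        # Normalise so scores stay on a stable scale between iterations.
        total = sum(new.values())
        score = {s: v * len(new) / total for s, v in new.items()}
    return score

# Hypothetical judgement counts on four writing samples A-D:
judgements = {("A", "B"): 3, ("B", "A"): 1,
              ("A", "C"): 4, ("C", "D"): 3,
              ("B", "C"): 2, ("D", "B"): 1, ("B", "D"): 2}
scores = bradley_terry(judgements)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

The point of the model is that no teacher ever applies a rubric: each judgement is a holistic comparison, and the ranking emerges from the pattern of wins, which is also what makes idiosyncratic judges detectable as poor fits to that pattern.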

Jumping forward and looping back

One significant flaw in all learning progressions is that they impose a linear pathway on learning when learning is rarely linear. A student may be able to do something today that he or she cannot do in two weeks’ time. Nevertheless, if you reteach the concept then the same student may pick it up very quickly the second time around. At what stage do we tick off the relevant box on the learning progression?

A linear pathway also assumes that all learning is cumulative and hierarchical and that’s not true either. Some learning objectives are simply new. Others mix the new with the old. If we teach a student about trigonometric ratios for the first time then how do we quantify this in terms of progress? The names, definitions and relationships are entirely new but they depend on some understanding of fractions and ratios. Nevertheless, a student may be able to solve problems using these ratios even if she doesn’t understand how they work.

That last point is an important example. Many Western mathematics teachers assume that understanding must come before procedural knowledge and will design their progressions accordingly. East Asian teachers, while still prioritising understanding, are more relaxed about the order. So who has this right?

Moreover, how do you show understanding? For many Western teachers this may be through discussion or the use of some invented strategy, but what of the talented maths student who struggles with articulation, perhaps because English is not his first language? What of the student who uses conventional strategies competently and efficiently? Are they somewhere further back along the progression because we deem them not to understand?

Even empirically derived progressions represent a kind of average route through a learning area. There is good reason to speculate that, at the individual level, pathways vary greatly.

Context is crucial

Another major problem for learning progressions is that they often neglect context in cases where context is a far greater factor than any difference between adjacent levels on a learning progression. For instance, at CrT6 on the National Literacy Progression students write, ‘four or more sequenced and clearly connected ideas’ whereas at CrT7 they are able to, ‘support ideas with some detail and elaboration’. Imagine a child writing about a family fishing trip versus a child writing about the role of The Senate in Australian government. It is far easier to support ideas about the fishing trip with detail and elaboration than it is to write four or more sequenced and clearly connected ideas about The Senate. This is not surprising because writing is essentially a record of thinking and so writing about ideas that are harder to think about will be more challenging than writing about easier ideas.

Any teacher who wants to demonstrate that children in his or her school are making a maximum rate of progress along the writing learning progression would therefore be well advised to choose really simple ideas for children to write about. We have an incentive to dumb-down the curriculum. And that’s why, without a shared understanding of the curriculum that students will be exposed to, learning progressions are meaningless. Gonski 2.0 even militates against high quality curriculum by misguidedly seeking to decouple it from particular years of schooling, focusing instead on chasing decontextualised learning progressions and therefore making it even harder to ensure that students are exposed to important ideas.

Can we measure the progress of an individual child?

Becky Allen, a professor of education from the UK, has recently written a blog post casting doubt on whether it is even possible to measure the progress of individual students from performances on high quality, standardised tests. Whether you are ideologically inclined towards standardised tests or not, there is no doubt that the level of psychometric sophistication that goes into their construction far exceeds that of any quick-and-dirty classroom assessment. So it is hard to see that learning progressions could ever assess anything other than the illusion of progress.

As a tool for planning rather than a tool for assessment

To summarise, the main problems with learning progressions are that they don’t describe real learning pathways, that they neglect the critical role of context, that they can distort teaching and that they cannot really be used to measure progress. Does this mean they are completely useless? I don’t think so.

Instead of trying to measure progress, we can use them to try to predict and preempt progress. The kind of progression I am referring to would need to be empirical, so that it has some basis in reality, and contextualised. For example, imagine we are planning this year’s Romans unit in which students are to write an expository text on the rise of The Roman Empire, and imagine that we have last year’s responses to hand. We could rank last year’s responses using comparative judgement and use these, alongside other sources of information, to derive criteria to work on this year. The progression would help form a prediction, in advance of teaching, as to what the range of students will be able to do with the task; a starting point rather than an end point. In enacting this plan, we could then use formative assessment to vary teaching to the needs of the students; the jumping and looping.

Such a use of a learning progression avoids the trap of not describing real learning because it is empirically based and open to adjustment. It avoids the trap of decontextualisation because it is derived from one specific context. It is also temporary and a part of our planning rather than our assessment, meaning that there is less danger that demonstrating the features of the progression becomes the sole target of our teaching.


3 thoughts on “The fallacy of learning progressions”

  1. It never ceases to amaze me that educators persist in measuring learning through pupils’ writing. Highly-talented people in all walks of life may be utterly incapable of writing a coherent sentence explaining what they do, let alone an essay–we don’t give a damn whether our roofer can explain in writing how he will stop water coming through our ceiling. Although the ability to write well is a valuable skill, it’s also relatively rare. Because writing reflects speech, people with articulate parents have a huge advantage over those whose parents seldom utter a well-formed sentence.

    This is not to say that schools can’t improve children’s writing. For a start, the ease with which they can transcribe words on paper (or electronically) is critical–to the extent that it is effortful, it detracts from the attention available for spelling, grammar and punctuation, let alone trying to construct a meaningful discourse. Yet at the same time, very few primary schools teach even the most basic handwriting skills, such as how to hold a pencil or the correct sequence of strokes needed to form letters and numerals. One of the most valuable courses I ever took was touch-typing, which was (and probably still is) standard practice for most pupils in American schools; now, I merely have to think about what I want to say, and the words magically appear on my screen. The cognitive load is as near zero as possible. Another area where teachers can make a huge difference is spelling. Sadly, they seldom have any ideas beyond sending children home with a list of words and teaching the look, cover, write and check method.

    Now that schools in England have abolished levels, they’ve left something of a vacuum: hence, the old levels haven’t really died. I think it’s about time that advocates of a knowledge-rich curriculum stopped talking about wheezes like comparative judgment and started routine testing of knowledge with short answers and MCQs. This, at least, is relatively unproblematic. This kind of testing also has huge advantages in terms of motivating all pupils and securing learning in long-term memory.

  2. Hmm there’s a lot here and I don’t know if I agree with the general thrust.
    No, learning is not purely linear but I don’t know if people advocating learning progressions think it is any more linear than people more in favour of age-based standards or a knowledge-based curriculum. Time is linear so it helps to think of things in a linear way even if we know it is more of a web than a line and there is a need to consider other factors such as interleaving, retrieval, etc. One ‘line’ may be less effective than another but that would be a reason for research that we could all learn from.
    I take your point about dumbing down to reach a progression (already happens with the achievement standards) but I feel that is more an issue for what the progressions actually say and how schools implement it rather than the idea of progressions per se.
    And yes some students may be better in some topics rather than others but I suspect they would recognise this as well by having multiple, rather discrete, progressions. I don’t think anyone would think that this student can’t begin trig at all because they keep making mistakes in ratios.
    I did read the blog on progress and I left that questioning the point of that too. Of course there is error in assessment and, as there are 2 assessments, value-add models of individual students have potentially twice as much error (which is why they are mostly used for comparing groups instead) but this is only a problem if you are reporting on an individual with very fine measurements compared with what I suspect would be the case: 3 or 5 bands.

    1. “I suspect they would recognise this as well by having multiple, rather discrete, progressions.”

      I agree. The more you try and impose these progressions, the more progressions you have to create as you butt up against reality. This is why systems such as APP in the UK and ALAN in NSW became/are so complicated and bureaucratic. In NSW, news reports are suggesting that schools are bringing in CRT teachers to cover regular teachers while they input the data; data that cannot possibly be the result of a serious analysis of assessment evidence.

      “I feel that is more an issue for what the progressions actually say and how schools implement it rather than the idea of progressions per se”

      I disagree with this. I think it is a problem inherent in *all* such sets of levels. Royce Sadler is good on this. He gives a moderate estimate of at least 50 criteria on which a piece of writing can be judged. You cannot list all 50+ exhaustively and so you either end up writing vague statements that can apply to anything (at a conference, Sadler produced a ‘university’ writing rubric and asked us to study it before revealing that it was for Year 8 – it seemed completely plausible for undergraduate writing) or you end up sampling from the criteria, in which case, criteria that were once a good proxy for expert performance now become a target and therefore cease to be a good proxy for expert performance.
