Learning progressions are invalid and inequitable


This week, the education ministers representing Australia’s states and territories met with the federal education minister, Dan Tehan, and agreed a plan for improvement. This comes on the back of disappointing PISA results for Australia and so politicians are feeling the pressure to do something. That doesn’t necessarily mean, however, that they will do something useful or effective. Some of the reforms they have committed to are to be commended, such as boosting the teaching of phonics in teacher education, but one seems incongruous. If these reforms are a response to poor PISA performance, then where is the evidence that developing a system of ‘learning progressions’ will help boost this performance? I am not aware of any.

One of the odd aspects of this proposal is that it looks similar to an approach abandoned in the UK nearly a decade ago and yet the UK is showing tentative signs of progress on PISA.

When the national curriculum was first introduced to England and Wales in the late 1980s, it came with a set of ‘Attainment Targets’. These were rubrics describing a set of levels (1–8 plus ‘exceptional performance’) against which students were meant to be assessed at the end of each ‘key stage’, i.e. the ends of Years 2, 6 and 9 (equivalent to Years 1, 5 and 8 in Australia). The descriptions initially did not matter much in English, maths and science because there were tests at the end of each key stage and these tests were used to derive the levels. However, over time, they grew into a monster. The tests were removed at Year 9 and schools started to assess and report ‘sub-levels’ to try to describe progress, so a student might move from 5c to 5a in the course of a year. The validity of these assessments was highly dubious. Then the government introduced something called ‘Assessing Pupil Progress’ or APP, which doubled down on this idea. Now teachers had A3 sheets of descriptors to fill in with evidence showing when each student had demonstrated each part of every level. It was bureaucratic madness and not in any way valid. Eventually, the whole system of levels was abandoned with the election of the Conservative–Liberal Democrat coalition government in 2010.

There are two main reasons for the lack of validity of level descriptions such as the ones developed in England and the ones currently used in New South Wales. In turn, this lack of validity leads to a paradoxical inequity.

Firstly, levels are rarely derived from actual data on the kind of progress real students make. Even when they are, they end up averaging student progress. This means, for instance, that if ‘decimals’ is a topic at Level 6 and we have a student who is currently working to attain Level 5, we might choose not to teach decimals to this child. However, this particular individual may be perfectly capable of understanding decimals. We cannot know. As a teacher, it never ceases to amaze me just how bad our predictions are of what students can and cannot cope with. That is why a key component of effective teaching is to constantly check by asking questions as you teach. If some students in a year group are learning about decimals and some are not, it is impossible for the latter group to ever catch up with the former. I therefore believe firmly in having a curriculum based on the year the student is in and then intervening with those students who struggle to cope with it through a model called ‘response to intervention’*. This levels up the students rather than giving them a different diet based on their starting point. Critics of having a curriculum based on year level will point out that there is a huge variety of ability within any group of children who just happen to have the same chronological age. To them I ask: well, what are you going to do about that? Are you going to further entrench it by teaching them different things?

This is the paradox. Advocates of learning progressions think that they are taking a monolithic curriculum and tailoring it to the specific needs of individuals, but they attempt to do this by imposing an external model of progress on each child that may bear little resemblance to the actual trajectory of progress the individual child is capable of making.

Similarly, learning progressions impose a hierarchy where there may be none. Some choices of what to teach next in a sequence are arbitrary and do not represent a further development of something that has gone before. This leads to the next point.

The second conceptual problem with learning progressions is that a single continuum intended to represent progress through a subject inevitably sees that subject as a skill that you gradually develop over time, analogous to going to the gym and developing your biceps and triceps. This can make some sense for very early reading, for instance, but most school-level academic subjects rapidly branch into different, parallel aspects, becoming more complex than a linear model can allow for. Writers of learning progressions often cope with this by making their criteria vague, reducing the validity of any assessment we can make. Professor Royce Sadler, an academic from Queensland, has written extensively on the limitations of using rubrics to assess complex, multi-faceted products such as pieces of writing. Just one of the issues is that rubrics tend to pick arbitrary features from a much wider range of possible factors and then focus on those features. This can lead to teaching to the features, a problem I attempted to summarise a few years ago in a diagram.

Once you set out to write a learning progression, you end up with statements like this: “presents a position and supports it with one or two simply stated arguments.” This is meant to be a description of persuasive writing at level CrT8 of the NSW literacy learning progression. It is entirely possible to imagine applying that statement to the writing of a five-year-old or of a PhD student because context is lacking. Once we decontextualise writing and see it as a skill, we miss the critical issue of what the writing is about.

It is far easier to get students to demonstrate this skill, if that’s what it actually is, by writing about whether they should have to wear school uniform than by writing about whether it was justified to intervene militarily in Afghanistan or the role of the Chorus in Medea. The imposition of this kind of learning progression therefore incentivises teachers to select simple, banal contexts for writing. This is exactly the same problem as the one that afflicts the NAPLAN writing assessment. Because we cannot guarantee that Australian children have been taught any specific content, NAPLAN assessments ask them to write about whether dogs or cats make better pets, or something like that. This takes us in precisely the opposite direction to that of a knowledge-rich curriculum. It drives inequity because privileged kids still learn powerful knowledge around the dinner table, on family trips to museums or through going to schools that still teach rich curricula. It is the schools with low-SES profiles, under accountability pressure to improve progress, that end up getting kids to write endlessly about nothing of any consequence.

In fact, injecting complex knowledge back into the curriculum is one plausible way to, over time, improve our PISA standing. And yet Dan Tehan’s plan involves somehow de-cluttering the curriculum and adding in these bureaucratic and invalid learning progressions.


*There are alternatives to learning progressions other than response to intervention that I would also endorse, such as mastery learning, but I won’t explore them here.

6 thoughts on “Learning progressions are invalid and inequitable”

  1. A considerable number of fallacies and misunderstandings in this post.

    1. A ‘straw man’ argument has been used. The author isn’t talking about developmental rubrics written according to best-practice guidelines, which are found here: https://reliablerubrics.com/2015/02/09/rules-for-writing-quality-criteria/

    2. The author assumes a lot of other actions must go hand in hand with rubric use. Skill-based rubrics are a great way to improve skills. Rubric use does not imply ignoring content knowledge.

    3. Overinterpretation of the PISA data. You cannot assume that because one small pedagogical technique (rubric use) was dropped, this led to an improvement in PISA results in the UK. This is a major flaw in between-country PISA comparisons: there are simply too many variables between countries for comparisons to be of much use.

    4. Regarding the diagram, yes of course rubrics sample from the total number of skills on display in any performance. This is hardly new. Students are not capable of receiving feedback on every tiny skill they might employ in performing a task. Assessment samples from the range of skills needed. This is true of all assessment practices.

    5. I am not sure the author grasps the difference between a theoretical progression (a rubric teachers make up based on their knowledge) and an empirical progression (that same rubric refined using student data). The progressions that the LPOFAI project would undertake would of course be empirically validated. We are even able to empirically validate our own school-based progressions just using student data and some spreadsheets. It is not hard; a sketch of the kind of check involved follows at the end of this list.

    6. Does the author understand the difference between current achievement and progress? “If some students in a year group are learning about decimals and some are not, it is impossible for the latter group to ever catch up with the former.” This is simply false. Students might progress at different speeds.

    7. Trying to claim that rubric use is unethical is demonstrably false. Using rubrics to diagnose students’ current ability allows teachers to teach students at their point of readiness. Otherwise, the vast majority just teach ‘to the middle’ and provide a little extra support for those at the bottom, while those needing extension… well, ‘they’ll be right’. Since using rubrics written according to ARC guidelines we have found a huge improvement in the abilities of our bottom and top students as they can be given learning activities at their level. Rubric use is in fact much MORE equitable.

    8. Whether some people use simpler content knowledge to develop skill is, again, not a problem that should be laid at the door of rubric use.

    9. I find it interesting that the R Sadler article does not reference any of the work from Patrick Griffin or his team of researchers at Melbourne University. Is he aware of their work?
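
    To illustrate the kind of check point 5 refers to, here is a minimal sketch in Python rather than a spreadsheet. The skill names and student data are invented, and this is my own toy illustration of an ordering check, not the ARC’s actual procedure:

    # A minimal sketch: checking whether a hypothesised ordering of skills
    # is consistent with student data. Each row is a student; each entry
    # records whether that skill has been demonstrated (1) or not (0).
    # Skill names and data are invented for illustration.

    hypothesised_order = ["skill_A", "skill_B", "skill_C", "skill_D"]

    students = [
        {"skill_A": 1, "skill_B": 1, "skill_C": 0, "skill_D": 0},
        {"skill_A": 1, "skill_B": 1, "skill_C": 1, "skill_D": 0},
        {"skill_A": 1, "skill_B": 0, "skill_C": 0, "skill_D": 0},
        {"skill_A": 1, "skill_B": 1, "skill_C": 1, "skill_D": 1},
    ]

    # 1. Difficulty check: skills later in the progression should be
    #    demonstrated by a smaller proportion of students.
    proportions = {
        skill: sum(s[skill] for s in students) / len(students)
        for skill in hypothesised_order
    }
    order_holds = all(
        proportions[a] >= proportions[b]
        for a, b in zip(hypothesised_order, hypothesised_order[1:])
    )

    # 2. Guttman-style check: a student demonstrating a later skill should
    #    also demonstrate every earlier one; count the violations.
    violations = sum(
        1
        for s in students
        for i, later in enumerate(hypothesised_order)
        for earlier in hypothesised_order[:i]
        if s[later] == 1 and s[earlier] == 0
    )

    print(proportions)   # {'skill_A': 1.0, 'skill_B': 0.75, 'skill_C': 0.5, 'skill_D': 0.25}
    print(order_holds)   # True: the data respects the hypothesised order
    print(violations)    # 0: no student shows a later skill while missing an earlier one

    A real version would read the spreadsheet export and tolerate some noise, but the principle is the same: the data either supports the hypothesised ordering or it does not.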

    To conclude, the author commits a common mistake in critiquing teaching strategies:

    (a) doesn’t fully understand the thing he is criticising
    (b) criticises the strategy because it has previously (and perhaps currently) been used incorrectly
    (c) reasons that because it doesn’t cure all ills, it must be faulty (rubric use should be seen as one small piece of the puzzle)

    1. Interesting. I like the way you have reversed the burden of proof here. My point about PISA was not that there is evidence that rubrics do not work but that I am not aware of any evidence from PISA that they do work. I am not aware, for example, of education systems that implemented rubrics and then improved on PISA. This is relevant to the discussion in Australia because ‘learning progressions’ are being advanced as a way of improving PISA scores. A few points:

      1. I am going to ignore the obvious ad hominem about not understanding what I am criticising. People can read our comments and judge for themselves as to who has the better understanding.

      2. “Whether some people use simpler content knowledge to develop skill is, again, not a problem that should be laid at the door of rubric use.” Why not? It’s a major issue, in my experience, and one that is intrinsic to simplistically viewing something as complex as writing as a linearly developing skill.

      3. I have no idea if Sadler has read Griffin. If this causes him to be in error, point out what that error is. If not, who cares?

      4. If students at the less advanced end of a progression move at a different rate to those at the more advanced end, and that rate is slower, then, yes, I stand by my claim that they will never catch up. That’s simple logic (a short worked example follows at the end of this list). I see nothing in rubrics and learning progressions that would cause students at the less advanced end to move more quickly. Response to intervention, on the other hand, addresses this issue.

      5. Yes, I do understand the difference between theoretical and empirically derived progressions. I wrote about that. And I explained that even empirically derived rubrics impose an averaged model of progress on an individual. And I explained why that was a problem.

      6. You have accepted that rubrics sample from a domain but you do not think this is a problem because all assessments sample from a domain. However, other forms of assessment can constantly change the sample. Rubrics cannot and so this leads to teaching to the features highlighted in the rubric.
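
      To spell out the logic of point 4 with a minimal worked example (my notation; purely illustrative): suppose the less advanced student starts a gap g_0 > 0 behind and progresses at rate r_1, while the more advanced student progresses at rate r_2. The gap after time t is

      g(t) = g_0 + (r_2 - r_1) t

      If r_1 <= r_2, then g(t) >= g_0 > 0 for every t, so the gap never closes. Catching up requires r_1 > r_2, which is exactly the faster progress for strugglers that response to intervention is designed to produce.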

      Returning to my original point, it really is for proponents of rubrics to supply positive evidence of effectiveness. If all I have done is merely point out examples of where they have been used incorrectly, proponents should be able to point to examples of where they have been used correctly and the positive effect they have had. Otherwise, they may not be effective or, even if they are effective, they may be extremely difficult to implement correctly.

  2. The author didn’t respond to the two main criticisms:

    1. The rubric use that is being discussed is not the one developed by the University of Melbourne’s Assessment Research Centre and outlined here:

    https://reliablerubrics.com/2015/02/09/rules-for-writing-quality-criteria/

    This is not an ad hominem argument; it is the straw-man fallacy. The author has chosen a weak version of rubric use and criticised that.

    A perfect illustration is this statement “presents a position and supports it with one or two simply stated arguments.” The word ‘simply’ here is hugely ambiguous and renders the entire statement meaningless. This is not what I am referring to when I talk about rubrics.

    2. Blaming the assessment system for bad pedagogy is misplaced; the pedagogy, not the assessment, is at fault. Was splitting the atom a bad idea because… Nagasaki? No. I see a lot of rubric usage that is far and away more equitable than typical teach-to-the-middle pedagogy.

    Moreover, there is ample research highlighting the effectiveness of using rubrics as I am suggesting. Because the field is relatively new (NOT badly written rubrics; those have been around for ages), they aren’t named as such in, e.g., Hattie’s effect sizes. Feedback, formative evaluation and response to intervention are all part of rubric use as outlined by the ARC.

    Other furphies:

    – why can’t rubrics change which skills are sampled from the domain? We do that all the time.
    – who specifically is saying that learning progressions are going to improve our PISA ranking?

    1. It was the ‘Gonski 2.0’ report that suggested a move away from an age-based curriculum and a focus on ‘Learning Progressions’ instead. After the 2018 PISA results were published, a number of news articles suggested that Dan Tehan was going to renew the focus on Learning Progressions in response to the PISA results and that this would be a key focus of the upcoming COAG meeting. See, e.g., here:

      https://www.google.com.au/amp/s/amp.smh.com.au/national/be-ambitious-refocus-the-agenda-tehan-on-flagging-student-results-20191204-p53go0.html

      Why would Tehan push Learning Progressions as a response to poor PISA results if he did not think they would improve PISA results?

      It is my understanding that the Gonski 2.0 Learning Progressions have not yet been developed, but that the NSW Learning Progressions that I quote from above, and that you quote from in your reply, would be a prototype. That’s why I focused on them. I have no reason to focus on the ones developed by the University of Melbourne’s Assessment Research Centre unless you know something that I do not, namely that these will be the basis of the government reforms.

      Rubrics that constantly change and sample across different aspects of the domain are certainly an improvement on the static Learning Progressions developed by NSW. This is what exams tend to do and why exams are often a fairer form of assessment – you cannot buy access to the sample in advance or engage a tutor to help craft your answer; you have to respond individually, in real time, based on what you know.

      However, Gonski 2.0 and Tehan have made no mention of Learning Progressions being dynamic. Given that, and given the vast majority of such progressions that are extant are not dynamic in this way, why would we assume the new ones will be? Again, perhaps you know something I do not know.

      The fundamental weakness of your argument is that it seems to be based upon the idea that good versions of things are better than bad versions of things. Which is a tautology. What matters is what makes it into schools and classrooms.

  3. “good versions of things are better than bad versions of things. Which is a tautology.”

    I’m saying that the version of rubric use you discuss probably isn’t that great. The version I have linked to is good. I don’t see how that is a meaningless statement.

  4. Okay, I promise this is the last thing I will say on this!

    Certainly some of the points you make about how teachers use rubrics are valid – but as with any pedagogy, if the worst version of it is used or it is not implemented correctly, it won’t work. That’s nothing new…

    You are welcome to look at how the Assessment Research Centre guidelines suggest using them in the links provided above. Only then do the many benefits of rubric use flow, as referenced in some articles and PowerPoints here:

    https://lawlesslearning.com/articles/

    https://lawlesslearning.com/pd/

    Thanks
