John Bush recently shared an article that he wrote for his organisation, Social Ventures Australia. It’s worth reading and I agree with much of it. Bush is a good thinker who is involved in bringing the Education Endowment Foundation (EEF) toolkit and approach to research to Australia. However, the piece reminded me of a pressing problem we have with assessment.
There is much to say about assessment. Working in schools, it is clear that any kind of high-stakes assessment will drive the curriculum. In many cases, this is benign. I believe that the UK phonics check, for instance, has helped schools to focus on this key set of skills. But other tests are more dangerous. This is the case when the assessment is of a complex performance built on many interacting elements. The ultimate pathway to success is to train all of these elements: a difficult and time-consuming process. Many teachers therefore try to train students directly in the complex performance almost from the outset. Sometimes this is due to a simplistic reliance on learning through doing past papers. At other times there is a whole methodology underpinning it.
A good example is reading comprehension. Reading comprehension tests are largely a composite measure of decoding skill and background knowledge. E. D. Hirsch has written about this at some length, and so has Dan Willingham. Such assessments are valid because reading comprehension is a key academic activity. Yet improving a student’s background knowledge is a big task that stretches across an entire curriculum. We can’t really hold English teachers accountable for this. And so this encourages us to train students in the performance itself. There are certainly gains to be had by practising reading comprehension tests and by prompting students to deploy supposed reading comprehension ‘skills’ such as asking themselves questions as they read. However, as Willingham points out, these gains are a one-off and fairly limited, and so a large part of the many hours currently spent on such training is wasted.
Bush reminded me that the OECD intends to branch out its PISA tests and start to measure things like collaborative problem-solving skills. We know that PISA looms large on the agenda of politicians and we also know that politicians are silly. The potential for this kind of assessment to cause damage is therefore significant. And unlike reading comprehension, problem solving isn’t even a valid construct to try to measure in this way.
Problem solving isn’t really a thing. What problems are to be solved? You need to know very different things in order to solve different problems. Any generic test of problem solving is not going to measure what you think it is, because our ability to solve problems or to collaborate is largely innate. Instead, it will generate differences between students based upon some mixture of their fluid intelligence, their background knowledge and their familiarity with the test format and question types. You can’t do much about the first two, but you can boost familiarity by giving lots of practice. This won’t happen as a result of PISA itself, but it will occur once school systems start replicating this stuff in their internal standardised testing regimes. We will be training our kids to jump through arbitrary hoops.
Precisely the same argument applies to attempts to assess critical thinking, although this is like reading comprehension in that it is a key academic activity and therefore has more validity. If, however, such assessments prompt teachers to train students in critical thinking procedures and heuristics then this will probably be another waste of effort.
A lot of these issues could be avoided if we made people aware of the topic areas that these tests would address. Problem solving starts to make sense if you know that it’s a mathematical problem about shapes and area (although we are now approaching a maths test) or an interpersonal problem within a business. Hirsch has advocated that reading comprehension tests should align with the content of a national curriculum. Critical thinking has more of a logic to it if you know you will be analysing Second World War government information sources. By controlling the knowledge base in this way, we have a chance of assessing the ability in question without assessing a whole lot of other things at the same time.
The trouble is that the myth has taken hold that such abilities are general and transferable and so I can’t see this idea gaining much traction. In this context, the new PISA tests could drive us in quite the wrong direction.