I have written a few posts about the Programme for International Student Assessment (PISA) and, in doing so, I have become aware of a number of myths that are linked to it. These are worth challenging because they get in the way of us taking anything useful from these results. The myths are:
1. We should look to Finland as an exemplary education system
Hardly a month goes by without a glowing newspaper article about education in Finland. This is a worldwide phenomenon. For instance, a December 2016 piece in the Hindustan Times suggests that India should copy Finland's new cross-curricular approach to teaching. Sometimes PISA is not even referenced in these articles. Yet any search for the source of the idea of Finland as an education powerhouse takes us back to Finland's performance in PISA in the early 2000s.
Some of this is logically baffling. Finland has not yet fully implemented phenomenon-based learning – its new cross-curricular approach – so it cannot possibly be the cause of high test scores achieved more than ten years ago. Perhaps we are basing our views on trust: they must know what they're doing, so this new reform will work. Yet this is a very shaky idea, particularly given that Finland's PISA performance has declined significantly since the start of the century:
These scores (which I've averaged across Reading, Maths and Science) are PISA's best attempt at creating a measure that means the same thing in different rounds of the test. A score of 530 in 2000 should therefore reflect the same level of knowledge and skills as a score of 530 in 2015. In reality, it's not as simple as this because comparing between different tests is difficult. However, the evidence does seem to indicate an absolute decline in Finland's performance. In other words, Finland isn't slipping simply because it is being overtaken by other countries; it is slipping because of a decline in knowledge and skills within Finland.
If we want to look to Finland as an example then we should be looking at its education system prior to about 2000. Given that PISA tests 15-year-olds and that PISA test questions tend to have a heavy reading load (more on this later), we should probably also go back considerably further than 2000 to look at what was happening when these children were in primary school. Any reforms introduced since the early 2000s could actually be the cause of Finland's decline, and yet it is these reforms that edu-tourists see when they visit Finland and that generate breathless newspaper reports. The education system in Finland is not stable, and many of the features that are trumpeted in the press are unlikely to be associated with the performances of the early 2000s. Tim Oates has written an excellent piece about this which is worth reading.
In my view, it is generally more insightful to look at the trajectory of a particular country or region than to compare between different countries. Finland is small and relatively homogeneous, so it is unlikely to be feasible to replicate its system in a larger and more multi-faceted country such as the U.S. or India. The same argument applies to Shanghai, Singapore or any of the other stand-out systems. Rather than trying to copy their features, we should be looking at how countries have lost, gained or maintained ground over time and at what practices are associated with this.
PISA also offers us some insight into practices that are linked with higher performance within individual education systems, and this leads us to the next myth.
2. Maths students from high-performing countries memorise less
This claim is quite catchy and fits with the fashionable constructivist view of maths teaching promoted by Dan Meyer, Jo Boaler and others. It originates in a 2016 report aimed at maths teachers and based on the 2012 round of PISA testing, and it was further fuelled by an editorial in Scientific American. The problem is that I can find no evidence to support it.
PISA created a very odd measure of memorisation. Normally, when assessing attitudes, researchers tend to use a Likert scale so that subjects can indicate the degree to which they agree with a particular statement. However, for PISA’s measure of memorisation, they forced students to choose one of three options, potentially magnifying the effect of a weak or even non-existent preference. From this survey, they created an ‘index of memorisation’.
If you plot the scores of each country on this index against their PISA 2012 maths score there is no correlation at all:
The claim made in the Scientific American piece is actually a little more subtle:
“In every country, the memorizers turned out to be the lowest achievers, and countries with high numbers of them—the U.S. was in the top third—also had the highest proportion of teens doing poorly on the PISA math assessment.”
This relates to data that is not publicly available: we don't know what proportion of students in each country were classified as 'memorisers'. Instead, we have the memorisation index. It seems likely, though, that the proportion of memorisers in a given country would correlate closely with that country's score on the memorisation index. And if you plot the proportion of low achievers in PISA 2012 maths against the memorisation index then, again, there is no correlation:
Until and unless data on the proportion of memorisers is released, I think we have to judge this claim as unproven and unlikely.
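For anyone who wants to check this kind of claim, the country-level calculation is straightforward. Here is a minimal sketch in Python; the file name and column names are hypothetical placeholders, and the figures themselves would need to be assembled from the published PISA 2012 tables:

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical country-level table assembled from published PISA 2012
# results; the file name and column names are placeholders.
df = pd.read_csv("pisa2012_by_country.csv")
# columns: country, memorisation_index, maths_score, pct_low_achievers

# Correlation between the memorisation index and mean maths score.
r, p = pearsonr(df["memorisation_index"], df["maths_score"])
print(f"index vs maths score: r = {r:.2f}, p = {p:.3f}")

# Correlation between the memorisation index and the share of low
# achievers in maths.
r, p = pearsonr(df["memorisation_index"], df["pct_low_achievers"])
print(f"index vs % low achievers: r = {r:.2f}, p = {p:.3f}")
```

With only a few dozen countries in the sample, a Pearson r near zero on either plot is about as close to 'no correlation' as you can get.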
3. PISA is just a multiple-choice test of factual recall and so it can be dismissed
Some people are not keen on the findings of PISA at all. On social media, I often see it dismissed, particularly in response to any evidence I present from the programme. Some people seem to assume that, in principle, multiple-choice tests can only be tests of factual recall, which is obviously wrong. And while PISA does use multiple choice, it also uses other question formats.
In fact, the folks behind PISA are much more interested in the application of ideas than in factual recall. This means they need to describe a context, which leads to the pretty high reading load of PISA test questions. I understand that the maths questions are based upon the Realistic Maths Education philosophy from the Netherlands – a constructivist approach.
It’s worth taking a look at some of the test items that PISA make publicly available. For instance, this science question is quite complex and is about conducting investigations. It is not a test of recall and it is not multiple choice.
Things that are true but that you won’t hear about
Oddly, there are some very clear findings in PISA that don't seem to garner much publicity. In 2012, students were surveyed on the extent to which their maths lessons had a 'student-orientation', which roughly maps onto ideas that might be described as constructivist or child-centred (yes, I know that all teaching is child-centred, but the term is also associated with a specific set of facilitative teaching practices). There was a strong negative correlation between the index of student-orientation and maths performance: the more student-oriented the learning experience, the worse the maths performance. This held when looking within individual countries, so it cannot be put down to different cultural factors. PISA also surveyed students on other aspects of their maths lessons and found some interesting results, but nothing as strong as the finding for student-orientation.
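It is worth being precise about what 'within individual countries' means here: the correlation is computed separately over the students of each country, rather than across country averages. Here is a minimal sketch of that calculation, assuming a hypothetical student-level file; the real PISA microdata is more involved, with survey weights and plausible values that this ignores:

```python
import pandas as pd

# Hypothetical student-level extract with one row per student.
students = pd.read_csv("pisa_students.csv")
# columns: country, student_orientation_index, maths_score

# Pearson correlation between the teaching-practice index and test
# score, computed separately within each country.
within_country = students.groupby("country").apply(
    lambda g: g["student_orientation_index"].corr(g["maths_score"])
)
print(within_country.sort_values())
```

If the sign comes out negative in country after country, differences between national cultures can't be doing the work.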
We saw a similar pattern when the PISA 2015 data was released in December. This time, the focus was on science and so PISA surveyed students on the degree to which their science lessons were ‘enquiry-based’. Again, this is essentially a measure of constructivist or child-centred teaching and, again, it correlated negatively with scores in the highly application-based PISA science test, suggesting that if you want students to be able to apply scientific principles, you might want to avoid enquiry-based learning.
You can speculate as to why these findings have not been promoted as much as the myth about memorisation, but I think a hint might be that PISA define good teaching as being student-oriented.