Imagine an expert in public health gave a media conference and said, “We have to stop this obsession with COVID-19 testing. It is causing too much stress. Instead of focusing on testing, we should focus on preventing the spread of COVID-19. Testing does not prevent the disease and it does not cure it.”
You might think the expert had lost their mind.
Clearly, this is a false choice. Nobody has ever claimed that testing a person for COVID-19 would cure or prevent the disease in that person. Testing instead gives information that tells officials how widespread the problem is or how effective their attempts to suppress the virus have been. You cannot use a thermometer to heat a room. This is a category error. Only someone operating perpendicular to reality would propose that a solution to rising COVID-19 infections would be to perform fewer tests.
Nevertheless, people make such claims about educational testing all the time. The latest example is in a piece in The Conversation that is ostensibly about the problem of convincing more students to study maths to a high level at school.
There are a number of problems with this article. Firstly, it takes a naïve approach to international assessments, assuming that the position a state occupies in a league table of results gives us information about the quality of that state’s education system rather than, say, a whole battery of other demographic and cultural factors that can affect this ranking. As a result of this, the authors suggest looking to Singapore and Estonia (the new Finland) for answers.
The authors then make questionable claims about these education systems. Singapore apparently eschews rote learning in favour of supposedly deep learning. This claim seems to originate in the fact that many years ago, the Singaporean ministry of education drew on the work of the psychologist Jerome Bruner, an advocate of discovery learning, when developing its maths curriculum, leading to the famous bar-model approach that has since been adopted elsewhere. However, this does not validate the entirety of Bruner’s views.
When you examine the detail of the Singapore mathematics syllabus, it includes statements like, “use strategies such as ‘count on’, ‘count back’, ‘make ten’ and ‘subtract from 10’ for addition and subtraction within 20 (before committing the number facts to memory) and thereafter, within 100,” for students in the first year of primary school and, “achieve mastery of multiplication and division facts,” for students in the third year. So memorisation is clearly a critical feature of the Singaporean approach.
The authors also discuss high-performing Estonia and suggest it has, “almost no high-stakes tests for school children.” But, as Mike Salter pointed out on Twitter, education in Estonia does not appear to lack assessment.
But perhaps the key is in the term ‘high-stakes for school children’. As a modifier, this may be doing a lot of work. In Australia, we only really have one assessment that is ‘high-stakes for school children’ and this is at the end of Year 12. Elsewhere the authors refer to Australian NAPLAN assessments in Years 3, 5, 7 and 9. These can be high-stakes for schools, with school results published on the MySchool website, and it is possible that schools and perhaps some parents put pressure on students to perform well in NAPLAN, but NAPLAN assessments are not intrinsically high-stakes for the students who sit them.
NAPLAN has its flaws. I have written about the changes I would make to improve this suite of assessments (e.g. here and here). But I am clear that I would rather they exist in their current form than not at all. Yes, they can distort the curriculum, particularly for reading and writing where schools attempt to directly teach students how to answer assessment questions rather than teach reading and writing more broadly – a strategy that is frankly not very successful (see e.g. here). But this is not an argument for removing assessment. It is an argument for better professional development, better teacher knowledge of the available evidence and more and better forms of assessment targeting a wider range of knowledge and skills.
In fact, an Australian state government that was serious about improving outcomes could draw on assessment as a lever. Curriculum documents are often vague, abstract and aspirational. Assessments define the curriculum in more concrete terms, but until the final year of schooling, we only really have such assessments in maths and a misleadingly decontextualised form of literacy. A state education department could develop a suite of assessments in English, maths, history, science and maybe a few more key academic subjects and then offer them to schools on an optional basis, in a similar way to the voluntary phonics check. Schools that opt in would then be given comparative data, i.e. a full analysis of how their results compare with those of other schools that have opted into the assessment.
You can imagine, for instance, assessments at Year 10 related to the physics of motion or a classic work of literature. At the other end of the scale, there could be assessments of basic general knowledge or sentence construction at the primary school level. Such assessments would encourage the adoption of a broader approach, less focused on NAPLAN. Coupled with a plan for school improvement, these assessments could help schools gradually become more effective as they target evidence-informed approaches on areas of weakness. By avoiding imposition, the schools that took part would be those most interested in learning from this evidence and so most likely to integrate it into a wider plan. Over time, other schools may become convinced of the value of opting in. The public accountability of NAPLAN would still exist, but it would be supplemented by a more fine-grained layer that would be of greater use to schools.
Setting assessments in opposition to learning is fallacious. In fact, low-stakes quizzing has been shown to enhance learning and so these processes are not as distinct as the medical analogy at the start of this post implies. Nevertheless, measurement through NAPLAN is not enough to improve outcomes, even if it is capable of raising the alarm when things are going wrong. Rather than making large-scale, global and probably invalid comparisons and attempting to emulate what we imagine countries such as Singapore and Estonia are doing, I suspect a far more promising approach is to develop assessments that enable us to engage with small details and local comparisons.
After all, if you add together a lot of small improvements…