Testing is not the problem, it is part of the solution


Imagine an expert in public health gave a media conference and said, “We have to stop this obsession with COVID-19 testing. It is causing too much stress. Instead of focusing on testing, we should focus on preventing the spread of COVID-19. Testing does not prevent the disease and it does not cure it.”

You might think the expert had lost their mind.

Clearly, this is a false choice. Nobody has ever claimed that testing a person for COVID-19 would cure or prevent the disease in that person. Testing instead gives officials information about how widespread the problem is and how effective their attempts to suppress the virus have been. Expecting a test to treat the disease is a category error, like trying to heat a room with a thermometer. Only someone operating perpendicular to reality would propose that the solution to rising COVID-19 infections is to perform fewer tests.

Nevertheless, people make such claims about educational testing all the time. The latest example is in a piece in The Conversation that is ostensibly about the problem of convincing more students to study maths to a high level at school.

There are a number of problems with this article. Firstly, it takes a naïve approach to international assessments, assuming that the position a state occupies in a league table of results reflects the quality of that state’s education system rather than, say, a whole battery of other demographic and cultural factors that can affect the ranking. On this basis, the authors suggest looking to Singapore and Estonia (the new Finland) for answers.

The authors then make questionable claims about these education systems. Singapore, apparently, eschews rote learning in favour of supposedly deep learning. This claim seems to originate in the fact that, many years ago, the Singaporean Ministry of Education drew on the work of the psychologist Jerome Bruner, an advocate of discovery learning, when developing its maths curriculum, leading to the famous bar-model approach that has since been adopted elsewhere. However, this does not validate the entirety of Bruner’s views.

When you examine the detail of the Singapore mathematics syllabus, it includes statements like, “use strategies such as ‘count on’, ‘count back’, ‘make ten’ and ‘subtract from 10’ for addition and subtraction within 20 (before committing the number facts to memory) and thereafter, within 100,” for students in the first year of primary school and, “achieve mastery of multiplication and division facts,” for students in the third year. So memorisation is clearly a critical feature of the Singaporean approach.

The authors also discuss high-performing Estonia and suggest it has, “almost no high-stakes tests for school children.” But, as Mike Salter pointed out on Twitter, education in Estonia does not appear to lack assessment.

But perhaps the key is in the phrase ‘high-stakes for school children’. As a modifier, this may be doing a lot of work. In Australia, we really only have one assessment that is high-stakes for school children, and it comes at the end of Year 12. Elsewhere, the authors refer to Australia’s NAPLAN assessments in Years 3, 5, 7 and 9. These can be high-stakes for schools, with school results published on the MySchool website, and it is possible that schools, and perhaps some parents, put pressure on students to perform well in NAPLAN. However, NAPLAN assessments are not intrinsically high-stakes for the students who sit them.

NAPLAN has its flaws. I have written about the changes I would make to improve this suite of assessments (e.g. here and here). But I am clear that I would rather they exist in their current form than not at all. Yes, they can distort the curriculum, particularly for reading and writing where schools attempt to directly teach students how to answer assessment questions rather than teach reading and writing more broadly – a strategy that is frankly not very successful (see e.g. here). But this is not an argument for removing assessment. It is an argument for better professional development, better teacher knowledge of the available evidence and more and better forms of assessment targeting a wider range of knowledge and skills.

In fact, an Australian state government that was serious about improving outcomes could use assessment as a lever. Curriculum documents are often vague, abstract and aspirational. Assessments define the curriculum in more concrete terms, but until the final year of schooling, we only really have such assessments in maths and in a misleadingly decontextualised form of literacy. A state education department could develop a suite of assessments in English, maths, history, science and perhaps a few other key academic subjects and then offer them to schools on an optional basis, in a similar way to the voluntary phonics check. Schools that opt in would then be given comparative data, i.e. a full analysis of how their results compare with those of other schools that have opted into the assessment.

You can imagine, for instance, assessments at Year 10 related to the physics of motion or a classic work of literature. At the other end of the scale, there could be assessments of basic general knowledge or sentence construction at the primary school level. Such assessments would encourage the adoption of a broader approach, less focused on NAPLAN. Coupled with a plan for school improvement, these assessments could help schools gradually become more effective as they target evidence-informed approaches at areas of weakness. Because nothing would be imposed, the schools that took part would be those most interested in learning from this evidence and so most likely to integrate it into a wider plan. Over time, other schools may become convinced of the value of opting in. The public accountability of NAPLAN would still exist, but it would be supplemented by a more fine-grained layer that would be of greater use to schools.

Setting assessments in opposition to learning is fallacious. In fact, low-stakes quizzing has been shown to enhance learning and so these processes are not as distinct as the medical analogy at the start of this post implies. Nevertheless, measurement through NAPLAN is not enough to improve outcomes, even if it is capable of raising the alarm when things are going wrong. Rather than making large-scale, global and probably invalid comparisons and attempting to emulate what we imagine countries such as Singapore and Estonia are doing, I suspect a far more promising approach is to develop assessments that enable us to engage with small details and local comparisons.

After all, if you add together a lot of small improvements…


6 thoughts on “Testing is not the problem, it is part of the solution”

  1. Kevin Wheldall says:

    It’s the old ‘weighing the pig does not make it any fatter’ argument and just as stupid.

    I think El Trumpo did blame too much Covid testing 🤷‍♂️


  2. Stan Blakey says:

    You should have another year of making the connection between Trump’s Covid testing statements and education experts’ testing statements.
    I can’t see how they can survive being called on that comparison for long.

    I don’t see that in the comments on the Conversation piece.

    I do see people fitting the data to their theory. For example, the title is based on two data points about the number of students taking HSC math courses, but it doesn’t attempt to control for changes in the absolute number of HSC students, or to establish whether the increase in the number taking the easiest course came at the expense of the hardest, or was a positive step of more students taking at least some math.

    These are pretty basic problems in the first few sentences. It looks like The Conversation is emulating Fox News in its approach of publishing what its audience loves, although even Fox is struggling with reality getting in the way these days.

  3. Stan Blakey says:

    The more you look into the Conversation piece, the more it looks evidence-sprayed rather than evidence-based:

    “The proportion of students choosing advanced (calculus-based) maths subjects has declined sharply in the last 20 years. In every year of the last decade, fewer than 30% of students chose intermediate or higher mathematics.”

    So is this good or bad? Does it reflect a growing cohort in which the additional students are simply not taking advanced math, or an actual drop in the number taking advanced math? And is ‘fewer than 30%’ a sign of a trend or of stability?
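
    To see why those questions matter, here is a worked example with invented enrolment figures (purely illustrative, not actual HSC data):

    ```latex
    % Invented figures, purely illustrative -- not actual HSC enrolment data.
    % Year 1: cohort of 60,000 students, of whom 18,000 take advanced maths.
    % Year 2: cohort grows to 72,000; advanced-maths enrolment is unchanged.
    \[
      \frac{18\,000}{60\,000} = 30\%
      \qquad\longrightarrow\qquad
      \frac{18\,000}{72\,000} = 25\%
    \]
    ```

    The proportion falls from 30% to 25% even though not a single student has dropped advanced maths; the decline comes entirely from growth in the denominator. A headline proportion on its own cannot distinguish this from a genuine exodus.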

    (I found the term ‘evidence sprayed’ here – a great article: 4545%20IFG%20-%20Showing%20your%20workings%20v8b.pdf. It is a great way to capture the problem of someone inserting evidence of one thing while claiming a conclusion about something else.)

  4. drjanetm772 says:

    I agree with you regarding testing – it provides useful information when it is used properly.

    Regarding the article itself, I would suggest that a reduction in the proportion of students doing higher level maths is a result of the competition between high-performing and private schools to keep their rankings in the HSC “league tables.” These schools often prevent students from choosing higher level maths unless they have proven they will score highly in the subject.
    (Disclaimer: first-hand experience of this phenomenon – my daughter, who got into a selective high school but ended up going to a private school, has been told to do the lowest level of maths as maths is not her strong suit. She could have risen to the challenge of Ext 1 but is instead doing what we used to call vege maths.)

    • Stan Blakey says:

      This highlights the issue the anti-test folks seem to avoid. Currently, tests are used to limit access (for good or bad reasons), so the better students do on tests, the more opportunities they will have. Testing sooner, to find out whether a student needs help in time to get better and not miss an opportunity, is the easiest way to know who to help.

