From 1997 to 2010, I taught in the UK. During this period, my 16-year-old students completed GCSE exams and my 18-year-old students sat A-Levels. In fact, due to the modular nature of these exams at the time, students sat them continuously through a period spanning the ages of 15 to 18. Until they were abolished in 2008, much to my personal frustration, 14-year-old students also completed SAT exams in English, maths and science. I arrived in Australia at the same time as NAPLAN literacy and numeracy testing was being rolled out. My UK experience meant that the idea of such tests was as familiar to me as it was apparently strange to some of my Australian colleagues.
My teaching career has therefore been one of 20 years of standardised testing. In this time, I have worked in schools that gained high scores and I have worked in schools that gained low scores. I have also worked in schools that have improved their scores and I have run departments that have been subjected to the various tests. This is my view.
I find it odd that people make standardised testing into a pantomime villain. This has led to the kind of reflexive reactions we saw on the recent edition of Q&A.
Tony Jones: Can I have a ‘hard-working teachers’ everybody!
Crowd: [Cheers and whoops]
Tony Jones: Can I have a ‘standardised testing’ people!
Crowd: [Boos and hisses]
Standardised testing is a neutral concept. Some standardised tests are well-designed and some are rubbish. The fact that they are standardised is not a bug but a feature. It means that there is some external standard to compare a school’s results with. For instance, as a head of maths, imagine I find that my students perform at about the state average on number but their scores on statistics are lower than the state average. I can start asking questions about why this is the case: maybe our number programme is strong or our statistics programme is weak? Perhaps we have deliberately under-emphasised statistics in order to get the number work right. If so, is that what we want? I cannot ask these questions if all I have to go on are internally created assessments.
However, in order to give me useful information of this kind, the tests need to be well-designed. A Grade 4 or above reading test that uses a randomly selected topic is likely to be as much a test of general knowledge as it is of reading. And privileged children have an advantage in gaining general knowledge due to dinner table conversations, trips to museums and all the rest of it. That’s why as well as arguing for a knowledge-rich curriculum, I have called for the NAPLAN reading and writing contexts to be set in the previous year’s Australian Curriculum content.
Poor responses to tests
Standardised tests are not, by themselves, a guide as to how to improve. A lot of policymakers seem to have fallen for the idea that teachers and schools already know how to teach more effectively but are choosing not to (why would they?), and that standardised testing will therefore provide the impetus for them to adopt these more effective practices. That is simply not the case.
Education is unfortunately awash with bad ideas. Setting aside, for now, those educationalists who are genuinely hostile to any agenda of improving academic performance, those of us who wish to be more effective run the gauntlet. One common idea is that children should be taught reading comprehension strategies and that these will enable children to access any text. This idea contains a grain of truth. Limited training in reading comprehension strategies does improve performance on reading comprehension tests, but such training offers diminishing returns. Extended practice of these strategies provides no additional improvement because reading performance is ultimately limited by general knowledge.
A school that takes time away from science, history and the arts in order to expand a literacy programme that focuses on drilling these strategies is therefore making the wrong call.
Similarly, children should be made familiar with the format of a standardised test before taking it and there is nothing wrong with saying things like, ‘notice that this is a two-mark question so you have to make two distinct points,’ but endlessly drilling and rehearsing exam questions is not going to be as effective as teaching the relevant subject content in a logical and coherent sequence.
And we saw after the introduction of the phonics check in England that some teachers were drilling children in nonsense words. Not only is this a misunderstanding of how the test works, it is highly unlikely to improve performance.
So we should not assume that the responses of teachers and schools to any standardised test will be to reach for more effective practices. The point is that the test will help inform us whether they have been more effective.
Reward and punish
Policymakers are also capable of responding counter-productively to standardised testing. My pay has never depended on standardised test scores and the idea of giving less money to schools that do badly on these tests seems perverse. If anything, these schools need more resources. But ideas like these seem to be out there and often become conflated with arguments about the inherent value of the tests, such as in the G.E.R.M. conspiracy theory.
The G.E.R.M. conspiracy theory and the politics of testing
One narrative that I don’t think we should pay too much attention to is the Global Education Reform Movement conspiracy theory, G.E.R.M. If it sounds like something that the baddies in a comic book might call themselves then you are probably thinking along the right lines. According to this theory, standardised testing is just one part of a sinister global agenda to standardise everything about education in the interests of private companies or something. It sounds like an argument from the political left but…
Who introduced NAPLAN into Australia? Julia Gillard. Not only was she a Labor education minister at the time, she was a member of the ‘left faction’ of the Labor party. In contrast, Pasi Sahlberg, arch-enemy of G.E.R.M., has recently been touring New South Wales and collaborating with Adrian Piccoli and the new Gonski Institute. Piccoli is a former Liberal education minister (‘Liberal’ means the opposite in Australia to what it means in America; the Liberals are our mainstream right-wing party, roughly equivalent to the Republicans in the U.S. and the Conservatives in the U.K.). There is nothing wrong with such a collaboration, but it does confound simplistic left-right characterisations of the issue.
The accountability question
Gillard also introduced the MySchool website. This is probably the most contentious component of the NAPLAN programme because it gives parents and other members of the public access to results, allowing them to compare different schools (although not in a simple league table, as is often suggested).
I think this is a question of democracy. If information is available about the punctuality of a publicly-owned railway company then, as a taxpayer, I think I have a right to know. If information is available about the death rate at my local publicly-funded hospital’s accident and emergency department and how this compares with national figures then, as a taxpayer, I think I have a right to know. This is not just my opinion, it is so central to our current understanding of democratic accountability that Australia, the U.K. and the U.S. have all instituted freedom of information laws to give various levels of access to information about public services. If we do not have this information, how do we make informed decisions at the ballot box?
Collecting standardised test information and refusing to share this with stakeholders therefore strikes me as a little authoritarian. However, there are legitimate concerns about exactly what is reported and we could probably make improvements. Do we need to report at the individual school level? What kinds of measures make the most sense?
Focusing on growth
Throughout my experience with standardised testing, I have always focused on growth. I have tended to view this in two ways.
Firstly, I have the crude aim that next year’s results be an improvement on last year’s, and I ask the question of what this might involve. Sometimes, cohorts of students will vary over time and sometimes, as was the case with GCSEs in England in the 2000s, grade inflation may make improvement easier to achieve than it should be. However, focusing on improvement has always served me better than focusing on arbitrary targets.
Even when I worked in a school that had the lowest standardised test scores in the local area, I did not pay too much attention to what other schools were scoring. You can learn from other schools if you have a relationship with them, but you won’t learn much by studying their numbers.
Secondly, sophisticated approaches to analysing standardised test results enable a look at the aggregated progress of individual students. NAPLAN and MySchool are able to do this, giving an even clearer picture of how a school may be going in a particular subject area. For instance, a plot of this kind for a randomly selected school (not my own) shows how reading has improved from Year 3 to Year 5 and compares this with similar SES schools and schools with similar starting points.
This is not a ranking or a league-table. As a teacher, I think it is useful to have this information as part of the mix, provided we do not place too much emphasis on one single measure.
Let’s not subscribe to simplistic conspiracy theories. Let’s not throw out helpful data as part of some ideological crusade. No, standardised testing alone will not fix education, but it does provide information that I have found useful over the years. Don’t throw that away.