Should we scrap standardised testing?


From 1997 to 2010, I taught in the UK. During this period, my 16-year-old students completed GCSE exams and my 18-year-old students sat A-Levels. In fact, due to the modular nature of these exams at the time, students sat them continuously through a period spanning the ages 15 to 18. Fourteen-year-old students also completed SAT exams in English, maths and science until these were abolished in 2008, much to my personal frustration. I arrived in Australia at the same time as NAPLAN literacy and numeracy testing was being rolled out. My UK experience meant that the idea of such tests was as familiar to me as it was apparently strange to some of my Australian colleagues.

My teaching career has therefore been one of 20 years of standardised testing. In this time, I have worked in schools that gained high scores and I have worked in schools that gained low scores. I have also worked in schools that have improved their scores and I have run departments that have been subjected to the various tests. This is my view.

Badly designed

I find it odd that people make standardised testing into a pantomime villain. This has led to the kind of reflexive reactions we saw on the recent edition of Q&A.

Tony Jones: Can I have a ‘hard-working teachers’ everybody!

Crowd: [Cheers and whoops]

Tony Jones: Can I have a ‘standardised testing’ people!

Crowd: [Boos and hisses]

(I paraphrase)

Standardised testing is a neutral concept. Some standardised tests are well-designed and some are rubbish. The fact that they are standardised is not a bug but a feature. It means that there is some external standard to compare a school’s results with. For instance, as a head of maths, imagine I find that my students perform at about the state average on number but their scores on statistics are lower than the state average. I can start asking questions about why this is the case: maybe our number programme is strong or our statistics programme is weak? Perhaps we have deliberately under-emphasised statistics in order to get the number work right. If so, is that what we want? I cannot ask these questions if all I have to go on are internally created assessments.
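To make that concrete, here is a minimal sketch in Python of the kind of strand-by-strand comparison a head of maths might run. All of the scores, strand names and the flag threshold are invented for illustration; they are not real NAPLAN or state data.

```python
# Compare a school's mean score in each strand with the state mean.
# All numbers are invented for illustration.

school_means = {"number": 512, "statistics": 478, "measurement": 505}
state_means = {"number": 510, "statistics": 506, "measurement": 503}

for strand, school_score in school_means.items():
    gap = school_score - state_means[strand]
    # Flag any strand sitting well below the state mean as a prompt for
    # the kinds of questions above, not as a verdict on the teaching.
    flag = "worth investigating" if gap < -10 else "roughly at state level"
    print(f"{strand:>12}: school {school_score}, state {state_means[strand]}, "
          f"gap {gap:+d} ({flag})")
```

A gap flagged this way is the start of a conversation about the programme, not an answer in itself.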

However, in order to give me useful information of this kind, the tests need to be well-designed. A reading test at Grade 4 or above that uses a randomly selected topic is likely to be as much a test of general knowledge as it is of reading. And privileged children have an advantage in gaining general knowledge due to dinner table conversations, trips to museums and all the rest of it. That's why, as well as arguing for a knowledge-rich curriculum, I have called for the NAPLAN reading and writing contexts to be set in the previous year's Australian Curriculum content.

Poor responses to tests

Standardised tests are not, by themselves, a guide as to how to improve. A lot of policymakers seem to have fallen for the idea that teachers and schools know how to teach more effectively, that they are choosing not to (why?), and that standardised testing would therefore provide the impetus for them to choose these more effective practices. That is simply not the case.

Education is unfortunately awash with bad ideas. Setting aside, for now, those educationalists who are genuinely hostile to any agenda of improving academic performance, those of us who wish to be more effective run a gauntlet. One common idea is that children should be taught reading comprehension strategies and that these will enable them to access any text. This idea contains a grain of truth: limited training in reading comprehension strategies does improve performance on reading comprehension tests, but such training offers diminishing returns. Extended practice of these strategies provides no additional improvement because reading performance is ultimately limited by general knowledge.

A school that takes time away from science, history and the arts in order to expand a literacy programme that focuses on drilling these strategies is therefore making the wrong call.

Similarly, children should be made familiar with the format of a standardised test before taking it and there is nothing wrong with saying things like, ‘notice that this is a two-mark question so you have to make two distinct points,’ but endlessly drilling and rehearsing exam questions is not going to be as effective as teaching the relevant subject content in a logical and coherent sequence.

And we saw after the introduction of the phonics check in England that some teachers were drilling children in nonsense words. Not only is this a misunderstanding of how the test works, it is highly unlikely to improve performance.

So we should not assume that the responses of teachers and schools to any standardised test will be to reach for more effective practices. The point is that the test will help inform us whether they have been more effective.

Reward and punish

Policymakers are also capable of responding counter-productively to standardised testing. My pay has never depended on standardised test scores and the idea of giving less money to schools that do badly on these tests seems perverse. If anything, these schools need more resources. But ideas like these seem to be out there and often become conflated with arguments about the inherent value of the tests, such as in the G.E.R.M. conspiracy theory.

The G.E.R.M. conspiracy theory and the politics of testing

One narrative that I don’t think we should pay too much attention to is the Global Education Reform Movement conspiracy theory, G.E.R.M. If it sounds like something that the baddies in a comic book might call themselves, then you are probably thinking along the right lines. According to this theory, standardised testing is just one part of a sinister global agenda to standardise everything about education in the interests of private companies or something. It sounds like an argument from the political left but…

Who introduced NAPLAN into Australia? Julia Gillard. Not only was she a Labor education minister at the time, she was a member of the ‘left faction’ of the Labor party. In contrast, Pasi Sahlberg, arch-enemy of G.E.R.M., has recently been collaborating with Adrian Piccoli, touring New South Wales and working with the new Gonski Institute. Piccoli is a former Liberal education minister (‘Liberal’ means the opposite in Australia to what it means in America; the Liberals are our mainstream right-wing party, roughly equivalent to the Republicans in the U.S. and the Conservatives in the U.K.). There is nothing wrong with such a collaboration, but it does confound simplistic left-right characterisations of the issue.

The accountability question

Gillard also introduced the MySchool website. This is probably the most contentious component of the NAPLAN programme because it gives parents and other members of the public access to the results, allowing them to compare different schools (although not in a simple league table, as is often suggested).

I think this is a question of democracy. If information is available about the punctuality of a publicly-owned railway company then, as a taxpayer, I think I have a right to know. If information is available about the death rate at my local publicly-funded hospital’s accident and emergency department and how this compares with national figures then, as a taxpayer, I think I have a right to know. This is not just my opinion, it is so central to our current understanding of democratic accountability that Australia, the U.K. and the U.S. have all instituted freedom of information laws to give various levels of access to information about public services. If we do not have this information, how do we make informed decisions at the ballot box?

Collecting standardised test information and refusing to share this with stakeholders therefore strikes me as a little authoritarian. However, there are legitimate concerns about exactly what is reported and we could probably make improvements. Do we need to report at the individual school level? What kinds of measures make the most sense?

Focusing on growth

Throughout my experience with standardised testing, I have always focused on growth. I have tended to view this in two ways.

Firstly, I have the crude aim that next year’s results be an improvement on last year’s, and I ask what this might involve. Sometimes, cohorts of students will vary over time and sometimes, as was the case with GCSEs in England in the 2000s, grade inflation may make improvement easier to achieve than it should be. However, focusing on improvement has always served me better than focusing on arbitrary targets.

Even when I worked in a school that had the lowest standardised test scores in the local area, I did not pay too much attention to what other schools were scoring. You can learn from other schools if you have a relationship with them, but you won’t learn much by studying their numbers.

Secondly, sophisticated approaches to analysing standardised test results enable a look at the aggregated progress of individual students. NAPLAN and MySchool are able to do this, giving an even clearer picture of how a school may be going in a particular subject area. For instance, a plot for one randomly selected school (not my own) shows how reading has improved from Year 3 to Year 5 and compares this with schools of similar SES and with schools with similar starting points.

This is not a ranking or a league-table. As a teacher, I think it is useful to have this information as part of the mix, provided we do not place too much emphasis on one single measure.
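For readers who like to see the arithmetic, here is a minimal sketch of the kind of gain-score comparison such a plot summarises. All numbers are invented, and NAPLAN’s actual growth measures are more sophisticated than a raw gain.

```python
# Compare a cohort's mean Year 3 to Year 5 reading gain with
# hypothetical benchmark gains. All numbers are invented.

from statistics import mean

# (Year 3 score, Year 5 score) for each student in one hypothetical cohort
cohort = [(350, 470), (410, 500), (380, 495), (430, 520)]

mean_gain = mean(y5 - y3 for y3, y5 in cohort)

# Hypothetical benchmarks for the two comparison groups in the plot
similar_start_gain = 105.0  # schools with similar starting scores
similar_ses_gain = 98.0     # schools with a similar SES profile

print(f"Mean gain: {mean_gain:.1f}")
print(f"vs similar starting point: {mean_gain - similar_start_gain:+.1f}")
print(f"vs similar SES schools:    {mean_gain - similar_ses_gain:+.1f}")
```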

Let’s not subscribe to simplistic conspiracy theories. Let’s not throw out helpful data as part of some ideological crusade. No, standardised testing alone will not fix education, but it does provide information that I have found useful over the years. Don’t throw that away.


13 thoughts on “Should we scrap standardised testing?”

  1. I agree with pretty much all of this apart from the bit about MySchool being an example of democratic accountability. There are two problems with this, in my view:

    1. NAPLAN is not externally administered, and this in fact is one of the major problems with it. You therefore have transparency about results without the concomitant transparency regarding how the tests were administered, which makes the data potentially very misleading. And make no mistake: plenty of schools are either cheating on NAPLAN (several have already been caught, and plenty of others play similar tricks; they just haven’t been pinged yet) or administering it incompetently.

    2. The difference between accountability on NAPLAN and accountability in other public services (such as postal delivery, hospital A&E, etc.) is that we have a dual system in which government schools are competing against private schools, which have a number of inbuilt advantages on such a measure. When the results appear on the MySchool website without this accompanying context, parents will draw conclusions anyway. At the risk of sounding like Jane Caro, it has unfairness built into it.

    But in any case, the key point is that Gillard made it very clear that NAPLAN was not going to be used in this way. If she had been frank about the MySchool aspect from the start, it might never have gotten off the ground.

  2. Tom Burkard says:

    Standardised tests played a major role in the synthetic phonics revolution in the UK. In 1990, Martin Turner–an educational psychologist from Croydon–conspired with 8 other LA ed psychs to release confidential reading test scores which demonstrated that between 1985 and 1990 results at 7+ had dropped by 7 months. This caused quite a storm and reading pedagogy became front page news. Turner’s claim that the widespread introduction of ‘Real Books’ was largely responsible was entirely credible. After all, the notion that children will learn to read ‘naturally’ just like they learn to speak was manifestly dotty, and rested entirely upon assertions like guru Kenneth Goodman’s claim that word identification wasn’t important because young children went straight to the ‘meaning’ of text. However, half of the texts on ITT primary English reading lists advocated his ‘psycholinguistic’ theory of learning to read.

    Although the early 1990s saw the beginnings of a seismic shift back to phonics, this was only in the context of an ‘eclectic’ strategy similar to that advocated in the 1975 Bullock Report. Once again, standardised tests played a significant role. At Woods Loke Primary School in Lowestoft, Sue Lloyd had written Jolly Phonics, the first published synthetic phonics programme. The school gave me their results on the Suffolk Reading Test, which was administered to all Suffolk schools at 6+ and 8+, and the TES published my letter demonstrating that they were outperforming Suffolk averages by a huge margin. I was invited to write this up in an academic journal, and not long after Newsnight picked up the story and ran a 4-part special on Jolly Phonics.

    However, the real biggy was the Clackmannanshire trials, which of course used standardised tests that demonstrated vastly better results for synthetic phonics. These results were released shortly after the Government unveiled the National Literacy Strategy, which contained no fewer than 315 objectives for KS1. It was a political document designed to mollify all factions in the ‘reading wars’, but it was largely dead on arrival when even the TES suggested that the test results from Clackmannanshire “ought to spark a serious rethink” of this flagship policy. The NLS limped on for 7 years on the basis of Sir Michael Barber’s claims of wondrous gains based upon the government’s deeply flawed tests, which in turn were debunked by Prof Peter Tymms at Durham’s CEM. Their tests showed that reading results had barely changed at all. But for this evidence, it is unlikely that Tony Blair could have prevailed over die-hard resistance within the DfEE.

  3. ijstock says:

    You have largely answered your own question. The problem is not testing, but what the tests have been used for. Knowing the ‘problem’ does not necessarily show you the answer. But zealot managers have repeatedly used standardised test results to beat teachers with. In my own case, they were used to remove me from a job I had been doing well for years and to destroy my career, on the basis that ‘I’ had missed targets that all around me, and common sense, could see were grossly inflated, purely on cost-saving grounds. *That*, and the cramming and cheating, is what is wrong with such tests, not the general principle.


  4. I wonder if the resistance to standardization in tests and delivery is in part that our first experience of what teachers do leaves the perception that they are highly autonomous.
    People don’t think it odd that legal, engineering or medical procedures and tests are standardized.

    (This is also why it is so weird hearing teachers stress the importance of group work. Teaching has to be one of the professions in which the least amount of the job is done as group work.)

  5. Pleased to see I am not the only person who thinks GERM is nonsense, and just an excuse to go on not improving academic attainment. I also despair when my union bangs on about creative subjects being about self-expression and doesn’t see that this attitude to the disciplines of painting, music etc. is why they get sidelined.
