A first look at TIMSS 2015

TIMSS is a series of international tests in maths and science that first took place in 1995 and that has been repeated every four years since then. The 2015 data has just been published and I have been trying to quickly digest the Australian version of the report. This post therefore has a bit of an Aussie slant – I will comment on other countries but I lack data on statistical significance.

It’s worth noting that TIMSS is a more abstract kind of assessment than the better-known PISA. PISA sets questions in contexts, for instance by using mathematics to solve a practical problem. This means that there is quite a heavy reading load for PISA test items. In comparison, TIMSS has a more traditional feel, asking some context-free textbook questions such as 42.65 + 5.728 = ?

TIMSS tests maths and science in both Grade 4 and Grade 8. The headline for Australia is that its overall performance is pretty stagnant:

  • Grade 4 Maths – mean of 517 – Significant improvement on 1995 but no significant change since 2007
  • Grade 8 Maths – mean of 505 – About the same as in 1995
  • Grade 4 Science – mean of 524 – About the same as in 1995
  • Grade 8 Science – mean of 512 – About the same as in 1995

So we haven’t gained much traction in these areas in the past 20 years. Why not? This is the kind of question that education research should be addressing.

It seems reasonable to look at the performance of a single country over time like this and try to draw a few inferences but I am more sceptical about comparing the performance of different countries. For instance, Shanghai is often cited for its PISA results but this is a city and not a state. In Australian terms, it would be fairer to compare Shanghai with Canberra. Similarly, it seems unfair to compare countries with smaller and more homogeneous populations with places like the United States. However, I still find the following results to be quite stunning:

  • Singapore, Korea, Japan, Hong Kong and Chinese Taipei are the only East Asian countries represented. They take out the top five places in both maths assessments and five of the top six places in the two science assessments.
  • In the Grade 4 maths test, the highest performing country outside East Asia is Northern Ireland (which did not take part in the Grade 8 assessments).
  • In the Grade 8 maths test, the highest performing country outside East Asia is Russia.
  • Not only did the East Asian countries listed above significantly outperform Australia, but countries such as the United States, England and Kazakhstan also significantly outperformed Australia in all areas.
  • 30% of Australian students – nearly a third – were at the ‘low’ or ‘below low’ benchmark in Grade 4 maths, compared with 21% in the U.S., 20% in England, 14% in Northern Ireland and just 2% in Hong Kong (with no Hong Kong students in the ‘below low’ category).
  • Interestingly, Finland was outperformed by the United States and England in Grade 4 maths, although I don’t know whether these differences were significant. However, Finland did better than these countries in Grade 4 science. (Like Northern Ireland, Finland only entered students at Grade 4.)

Perhaps we might pause before sending more delegations of worthies to Finland to marvel at phenomenon-based learning. Instead, Australians might do better to head to Kazakhstan.

I await next week’s PISA results with interest.


All that marking you do? Waste of time.

One of the worst myths we have in education is not learning styles or the idea that we only use 10% of our brains; it is the myth that feedback is the same thing as marking.

John Hattie has done much to popularise the idea that feedback is highly effective but this conclusion highlights one of the problems with Hattie’s kind of meta-analysis – there’s a whole bag of quite different things sitting under that label.

Hattie himself acknowledges that of all forms of feedback, feedback to the teacher is one of the most powerful kinds. Yet we continually think of feedback as something that teachers supply to students, in writing. And Dylan Wiliam points out that, while the effects of feedback are large, a worrying proportion of them are negative. It seems that telling a student that she has done something right or wrong can have unpredictable consequences.
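To make the point about lumping concrete, here is a purely illustrative sketch – every label and effect size in it is invented rather than drawn from Hattie, Wiliam or any real meta-analysis – of how a respectable pooled average can sit on top of very different interventions, a fair share of them negative:

```python
# Purely illustrative: every study label and effect size below is invented,
# not taken from any real meta-analysis. The point is only that one pooled
# average can hide a bag of quite different things.
from statistics import mean

feedback_studies = {
    "grades only on returned work":              -0.20,
    "written comments handed back weeks later":   0.05,
    "praise unrelated to the task":              -0.10,
    "immediate correction in class":              0.60,
    "information fed back to the teacher":        0.90,
    "task-specific next steps":                   0.55,
}

pooled = mean(feedback_studies.values())
negative_share = sum(d < 0 for d in feedback_studies.values()) / len(feedback_studies)

print(f"pooled 'feedback' effect size: {pooled:.2f}")       # looks healthy (0.30)
print(f"share of negative effects: {negative_share:.0%}")   # yet a third are negative
```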

Imagine a classic physics question. Students are presented with a diagram of a book on a desk complete with an arrow to show the weight of the book and an arrow to show the push of the desk back up on the book. The question is: what is the Newton’s third law pair of the weight of the book?

[Diagram: a book resting on a desk, with an arrow showing the weight of the book and an arrow showing the push of the desk on the book]

I’ll give you two options for answering this question:

1. The students write the answer in an exercise book, perhaps at home. You then collect in the books and mark them.

2. The students write their answers on a mini whiteboard and hold them up during the lesson.

From experience, a lot of students will get this question wrong, even after correct instruction. The right answer is ‘the gravitational pull of the book on the Earth’ but this feels weird. The students’ eyes are drawn to the other arrow and they choose the push of the desk on the book.
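A quick restatement of the physics may help (my notation, not anything from the original diagram): third-law partners are always two forces of the same type acting on two different bodies.

```latex
% Third-law pairs in the book-on-desk example (notation is mine)
\begin{align*}
  \vec{F}_{\text{Earth on book}} &= -\vec{F}_{\text{book on Earth}} && \text{(gravitational pair: the correct answer)}\\
  \vec{F}_{\text{desk on book}}  &= -\vec{F}_{\text{book on desk}}  && \text{(contact pair: not the partner of the weight)}
\end{align*}
```

The weight and the desk’s push only happen to be equal in size because the book is in equilibrium – Newton’s first law at work, not his third – which is exactly what makes the distractor so tempting.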

So if you follow option 1, you’ll get a load of exercise books full of the same error, which you will need to explain and correct, in writing. These explanations will have to be brief if you’re ever going to get to bed. Moreover, if the question was set as homework then some students who had help with it will have written the correct answer and so won’t get your written feedback, even though they probably need it.

Teaching has been reduced to the teacher corresponding individually and in writing with different members of the class.

But if you choose option 2 then you, the teacher, gain instant feedback. Students are present in front of you so you can ask them why they gave the answers that they gave. You can then tailor a more extensive explanation to address the issues that the students raise, and you can monitor and adjust for the emotional impact at the same time. All of this is feedback but none of it is marking.

English teachers are probably thinking that this is all very well but it won’t work in English. It’s not as straightforward, no. But the same principles apply: correct what you can with the students in front of you. It helps if you can break things down rather than always relying on assessing whole pieces of writing. The traditional approach, where a teacher circles and highlights parts of a written response before writing a paragraph at the end, is likely to be ineffective because there is too much for the student to take on board.

Although there is plenty of evidence for feedback, there is a general lack of evidence for marking itself. This is why the English schools inspectorate has now issued new guidance telling inspectors to stop asking for ever more detailed marking.

So feedback is potentially very powerful. But if you’re spending loads of your time marking then you might want to have a think about what you’re trying to achieve and whether there is a better way of achieving it.


A principled objection #AARE2016

[Photo: the Melbourne Cricket Ground]

Last year, we met the ‘phallic teacher’. This year it’s the ‘phallic lecturer’. Australian education’s annual festival of daftness – The Australian Association for Research in Education (AARE) conference – has come to the Melbourne Cricket Ground. Neoliberal imaginaries and French philosophers are all the rage.

I’d quite like to go to one of these gigs but it comes at a time of year when we’ve rolled over our classes to the 2017 timetable. I have two new Year 12 groups to teach today and that concentrates the mind.

Instead, I will be following via the Twitter hashtag #AARE2016. When I mentioned this on Twitter, alongside the fact that I would be highlighting the funniest tweets, I provoked something of a backlash. One associate professor commented that, “Academic freedom is important, but what you are doing here is anti-intellectual. You are trolling an acad prof assoc.” Linda Graham offered some career advice on the subject of respecting the expertise of academics.


What has caused this loss of a sense of humour? Perhaps there are some things too precious to poke fun at. Perhaps it is with immense solemnity that we should contemplate presentations on, “Queer(y)ing ‘agency’ using a Butlerian framework of thinking: What might alteration ‘look like’ through this prism of thought?”

But this is not just about highlighting silly conference papers. There is a serious point here – a point that I have a right to make.

I am not opposed to blue skies research (as long as we are allowed to poke fun at it, should we wish). The world is enriched by the pursuit of philosophy or art. But AARE is not a pure mathematics conference. It is about education and education is one of the largest social enterprises we have. Governments pump vast quantities of taxpayer money into it; money taken from the pay packets of nurses and bus drivers. Yet, in the anglophone world at least, we don’t seem to be seeing much improvement. Why not? What is all this research achieving?

I think I know why we are in this position. If you take a look at the AARE 2016 program and strip it of all the posturing about, “Bourdieu’s theory of social practice and Vygotsky’s cultural-historical activity theory,” then you will find papers about practical approaches to teaching. The trouble is that the methods pursued seem to fly in the face of what we already know about effective practice. For instance, ‘direct’ or ‘explicit’ instruction has a strong track record dating back to the process-product research of the 1960s. You might think that researchers would be trying to improve and refine these methods but there is no reference to either term in the entire program.

The use of phonics is mentioned in the title of just one presentation, despite being the teaching method with probably the strongest evidence base in the whole of education (see here, here and here) and a topic of considerable importance given the current proposal for a phonics check. And this single mention is in a presentation on teachers’ beliefs about ‘commercial’ and ‘pre-packaged’ phonics programs. For those of you who aren’t up with the lingo, commercial = bad.

So, what practices are being promoted at the conference? Well, there’s lots of inquiry-based learning and makerspaces (the latter apparently being a tool to ‘engage’ women in STEM subjects). This is despite such approaches being based upon the kind of constructivism that even serious constructivists have moved away from (see the discussion here). We have papers that classically ‘beg the question’ such as, “How does inquiry-based pedagogy motivate students to learn mathematics?” What if it doesn’t? What if it’s useless? What if it creates situational interest – ‘motivation’ is too broad a term to use in this context – but leads to poor learning outcomes? Let’s just hold on and examine a few assumptions here.

This is why the AARE enterprise is so fruitless. Utility should not be the only aim of education research but it should at least feature. Somewhere. Instead, we have lots of derivative research that sits entirely within a jargon-laden, self-congratulatory, self-referential bubble.

Do you want to get ahead in education research? The first rule is to learn how to eduwaffle. The second rule is to respect your elders and betters (whilst voicing platitudes about critical thinking).


Back to basics will not work

Last week, Jennifer Buckingham of the Centre for Independent Studies (CIS) released a report calling for an early phonics check in Australia similar to the one that takes place in England. Some commentators immediately dismissed the idea on the grounds that CIS is a right-wing think-tank. Yet if you look at the report it makes a very good point: since the introduction of the check in England in 2012 (following a large pilot in 2011), Year 2 reading scores have improved.

So the government introduced a phonics check with the intention of improving reading more generally and reading scores improved. Yes, I get that this is a correlation but I think it’s a highly suggestive one. Perhaps these scores would have risen anyway? We can’t rule it out. Perhaps we should run a randomised controlled trial in order to nail causation. Seems like a good plan to me. Is that what the critics want? No, they just want the idea to go away.

I think that improved reading scores are worth having and I think that, on the balance of probabilities, this is what a phonics check will do if introduced in Australia. So I’m backing it. And I hope the Australian Labor Party backs it too.

But it’s not enough.

In The Guardian, phonics check critic Misty Adoniou almost had a point when she stated that, “If the government wanted to panic and put its money somewhere, I’d suggest they put it into year four and put it into deep comprehension.” Clearly, the idea of somehow teaching children the skill of understanding things deeply is both absurd and dangerous: absurd because it reminds me of a spoof I once wrote about teaching children the general skill of being good at things, and dangerous because the heart sinks at imagining the kind of dreary close-reading activities that might be manufactured in the vain effort to achieve this goal.

And yet comprehension is a genuine problem that phonics does nothing to fix. You might be able to turn the letters of a word into sounds but if you don’t know what that word means or you don’t understand the context then that won’t help much. The phonics check might improve reading at Year 2 while doing little to affect the situation at Year 4 when comprehension becomes more important.

When we had the recent review of the Australian Curriculum, instead of cutting out the bloated and flawed general capabilities, the decision was made to cut content, especially at primary school. The logic was that this would create space for more reading and numeracy – a back to basics approach. History was lost in favour of a lightweight and retro social studies curriculum of the kind debunked by Kieran Egan back in 1980.

In this, we are following the example of the American “No Child Left Behind” program that saw content excised from the elementary curriculum in favour of ever more guided reading and numeracy.

We should have learnt the lesson from the U.S. No Child Left Behind has not led to massive strides in reading ability because it neglected comprehension, just as the new Australian Curriculum does. Yes, plenty of reading comprehension strategies were over-taught in guided reading lessons but that is not the solution because they are, ultimately, of very limited value.

To understand what you read, the main thing you need is extensive, if not very deep, background knowledge. Which is exactly what you gain from studying science, history, music and all those other subjects that were diminished when the U.S., and then Australia, decided to go back to basics.


Slick salesmen of the intellectual apocalypse

Dumb is as dumb votes.

We are in a bind, aren’t we? If educated people speak out against populism then we prove their point: that we belong to an elite that is out of touch with ordinary people. And so by speaking, we make it worse. But if we apologise and say we understand why people feel this way then that’s not right either. And it doesn’t point to a way out. Good people are meant to oppose fascism.

I don’t have an answer. It’s a quandary. And I don’t know what’s coming next from north of the Tiber.

But I’m pretty sure I know how we got here.

Firstly, the academy lost its mind. In a post-truth world we need heavyweights standing up for truth. We need passionate veridicists willing to point out facts from history and about society. We need practical philosophers who will challenge political whimsy with professorial gravitas. But the social sciences fell, to varying degrees, for relativism. There is no truth any more, only signifiers. Right and wrong is an unfashionable binary. Impenetrable jargon is such stuff as careers are made on. So the academy has both bound and washed its hands.

Then there are the systems we have in place for educating our peoples – systems created, at least in part, to strengthen our democracies by giving the populace the knowledge they need to make good decisions.

But these systems have been led into error by false prophets. These are the ones who look at the pinnacle of disciplined human endeavour – an invention, a cure, a play, a painting – and declare that these works are the product of ‘creativity’. If only, they claim, we could get students rehearsing their creativity in a trillion mediocre and worthless ways then we would prosper.

It is as if the horde were marching on us and yet we were in thrall to oracles urging us to teach our soldiers the quality of fightiness and the need for good heart rather than teach them how to handle their swords, hold their shields and maintain a defensive line.

Slick salesmen are selling us mirages. None of these will lead to better educated peoples. Instead, we will frivolously waste the time of young people as they mess around with computer games or Lego or some other engaging gimmick prior to dropping out and wondering why the world has gone to crap.

We can do better than this. Do many of our students even know enough of history to know what is at risk? As teachers, it is up to us to pass on to our students the fruits of civilisation – knowledge that is worth defending. We know how to do this, we just need to stop hating ourselves long enough to give it a go.

Our modern democratic empire is guilty of many evils, hypocrisies and contradictions. But it is better than the chaos that awaits its demise.

Rage against that chaos.


Is it right to label high-achieving students as ‘nerds’? No.

In Australia, students sit exams at the end of Year 12. These exams are different in each state but they are all used to calculate an Australian Tertiary Admission Rank (ATAR) for those students who intend to apply to university. This is a number from 0.00 to 99.95 and it is meant to give a student’s relative position in the Year 7 cohort to which they belonged. In other words, an ATAR of 80 places you in the top 20% of that group. Because some lower-achieving students don’t complete Year 12 or don’t apply to university, the average ATAR is around 70.
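To see why that average sits at about 70 rather than 50, here is a minimal, purely illustrative sketch – the cohort size, score distribution and drop-out rule are all invented, and this is not the actual ATAR methodology:

```python
# Illustrative only: an ATAR-style rank is computed against the FULL age
# cohort, but only the stronger part of that cohort ends up receiving one.
# All numbers here are invented for illustration, not real ATAR data.
import random
from bisect import bisect_left

random.seed(0)
cohort = sorted(random.gauss(0, 1) for _ in range(10_000))  # whole Year 7 cohort

def rank(score):
    """Percentage of the full cohort scoring below this score (0-100 scale)."""
    return 100 * bisect_left(cohort, score) / len(cohort)

# Suppose, purely for illustration, that the lowest-achieving 40% never
# complete Year 12 or never apply to university.
applicants = cohort[int(0.4 * len(cohort)):]

mean_rank = sum(rank(s) for s in applicants) / len(applicants)
print(f"mean rank of those who receive a rank: {mean_rank:.0f}")  # roughly 70, not 50
```

The point is simply that once the lower-achieving end of the cohort does not receive a rank, the mean rank of those who do must sit well above 50.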

Many universities offer teaching degrees. There is no quota system in Australia and teaching degrees are relatively cheap to run, offering a revenue source to the universities. This has led to mushrooming numbers of education students, which, in turn, has resulted in an oversupply of teachers – although the picture varies by subject area. It has also led to low admissions standards. In Victoria this year, the average ATAR on entry to a teacher education course was 57.35, with some students gaining places with a rank as low as 30.

James Merlino, education minister for the Victorian Labor government, has therefore made the perfectly sensible decision to insist on a minimum ATAR for entry into teaching of 70. In other words, teacher education students will be required to be in the top 30% of the general population in terms of academic performance. Given our current levels of oversupply, with many teachers struggling on short-term contracts, this seems like a great idea and I commend it.

I am less convinced by the plan to test students’ ‘problem solving, leadership and empathy skills’. There is no such thing as a general skill of problem solving, and while some personality tests might have validity if used in a low-threat, low-stakes way, they can and will be thoroughly gamed by participants if used for admissions purposes.

Yet it is the ATAR plan that has ruffled the most feathers. Stephen Elder, the Executive Director of Catholic Education Melbourne, remarked that, ‘Nerds don’t necessarily make good teachers.’

I am shocked that anyone involved in education would say such a thing, let alone someone so senior.

Academic achievement is often seen as not cool and this leads to a culture war in which teachers are front-line troops. Labelling kids as ‘nerds’ sends precisely the wrong message. Yes, some students have reclaimed the term – a bit like the gay community have with the word ‘queer’ – but it is still pejorative when used by others to describe a person or group. We know what the schoolyard complaint ‘she’s such a nerd!’ means and it’s not pleasant. If I heard a student say this then there would be trouble. Some synonyms that Google offers for ‘nerd’ include ‘bore’, ‘dork’, ‘dweeb’ and ‘loser’.

Stephen Elder should retract his use of this label and instead explain why he wants a continued oversupply of lower-achieving teachers.



A phonics check for Australia?

In May, the Australian government suggested adopting a phonics check similar to the one used in England.

The point of such a test is to figure out whether children have mastered the ability to decode simple words. The check uses real words and pseudo-words. The latter are presented to children as the names of aliens but they follow conventional decoding rules, for example, “beff”, “shup”, “doil” and “charb”.

One of the problems with any kind of assessment is validity – does it test what we are actually trying to test and give us accurate diagnostic information? This is why there are pseudo-words in the phonics check. They can only be decoded using phonics. Standard words can be remembered by students as ‘sight’ words and so don’t necessarily tell us about a child’s ability to decode using phonics.

Why is this important? Phonics is probably the most well-researched topic we have in education and the evidence is overwhelming – the best approach to getting the greatest number of children reading is to teach them systematic synthetic phonics (see here, here and here). This is the process of learning how to build up words by blending together the individual graphemes that represent distinct sounds. This is the process needed to decode the pseudo-words in the phonics check.
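As a toy illustration of what ‘decodable’ means here – this is only a sketch of the idea, not how the real check items are constructed – each of the alien words quoted above is just a short sequence of graphemes blended left to right:

```python
# Toy illustration only: not the method used to build the real check items.
# Each pseudo-word is a sequence of graphemes, each mapping to one sound;
# a child who can decode simply blends them from left to right.
pseudo_words = {
    "beff":  ["b", "e", "ff"],
    "shup":  ["sh", "u", "p"],
    "doil":  ["d", "oi", "l"],
    "charb": ["ch", "ar", "b"],
}

for word, graphemes in pseudo_words.items():
    blended = "".join(graphemes)
    assert blended == word  # the graphemes really do spell out the word
    print(f"{' + '.join(graphemes)} -> {blended}")
```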

Jennifer Buckingham of the Centre for Independent Studies has released a report today on how we might bring the phonics check approach to Australia (see here and here). I was generally aware of the success of the check in the UK but I didn’t know the details, which are pretty stunning. After the introduction of the check in 2012, the pass rate rose each year for students in both Year 1 and Year 2. This ultimately means little unless it leads to an increase in reading ability more generally, and the evidence suggests that it did. Having been fairly stable between 2005 and 2011, the proportions of students reaching Level 2 and Level 3 on the reading assessment at the end of Year 2 have steadily increased since the introduction of the check.

Reading is critical. It is a skill that unlocks other academic skills. Reading is not just about decoding – you also need knowledge of the world and a good vocabulary, and we probably could be doing more than we are at present to develop these. But failure to decode will limit a child’s progress. A phonics check, properly implemented, would mean that we can assess this essential skill early and intervene if necessary. It would provide a useful tool for teachers in designing interventions.

In her report, Buckingham makes a number of recommendations and cautions us against expanding the check into a more comprehensive literacy assessment. I agree. That would be premature and might even set up a few red herrings. We need to keep it simple, valid and focused if we are to obtain the same benefits as England.