Fiddling with NAPLAN while PISA burns

It is the bleakest of midwinter and your house feels cold. The heating is apparently on, yet the thermometer reads just 10°C and the temperature appears to be dropping over time. Do you:

A. Decide to check the thermometer less often?

B. Check the thermometer but then hide the temperature measurements from the rest of your family?

C. Declare that thermometers do not work because your house is getting colder?

D. Fix the heating?

The correct answer is D, but answers A through to C may appeal to the education ministers of a number of Australian states who have commissioned a report that makes similar suggestions about NAPLAN, Australia’s national literacy and numeracy assessment programme.

Currently, students sit NAPLAN assessments in Years 3, 5, 7 and 9. The review suggests reducing this to Years 3 and 7 or perhaps 4 and 8. It also floats the idea of making it harder for parents to find NAPLAN information about schools through the MySchool website. All of this appears to be under the assumption that NAPLAN doesn’t work because it has not fixed literacy and numeracy outcomes in Australia.

I am critical of NAPLAN in its current form. The numeracy tests have far too few questions that students must attempt without a calculator, sending the signal that calculation by hand is no longer important. The reading and writing assessments draw on essentially random content rather than the content of a rigorous curriculum. Partly, this is because the Australian curriculum is not rigorous and is instead knowledge-lite and vague. However, even if the curriculum were reformed, I suspect the authors of NAPLAN would still select random contexts for reading and writing rather than contexts drawn from the previous year’s curriculum content, because they have a view of reading comprehension and writing composition as wholly generic skills. They are not. A large component of both is domain specific – you can read and write better about stuff you know about. The current arrangements therefore privilege the already privileged – the students who have family discussions about the news and family visits to museums.

I would also add a science test. I suspect that science content is not taught rigorously in Australian schools but it would be good to know and track this.

Instead, the review floats the prospect of assessing non-existent ‘general capabilities’ such as critical thinking through the writing assessment. You can certainly teach students to think critically within a subject discipline by essentially teaching that discipline to a high level, but the idea that critical thinking is somehow a general capability that can be applied equally well from one domain of knowledge to another is simply wrong. Attempting to assess it as a generic skill will therefore not help.

In the context of this week’s depressing PISA news, this review seems particularly incongruous. NAPLAN is not perfect, but I would rather have it than not and I believe parents have a right to the information it produces. If individual schools are spending months and months doing NAPLAN preparation or freaking their students out about the assessments then that is on them. Not only is it morally dubious, it won’t work. They should focus on building and delivering a rich and robust curriculum and then explicitly teaching it.

Education ministers should either focus on helping schools deliver these goals or get out of the way, give them more autonomy and let them figure it out for themselves.

It is time to tackle Australia’s entrenched behaviour crisis

One suggested cause of Australia’s underwhelming performance in the recently released PISA 2018 results is a lack of suitably qualified teachers, particularly in the area of mathematics. Such an argument is highly plausible, but imagine if we did recruit a whole lot of excellent mathematicians to the teaching profession and then placed them in something akin to a vision of hell painted by a Renaissance artist. I think we would still have a problem.

At the same time as assessing academic performance in reading, mathematics and science, the PISA researchers asked 15-year-old students a series of questions about their experience of school. One of these was about bullying. Thirteen per cent of Australian students reported frequent bullying. Of the relatively wealthy group of countries that belong to the OECD, only New Zealand students reported a higher rate. Overall, Australia’s issues with bullying were worse than those of most other countries in the OECD (OECD countries are black in the table below):

That is worrying enough, but PISA researchers also asked questions about the disciplinary climate in class. In 2018, they asked students about the climate in their language-of-instruction lessons (i.e. English lessons in English-speaking countries). They asked students about the frequency of events such as noise and disorder, students not listening to the teacher, students not being able to work well or students having to wait a long time for the lesson to start. From this, researchers constructed an ‘index of disciplinary climate’. Again, Australia fared badly:

Unsurprisingly, and consistent with previous findings, the researchers found that a worse classroom climate was correlated with worse academic results.

Anyone familiar with previous rounds of PISA would not be surprised that Australia fares badly on classroom climate. In 2015, researchers asked similar questions about the climate in science classrooms.

At that time, some commentators wondered whether this issue was isolated to science lessons. We now know the answer to that. When the results became known, a number of educationalists came out to claim there was no problem – nothing to see here. Why?

The first thing that anyone needs to appreciate if they seek to understand Australia’s behaviour crisis is that, for ideological reasons, most educationalists are deeply opposed to talking about it or tackling it. They would even object to me characterising it as a ‘behaviour crisis’ despite the evidence presented above. This is because they have adopted a romantic view of childhood in which children, rather than being complex individuals who sometimes do the wrong thing, are entirely innocent and blameless. This is understandable in the context of the early 20th century, when progressive educators were pushing back against the use of physical punishment in schools, but it makes little sense in today’s science or English classroom.

When you adopt this ideology, attempts to manage behaviour are seen as sinister and coercive. Poor behaviour is not taken as a signal that a child has made the wrong choice, but as an act of communication. ‘All behaviour is communication’ is the mantra. The child is communicating to the teacher that his or her needs are not being met. This is usually translated as meaning that the lesson is not entertaining enough (educationalists would use the word ‘engaging’, but in practical terms they mean ‘entertaining’). This shifts the blame for behaviour to the teacher. Yet academic learning, just as with pretty much anything worth doing in life, will always contain an element of hard work and slog. Part of the role of school involves reconciling students to that reality.

Getting the teaching right

Entertaining lessons may keep students quiet for a while – “let’s make a poster!” – but they will not necessarily lead to much learning and they will not tackle the root cause of the issue. However, this does not absolve teachers and schools of their responsibilities. Many of the negative externalising behaviours that our notional new cohort of maths teachers might experience will be rooted in a long history of educational failure. I can only imagine what it is like to come to school every day as a 15-year-old, troop from lesson to lesson and constantly be confronted with the fact that I cannot read fluently. I would certainly become frustrated and I would look for an outlet and a different field in which to excel.

We should not be putting students in this situation. We need to use effective teaching methods to ensure all students learn to read and do basic mathematics. Explicit, structured teaching that is highly interactive and seeks a high success rate is the backbone of this approach.

Classroom management

Nevertheless, young people who can read and write will still choose to misbehave. This is because they are human. The good news is that there is plenty that teachers can do to prevent behaviour issues from arising in the first place and then to manage them when they do. This knowledge is not widely shared with teachers because the ideology of educationalists is the ideology of our schools of education and can be briefly summarised as: What kind of monster seeks to control a child?

That’s why the second chapter of my book for new teachers is devoted to classroom management techniques. These mostly stem from an approach known as ‘behaviourism’ that many educationalists are keen to deride. In decreasing order of frequency and emphasis, behaviourist techniques seek to manipulate conditions to prevent poor behaviour arising, positively reinforce desirable behaviour and provide a cost to poor behaviour. Teachers should set up routines for how to start the lesson, create seating plans, reward good behaviour with a smile and a positive word, draw attention to the students who are doing the right thing rather than the ones who are not, walk towards an area of the room where misbehaviour is starting to occur, admonish privately rather than publicly, avoid sarcasm and use mild punishments such as short detentions for repeated infractions.

These strategies work at the ‘Tier 1’ level of a model known as ‘response to intervention’. They will not always work for all children because some have very complex needs. These students may need small-group or individual support or they may require specific accommodations in the classroom (such as a pass that allows them to leave class if they are losing their temper). That’s why classroom management must be integrated into a wider, whole-school approach.

School culture

Unfortunately, many schools do not have a wider, whole-school approach and instead have a culture that undermines teachers. In too many schools, leaders do not take responsibility. In some schools, there is a behaviour policy in place but teachers are not supposed to use it. If a teacher goes through all the appropriate steps and ends up asking for help or setting a detention, a school leader is likely to see this as a sign that the teacher is not teaching ‘engaging’ enough lessons. It’s all the teacher’s fault.

You can get by in such schools, but you have to adopt some kind of coping mechanism. Making posters is one. Being charismatic, old, male and in a position of seniority helps. If you don’t want a confrontation with parents then a good strategy is to not chase missing homework and to give out high grades even if they are unearned. Essentially, in bad schools you need to tell jokes, lower your expectations and work your way up the pecking order.

Our notional new maths teachers may choose to do this, or they may choose to quit.

There is another way. School autonomy policies in England have led to a new breed of ‘free schools’ that are state-funded but largely free to follow their own course without interference from educationalists and bureaucrats. Interestingly, Andreas Schleicher of PISA recently visited one such school in London, Michaela Community School. It has a whole-school policy that is warm and welcoming while also being firm. I don’t think we would want to exactly replicate Michaela in Australia, but at present it is hard to imagine creating a school that is anything close to it.

If behaviour is a form of communication, then the behaviour captured by PISA is communicating to Australia’s politicians and education officials that tackling this problem is long overdue. They need to stand up, be strong, ready themselves for the onslaught from the ideologues and do the right thing.

Does PISA success come at the expense of life satisfaction?

Those in England who are unhappy with government education policy have a new strategy for spinning the recent PISA results. The suggestion is that the small improvement in reading, the significant improvement in maths and the small decline in science have all come at the expense of students’ life satisfaction. All that testing and cramming is getting them down.

This is an interesting argument.

The OECD have even produced one of their strange graphs that seems to back this claim. I am never entirely sure what they are plotting on the axes of such graphs until I have done a great deal of investigation, but it does at least suggest that higher performance is associated with lower life satisfaction. Ignorance is, perhaps, bliss. Maybe that’s why Adam and Eve were warned off the Tree of Knowledge:

You will notice that UK students seem particularly gloomy, albeit that this is based upon a score of six-and-a-bit on a ten-point ‘life-satisfaction’ scale on which the OECD average is seven-and-a-bit. There are, of course, alternative hypotheses as to why this may be the case. How do we control for national temperament, for instance? And maybe it’s not about cramming for exams but the fact that students who know more about the world are understandably gloomier? Maybe they are worrying about Brexit and climate change?

And maybe it is a result of having a growth mindset.

Apparently, upwards of 70% of students in the UK have a growth mindset. Nick Rose ran the numbers on this and found that having a growth mindset is negatively associated with a change in levels of ‘life-satisfaction’.

By Nick Rose (@Nick_J_Rose on Twitter) with permission

Perhaps all those posters and assemblies claiming that achievement is the result of hard work and not natural talent have fuelled a dissatisfaction with life? Perhaps this has convinced some students to be harder on themselves? It’s about as plausible as any of the other speculation in this blog post.

Australia and Finland slide further in PISA 2018


The Programme for International Student Assessment (PISA) is run by the OECD, which conducted its most recent round of testing in 2018. The long-awaited results of PISA 2018 have just been published. I am not interested in absolute scores and overall rankings because so much can vary between different countries and states: socioeconomic status, homogeneity of population, prevalence of tutoring and so on. Instead, I focus on the direction of travel of different education systems.

I will post more analysis as I have it, but I will start with a few headlines.

Firstly, the darling of progressive educators everywhere, Finland, continues its unrelenting slide in performance since 2006:

Finland made its name in the early 2000s on the basis of students who had received a pretty didactic and traditional educational experience. However, when anglophone observers then visited the country, they looked at it through the lens of their own ideology and attributed Finnish success to progressivist approaches. It seems that Finland was taken in by some of this hype. They introduced ‘phenomenon-based learning’ and a range of other, fuzzier approaches. We cannot know for sure what has caused Finland’s decline, but the data is highly suggestive that any changes made in the last 10-20 years have not been for the better.

Which is why, finally, Australian educationalists should put to rest any ideas that contemporary Finland is the system to copy. Australia has continued to decline in all but its mean reading score, which has flat-lined:

If we are going to learn from anyone, it should be systems that have been improving rather than declining. At present, our grand plan is to adopt practices that were abandoned in England nearly a decade ago, while the state governments push an agenda of reduced accountability, as if smashing the thermometer will stop your house from getting cold.

I do not yet have the disaggregated data for England but, in terms of population, it forms a large proportion of the United Kingdom, and the UK seems to be improving in reading and mathematics, even if the science score may be slipping a little:

England has made efforts since 2010 to adopt more of a knowledge-rich curriculum. If E. D. Hirsch Jr. is correct, this should eventually lead to an improvement in reading comprehension, and so the reading result may reflect such a process. There has also been a renewed emphasis on maths teaching, with the introduction of ‘maths hubs’ to push best practice based on Chinese-inspired explicit teaching approaches. Again, we cannot be certain that these results have been caused by these policies, but it at least gives us a hint.

What we can no longer point to is any broad evidence from PISA that adopting progressivist education policies is associated with higher performance. And before you dismiss the PISA evidence, bear in mind that these are not tests of recall, but are attempts to test students’ ability to apply reading, maths and science knowledge to relatively complex real-world problems. I suspect that the intent was to give progressive practices as favourable a playing field as possible on which to compete.

What kind of improvement in a school system would not lead to an improvement in students’ abilities to apply what they have learnt to complex, real-world problems?

I cannot think of any.

How to do punk research

There’s a statistics website that enables anyone to do their own punk research: estimationstats.com. The quality of its output is good enough to be used in published scientific papers like mine, but it is simple enough for anyone with a basic command of a spreadsheet like Excel to conduct their own experiment. Here’s a worked example.

Firstly, we need to do an experiment that is small and self-contained. Perhaps you want to find the effect of drawing diagrams versus making notes on students’ learning. Next, you need to ensure that you gain any relevant ethics approvals. If you are conducting this research through a university then you would need to follow their ethics process (which can be quite laborious). Otherwise, check with your school.

The next step is to randomly allocate students to one of the two conditions. A rudimentary way to do this is to go down your register and alternate the conditions alphabetically. A more sophisticated method uses a random number generator (see the sketch below). It doesn’t really matter. Researchers often use a quasi-experimental design where students are not randomly assigned. Instead, the teacher of Class A does something different to the teacher of Class B and so on. I would avoid such a design if you can. With such a design, there are multiple confounds and the statistics we are going to use assume random allocation. So I would suggest doing something where different students in the same class form the two conditions.
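If you would rather not do this by hand, a few lines of code will handle the allocation. Below is a minimal sketch in Python; the student names are made up purely for illustration.

```python
# Minimal sketch: randomly allocate a class list to two conditions,
# 'D' (drawing diagrams) and 'N' (taking notes). The names are invented.
import random

students = ["Aisha", "Ben", "Chloe", "Dev", "Ella", "Finn", "Grace", "Hamid"]

random.seed(2019)          # fixing the seed makes the allocation reproducible
random.shuffle(students)   # shuffle the register, then split it in half

half = len(students) // 2
allocation = {name: ("D" if i < half else "N") for i, name in enumerate(students)}

for name, condition in sorted(allocation.items()):
    print(name, condition)
```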

In the example we are going to use, some will draw diagrams as they listen to teacher explanations and others will take notes. Make sure you label your conditions in such a way that you will remember what they are. In this case, we will use ‘D’ for drawing diagrams and ‘N’ for taking notes. This is preferable to ‘Condition 1’ and ‘Condition 2’ because the latter will cause you to try and remember which one is which every time you return to the data.

Finally, you will need to test students. Make sure your test assesses the things that were taught during the experiment, as clearly and as objectively as possible. Avoid adding questions that stray outside of these confines. Detecting an effect is hard enough even with focused questions, so adding irrelevant questions will not help. You may wish to deliberately add transfer questions that assess the same deep structure in a different context, but I would advise making it easy to separate the score on these questions from the score on questions that are more like the learning materials. That way, if there is no effect on transfer, which is likely, you can still look at the non-transfer questions alone.

So imagine that we do the experiment and get the following data (I have made this data up for the purpose of this example):


Now it is time to use estimationstats. First, select “Two Groups”:

You need to enter your data into the online spreadsheet. This comes populated with some mock data and the headings “Control” and “Test”:

You need to rewrite the headings, delete all of the mock data (make sure of this) and paste your data across. This is done with a Ctrl-C and Ctrl-V – I can’t get it to work by right-clicking on my mouse.

I recommend selecting a “Cohen’s d” effect size because this is the effect size that is most commonly understood in education:

The only other alteration I would make to the default settings is to label the y-axis with something a little more meaningful than “value”:

Then click “Analyse” and you should get something like this:


Beneath this, you will see a value for the effect size which, in this case, is an extraordinary d=-1.39. In other words, the mean of “Taking Notes” is 1.39 standard deviations lower than the mean of “Drawing Diagrams”. This is statistically significant at the p<.05 level and we can see this visually because the horizontal line that represents the mean of “Drawing Diagrams” does not overlap with the thick vertical line that extends upwards and downwards from the mean of “Taking Notes”. That vertical line represents the 95% confidence interval around the “Taking Notes” mean.
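To get a feel for what that number means, here is a minimal sketch of how a Cohen’s d of this kind can be computed, assuming the common pooled-standard-deviation definition (estimationstats may use a slightly different convention, such as a bias-corrected variant). The scores are invented for illustration and are not the data from the plot above.

```python
# Rough check of a Cohen's d calculation, assuming the common pooled-SD definition.
# The scores below are invented for illustration only.
import statistics

def cohens_d(control, test):
    """Standardised mean difference: (mean(test) - mean(control)) / pooled SD."""
    n1, n2 = len(control), len(test)
    m1, m2 = statistics.mean(control), statistics.mean(test)
    v1, v2 = statistics.variance(control), statistics.variance(test)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m2 - m1) / pooled_sd

drawing_diagrams = [8, 7, 9, 6, 8, 7, 9, 8]   # hypothetical 'control' scores out of 10
taking_notes     = [5, 6, 4, 7, 5, 6, 5, 4]   # hypothetical 'test' scores out of 10
print(cohens_d(drawing_diagrams, taking_notes))  # negative: the notes group scored lower
```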

Estimationstats also gives a p-value, but it’s an unusual ‘non-parametric’ one. We won’t go into that here, but the estimationstats website provides a link if you want to read more about it.
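For readers who want a feel for what a non-parametric p-value can look like, one common approach is a label-shuffling permutation test. The sketch below illustrates the general idea only; I am not claiming it is the exact procedure estimationstats uses, so follow the site’s link for the details.

```python
# Illustrative permutation (label-shuffling) test for a difference in means.
# This shows the general idea of a non-parametric p-value; it is not necessarily
# the exact procedure used by estimationstats.
import random
import statistics

def permutation_p_value(control, test, reshuffles=5000, seed=1):
    rng = random.Random(seed)
    observed = abs(statistics.mean(test) - statistics.mean(control))
    pooled = list(control) + list(test)
    count = 0
    for _ in range(reshuffles):
        rng.shuffle(pooled)                      # reassign the condition labels at random
        shuffled_control = pooled[:len(control)]
        shuffled_test = pooled[len(control):]
        diff = abs(statistics.mean(shuffled_test) - statistics.mean(shuffled_control))
        if diff >= observed:
            count += 1
    return count / reshuffles                    # proportion of shuffles at least as extreme

# Using the same hypothetical data as the Cohen's d sketch above.
drawing_diagrams = [8, 7, 9, 6, 8, 7, 9, 8]
taking_notes     = [5, 6, 4, 7, 5, 6, 5, 4]
print(permutation_p_value(drawing_diagrams, taking_notes))
```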

If you do your own experiment, you are unlikely to find anything as clear-cut as this, but if you do then you should probably let us all know.

Right. Over to you.


Is Singapore a bastion of educational progressivism?

Earlier, Katharine Birbalsingh commented on Finland’s decline in performance in the Programme for International Student Assessment (PISA) and suggested it may be due to the adoption of progressivist educational methods (it will be very interesting to see how Finland went in 2018 when these figures are released next month). Peter Ford chipped in with an unusual take:

Ford linked to an article he had co-written in April with JL Dutaut for the Times Educational Supplement (TES). Oddly, this article is paywalled on my laptop but accessible from my phone, so you may not be able to gain access. It also contains no hyperlinks or references and so it is not possible to check the sources for the claims that are made.

Essentially, Ford and Dutaut claim that Singaporeans were unhappy with their educational performance in the 1980s and so instituted a series of progressivist-inspired reforms which began to bear fruit from the 1990s onwards. The earliest robust data I can find is from TIMSS assessments which date back to 1995, so I cannot check the claim about 1980s performance.

It is true that when Singapore started to produce its own textbooks in the 1980s, the writers based some of their ideas on the work of the psychologist Jerome Bruner. Probably the best example of this is the concrete-pictorial-abstract sequence for teaching basic arithmetic, also known as the ‘bar model’ approach after the pictorial representations it uses. Bruner also happened to be an advocate for discovery learning, a progressivist-inspired teaching method. As John Dewey wrote in 1938, “There is… no point in the philosophy of progressive education which is sounder than its emphasis upon the importance of the participation of the learner in the formation of the purposes which direct his activities in the learning process.” However, I am not aware that discovery learning is a feature of Singaporean education. So there is an association here, but little more than that.

Where else could we look for evidence of progressivism in Singapore? PISA collect survey data alongside performance data and they specifically ask students about the kinds of teaching they are exposed to. In 2012, they asked questions to determine the extent of ‘teacher-directed instruction’ in maths lessons, as well as the extent of teachers’ ‘student orientation’ in maths.

These survey questions are deeply imperfect measures, as I have explored at length in previous blog posts (e.g. here), but they roughly map on to traditional versus progressive teaching practices. PISA use the answers to the survey questions to generate an index for each teaching approach. This placed Singapore 27th out of 64 for the use of teacher-directed instruction and 37th out of 64 for student orientation.

In 2015, PISA completed a similar process for science teaching, only this time, instead of the ‘student orientation’ construct, they surveyed students on the extent of ‘enquiry-based learning’ that took place in science lessons. This time, Singapore placed 11th out of 67 for teacher-directed instruction and 44th out of 67 for enquiry-based learning.

What can we conclude? As far as the evidence goes, I cannot rule out that Singapore has become more progressivist since the early 1980s, but if it has, it is still quite biased towards a more traditional style of teaching relative to the rest of the countries taking part in PISA, at least as far as we can believe the OECD data. Progressivism is, of course, more than a teaching style, it is a philosophy, and I cannot rule out the possibility that this philosophy has taken hold. Singapore is the home of the concept of ‘productive failure’, the last redoubt of constructivism, and so this must have sprung from somewhere.

Of course, it has always been a nonsense to point to a country at the top of the PISA league table and suggest that its practices must be superior to those lower down. PISA pits city states against large heterogeneous countries, wealthy states against poor and so on. Most of the time, it is unlikely to be the education system that is causing the difference. That’s why we should focus on the direction of travel and that’s why Birbalsingh’s point about Finland was valid.