Defeating maths anxiety


Maths anxiety is a form of anxiety specific to being placed in situations that require mathematical knowledge, and it is associated with impaired performance on mathematical tasks. There has been much debate about the relationship between maths anxiety and maths performance. Correlation is not causation. Does impaired performance cause maths anxiety or does maths anxiety cause impaired performance? Perhaps the relationship is two-way, setting up a vicious circle for the individuals involved?

Dr. Jo Boaler, a professor of maths education, has been prominent in the discussion about maths anxiety, attributing its cause to the use of timed assessments in class. However, I have previously investigated this claim and I see no reason to accept it at present.

Nevertheless, maths anxiety exists and is a significant problem. As a maths teacher, I want to do something about that. Until now, the research on how to deal with maths anxiety has not been hugely helpful, but a new study by Maria Passolunghi, Chiara De Vita and Sandra Pellizzoni may start to change that.

To begin, I want to applaud the design of the study. Far too common in education research are comparisons between doing something and doing nothing. Doing something may turn out to be better than doing nothing just because of a placebo effect. So, a better design of study will include an active control group. Doing something that is thought to have an effect is then compared with doing something else that is not thought to have this effect but that still feels like something new to the participants. Better still are three-armed trials where two separate interventions are compared with an active control.

A three-armed trial is the only truly valid way of comparing the effects of two different interventions. Over the years, attempts have been made to compare interventions from different studies using effect sizes, but effect sizes are not the stable measure that people think they are and can vary wildly depending on the subjects, the measures used, the experimental design and a whole host of other factors. This is why I have argued for three-armed trials in the past.

The Passolunghi et al. study on maths anxiety is just such a three-armed trial. Three programmes were developed for primary school students. All of them involved eight weekly one-hour sessions. The active control involved children creating comic strips. Of the two interventions, one focused on dealing with maths anxiety directly by addressing negative thoughts with a form of cognitive behavioural therapy. The other addressed maths anxiety indirectly by building maths skills and focusing on maths games, calculations and rhymes and stories for remembering maths facts.

The results were interesting. The direct approach to tackling maths anxiety reduced maths anxiety relative to the active control condition. However, the maths skills intervention resulted in pretty much the same reduction in maths anxiety. This is an unusual example of transfer – something that is rare in education research. However, the researchers also measured mathematics performance using a standardised assessment. The students who received the maths anxiety intervention did not improve their maths performance any more than the control, but the students in the maths skills intervention improved their performance more than both the control and the maths anxiety intervention.

Perhaps teaching children maths results in them improving at maths which, in turn, results in them being less anxious about maths.

It seems plausible.

Pasi Sahlberg’s diagnosis and cure for Australia’s ailing education system

Pasi Sahlberg has been opining about the Australian education system again. To summarise his thesis, back in the early 2000s, the PISA results of a group of countries including Australia caused education systems around the world to look to those countries for guidance. Since then, we have fallen from international grace. We are no longer seen as ‘progressive and future-looking’ but ‘conservative, ineffective and outdated’. This is not, apparently, due to our declining PISA scores, but because of the inequity of our system.

Sahlberg’s answer is to splash the cash with measures that include, “high-quality early childhood education as a basic right for all children, preventive support for children and families in their health and wellbeing, allocating money to schools to offer individualised help to all children, and investing in teacher collaboration and professionalism to advance school improvement.”

It is hard to establish the evidence base for Sahlberg’s claims. Who is it exactly who viewed us one way and now views us differently? If we knew, we could perhaps look at some data. There must be some data, after all, for otherwise how would Sahlberg know we are seen differently? Surely this is not anecdotal?

And why the pivot from PISA? If PISA results initially caused us to be seen a certain way, and if perceptions have changed, why would that change in perceptions have nothing to do with our PISA decline? How does Sahlberg know this? It would seem to be the most obvious reason for a change in perceptions. And if, as Sahlberg acknowledges, the issue of equity in Australia’s education system ‘is not a recent finding’, then why does he think it is the cause for a change in perceptions?

Indeed, rising inequity is not a recent trend in Australian education. According to this OECD document, the gap in reading performance explained by student socioeconomic status stayed pretty much the same between 2009 and 2018. And in another document, we find the same story for science performance between 2006 and 2015. Everywhere you look for the data, the disadvantage gap for Australian students seems to be pretty close to the OECD mean and stable over time.

Is it really more plausible that a stable, middling relationship between socioeconomic status and performance has caused a change in international perceptions of Australian education than that a large decline in overall performance has?

Yes, passions rage in Australia about state funding of independent schools, selective schools and many other issues and people are entitled to express their views about this. However, that does not give us licence to imagine cause-and-effect relationships where they do not exist.

Sahlberg has obviously noticed something in the wind. At one point, he worries about reforms where teachers, “should be allowed to use only evidence-proof teaching methods,” alongside better-founded concerns about performance-related pay and superstar teachers. What are these strangely named ‘evidence-proof teaching methods’ and why, if they are somehow based on evidence, would we be worried about them? Is it that they may perhaps replace teaching methods that don’t have much of a basis in evidence but that fulfil the ideological need to be ‘progressive and future-looking’?

An example of such a teaching method hails from Sahlberg’s home country, Finland. Finland introduced ‘phenomenon-based learning’ in 2017. It does not seem to be supported by much in the way of evidence and is unlikely to arrest Finland’s relentless PISA decline, a decline that Sahlberg also attributes, perhaps inevitably, to rising inequity. If all you have is a hammer then I suppose everything looks like a nail.

What of Sahlberg’s solutions for Australian education? Well, more money is always welcome and it would be great to have expanded access to early education. However, as was the case with Britain under Blair and Brown, you can spend massively on education without shifting the dial on performance. Does that matter if children have better wellbeing? Well, for one thing, wellbeing is linked to academic performance so setting the two up in opposition to each other seems strange.

Teacher collaboration and ‘professionalism’ are definitely good ideas, but exactly what does Sahlberg have in mind? Teams working on formative assessment? Teachers co-planning lessons? Learning walks? Instructional rounds? Without the specifics, it just sounds like something to say and it has the added bonus of being impossible to critique. After all, who is making the case that teachers should not collaborate and should be unprofessional?

And money for individualised help? If we are talking one-to-one tuition then that would probably be effective if the tutors used a mastery learning approach but it would also be incredibly expensive. If it’s just about more exhortations to differentiate then it’s likely to be pointless.

In sum, Sahlberg has diagnosed the wrong problem while suggesting a cure that is simultaneously vague and expensive.

How to influence teachers


A lot of people want to influence teachers: politicians, researchers, salespeople, campaigners. However, teaching is notorious for being relatively impervious to influence. Once a teacher closes the classroom door, they have a great deal of autonomy. In addition, teaching involves multi-tasking which, to do successfully, requires teachers to automatise many of their routines and behaviours. It is hard to change something you are not conscious of doing, even if you are persuaded of the case.

However, all is not lost. If you want to influence the practice of teachers, I offer the following advice.

1. Offer practical solutions

Most people who seek to influence teachers make the mistake of assuming that it is mainly about persuasion. This is probably because we all have ideologies that we bring to bear on education and these ideologies cannot be ignored. However, these sit more at an abstract level.

If a teacher has a class of 25 students in which one student constantly disrupts the class, what that teacher needs most is practical solutions to a problem that is affecting everyone, not least the misbehaving child. They need to know what to do right now, how to follow it up and what long term measures can be put in place to address the root cause. They do not need a lecture on UN conventions or to be persuaded that there might be a root cause.

2. Make the thing you want them to do easier than what they are doing at the moment

If you do not recognise that teachers work hard then that’s an empathy gap you are never likely to bridge. During term time, teachers burn the candle at both ends. They are often marking and planning lessons late into the evening and at weekends. And then there is all the bureaucracy. Not all of this work is optimal and that is the opportunity. If you can make your initiative less work than what teachers are doing at the moment then you have a chance of winning them over. However, if your big idea is a plan for differentiation that involves the creation of five different versions of every worksheet, you can forget it.

3. One thing at a time

Related to the previous point, don’t try to change too much in one go. I remember Dylan Wiliam talking about formative assessment and the professional learning groups he advises creating for embedding more formative assessment into lessons. His key message was that if a teacher can embed one formative assessment practice into their teaching over a given period of time then they are doing well.

4. Be aware of the emotional impact

I am starting to become convinced that all of the most important reforms to teaching involve seeking disconfirming evidence. They involve disrupting the confirmation bias that assumes that because we have taught something, students must have learnt it. So we have to check and find out the bad news. That’s what formative assessment is all about. It was never really about providing feedback to students, it was always about providing feedback to teachers – feedback that students have not learnt something.

This is hard to do. I have been teaching for 23 years. I should be confident. But seeking out the bad news makes me feel like a crap teacher. It is something I would avoid if I had not built the systems to make me do it.

Reformers need to be aware that the best reforms may not always make teachers feel good about themselves and need to have a plan for that.

Cognitive load theory is not neuroscience and it does not claim the mind is a computer


Google gives this definition of neuroscience:

“any or all of the sciences, such as neurochemistry and experimental psychology, which deal with the structure or function of the nervous system and brain.”

The key distinction from psychology is the emphasis on the structure or function of the nervous system and brain.

Cognitive load theory proposes a simplified model of the mind that consists of long-term memory and working memory. It does not make claims about where these components are located in the physical brain or even whether they have a specific location.

It may be interesting to know whether working memory sits at the front or back of your brain, but there seems to be little this could tell us about the design of instructional procedures – the key focus of cognitive load theory. Some researchers are trying to measure aspects of brain function as part of cognitive load theory research, but this is a secondary concern at present.

Right now, the chances are that if you come across diagrams of brains or images of brain scans in the context of an educational theory or approach, they are being used to lend credibility to something dubious.

Cognitive load theory also does not model the mind as a computer. It models it as a natural information processing system. The analogy researchers use is not the laptop on your desk but the process of evolution.

Computers run a program executed by a central processing unit. Evolution does not. In fact, a distinguishing feature of the model of working memory proposed in cognitive load theory is the lack of any central executive, with these functions being taken by schemas in long-term memory.


Are declining PISA scores in Australia and Finland explained by declining effort from students?

When the results of the 2018 round of the Programme for International Student Assessment (PISA) were published late last year, Australia and Finland cemented their long-term declines in performance.

This has been a bitter pill to swallow. Many in education and academia perhaps fear that these results will precipitate more government interference in the Australian education system and that they will face losses as a result. Others may have too much invested in Finland. Some travelled there in the early 2000s and wrote papers about the education system, so they have ego wrapped up in it. And in the early 2000s, Finland was wrongly presented as an example of how an education system could embrace progressivist education ideology and still be academically successful. There are those who still seek to make this case but the decline in results is becoming harder to ignore.

One rationalisation to explain away any decline in the cases of both Australia and Finland is that the students sitting PISA do not try as hard as they used to. The convincing part of this argument is that PISA is a low stakes test for those students who take it – they don’t even find out their result. So why would students try hard? However, this point only matters if there has been a change in the level of effort over time. If students were as disengaged in 2000 as they were in 2018 then this cannot be a cause of the decline in performance.

The evidence people draw upon to make this case tends to be anecdotal. It is a variation of the old ‘the kids of today’ trope where young people get blamed for being lazier or more feckless than previous generations. Perhaps it all has something to do with mobile phones.

So I wanted stronger evidence to draw upon.

PISA has sometimes collected data on the level of effort students report making on the PISA test. The OECD then presents this on an ‘effort thermometer’. I am having trouble tracking through all the relevant data because the OECD do not make it easy, but I have found data from the same question asked in both 2003 and 2018. The consistent measure I have found is the average self-reported level of effort on a scale from one to ten (the OECD also ask students to report their effort relative to the effort they would put into a school test but they report that differently for the two years).

In 2003, Finnish students self-reported an average effort of 7.3 and for Australian students it was 7.5. I had to get my ruler out and read these from a graph because they are not reported as figures, and so they should be taken as approximations. For context, across all of the countries surveyed, the score ranged from about 6.2 to about 8.7. In 2018, Finnish students reported an average effort of 8.0 and for Australian students it was 7.4. If anything, this would suggest that the effort of Finnish students has perhaps slightly increased over time while that of Australian students has remained pretty static.
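The comparison boils down to simple differences. As a rough sanity check, the changes can be computed directly from my approximate graph readings above – these are my estimates, not official OECD figures:

```python
# Approximate self-reported effort scores (scale 1-10), read from the
# PISA 'effort thermometer' graphs -- rough estimates, not official figures.
effort = {
    "Finland":   {2003: 7.3, 2018: 8.0},
    "Australia": {2003: 7.5, 2018: 7.4},
}

# Change in average self-reported effort between the two survey years.
for country, scores in effort.items():
    change = round(scores[2018] - scores[2003], 1)
    print(f"{country}: {change:+.1f}")
```

On these readings, effort rose slightly in Finland (+0.7) and was essentially flat in Australia (-0.1) – the opposite of what the declining-effort explanation requires.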

This is only one measure, of course, and it has its flaws. Some students might not even bother to fill in the self-report question, although again we would need to know if the proportion doing this had changed over time in order for it to be relevant. And perhaps students interpret the question differently now because they work off a different baseline? You can pick holes in the data if you wish. But it should give us pause before we accept a blithe dismissal of these countries’ declining PISA performance on the basis of a supposed decline in effort on the test.

Meaningless standards

Australian teachers are governed by a set of largely meaningless standards. A key feature of these standards is that they discuss teaching in the abstract without detailing any specifics. For instance, under the standard, “Know the content and how to teach it,” we read that proficient teachers, “Apply knowledge of the content and teaching strategies of the teaching area to develop engaging teaching activities.” This is partly a repetition of the standard itself and partly an appeal to apple-pie statements. What do we mean by ‘engaging’, for instance? Are we talking about maximising academic learning time or are we talking about making lessons fun? It is not clear and so it is open to anyone interpreting the standards to impose their view on them. This means that a new teacher who is excellent at maximising academic learning time may be advised to make their lessons more fun.

Where the standards do make a call, it is often the wrong one. For instance, highly accomplished teachers, “Exhibit innovative practice in the selection and organisation of content and delivery of learning and teaching programs.” Do we really want the ‘innovative’ selection of content or delivery of learning and teaching for the sake of it? Why is ‘innovative’ an inherent good? And there is a whole focus area on differentiation. Differentiation is an ambiguous term and many of its common manifestations lack any kind of serious evidence base. Do the standards sort the wheat from the chaff? No.

Perhaps the most obvious shortcoming is the focus area, “Understand how students learn.” Here, we are informed that proficient teachers, “Structure teaching programs using research and collegial advice about how students learn.” What research? Is all collegial advice equal?

Compare this, for a moment, with what engineers have to demonstrate on entering their profession. Thankfully, the people building your bridges and fridges have an identifiable body of knowledge that they must master. For a start, they need a, “Conceptual understanding of the mathematics, numerical analysis, statistics, and computer and information sciences which underpin the engineering discipline.” Given that these standards apply to all engineers, regardless of specialism, you may expect them to be a little vague. However, the demands are still quite, er, demanding. For instance, a new engineer, “Interprets and applies selected research literature to inform engineering application in at least one specialist domain of the engineering discipline.”

Engineers also seem concerned about warding off bad ideas. They are expected to ensure, “that all aspects of an engineering activity are soundly based on fundamental principles – by diagnosing, and taking appropriate action with data, calculations, results, proposals, processes, practices, and documented information that may be ill-founded, illogical, erroneous, unreliable or unrealistic.” I think teachers should be accountable in this way too.

Just like us, engineers are encouraged to be innovative and seek out, “new developments in the engineering discipline and specialisations.” However, they are also expected to apply, “fundamental knowledge and systematic processes to evaluate and report potential.” Because, well, some innovations may not be that great.

When it comes to teaching standards, there is a certain amount of inevitability about their vagueness. For instance, ‘effective’ teaching strategies are regularly mentioned and proficient teachers are expected to, “Apply knowledge and understanding of effective teaching strategies to support students’ literacy and numeracy achievement.” And yet it is never made clear what these effective teaching strategies are. Should teachers of early reading use a balanced literacy or systematic phonics approach? Should maths teachers adopt explicit teaching or inquiry learning? I can understand why this is not addressed because I can just imagine the fight that would erupt if any of these judgement calls were made in the teaching standards. However, by not making these calls, the standards become meaningless, effectively exhorting us to teach well and in ways that will help children learn, like that would never have occurred to us otherwise.

What is the solution? I am sympathetic to abandoning these sorts of standards entirely, while retaining standards of behaviour and ethics. Vague standards seem to have the potential to do more harm than good. Alternatively, we could try and resolve some of the differences and make the standards more specific. That would involve an almighty war with all the power in the hands of university education departments. I can’t see that ending well.

Alternatively, we could look for an innovative solution. State and federal governments could decide to give a small number of parallel bodies the power to accredit teachers and teacher education courses. Each body could have its own set of standards – balanced literacy and social constructivism in one, systematic phonics and cognitive science in another. It may sound messy, but it would give teachers the choice of exactly what kind of teacher they want to be and it would give schools a choice of what kinds of teacher they wish to employ. Time would do the weeding.

Cats, catnip, dogs and the absurd

In my previous post, I explained why I thought that Structured Word Inquiry (SWI) and its proponents’ insistence that morphology and etymology should be taught right from the very beginning of reading instruction was at odds with basic cognitive science. I was left with having to speculate on exactly what teaching in this way would look like because I also highlighted the fact that sources that supposedly demonstrate how this is done do not actually feature initial reading instruction.

I was therefore grateful for Jeffrey Bowers, a researcher and advocate of SWI, taking the time to comment and explain what he thinks this would look like:

“This is absurd… let’s consider initial reading instruction… and the case of teaching the word “cat” using SWI. The teacher would talk to children and ask them what does the word “cat” mean? And what about “cats”? and “catnip”? etc., the teacher can talk to completely illiterate children about what is the difference between cat and cats? Do you know what catnip means anyone? No, I’ll tell you. Etc. And then you can break down the word “cat” into its graphemes one at a time. Sound it out. Then add the “s”. Consider how it sounds. Say, OK, we now know that the graphemes “c”, “a”, “t”, and “s” can be pronounced… Next they might learn about the “dog” and “dogs”. And then have a conversation about why there a common letter “s” in “cats” and “dogs” when the sound differently. Etc. The idea that any of this is inconsistent with working memory and cognitive load is absurd.”

I have drawn a map of the items that I think are relevant to the teaching sequence that Bowers describes.

I agree that most, but perhaps not all, children who are about to learn to read would know what a cat and a dog are in the sense that these words would be in their aural vocabulary. That does not mean they would be able to map the written letters of ‘cat’ and ‘dog’ onto these words in aural vocabulary – a process commonly known as ‘decoding’. Similarly, they may have an implicit, or perhaps even explicit, understanding of the phoneme(s) represented by ‘s’ and that this relates to plurality. I doubt many would know about ‘catnip’, as Bowers acknowledges, and so we cannot assume this. So with a little fuzziness around exactly what relevant schemas our children already possess, I reckon there are at least 10-12 distinct items there – well above the four-item estimate of working memory capacity.

As an aside, I have used the word ‘synthesis’ to represent the understanding that the different sounds of the word-parts come together in the whole word. I know that this may remind people of synthetic phonics but I could not think of a more neutral word and the principle of a relationship between the parts and the whole stands whether you are going from the parts to the whole (synthetic phonics) or the whole to the parts (analytic phonics).

To be fair, Bowers does not appear to be suggesting that children attend to all of these items at the same time. We could perhaps imagine working our way through 3-4 of them and then another 3-4 of them and so on. However, there is more to learning than simply ensuring that the flow of information remains under the four-item limit. Otherwise, we would all remember every detail of most of the programs or films we watch or books we read. No, learning appears to require some form of active retrieval or reconstruction by the learner. It requires practice.

For instance, in Rosenshine’s principles, drawn from the practices of the most effective teachers, we are advised to, “Present new material in small steps with student practice after each step… more effective teachers do not overwhelm their students by presenting too much new material at once. Rather, these teachers only present small amounts of new material at any time, and then assist the students as they practice this material. Only after the students have mastered the first step do teachers proceed to the next step.”

This is mirrored in the experimental work that underpins cognitive load theory. For instance, researchers have found that studying a worked example, immediately solving a similar problem and repeating this process is superior to studying a number of worked examples and then solving a number of similar problems. Many teachers have now drawn on this evidence to build ‘example-problem pairs’ into their teaching practice.

So perhaps we could add small steps and practice to Bowers’ model. However, it all seems a little incoherent. Do we want students to be fluent with GPCs so that, in the future, they may easily decode unknown words such as ‘gruffalo’, ‘mitosis’, ‘hufflepuff’ and ‘acidulate’? Do we put this on an equal footing with knowledge of catnip? What about knowing that the ‘s’ in ‘dogs’ represents a (slightly) different sound to the ‘s’ in ‘cats’? How critical is that in reading lesson number 1? Do we want to touch on the GPCs in ‘dog’ and ‘nip’? If not, how are the children meant to be interacting with these words? Do we want them to learn the word ‘dog’ and the ‘nip’ in ‘catnip’ whole? Conceptually, the relationship between cats and catnip is mirrored in the morphology – how important is it for students to learn this word sum? Do they need to know the origin and meaning of ‘nip’ for this to make sense?

We could potentially still go through the same teaching sequence but decide that some items are just not the target of learning. They are there for perhaps motivational reasons and so we don’t need to practise them because we don’t mind if students forget them. This would still come at a cost because we would probably need to use some of the children’s cognitive resources in signalling what is and what is not important. However, even if this worked, the evidence from cognitive load theory suggests that such extraneous information hinders learning. Making progress in learning is likely to be motivational and possibly more of a motivator than these extraneous items. In the longer term, if we use non-optimal approaches to early reading instruction and if the effect of that is more reading failure, we are storing up the issue of disengagement for some future date when the child figures out that he or she cannot read.

I therefore cannot see any advantages in the teaching process described by Bowers over one that focuses on GPCs in order to get students reading words and controlled GPC books as soon as possible. So I won’t be arguing that my school abandon Sounds-Write just yet.

However, I do think I am starting to understand where all of this comes from. I have recently become aware of a set of posts by Katharine Beals about SWI. In part iii, she notes that SWI proponents seem to favour inquiry learning due to the excitement it is assumed to induce. At one point in his comment on my last post, Bowers asks, “How can someone who agrees that morphology is relevant to instruction not at least be intrigued by the morphological matrices that organize groups of words into morphological families in a way that highlights their spelling, meaning, and phonological consistencies?” The answer to this is, “Morphological matrices don’t strike me as either particularly original or particularly interesting.” However, if you believe that this reflects some deficit on my part and you believe that morphological matrices really are the catnip of reading instruction then you can convince yourself of vast untapped reserves of motivational potential that are going to waste. Hence:

“…there are good reasons to think that SWI can address the main criticism of phonics, namely, the view that an emphasis on grapheme-phoneme correspondences is not engaging for many children. For example, when discussing the disappointing results of some phonology-based intervention studies, Snowling and Hulme (2014) argued that intervention studies need to focus more on pupil motivation with the aim of increasing students’ enjoyment of reading. We would suggest that SWI is a promising approach in this respect given that it aims to give children an understanding of the meaningful organization of the writing system through word investigations. As noted by Dunlosky et al. (2013): Anyone who has spent time around young children knows that one of their most frequent utterances is “Why?” (p. 8) Indeed, nothing motivates like understanding.”

I have no reason to assume that SWI will be superior to phonics in motivating students. All learning requires an element of hard work that is not always fun and anything can be made relatively more or less interesting by how it is presented. I agree that understanding is motivating, but understanding what? Understanding the words on a page because you can decode them and match them to your aural vocabulary or understanding some finer point of a morphological matrix?