In my recent book, I discussed ‘ouroboric’ processes in education. I suggested that some relationships that people think are linear – that motivation leads to learning or that conceptual understanding must come before learning procedures – are actually cyclical.
The examples that I gave were of positive processes. However, a new paper by Cambridge University researchers suggests that the negative relationship between maths anxiety and maths achievement is also cyclical.
I have written about maths anxiety before. The model that I used would be called ‘the deficit theory’ by the Cambridge researchers. This basically posits that maths anxiety is caused by a lack of ability. The teaching implication of this is that we should teach maths in the most effective way possible in order to improve competence and reduce anxiety. We might also consider giving students experience of success with some relatively straightforward work rather than asking them to struggle for extended periods.
The alternative explanation for maths anxiety is ‘the debilitating anxiety theory’. People who worry about their maths performance have to devote some of their working memory to the worrying. They therefore have fewer working memory resources to devote to problems. Jo Boaler is a prominent populariser of the debilitating anxiety theory amongst maths teachers and she advises that we avoid timed tests because they have been shown to induce anxiety. However, she also advocates open-ended problem-solving which is likely to overload working memory and which would not provide the routine competence that the deficit theory implies students need.
There exists powerful evidence for both theories. Longitudinal studies support the idea that poor levels of achievement lead to future maths anxiety. A range of lab-based studies such as those that induce stereotype threat – reminding a group that there is a negative perception of that group’s mathematical ability – show that maths anxiety leads to poorer performance.
The Cambridge researchers propose that the interaction works both ways. They also point out that while the deficit theory is supported by long-term studies, the debilitating anxiety theory is supported by short-term experiments. So the two interactions work at different scales.
An interesting ouroboric process.
Some might argue that there should be room for a little anxiety in school life. We don’t want to wrap students up in cotton wool because the real world is not like that. Perhaps a little anxiety helps lead to better coping strategies; more resilience. Perhaps. However, I think it is true that anxiety can disrupt learning and so we probably want to reduce unnecessary anxiety if we want to maximise learning.
In my earlier post on Jo Boaler’s remarks about multiplication tables, I noted that improvements in competence in a subject lead to improvements in self-concept; how students feel about their academic abilities. So, if we wish to reduce students’ anxiety about mathematics it would seem reasonable to try to increase their self-concept by teaching them in such a way that they become better at maths. I have used this principle, along with others from cognitive psychology, logic and experience, to suggest the following four tips to reduce maths anxiety. Please feel free to add your own in the comments.
1. Have frequent low-stakes tests
We know that retrieval practice is effective at supporting learning. However, if we test students infrequently then they are likely to see these tests as more of an event and therefore as something to worry about. Instead, we should build frequent, short-duration, low-stakes testing into our classroom routines. Not only will this make testing more familiar, it will increase competence when students tackle any high-stakes testing that is mandated by states or districts and will thus reduce anxiety on these assessments too.
2. Value routine competence in assessment
If you were to spend your time reading maths teaching blogs then you might think that the only kind of maths performance of value is when students can creatively transfer something that they have learnt to solve a novel, non-routine problem. This is not the case. Routine competence is also of great value in mathematics. There is a lot to be said for being able to reliably change the subject of equations.
If we communicate to students that it is only non-routine problem-solving that matters then we are likely to make them feel inadequate. We can send such a message explicitly or we can send it implicitly by setting large numbers of non-routine problems and making these the focus of assessment.
Non-routine problems are great for avoiding ceiling effects on tests and enabling some of the most talented students to shine. However, assessment should also include a large amount of routine problem solving to show that this is also valued. As a general rule, I would advocate a gradual move from routine to non-routine.
3. Avoid ‘productive failure’ and problem-based learning
Similarly, some educators advocate framing lessons by setting students problems that they do not yet know how to solve in the belief that this will make them keen to develop their own solution methods or receptive to learning from the teacher. Some children might find this motivating but others – and particularly those with a low maths self-concept – are likely to feel threatened. Motivational posters will not help.
It is true that some studies seem to show that this kind of approach leads to improvements in learning. However, these are often poorly designed, with more than one factor being varied at a time (see discussion here). And it is a matter of degree. In the comments on this blog post, Barry Garelick suggested asking students to factorise quadratics with negative coefficients once they have been taught how to factorise ones with positive coefficients. This still requires a little leap but it is far less of a jump than asking students to develop their own measure of spread from scratch such as in the experiments of Manu Kapur.
Given that there is a wealth of evidence in favour of explicit instruction, where concepts and procedures are fully explained to students, it seems that productive failure is risky and could backfire through its interaction with self-concept.
4. Build robust schema
It is true that you can survive without knowing your multiplication tables. You can survive without knowing most of the things that students learn in school. If you just have a particular gap in your knowledge then you can develop workarounds.
The question is; why would you want to? Knowing common multiplications by heart makes mathematics easier to do because it is one less thing to process. Building and valuing such basic knowledge is both a way of generating little successes for students to experience and a way of aiding the process of more complex problem solving. I think that this is one of the reasons why the ‘basic skills’ models in Project Follow Through were so successful at generating gains in more complex problem-solving.
A guiding principle
In reducing maths anxiety, we should focus primarily on teaching approaches that are likely to make students better at maths. Increase maths competence to reduce maths anxiety.
PISA recently released a report about the data that they have collected on maths teaching and learning strategies. I analysed some of this data and related it to the claims that PISA made. The report was quickly followed by an article in Scientific American.
The Scientific American article focused on one area of the PISA report in particular – the rate at which students report using “memorisation” strategies. In the working paper used as a basis for this report, the measure used to quantify memorisation is explained. Students were asked the following questions:
“For each group of three items, please choose the item that best describes your approach to mathematics.
Labels (not shown in the questionnaire): (m) memorisation (e) elaboration (c) control
a) Please tick only one of the following three boxes.
1 When I study for a mathematics test, I try to work out what the most important parts to learn are. (c)
2 When I study for a mathematics test, I try to understand new concepts by relating them to things I already know. (e)
3 When I study for a mathematics test, I learn as much as I can off by heart. (m)
b) Please tick only one of the following three boxes.
1 When I study mathematics, I try to figure out which concepts I still have not understood properly. (c)
2 When I study mathematics, I think of new ways to get the answer. (e)
3 When I study mathematics, I make myself check to see if I remember the work I have already done. (m)
c) Please tick only one of the following three boxes.
1 When I study mathematics, I try to relate the work to things I have learnt in other subjects. (e)
2 When I study mathematics, I start by working out exactly what I need to learn. (c)
3 When I study mathematics, I go over some problems so often that I feel as if I could solve them in my sleep. (m)
d) Please tick only one of the following three boxes.
1 In order to remember the method for solving a mathematics problem, I go through examples again and again. (m)
2 I think about how the mathematics I have learnt can be used in everyday life. (e)
3 When I cannot understand something in mathematics, I always search for more information to clarify the problem. (c)”
I am not convinced that these memorisation options represent actual memorisation strategies. Also, the questions are asked in a way that forces a discrete choice. The accepted practice in psychology is to use a scale of agreement with any given statement (e.g. a Likert scale). Without this, we have a validity and reliability problem. For instance, a student might partly agree with all three responses to question a) but when they are forced to select one response then this will be recorded as 100% agreement with that option and 0% agreement with the alternatives. This is the same reason why the Myers-Briggs personality test is invalid and unreliable.
It is therefore hardly surprising that I could find no correlation between the “index of memorisation” that PISA derive from these responses and a country’s PISA mean maths score. These questions probably do not reliably measure the use of memorisation.
Yet the Scientific American article makes a number of claims about memorisation on the basis of this data. Unfortunately, the authors provide no references and they seem to be in possession of data that is not presented in the PISA report (if either author reads this post then I would be grateful for this data). Nevertheless, I think some of these claims are highly unlikely and I wonder whether the authors may have made an error.
I will list these claims below and then comment on them.
1. In every country, the memorizers turned out to be the lowest achievers, and countries with high numbers of them—the U.S. was in the top third—also had the highest proportion of teens doing poorly on the PISA math assessment.
I cannot tell how a “memoriser” is defined from the PISA report. For instance, is it a person who answers with a class (m) response to all of the questions above, three of them, two of them? Similarly, data on the number of such memorisers in each country is not provided.
I would not be surprised to find out that, in any given country, these memorisers are the lowest achievers but I am not sure what this would tell us. As Robert Craigen points out in a comment on a previous post, memorisers might have resorted to some of these strategies due to poor teaching. They may also have less understanding of, or interest in, the survey questions.
However, I find it highly unlikely that countries with high numbers of memorisers correlate with teens doing poorly on the PISA math assessment. Presumably, countries with higher numbers of memorisers will have a higher overall index of memorisation. If not, this would require the remaining non-memorisers to use far fewer memorisation strategies than the overall mean. If you plot percentage of maths low achievers against index of memorisation then there is no correlation.
2. Further analysis showed that memorizers were approximately half a year behind students who used relational and self-monitoring strategies.
3. In no country were memorizers in the highest-achieving group, and in some high-achieving economies, the differences between memorizers and other students were substantial.
Again, I would like to see the data here but I can believe it.
4. In France and Japan, for example, pupils who combined self-monitoring and relational strategies outscored students using memorization by more than a year’s worth of schooling.
Why select just two countries like this? Again, I don’t have the underlying data but, if I did, it wouldn’t tell us much. It is fraught enough to try to make comparisons across many education systems of different sizes and with different cultures. At least if we include all of them then we might pick up some general trends. I’m sure it would be possible to prove almost anything with just two examples.
5. The U.S. actually had more memorizers than South Korea, long thought to be the paradigm of rote learning.
Again, we would need to know the definition of a “memoriser”.
6. Unfortunately, most elementary classrooms ask students to memorize times tables and other number facts, often under time pressure, which research shows can seed math anxiety. It can actually hinder the development of number sense.
I would love to see this research. Victoria Simms recently reviewed a book by one of the authors of the Scientific American article and found a similar claim:
“Boaler suggests that reducing timed assessment in education would increase children’s growth mindsets and in turn improve mathematical learning; she thus emphasises that education should not be focused on the fast processing of information but on conceptual understanding. In addition, she discusses a purported causal connection between drill practice and long-term mathematical anxiety, a claim for which she provides no evidence, beyond a reference to “Boaler (2014c)” (p. 38). After due investigation it appears that this reference is an online article which repeats the same claim, this time referencing “Boaler (2014)”, an article which does not appear in the reference list, or on Boaler’s website. Referencing works that are not easily accessible, or perhaps unpublished, makes investigating claims and assessing the quality of evidence very difficult.”
7. In 2005 psychologist Margarete Delazer of Medical University of Innsbruck in Austria and her colleagues took functional MRI scans of students learning math facts in two ways: some were encouraged to memorize and others to work those facts out, considering various strategies. The scans revealed that these two approaches involved completely different brain pathways. The study also found that the subjects who did not memorize learned their math facts more securely and were more adept at applying them. Memorizing some mathematics is useful, but the researchers’ conclusions were clear: an automatic command of times tables or other facts should be reached through “understanding of the underlying numerical relations.”
This claim does at least provide a clue as to where to find the evidence although it is a little odd. The neuroscience part of the claim is essentially irrelevant to teachers – why care what ‘brain pathways’ are used? Teachers generally have no opinion on this. We need to focus instead on the quality of learning.
I think I have found the paper. Unusually, it completes both a neuroscience imaging study and a behavioural study on the quality of learning, as suggested in the Scientific American claim. The participants were 16 university students or graduates. They did a series of trials where they were given two numbers, A and B. In the ‘strategy’ condition, students were given a formula to apply, such as ((B-A)+1)+B = C, in order to work out the answer, C. In the ‘drill’ condition, they were given A, B and the response, C, to simply memorise. Surprisingly, the memorisers did pretty well on a later test but, wholly unsurprisingly, they could not extend this to transfer tasks involving new values for A and B. This is entirely consistent with the findings of cognitive load theory, where problem solving so occupies our attention that we cannot infer the underlying rule. The strategy condition is much more like following a worked example.
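To make the contrast concrete, here is a minimal sketch of the two conditions as I read the paper (the formula is the one quoted above; the specific number pairs are my own invented examples, not the study’s actual stimuli):

```python
def strategy(a, b):
    # 'Strategy' condition: apply the given rule ((B - A) + 1) + B = C
    return ((b - a) + 1) + b

# 'Drill' condition: memorise specific (A, B) -> C pairings, with no rule given
drill_facts = {(2, 5): 9, (3, 7): 12}

print(strategy(2, 5))   # 9 -- matches the memorised fact
print(strategy(4, 10))  # 17 -- transfers to new values; drill has no entry for (4, 10)
```

The point of the sketch is simply that a rule generalises to unseen values while a memorised lookup table cannot, which is exactly the transfer failure the paper reports for the drill group.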
However, none of this bears much relationship to memorisation strategies in the PISA report. Is anyone attempting to teach students all of the possible questions that they might be asked and all of the possible numerical answers to these questions? In fact, the use of formulas like the one in the above “strategy” condition is often criticised as the “rote” learning of formulas, and I imagine that this is what maths memorisers – if well-defined – would be trying to memorise.
This research does not seem to apply to the learning of basic maths facts such as multiplication tables. Teachers attempt to teach these to the point of memorisation but the underlying rule is not withheld. Tables are built up from counting patterns, arguments about groups of the same size and so on. Patterns are highlighted, like the ones in the 11 and 9 times tables, and a few more facts are committed to memory through practice, such as 7 x 8 = 56. But these are very simple operations and nothing like the contrivance ((B-A)+1)+B = C. In fact, the benefit of knowing simple multiplication results ‘by heart’ is that you can then attend to the other elements of a complex operation.
8. Timed tests impair working memory in students of all backgrounds and achievement levels, and they contribute to math anxiety, especially among girls.
This is partially a repeat of claim 6 but also adds the claim that timed tests impair working memory. Again, it would be good to see the evidence to support this.
In 1989, the National Council of Teachers of Mathematics in the U.S. published its Curriculum and Evaluation Standards for School Mathematics, the first version of what would become its Principles and Standards. It was a pivotal moment for mathematics education both in America and across the world. Despite the relatively poor performance of the U.S. in comparison to other countries and states in international tests such as PISA or TIMSS, people look to America for sexy new ideas.
The standards came to represent a movement known as ‘reform’ mathematics. The antecedents of this movement can be traced to the constructivism of Piaget and Vygotsky from earlier in the 20th century and further back to progressive education in general. John Dewey, for instance, promoted the idea of learning through experience and Paulo Freire opposed the ‘banking model’ of education where teachers transmit facts and procedures to their students. Reform mathematics is generally supportive of experiential learning and skeptical of transmission.
A chapter, written in 1996 by Catherine Twomey Fosnot, a professor of education and director of mathematics in New York, and Randall Stewart Perry, a polymath, describes the features of constructivist teaching. It could equally be a description of reform mathematics:
“Learning… requires invention and self-organization on the part of the learner. Thus teachers need to allow learners to raise their own questions, generate their own hypotheses and models as possibilities, test them out for viability, and defend and discuss them in communities of discourse and practice.
Disequilibrium facilitates learning. “Errors” need to be perceived as a result of learners’ conceptions, and therefore not minimized or avoided. Challenging, open-ended investigations in realistic, meaningful contexts need to be offered which allow learners to explore and generate many possibilities, both affirming and contradictory. Contradictions, in particular, need to be illuminated, explored, and discussed…
Dialogue within a community engenders further thinking. The classroom needs to be seen as a “community of discourse engaged in activity, reflection, and conversation” (Fosnot, 1989). The learners (rather than the teacher) are responsible for defending, proving, justifying, and communicating their ideas to the classroom community. Ideas are accepted as truth only in so far as they make sense to the community and thus they rise to the level of “taken-as shared.””
Despite its long history, there is little evidence to support reform (or constructivist) maths teaching. When trials are conducted, it is often quite different models that perform the best. For instance, in Project Follow Through, Engelmann and Becker’s Direct Instruction program, which breaks mathematics down into its component parts and then directly teaches and trains students in those parts before bringing them back together, outperformed other teaching methods in both students’ learning of procedural skills and in more complex problem solving. Those models most similar to reform mathematics – the ‘cognitive’ models – often under-performed control conditions.
Project Follow Through is not definitive but we don’t have to stop there. There is a wealth of evidence that demonstrates similar outcomes, some suggestive and correlational and some from well-controlled experiments.
This is what we might expect to find if we study the relevant cognitive science. Our working memories are very limited and approaches that break learning down into manageable, memorable chunks are more easily processed by learners than those that expect students to grapple with complex problems from the outset.
This raises the question: where do advocates of reform mathematics go from here? Should they… er… reform it? Perhaps. Alternatively, the time might be right for a relaunch.
This appears to be what Jo Boaler has embarked upon with her new book, “Mathematical Mindsets,” and her accompanying internet campaign. Here, reform mathematics is linked to the educational idea of the moment: Carol Dweck’s mindset theory. Mindset has good data to support it but the way that it is often operationalised in schools is deeply worrying and the link to reform maths is tenuous.
Neatly sidestepping issues of effectiveness, this link relies on the idea of maths anxiety, a well known effect where maths makes some people anxious. Boaler argues that this is a result of traditional maths teaching that emphasises performance under timed conditions.
Reform maths supposedly offers an alternative that is friendlier to students: “Let’s roll some dice, form hypotheses and discuss our thinking – here’s a beanbag to sit on – watch that you don’t trip over my kaftan – we’re all learning this together.”
It is not at all clear that this works. Certainly, in the short term, diversion away from activities that students find stressful will reduce anxiety but would you suggest curing someone of a fear of mice by ensuring that they avoid mice? It is also likely that the kind of problem solving that we might find in reform maths classes might offer its own pressures.
Longer term interactions actually seem to show a different effect. It is a lack of mathematical achievement that leads to later maths anxiety. The logic of this should lead to us attempting to reduce maths anxiety by choosing methods that are the most effective for teaching mathematics to the greatest number of students. Those timed tests have a role in establishing the easy retrieval of math facts, leading to better problem solving. In fact, if we return to Project Follow Through, it was the Direct Instruction students who showed the greatest growth in self-esteem. So we’re back to square one.
In a new article in The Conversation, reform mathematics is referred to as a “mindset-approach”. I suppose that this was inevitable and either represents a reinvention or an attempt at a reinvention of the idea. It’s curious that flawed educational ideas have developed this habit of latching on to fashionable ones. It’s an attempt to invoke the halo effect but advocates of Mindset theories more generally should watch that they don’t start to grow horns.
Following my recent posts on questions to ask your child’s primary school teacher (here and here), I had a request to expand on my comments about the teaching of mathematics. There are a few issues surrounding maths that I believe parents should know about but before I go into that, I wish to make two points. Firstly, a fundamentally misconceived maths program taught by dedicated, evaluative teachers will always be better than one that meets the highest standards of evidence but that is taught badly. In my school we have specialist mathematics teachers and I think this is far more important than the specific details of the program. Secondly, the intention of this post is to inform parents and not to have a go at primary school teachers. Some teachers were offended at my comment that times-tables songs were not the best way to memorise tables. They thought I was suggesting that this is what many primary school teachers do. No – I was just setting up two contrasting alternatives in order to explain my point.
1. Discovery learning
Discovery learning is ineffective and most people tend to recognise this. So you don’t see many schools advertising their programs as discovery learning (apart from in AITSL’s illustrations of their teaching standards, bizarrely). Yet if it looks like a duck and quacks like a duck then it probably is a duck. And there are two powerful fallacies that drive people towards discovery learning. The first is the idea that we understand something better if we discover it for ourselves. We don’t. The second is the assumption that if we ask students to emulate the behaviour of experts then they will themselves become experts. Experts in maths are research mathematicians who make new discoveries, so we should get our students doing that. Yet this is also fallacious thinking.
In primary maths, discovery learning takes the form of ‘multiple’, ‘alternative’ or ‘invented’ strategies. Students are intended to make up their own ways of solving problems and to solve a single problem in several different ways. Explicitly teaching a standard approach, such as the standard algorithm for addition, is discouraged. Of course, many students don’t discover much and so they pick up these strategies from others or are led toward them by the teacher.
2. Big to little or little to big?
The kinds of alternative strategies that the students ‘discover’ are generally variations of strategies that we might use for mental arithmetic. Imagine I wanted to add 25 and 49. I would probably first add the 20 and the 40 to make 60. Then I can add the 5 and the 9 to get 14. Sometimes, little sub-moves will be encouraged as part of this e.g. take 1 from the 5 and add it to the 9 to get another 10 so that we have 70, then add the remaining 8.
Notice how this proceeds from big to little. We add the tens first and then the units. But when we add the units we find that we have yet another ten, so we have to loop back and add this to the tens that we already had. This is inefficient when we get to larger numbers and is the reason why the standard algorithms generally start with the units first, then tens and so on. Indeed, students who use the standard approach seem to have more success, particularly with larger and more complex calculations.
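To make the contrast concrete, here is an illustrative sketch in Python (the function names are my own) of the two orders of operation applied to 25 + 49:

```python
def units_first(a, b):
    """Standard algorithm: add digit columns from the units up, carrying as we go."""
    total, carry, place = 0, 0, 1
    while a or b or carry:
        column = a % 10 + b % 10 + carry
        total += (column % 10) * place
        carry = column // 10           # any extra ten is carried into the next column
        a, b, place = a // 10, b // 10, place * 10
    return total

def tens_first(a, b):
    """Mental 'big to little' strategy for two-digit numbers:
    add the tens, then the units, then fold any extra ten back in."""
    tens = (a // 10 + b // 10) * 10    # 20 + 40 = 60
    units = a % 10 + b % 10            # 5 + 9 = 14
    return tens + units                # 60 + 14 = 74

print(units_first(25, 49))  # 74
print(tens_first(25, 49))   # 74
```

Both routes reach 74, but notice that `tens_first` only has to “loop back” once for two-digit numbers; generalising it to many-digit numbers means repeatedly revisiting columns already dealt with, which is the inefficiency described above.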
The objection to standard algorithms seems to be that kids can learn them as a process without understanding how they work. Presumably, they have to understand procedures they’ve invented themselves? This may be true if they really have invented them but I suspect such genuine invention is rare, with most children latching on to the ideas of others. In this case, these alternative procedures could also be replicated as a process without understanding.
In my view, students should be taught the standard approach and this approach should be explained to them. This requires the teachers to also understand how these processes work.
3. Words and pictures or actual maths?
Given that alternative strategies are meant to be ad hoc and contingent, there is no formal way for expressing them. You may see it done with pictures or even in words. Contrast this with the standard algorithms – their universality means that they follow a tightly defined set of notation. One way that alternative strategies may therefore be promoted is by insisting that students ‘explain their reasoning’ or draw diagrams when answering questions on homework or assessments. This is basically a way of marginalising the standard algorithms.
For instance, imagine the following question:
“A lottery syndicate of 13 people wins a total of $3 250 000. If the money is shared equally then how much would each member receive?”
A simple use of the long division algorithm is sufficient to explain what the student is doing, why they are doing it and to determine the right answer. If the student has gone wrong then the error will be easy to find. An insistence on words or pictures would be redundant unless you wish to privilege alternative strategies.
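As a quick check of the arithmetic (a short sketch in Python; the figures are those from the question above):

```python
prize, people = 3_250_000, 13
share = prize // people          # the long division: 3 250 000 ÷ 13
assert share * people == prize   # it divides exactly, with no remainder
print(f"Each member receives ${share:,}")  # Each member receives $250,000
```

The division comes out exactly, which is presumably why the question was posed with these numbers.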
4. Maths anxiety and motivation
Maths anxiety is real. Some people struggle with maths – perhaps because they were not taught very well – and develop a fear of maths tests and maths more generally. It is complex and the chain of cause and effect is not entirely clear. Evidence does seem to point to timed tests as being associated with anxiety but perhaps better test preparation and framing would mitigate this. However, as well as advising us against timed tests, a whole raft of things that look and smell a lot like alternative strategies and related ideas such as the use of ‘authentic’ problems are proposed as possible solutions.
Authentic, real-world problems are considered good because the idea is that they will motivate children and so the children will learn more. In fact, a lot of discussion centres around motivating and engaging students. I am sceptical that many of the activities that are suggested as motivational are actually motivating for students, and evidence suggests that motivation works the other way around. Maths achievement predicts motivation but motivation does not predict achievement. In other words, teach them maths, increase their competence and then they will start to feel more motivated about maths.
5. Cognitive load
Finally, it is worth mentioning that a lot of fashionable strategies are at odds with what we know about human cognition. Children should know their maths facts because that means that they don’t have to work out 5 x 8 whilst attending to other aspects of a complex problem. Those people who dismiss times-tables as ‘rote’ learning fail to take account of this. And so do those who propose big, messy, open-ended, real-world problems. Such problems have many facets and often contain information that is irrelevant to finding a solution. All of this needlessly increases cognitive load and makes learning less efficient.
Despite an element of moral panic, there is well-founded concern in Anglophone countries about a decline in the science and mathematics skills of students. International studies such as TIMSS and PISA bear out some of this decline, none more starkly than the PISA mean scores for Scotland and Australia.
This has prompted discussion from politicians and policymakers focused on the so-called STEM subjects – Science, Technology, Engineering and Mathematics. Such discussion betrays the instrumental view of education that many policymakers hold; a view that sees education purely as training for the workplace and meeting the demands of workplace skills shortages. Not only is this myopic, it doesn’t actually fix the problem that has been identified.
Many STEM initiatives are superficial and silly – like the Australian government’s notorious STEM apps. They operate under the assumption that provoking short-term situational interest by, for example, asking a scientist to speak about their work or showing a cool demonstration, will lead to a long-term personal interest in the subject. Such activities probably help, but they don’t really take into account the students’ self-efficacy; their feelings of competence in a subject area. Self-efficacy is associated with motivation in STEM subjects. Most people assume an ‘interest-first’ model where an interest in a particular subject provokes a desire to work hard in that subject which then develops self-efficacy. However, the reverse ‘competence-first’ process is also plausible, where increased feelings of competence lead to a greater level of motivation.
The interaction probably works both ways but there are some hints that competence-first is more important in early maths education. If true, we should focus more on effective teaching of maths and science and less on gimmickry.
In some ways, STEM is an odd basket of subjects. Engineering is barely taught in schools because the fundamentals rely on physics and mathematics. Traditionally, we teach students these fundamentals first before they develop specialisms at university. This is because we view these disciplines hierarchically. However, many initiatives seek to involve students in solving ‘real world’ engineering problems as a way of promoting STEM. This is again based upon an interest-first view that if students see the relevance of STEM to everyday life then they will be motivated to study it.
There are many risks to adopting such an approach. Chief among these is the risk that students may not develop self-efficacy and may become demotivated. We know, for instance, that problem-based teaching methods are not optimal for students learning new concepts, so we either need to deliver explicit instruction prior to problem-solving or reduce the complexity of the problems – and run the risk of students concluding that this is not the real-world experience they had been sold.
Far from being the solution to our downward trend, the narrative around STEM might actually be contributing to it. I don’t think it is a coincidence that Scotland’s Curriculum for Excellence embodies many trendy notions around real-world problem-solving and yet Scotland is seeing a decline in its STEM results.
To confound the issue further, some folks have decided to put an ‘A’ in ‘STEM’ to create ‘STEAM’. The ‘A’ stands for ‘Art’ or maybe ‘Arts’. Depending on your source, it could refer to the addition of a fairly contained set of notions around visual art and design or it could represent the arts more generally. In the case of the former, you often hear reference to ‘design thinking’ as some kind of desirable skill to develop, although I doubt it is anything like the generic skill that people imagine. In the latter case, there is very little in an academic curriculum that would not be covered by STEAM. Which takes the focus away from the selection of curriculum content and moves it towards teaching methods.
Because STEAM seems to prioritise certain styles of teaching such as Project-Based Learning. Project-based learning has been a central component of the progressive education agenda since at least as far back as William Heard Kilpatrick’s 1918 essay on The Project Method. Even so, there is little evidence for its effectiveness, despite the grandiose claims that are often made. A recent Education Endowment Foundation trial of Project-Based Learning found a potentially negative impact on literacy, although this finding was compromised by a high drop-out rate from the study. So it either doesn’t work or schools find it really hard to do. Either way, project-based learning is not promising.
STEAM’s old-fashioned progressivist agenda is only enhanced by its focus on collaboration, critical thinking and so on; the misnamed ’21st Century Skills’. Again, skills like critical thinking are not generic and there is little evidence that they can be developed through STEAM approaches. The claims made are ideological rather than based upon evidence.
So I think that STEAM is a cipher. It appeals to an anxiety about STEM education but then subverts it to call for old-fashioned progressive education. I suggest taking the ‘A’ back out of it, and maybe the ‘E’ and the ‘T’ too. That way, we may focus on the effective teaching of science and mathematics instead. This is the best way to arrest any decline.
I would like to offer some tools that teachers, parents, journalists and others involved in education might find useful. In particular, these tools are intended to help you evaluate the kinds of claims made by presenters at conferences or in newspapers and on blogs.
1. What evidence would prove this claim wrong?
Imagine that a speaker takes to the stage and claims that a proprietary thinking routine – let’s call it ‘four hops and a ladder’ – leads to ‘deeper’ learning. The presenter shows evidence that includes lots of bar charts and demonstrates that teachers who were trained in this technique used it more often and that teachers and students alike felt motivated by it. You raise your hand to ask a question. You are aware of a study that showed no academic gains for students who used this routine versus those who did not. Ah, the presenter explains, that’s because the test that was used to assess academic gains did not assess ‘deeper’ learning. You reply, pointing out that the test included some complex transfer problems. Again, the presenter is sceptical that these really represent deeper learning. And anyway, the teachers might not have been implementing the routine properly, he notes.
Not only has the burden of proof been reversed (see below), it is hard to think of any way that we could prove this assertion wrong. The absence of evidence for something is a form of refutation but it can never be 100% conclusive. And that’s because the inductive arguments used by science can never give absolute certainty about anything. Advocates will exploit that.
It is therefore worth considering what kind of evidence really would be sufficient to disprove the proposition to your satisfaction and to the presenter’s satisfaction. If there is a wide gulf between the two then that tells you something.
2. What are students intended to learn from this method?
Often we read about educational activities described in the most breathless terms. We may learn of maths questions so inspiring that they make it on to T-shirts or of “Aha” moments when the lights go on for particular individuals. But can you identify what students are meant to learn as a result? Often, the intended learning is not mentioned at all. All you get is the description of an activity. By definition, education has to involve learning something. So if this is not articulated then there are two possibilities. You have either been presented with an activity that has no educational benefit or the presenter has chosen not to mention what it is. Why?
3. Are you being sold motivation?
One reason that the intended learning might not be mentioned is that you’re not being sold learning at all, you’re being sold motivation. That’s fine as far as it goes but there are many fun activities in this world and many ways to pique student interest. It is all educationally useless if it doesn’t lead to more learning. If this new, motivating approach to teaching grammar leads to more and better learning of grammar then there should be evidence of that and not just evidence that it’s fun.
I am sceptical about generating what the literature terms ‘situational interest’; that is, interest in the current activity and moment. I’m sure that it can aid learning but the real goal is individual interest: a long-term enthusiasm for the subject. This seems to be at least partly the result of a growing sense of competence. And a sense of competence clearly relates to effective teaching practices that lead to learning.
4. Does the suggested approach sit close to the targeted skill or knowledge?
This is a tool I have thought about more recently. Imagine you want to improve a child’s reading: do you teach him or her a breathing technique or do you use a phonics-based intervention? The phonics intervention directly relates to reading and the path of influence is clear; better knowledge of grapheme-phoneme correspondences will perhaps lead to improved decoding.
The chain of influence for the breathing activity is longer and more speculative. Perhaps the breathing activity will reduce anxiety. Perhaps this will then allow the child to better access his or her knowledge. Perhaps this will aid the process of reading.
If we were going to lay bets then the intervention with the shortest chain of influences would be a good choice.
5. Is the evidence a testimonial?
Education is complex, taking place in varied contexts and with many interacting components. Some people use education’s complexity to argue that the standard of evidence used by science is inappropriate and we should instead draw inferences from the kinds of sources that science largely rejects such as personal experience or anecdote.
Precisely the opposite is true. If someone is presenting you with a method to apply in your classroom then she needs to demonstrate the general effectiveness of this method. The fact that it was perceived to work in a particular context does not provide this evidence.
Scientific approaches such as experimental trials or epidemiological studies have the capacity to provide the evidence for such a general effect. The strongest approaches, such as the use of explicit instruction, can draw positive evidence from a diverse range of trials and studies, yet other popular practices, such as certain forms of differentiation, have been around for a long time without generating such evidence.
6. Where lies the burden of proof?
Arguments are not always symmetrical. If someone is advocating a revolution then they bear a greater burden of proof than those who advocate for the status quo. Current practice might not be perfect but, before we jump, we should make sure we are jumping to something better.
A surprisingly large number of advocates for change simply point to flaws in the status quo. For instance, imagine someone claiming that children leave school with poor problem solving skills so we must give them more opportunity to engage in open-ended project-work. This is a weak argument.
To strengthen it, we would need some evidence to show that engaging in open-ended project-work will lead to students developing superior problem solving skills. And before we can do that, we need an understanding of what these skills are and how we can assess them. Few gurus are prepared to do this kind of ground work. It’s far easier to decry the present because you can find fault with pretty much anything.
7. Is this an argument from authority?
The Early Years Framework for Australia requires teachers to take account of children’s learning styles. However, this does not mean that the value of taking account of learning styles has been proven. Just because something is a statute or has been asserted by a figure in authority, it does not mean that it is true. In a free society, we may question such ideas.
If an argument rests solely on the authority of the person constructing it, or on an external authority, then this is not particularly persuasive. And such arguments come in many forms. Academics have an unfortunate habit of saying things like, “When you have read as much about this subject as me then you will understand.” Again, this is an argument from authority.
Challenging such an argument is tricky because it may be taken as an attack against the authority in question. So you might want to simply note the argument, factor it into your thinking and move on.
Writing in the Times Educational Supplement in London (the TES), David Boorman questions plans to introduce a timed test of times tables to English schools.
Boorman’s first concern is that of ‘maths anxiety’. He worries that timing tests in this way will lead to more anxious students. I have no doubt that timed tests can make students anxious – although the evidence is hard to pin down – but I also see it as part of a teacher’s job to allay those fears and present such tests as a normal part of school life.
Boorman can’t see the need for a time limit, asking, “How long did Shakespeare take to write his plays? Does anyone care?”. I think this misses a key point: We want to test whether students have retained times tables as facts in their long term memory rather than checking whether they can work them out using working memory. If we ensure that time is limited then we are more likely to test the former than the latter. This is important because in higher level maths, students are rarely asked to simply work out a single multiplication. Instead, such skills will be embedded in a larger problem. Working memory is severely constrained and so it is important to free it up to focus on the problem in hand. If students memorise times tables then they release working memory capacity to focus on other aspects of the problem.
Boorman’s second point is a concern about a ‘times table check mindset’. He points out that many students might be able to find 5 x 7 but then be unsure of the answer to 7 x 5; something I am quite prepared to accept. But he uses this example to set times tables knowledge in opposition to an understanding of multiplication.
We often see this argument in the rhetoric about early mathematics education. And yet I have seen no convincing evidence to suggest that knowing facts somehow gets in the way of understanding. Yes, knowing times tables facts is not sufficient but I don’t think anyone ever claimed that it was. Ideally, students know their facts and also have a deeper appreciation of multiplication. In the case of the 5 x 7 example, the student would need to be taught about the commutative property of multiplication e.g. by considering a rectangular array rotated through 90 degrees. Why can’t we teach times tables and also teach commutativity?
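To make the rectangle idea concrete – this is my sketch of the standard array argument, not an example from Boorman’s piece – rotating an array of objects changes nothing about how many objects it contains:

```latex
% A 5-by-7 rectangular array rotated through 90 degrees becomes a
% 7-by-5 array, but the number of items in it is unchanged:
\begin{align*}
5 \times 7 &= 35 = 7 \times 5\\
a \times b &= b \times a \qquad \text{(commutativity, for any numbers $a$ and $b$)}
\end{align*}
```

A student who has been shown this can collapse the hundred or so facts of the times tables into roughly half as many, which is itself an argument that fact knowledge and understanding support each other.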
Finally, Boorman demonstrates quite a limited view of why we want students to learn times tables. He complains that they are not useful in everyday life:
“I’m often told by friends and colleagues that they can help at the shops. However, I’m sceptical at best. Take six times eight, for example. When and why would one purchase 48 of an item? Of course, it could be six items priced at 8p each, but then what items are priced at 8p?”
Boorman’s colleagues have obviously got the wrong end of the stick. What is it about maths that makes people demand that every individual skill needs to have some mundane, everyday use? We don’t do that with other subjects. Nobody goes into a primary school class, observes a session of clay modelling and exclaims, “But when will students need to be able to do this at the supermarket!?” And nobody stops children from writing stories on the basis that they will never need to write stories in real-life; that only professional authors need to be able to write stories. And yet you hear these arguments about maths all the time.
Times tables are not particularly useful on their own and they are not intended to be. Maths is not a flat subject; it’s hierarchical with basic skills feeding into more complex skills. As we have seen, a facility with times tables frees up working memory to solve other aspects of a problem. For instance, an important skill in senior maths is to be able to factorise quadratic equations and this is much easier to do if you know your times tables. If students don’t know their tables then this limits their access to higher level maths. You can do all the investigations, critical thinking activities and genius hours you like but if students can’t do maths then you are effectively shutting the door on most STEM careers.
Perhaps most worrying of all, David Boorman is a lecturer in primary education and so he has the opportunity to promote these views about times tables to the next generation of primary school teachers.
One of the most baffling currents in the education debate is the one about testing. We know that frequent testing is the best way of ensuring that students retain what they have learnt and yet prominent educationalists advise against it. Testing may cause anxiety, this is true. But it’s most likely to cause anxiety for those students who cannot answer the questions. Perhaps we should level-up and ensure that more students can succeed by teaching them better. This seems preferable to dumbing-down and removing a powerful learning tool.
Standardised testing takes things a stage further. To detractors, tests set at arm’s length by state authorities are ‘neoliberal’ – a wholly surprising way to characterise a ‘big government’ initiative.
However, fans of standardised tests would point to the fact that they are less affected by biases. The horror of ‘teaching to the test’ is facilitated by knowing exactly what is on the test. Standardisation offers the chance of giving students tests that their teachers haven’t seen. The fact that they are standardised also means that you can compare the performance of students in your own school with students elsewhere. You may think you’re doing an excellent job but if everyone in the state can add fractions and your students can’t then it might be time to review how this is being taught.
Of course, you need to have a strategy. I think the best criticism of standardised tests is that, alone, they don’t offer any solutions. Teachers also need to know how to improve. Current educational trends might convince some teachers to adopt inquiry learning as their improvement strategy and yet this is unlikely to work.
Yet if you set your face against standardised tests then you are really suggesting that you should not be accountable. The most common argument is that standardised tests don’t measure everything; a non sequitur that was trotted out this week to discredit the new Australian tests for prospective teachers. So what if they don’t measure everything? The stuff that they do measure is important and it’s worth knowing whether students have learnt it or not.
It’s a bit like a football coach arguing that she should not be judged on the results of games or a car salesman suggesting he should not be held accountable for his sales figures. “It’s too reductive,” they might argue. “This work involves people and people are incredibly complex. There’s so much to this job that is simply not captured in bald results.”
In this context, it is interesting to note a new research article published by EducationNext (and brought to my attention on Twitter by @JohnYoung18 who is worth a follow). It may be true that testing is good for highlighting differences in instruction, such as who is better at teaching fractions, but this might not matter all that much for students’ later fortunes. The authors therefore set out to discover whether the future life chances of students were affected by being exposed to a standardised testing regime. The design is quite clever and I am going to quote two paragraphs that I think have wide-reaching implications for education:
“Our analysis reveals that pressure on schools to avoid a low performance rating led low-scoring students to score significantly higher on a high-stakes math exam in 10th grade. These students were also more likely to accumulate significantly more math credits and to graduate from high school on time. Later in life, they were more likely to attend and graduate from a four-year college, and they had higher earnings at age 25.
Those positive outcomes are not observed, however, among students in schools facing a different kind of accountability pressure. Higher-performing schools facing pressure to achieve favorable recognition appear to have responded primarily by finding ways to exempt their low-scoring students from counting toward the school’s results. Years later, these students were less likely to have completed college and they earned less.”
Texas has now closed the loophole that allowed higher-performing schools to exempt low-scoring students by classifying them as eligible for special education.
The TES has quoted maths education professor Jo Boaler as stating that the increased focus on memorising times-tables in England is “terrible”:
“I have never memorised my times tables. I still have not memorised my times tables. It has never held me back, even though I work with maths every day.
“It is not terrible to remember maths facts; what is terrible is sending kids away to memorise them and giving them tests on them which will set up this maths anxiety.”
Boaler is obviously alluding to some research here although it’s not clear what this is. What is clear is that she is wrong.
Knowing maths facts such as times tables is incredibly useful in mathematics. When we solve problems, we have to use our working memory which is extremely limited and can only cope with processing a few items at a time.
If we know our tables then we can simply draw on these answers from our long term memory when required. If we do not then we have to use our limited working memory to figure them out, leaving less processing power for the rest of the problem and causing ‘cognitive overload’: an unpleasant feeling of frustration that is far from motivating.
An example would be trying to factorise a quadratic expression; tables knowledge makes the process much easier.
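To spell this out with a concrete sketch – my own example, not one drawn from Boaler or the TES – factorising x² + 11x + 24 means finding two numbers that multiply to 24 and sum to 11. A student who can retrieve 3 x 8 = 24 from long term memory spots the pair almost immediately; a student who cannot must generate and test factor pairs in working memory while also keeping track of the factorising procedure itself:

```latex
\begin{align*}
x^2 + 11x + 24 &= (x + a)(x + b) \qquad \text{where } ab = 24 \text{ and } a + b = 11\\
               &= (x + 3)(x + 8) \qquad \text{since } 3 \times 8 = 24 \text{ and } 3 + 8 = 11
\end{align*}
```

The multiplication fact is trivial on its own, but here it is one step buried inside a larger procedure, which is exactly where fluent retrieval pays off.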
The fact that Boaler never uses times tables as a maths education professor tells us something but I’m not sure it tells us much about the value of tables in solving maths problems.
You can read the cognitive load argument here.
I am sure that testing can induce anxiety but it certainly does not have to. Skilful maths teachers will communicate with their students and let them know that the tests are a low stakes part of the learning process.
Tests are an extremely effective way of helping students learn, particularly for relatively straightforward items such as multiplication tables and so, appropriately used, they should be encouraged.
We also know that how students feel about their ability – their self-concept – is related to proficiency, and that it is likely that proficiency comes first, i.e. proficiency causes increased self-concept.
With this in mind, if we want students to feel good about maths and reduce maths anxiety in the medium to long term then we need to adopt strategies that improve their ability to solve problems.
Learning multiplication tables is exactly such a strategy.