# Maths teaching rhetoric that doesn’t add up

**Posted:** April 25, 2016

**Filed under:** Uncategorized · 12 Comments

The progressive tradition in education has had a particularly pernicious effect on maths teaching. It is worth highlighting the fact that this is a tradition and that it stretches at least as far back as John Dewey because proponents of progressive maths have a tendency to dress their ideas in neologisms. Thus we have the – now fading – constructivism of the 1980s and 1990s that paralleled the whole language movement in reading instruction. We can now add Jo Boaler’s latest forays into neuroscience. Whatever the invocation, you can lay a pretty safe bet that none of these developments will lead to calls for more explicit instruction or practice of basic skills.

It’s hard to tell how many teachers actually use an outright progressive teaching method. Such methods don’t work well, and so most educators will presumably default to something else, especially when standardised tests are due. The problem is that there is no alternative narrative: you either follow the education school gurus or you’re left in a guilty muddle. This has to change.

The difficulty we face, however, is that maths gurus have developed a set of rhetorical tricks that they frequently use to muddy the waters. Essentially, the divide is between those who want to explain mathematics to children, get them to practice and then test what they have retained versus those who want students to struggle with complex, open-ended problems, often derived from supposedly real-life situations. The latter strategy pretty much doesn’t work, especially for the least able, as one maths teacher explains in the comments on an interesting blog post:

“My experience has been that throwing problem solving-type tasks at students tends to result in strong students being successful and reinforcing positive beliefs, and weaker students getting frustrated and giving up or copying an answer from another student. I think it’s possible but very difficult to facilitate this in a productive way for all students.”

You don’t actually need to invoke Cognitive Load Theory to understand what’s going on. The idea of a limited amount of working memory that is easily overloaded is well accepted across the field of cognitive psychology.

So, what gives? How do people convince others (and probably themselves) that it is worth pursuing problem-based learning, constructivist maths or whatever else progressive maths education is known as these days? Here are some of the tactics I’ve spotted.

**The non sequitur**

A non sequitur is when a conclusion does not obviously flow from the statement that is intended to support it. You see this a lot with 21st century skills rhetoric. Someone might claim, for instance, that jobs of the future will require skilled problem-solvers (which is trivially true except for the fact that problem-solving is a myriad of different things) and *therefore* students must learn maths by solving problems. Huh? How did we get there? Is there no need to demonstrate that learning maths this way actually makes students better problem solvers?

And the non sequiturs get worse than this. We have bits of the brain showing electrical activity when a volunteer completes a non-maths related task and – blink and you’ll miss it – we conclude problem-based maths teaching!

**Retreat from evidence**

When it comes to evidence, we often see a retreat which is logically similar to Russell’s teapot: that celestial infuser that nobody can prove the nonexistence of. Progressive maths educators may well concede that explicit instruction works to an extent, perhaps by loading their concession with pejorative terms and saying something like, ‘Direct instruction is effective for short term recall of rote facts and procedures,’ but they will still hold that problem-based learning is better for long term understanding. This is harder to challenge because longer term studies are more expensive to conduct and so there are fewer to examine (although there have been some good follow-ups to Project Follow Through).

When these avenues are exhausted, the final gap to retreat into is to suggest that problem-based learning is superior for developing ill-defined skills such as resilience or ‘thinking like a mathematician’ or whatever. The benefit of this line of argument is that these things cannot be measured. In fact, proponents of progressive maths education often set their faces against the kind of testing that could validate or invalidate their approaches.

**Motivation**

For many progressive educators of all stripes, education is about motivation. They cannot conceive of coercing students into doing something that they may not like. It’s almost a default setting. Why would we teach Shakespeare when teenagers might find it boring! End of argument.

If you have designed a motivating maths activity then I’m afraid that my response is ‘so what?’ I could easily motivate any middle school maths class by asking them to make a poster demonstrating some strategy or other, or by getting them to cut out nets for different shapes, colour them in and then stick them together. There are loads of ideas where those came from, but I have to ask: what’s the point? The students won’t *learn any maths*. Show me a motivating environment where students learn lots of important maths and then I’ll be interested.

If you *can* lead me to such a classroom then I predict that we’ll be looking at whole-class explicit instruction led by an enthusiastic and knowledgeable teacher.

**Leaving a hat on the empty chair**

This is a tactic used in many areas by educationalists promoting progressive approaches and it has been turned into a semantic art-form by some. ‘*Nobody* is against teaching phonics,’ the anti-phonics whole-language activists opine, ‘we just want to use a *balance* of approaches.’ This usually comes about two-thirds of the way through an opinion piece about how the English language is not phonetic and so phonics is next to useless.

We have the same in maths education. For instance, Jo Boaler may explain that we should not be drilling and testing kids on multiplication tables, she may even claim that she never learnt multiplication tables and that it never did her any harm, she may go further to suggest that you don’t need to know what 7 x 8 is because you can find 7 x 10 and then take away 14, but she is *not* saying that she is against children learning maths facts. Huh? No, Boaler wants them to sort-of absorb their multiplication tables incidentally through completing lots of problems.
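To be fair, the derived-fact workaround itself is perfectly well defined. A minimal Python sketch of the strategy (the function name is mine, purely illustrative):

```python
def derived_product(n):
    """Derive n x 8 from the 'easier' facts n x 10 and n x 2."""
    return n * 10 - n * 2  # e.g. 7 * 10 - 7 * 2 = 70 - 14

print(derived_product(7))  # 56
```

The open question is whether repeatedly taking this detour ever leads to the fact being absorbed, or whether it simply remains a detour.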

That obviously won’t work but I suppose that if you think that problem-based learning is effective then you could probably convince yourself of osmotic multiplication tables. But then, why do we need to believe in osmotic multiplication tables if you don’t actually need multiplication tables at all? Confusing, isn’t it?

The other hat on an empty chair is ‘guidance’. There is much to say on this and it probably warrants an FAQ at some point. Suffice it to say that problem-based learning is designed so that students figure some things out for themselves. It may well have some kind of guidance or scaffolding but it is less guided than explicit instruction where key ideas are *fully* explained. The maths teacher quoted above seems to have a fairly clear view of what problem-based learning involves and the guidance is minimal. Why does he think that and what is the advantage of the approach?

A meta-analysis by Alfieri et al. seemingly supports ‘guided discovery’, but the effect size is small given the kinds of studies being analysed and, interestingly, the unpublished studies that they sampled found the reverse effect.

**Dismissing Cognitive Load Theory**

For those who have read about it, Cognitive Load Theory (CLT) presents some fairly obvious challenges to problem-based maths instruction, despite being a theory still very much under development (it’s the subject of my PhD). For a start, the basic experiments underpinning the theory seem to demonstrate the opposite to what proponents of problem-based maths would claim – explicit instruction is better. CLT has been dismissed out-of-hand in the US, seemingly from the very start, and I suspect that the cognitive dissonance it provokes will ensure that this continues for some time.

It was therefore with interest that I read Michael Pershan’s scholarly, if idiosyncratic, take on CLT. Pershan is sworn to the problem-based learning camp and it is clear that he is looking for any inconsistencies he can find in CLT, or disagreements between the main researchers, but he takes the theory seriously and that is to his credit. I wonder whether this marks the beginning of a more concerted effort to tackle the issue.


It’s interesting that direct instruction seems such a hard sell. It seems people really don’t value good delivery as a sufficient achievement for teachers. Odd when we are thrilled to hear the likes of Judi Dench or Ian McKellen repeat Shakespeare’s words verbatim 400 years after his death.

Another argument might just be to look at what methods are used at tutoring/learning centres. Across the board, for kids who are struggling, they always use straightforward, explicit instruction. And this is what benefits kids in the long term: having their foundational facts secure.

Take any child, whether he/she is struggling or highly advanced in maths, and teach them using both conventional methods and inquiry strategies. Both types of kids will benefit more from the straightforward, conventional methodology. Sad that educrats don’t want to acknowledge that.

I read the life story of Sweller – interesting!

Quote: “As indicated above, cognitive load theory only applies to complex information that is high in element interactivity. It is not a theory of everything and cognitive load effects should not be expected using low element interactivity information.”

His notion of what constitutes an algebra problem is very limited: “a + b = c, solve for b”.

His idea that “Show that angle ABC is equal to angle PQR” requires a higher cognitive load than an equivalent annotation of the diagram is plain wrong. It just takes longer, as the solver needs to annotate the diagram him/her/whateverself. That’s of course given that they have paper and pencil!

Also, many of the wonderful “discoveries” of CLT are just “obvious”.

I would suggest that Sweller spends his retirement studying mathematics.

Incidentally, if there are constructivists who think that one can teach problem solving skills separately from actual problem solving then they are deluded.

As you may guess half of me is on one side of the fence and the other half on the other side – it’s quite uncomfortable.

http://www.unisanet.unisa.edu.au/edpsych/research/HowObvious.pdf

I got as far as page 10 (page 690 in the actual article) and I found this gem:

“This reflects one type of egocentric belief: the belief that others construe the world in more or less the same way as oneself. Tom Gilovich and others have documented how this bias, as a natural mental default, can be quickly activated and leads people to make a host of faulty assessments and decisions.”

Am I a little more open-minded now? Yes! At least he wrote “default” and not “defect”, although he probably wanted to.

I will withdraw “many” from my comment.

Many years ago, while working in a teacher training college, I was visiting a school when one of the teachers said, “I teach the discovery method”. I wonder if she used “direct instruction”.

Just thought you might find this “interesting”:

http://ww2.kqed.org/mindshift/2016/04/19/how-productive-failure-for-students-can-help-lessons-stick/

Productive failure is the subject of my PhD research. Mindshift is probably not the best source. There are lots of questions about the control conditions in these studies. The best paper is probably Kapur 2014 but, even then, the control is a bit strange. In the direct instruction group, students have to spend one hour solving a single problem as many ways as possible even though they’ve already been given the canonical solution method. This strikes me as being quite unlike the way that most teachers would deliver direct instruction.

A lot of research folk seem to lack “feet on the ground” !

About mathematics: most techniques are not methods for solving completely novel problems but improved methods based on a more satisfactory mathematical theory, so finding an answer by a mixture of methods and approaches already “under the belt” is a good way to start. The difficulties are more connected with the student’s inability to understand the problem. Math education anyway should place much more emphasis on recognizing pattern and structure, and on problem formulation, and much less on solving problems.

This is one of my all-time favorite comments on discovery learning:

“You know what’s the worst kind of instruction? The kind of instruction that makes kids feel stupid. And that’s what a lot of that discovery stuff does; their working memory gets overloaded, they’re confused. That’s bad instruction.”

— Anna Stokke, an associate professor in the University of Winnipeg’s department of mathematics and statistics.

[ Decline of Canadian students’ math skills the fault of ‘discovery learning’: C.D. Howe Institute. http://news.nationalpost.com/news/canada/decline-of-canadian-students-math-skills-the-fault-of-discovery-learning-c-d-howe-institute ]

I have a few comments about the Alfieri et al. (2011) meta-analysis (focusing on the enhanced discovery data):

Enhanced discovery seemed to be by far the most effective for physical/motor skills, with very small effect sizes for maths and science. Removal of studies that examined physical/motor skills reduced the overall effect size of the benefits of discovery learning. This might get lost by people championing enhanced discovery. The type of discovery learning also had a significant impact on its effect size: generation discovery learning had a very slight negative effect, whereas elicited explanation and guided discovery found positive ones. So it depends how it is done. There was also a variety of ways in which the dependent variable was measured, not all of which found a positive effect (e.g. reaction times). This flexibility could be problematic, as the studies could potentially have used all the measures and then reported only those which reached significance (one would need to analyse the original data to check for that). Alternatively, it might not be a problem at all.

Looking at the number of participants in each group and the reported effect size, there seems to be quite a few instances where the larger the sample size, the smaller the effect. Whilst this comment is by no means proof of publication bias, I think an analysis of this data using a funnel plot and the Egger test would be informative (as this wasn’t done in the original study). They could have conducted more thorough analyses for publication bias as is discussed in this paper here: https://github.com/Joe-Hilgard/Anderson-meta/blob/master/meta_ms.pdf
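For what it’s worth, an Egger test is straightforward to run once effect sizes and standard errors have been extracted from a meta-analysis. A minimal Python sketch with made-up illustrative numbers (not the actual Alfieri et al. data):

```python
# Egger's regression test for funnel-plot asymmetry, using hypothetical
# effect sizes -- NOT the actual Alfieri et al. (2011) data.
effects = [0.50, 0.35, 0.30, 0.15, 0.10]   # standardised mean differences (d)
ses     = [0.30, 0.25, 0.20, 0.12, 0.08]   # their standard errors

# Regress the standardised effect (d / se) on precision (1 / se);
# the slope estimates the underlying effect, while an intercept far from
# zero suggests small-study effects, i.e. possible publication bias.
y = [d / s for d, s in zip(effects, ses)]
x = [1 / s for s in ses]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
print(round(intercept, 2))  # 1.66 with these illustrative numbers
```

In this made-up data, smaller (less precise) studies report larger effects, which the non-zero intercept picks up; on real meta-analytic data you would also want the standard error of the intercept for a formal test.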

The meta-analysis found that it was least beneficial for adolescents and the most beneficial for adults (with children falling in between). But I imagine most of the students who learn from discovery learning are children and adolescents so the utility of it may be somewhat limited.

The meta-analysis did find that analysing the unpublished data eliminated the positive effect size but that was because 4 of the 5 used generation discovery. More unpublished studies should have been included to see if it reversed the effect for the other types of discovery learning.

In conclusion, this meta-analysis offers very weak support for using discovery learning for maths and science for younger students. There were also insufficient checks for publication bias.