Barry Garelick and Katharine Beals wrote an excellent piece in The Atlantic on their scepticism about requiring students to write explanations for maths problems. Dan Meyer then wrote a response and I, along with many others, jumped in on the comments. It’s a great discussion but I wonder whether a couple of points touched on by Garelick and Beals have been missed along the way. They are:
1. Understanding is latent and cannot be measured directly
2. There is no reason to think that prose explanations are better at exposing understanding than correct or incorrect solutions
It seems that many people – let’s call them the ‘explanationists’ – think explanations are important because they give us direct access to what the student understands. Well, they might. In class, I ask my students to explain their thinking all the time. However, in the numerous examples that people tend to post online (this is my favourite), these explanations sit in assessments and the debate in the US seems to centre around Common Core ‘aligned’ tests.
The issue raised by the explanationists is that perhaps students have ‘rote’ learnt a procedure. If this is the case, they will be able to get the right answer without understanding why. Even showing the correct mathematical steps – the ‘workings’ as I would call them – is not sufficient for the explanationists because a student might have ‘rote’ learnt these too. I am quite sceptical about this elevation of ‘understanding’ as the primary aim of maths and maybe we could leave it at that, as some more traditionally-minded teachers are inclined to do.
Yet strangely, we may take the explanationists’ own argument and turn it against explanations. Who is to say that these cannot be ‘rote’ learnt either? Have these teachers never taught any other subjects? I have. I teach VCE physics and I teach lots of explanations. For instance, if asked to explain the role of a split-ring commutator in a DC motor, I tell students to write:
“The split ring commutator reverses the direction of the current through the coil every half turn, thereby keeping the torque in the same direction and therefore the coil rotating in a constant direction.”
I could instead rely on them to construct a response from their own understanding, but this would be fraught. It’s a pretty tricky thing to explain: there is a chance that they won’t cover all of the points, and they might say something like “keeping the torque the same” rather than “keeping the torque in the same direction”. This would be technically wrong for the kind of motor that students are typically asked about. And so I teach them an answer that I’ve derived from examiners’ reports.
I therefore cannot really tell whether students understand the principle of the split-ring commutator by analysing responses to this question, although I suspect it is far easier to memorise an answer if you have a good understanding of what it means.
You may disagree with the principle of me teaching this explanation in this way. You might hiss that I am “teaching to the test”. Perhaps, but that’s not what this post is about. The point is that it is quite possible to memorise an explanation. If we think students might be motivated to ‘rote’ memorise a procedure for a test then the same motivation might make them ‘rote’ memorise an explanation. Of course, a skilful questioner will vary the questions to avoid some of this. This is why externally set tests are so valuable – you don’t know what’s going to come up. But a good question will expose a misunderstanding just as easily as asking students to write an explanation.
For instance, I am going to draw heavily on Dylan Wiliam here and propose the following question:

[A multiple-choice fraction-comparison question, with answer options including ‘B’ and ‘C’, appears here in the original post.]
A student who selects ‘B’ is likely to have the misunderstanding that the larger the denominator, the smaller the fraction. This is a classic maths misconception because it results from overgeneralising something that is true. A student who picks ‘C’ is very likely to have the correct understanding. Of course, we can’t be sure. It could be chance. Or, the student might have been trained in a procedure for answering this kind of question without really understanding how it works (I’m not sure what that would look like). Even something as simple as seeing a question like this before might prompt a student to pause, remembering that it wasn’t as straightforward as they had first thought, rather than just writing down ‘B’.
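The overgeneralisation at the heart of this misconception can be shown with a couple of lines of arithmetic. The fractions below are illustrative examples of my own choosing, not the ones from Wiliam’s question:

```python
from fractions import Fraction

# The rule "a bigger denominator means a smaller fraction" is true
# when the numerators are equal:
assert Fraction(1, 3) < Fraction(1, 2)
assert Fraction(1, 4) < Fraction(1, 3)

# ...but it overgeneralises once the numerators differ:
# 3/4 > 2/3, even though 4 > 3.
print(Fraction(3, 4) > Fraction(2, 3))  # prints True
```

A student carrying the misconception applies the first pattern everywhere; a well-chosen distractor catches exactly this.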
Judicious use of such questions is at least as likely to expose a student’s understanding as requiring written explanations.
Of course, everyone irrationally hates multiple-choice questions. Perhaps we should be assessing students using real-world investigations that are marked against rubrics? Would such an extended response solve the problem of memorising procedures and/or explanations and ensure that we are assessing true understanding? No. Talk about this with your colleagues who teach English or history and you will realise that assessment through rubrics is no picnic. Rubrics are just as gameable as any other form of assessment.
For some students, maths is a respite from literacy-based activities: reports, investigations, researching stuff on Google. If we require written explanations in maths tests then we are making maths performance contingent on literacy ability. This will disadvantage those who can do the maths and understand the maths but have low literacy, such as those who are still learning English. It means that our test is not valid; it is not measuring the thing that it is supposed to be measuring.
Given that the language of maths – the symbols and operators – has been developed over time to precisely describe notions that are often quite difficult to put into words, it also seems a little perverse to insist on such backward translation from maths into English as evidence of understanding when there is a good chance that it is nothing of the sort.