Evidence that inquiry maths works
Posted: November 24, 2015
A piece of econometric research by David Blazar was recently brought to my attention on Twitter (thanks to @dylanwiliam via @drlindagraham). It seems to show that "inquiry-oriented instruction positively predicts student achievement" in maths. In other words, the more inquiry that teachers use, the better the students' maths performance.
This conflicts with the evidence that I tend to present which shows that explicit instruction is more effective than inquiry learning and so I was intrigued.
I cannot comment on the complicated statistical methods used (I’m still learning how to do a t-test). However, I noticed a few things worth commenting on.
The evidence that inquiry learning works
The study analyses a number (N=111, I think) of Grades 4 and 5 teachers working in three unnamed districts in the eastern US. Blazar has clearly gone to a great deal of effort to deal with the fact that this is not a randomised controlled trial. Students are not randomly assigned to teachers, there are school effects, and so on. The fancy statistics are there to take account of all this.
When done, Blazar finds a positive effect size of about d=0.1 for something called ‘ambitious mathematics instruction’ which he links with NCTM reform maths and ‘inquiry-oriented instruction’. He also finds a negative effect for teachers’ mathematical errors and imprecisions and, surprisingly, very little effect for things like classroom climate and behaviour management.
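To put an effect size like d=0.1 in context, here is a minimal sketch of how Cohen's d is computed: the difference in group means divided by the pooled standard deviation. The scores below are invented for illustration and are not Blazar's data (his actual estimate comes from a far more elaborate model).

```python
import statistics

# Hypothetical test scores for two groups of students
# (illustrative only -- these are not Blazar's data).
more_inquiry = [52, 55, 60, 63, 58, 61, 57, 59]
less_inquiry = [51, 54, 59, 62, 57, 60, 56, 58]

def cohens_d(a, b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

print(round(cohens_d(more_inquiry, less_inquiry), 2))  # prints 0.29
```

A d of 0.1 means the group means differ by only a tenth of a standard deviation, i.e. roughly a third of the (already small) gap in this toy example.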
Ambitious mathematics instruction? What’s that?
Interestingly, Blazar only uses the term 'inquiry' in the abstract, introduction and discussion. The proxy for inquiry that he measures in the actual study is a construct known as 'ambitious mathematics instruction', an odd, approbatory name for a set of teacher characteristics. This derives from a classroom observation instrument known as the Mathematical Quality of Instruction (MQI).
I looked up the MQI and found that it didn't mention the term 'ambitious mathematics instruction' at all. Instead, the items grouped under this heading in Blazar's paper are grouped under several different headings in the MQI instrument itself. Rereading the paper, I realised that Blazar had contributed to an earlier study which showed that the items on the MQI cluster into two main factors: 'ambitious mathematics instruction' and 'mathematical errors and imprecisions'. So Blazar is using a construct that he was involved in developing. Unfortunately, I don't seem to have access to the study that shows this clustering.
Blazar’s claim is that these factors are closely related to both the NCTM standards and the new Common Core standards. This is interesting because I’m not sure that NCTM and Common Core are meant to be quite the same things. In addition, when you look at what is grouped under this heading, it is not clear that these are all elements of inquiry-oriented instruction. Explicit instruction would have equal claim to ‘Linking and connections’, ‘Explanations’, ‘Generalisations’, ‘Math language’ and ‘Remediation of student difficulty’.
A focus on ‘Multiple methods’ and ‘Student explanations’ would certainly be more of a preoccupation of reform maths than explicit instruction, particularly if formally assessed. However, these would still be part of the repertoire of explicit teaching, although maybe less so than in a reform classroom. I wasn’t sure what ‘Use of student productions’, ‘Student mathematical questioning and reasoning’ and ‘Enacted task cognitive activation’ meant so I went to the MQI itself. The first seems to be about the teacher interacting with the students and the last two seem to be the closest fit to standard meanings of ‘inquiry’ because they involve students posing mathematical questions, identifying patterns and so on.
How was the teaching measured?
I have already noted that teaching was scored according to the MQI rubric. This was done by videoing each teacher three times and then scoring the videos. The teachers knew when they were being videoed because they were allowed to choose the dates in advance. Blazar notes that they would have had no incentive to vary their instruction for these observations and quotes evidence from the MET project that indicated that rankings were similar whether teachers knew in advance about an observation or not. However, I am sceptical about this argument. A position in a ranking is different to the balance of learning strategies used in an individual lesson. And the ranking would stay the same if everyone’s performance dropped by the same amount for surprise observations.
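The ranking argument can be made concrete: if every teacher's observed score drops by the same amount in a surprise observation, the ranking is completely unchanged even though everyone's behaviour has. A minimal sketch with hypothetical scores (illustrative only, not MET or MQI data):

```python
# Hypothetical observation scores for five teachers when they know an
# observation is coming, and the same scores uniformly deflated for a
# surprise observation (illustrative only).
announced = {"A": 3.4, "B": 2.9, "C": 3.8, "D": 2.5, "E": 3.1}
surprise = {t: s - 0.6 for t, s in announced.items()}  # everyone drops equally

def ranking(scores):
    """Teachers ordered from highest to lowest score."""
    return sorted(scores, key=scores.get, reverse=True)

# The rankings are identical even though every underlying score changed.
print(ranking(announced))                       # ['C', 'A', 'E', 'B', 'D']
print(ranking(announced) == ranking(surprise))  # True
```

So the MET finding that rankings were similar with and without advance notice does not rule out a uniform shift in the style of lesson delivered for announced observations.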
Reform maths has been promoted in the US since the release of the NCTM standards in 1989 and so I think the ‘right’ way of teaching maths would be pretty clear to teachers with typically nine years of service. Despite having no incentive, it is natural for teachers to want to look good and so I can imagine them emphasising reformy elements in these taped lessons. Presumably, the more able teachers would be differentially more aware of and able to do this.
Did teachers know the MQI criteria on which they would be judged?
Indeed, a lot of psychology research depends upon the principle that people will expend effort without an incentive. Consider a typical randomised, anonymous trial with a post-test. There is no incentive at all for students to try hard on the post-test. They could simply give nonsense answers. And yet we routinely use evidence from such studies when comparing instructional approaches.
How was performance measured?
Maths performance was measured using a test that Blazar notes was 'developed by researchers who created the MQI'. This is interesting. We might expect a greater correlation between higher teacher performance on the MQI and higher student performance on a test developed by the same people, because both would prioritise the same things. For instance, this is a "cognitively demanding test" whose items, in the researchers' earlier analysis, "often asked students to solve non-routine problems, including looking for patterns and explaining their reasoning."
It would therefore seem likely that teachers who emphasise these aspects of maths will prepare their students better for these tests.
Given my previous writing on the matter, I am obviously biased against a result that shows a positive impact of inquiry-learning. So perhaps I am not the best person to draw a conclusion here. Instead, I will point out the obvious contrast to another econometric study with a different methodology, call for replication of this research and suggest that you make up your own minds. But please, read the paper first. You can follow the arguments even if you don’t follow the statistics.