Ten years ago a randomised controlled trial (RCT) took place in the U.S. The trial pitted four early years maths programs against each other. Two of these programs, Saxon Math and Math Expressions, used explicit instruction. The other two programs, Investigations in Number, Data, and Space (commonly known as ‘TERC’ after its developers) and Scott Foresman-Addison Wesley Mathematics, did not, preferring a ‘constructivist’ approach. For instance, TERC encourages students to ‘develop their own strategies for solving problems and engage in discussion about their reasoning and ideas’. This stands in contrast to teachers explicitly teaching students how to solve problems.

The RCT found that the explicit programs were more effective than the constructivist ones. This is hardly surprising given the wealth of evidence we have in favour of explicit instruction generally, as well as specific experiments that have found that explicit maths is superior to constructivist maths.

Eric Taylor, an assistant professor of education at Harvard, has now reviewed the data from the original study and completed an additional analysis that seems to show something pretty interesting (thanks to @Smithre5 for pointing this article out to me).

The teachers in the study were all assessed on something called the “Mathematical Knowledge for Teaching” test (MKT). This essentially assesses teachers’ maths knowledge but through the lens of teaching maths. For instance, some of the questions provide sample student responses and then ask questions about those responses.

Taylor found evidence that when teachers had a low score on the MKT test, it did not really matter whether they used the explicit or constructivist maths programs. Instead, it was for teachers who scored more highly that a difference emerged in favour of the explicit approach.

There is a common sense explanation for this. In a program where the teacher has to stand up and actually teach maths, their maths skills matter, but when the students have to figure things out for themselves then the more skilled teachers have no way of making use of their greater skill level.

If this finding stands across other studies then I think it has three implications:

- Primary teachers must pass a maths skills test if they are to teach mathematics (schools could perhaps reorganise so that maths was taught by specialists to get around the problem of getting *all* teachers to this level)
- Primary teachers who lack maths skills should be given training in this area
- Explicit programs for teaching maths should be adopted in primary schools

We already have masses of evidence for point three and it seems that education systems might be waking up to points one and two.

This finding perhaps explains other interesting results. For instance, the literature is full of studies that seem to tell contradictory stories about the effect of the level of teacher education on student results. This might be expected if bad teaching methods cancel out any gains from having better qualified teachers.

I suspect there is another benefit to explicit teaching: it improves the maths ability of the teacher.

My maths was always good, but years of teaching it have sealed the gaps I had. My understanding of logs, for example, went from shaky to the point where they feel natural to me. I could always solve the problems, but years of teaching them have made me much better at them, which in turn means I now teach them better too.

We’re always told that students helping other students is good because it helps them cement their own knowledge; the same logic applies to the teacher.

If this is true then any longer-term study should show it. You certainly read anecdotal reports from the jumpmath.org people that teachers find it easier to fill the gaps in their own knowledge using explicit instruction materials such as JUMP Math.

Also, I just noticed an interesting section on the JUMP Math site, at the bottom of this page:

https://jumpmath.org/jump/en/supporting_research

They list 10 barriers they identified and then used as input when designing their material.

It is quite a good list.

Another RCT on materials here.

http://www.tandfonline.com/doi/full/10.1080/0020739X.2015.1029026

Surprise: explicit instruction comes out on top.

It is from the JUMP Math page, so to be fair they may report only those findings that they like.

https://jumpmath.org/jump/en/research_reports

http://www.ascd.org/publications/educational-leadership/sept11/vol69/num01/The-Perils-and-Promises-of-Discovery-Learning.aspx

Using Expressions here: It matters. I’ve seen colleagues misidentify fraction terminology thanks to 1) weak math background and 2) “Teachers Pay Teachers” website errors. Yes, they pay other teachers who also didn’t get a strong math background. Furthermore, Expressions uses plenty of “real world” problem solvers that elementary students who are becoming multilingual cannot access yet. The list of indignities is endless. Having a “direct instruction” or “explicit instruction” textbook adoption does not make up for poor math concepts and/or skills.

I won’t argue about explicit vs constructivist for a number of reasons, ranging from the dichotomy being an oversimplification to the fact that no one will change their opinions. But there are a couple of points in this post that I’d like to address.

First, when we talk about a program and methodology being effective, we need to agree what we consider effective. Are we measuring how effectively students can apply mathematical reasoning in novel situations (aka problem solve)? Or something else?

Second, I think assuming that teachers who utilize constructivism-influenced pedagogies don’t need the content knowledge because they don’t really teach is a serious (and somewhat offensive) misinterpretation. Students don’t just “do whatever”; experiences need to be designed to allow students to explore particular concepts so that at some point a teacher can step in to formalize these experiences. All the planning, formative assessment and adjustment cycles in a classroom where the learning happens through dialogue rather than monologue require high content knowledge. It’s like reading a pre-written speech vs engaging in a conversation; you have to be very comfortable with the subject to sustain a quality conversation…for nine months a year.

If you are right then I would expect that more skilled teachers would do a better job using constructivist approaches. But they don’t seem to. I have addressed some of the other points you raise here:

https://gregashman.wordpress.com/2015/09/13/if-constructivist-teaching-is-the-aspirin-then-what-exactly-is-the-headache/

Thank you for your reply, Greg. I think there are multiple reasons why they don’t. To make better use of constructivism-inspired pedagogies, content knowledge alone is not sufficient. While there are some skilled teachers who “don’t seem to”, that doesn’t mean there aren’t plenty of skilled teachers who do. It often depends on which research we are considering and whether we are aware of the biases affecting the research design.

On the topic of “What is the headache?”: I disagree; I would say that poor understanding IS a headache. But that’s the difference in our educational philosophies and in our understanding of the goals of education.

I certainly want to improve students’ understanding; I just see no evidence that constructivist approaches do a better job of this than explicit ones.

Saying no one will change their mind is quite an insult. You are saying that everyone here is too closed minded to hear a counter argument and accept any valid points.

Yet you take offence at what comes down to the use of the word “no” in “have no way”, rather than the same point being made as “have much less way”. The point being made is obvious: in a case where the teacher leads and does most of the talking, they have more opportunity to draw on their knowledge of the subject. There is simply no way for 20 students to be discovering things for themselves and, at the same time, getting the attention of a single expert. You might have one or two of them at a time getting some attention from the expert, but it’s an either/or proposition.

Also, if you are interested enough in what works to follow the links, it is clear what was used to assess the students here.

I think the same is probably true in other subjects. When I first began teaching history I simply lacked the subject knowledge to explicitly teach my A level classes and so I relied on activities to convey the material. As my subject knowledge grew my teaching approach changed as I had the confidence to develop points and explain complexities myself.

What’s your take on the WWC page you linked to? The WWC report says the findings of that RCT were indeterminate because of the small size of the effect, and the other report that meets their standards shows a positive effect.

I don’t know much at all about how the WWC judges this stuff, though.

I am prepared to accept that pretty much any intervention will show a positive effect (due to the additional resources, training, skin in the game, expectation effects). It’s differences between different interventions that I’m interested in. WWC exclude so many studies that they end up relying on a tiny number which, ironically, makes the sample size for their meta-analyses too small.

Right, your ABC style of experiment. Thanks for the thoughts!

Reblogged this on The Echo Chamber.