The curious world of ‘productive pedagogies’

Posted: February 8, 2017
I have come across the literature on ‘productive pedagogies’ before and I have even referred to it as an example of the kinds of beliefs that are mainstream in education. For example, one productive pedagogies research paper contrasts ‘higher-order thinking’ with ‘lower-order thinking’ with the latter occurring when:
“…students were simply asked to receive or recite factual information or to employ rules and algorithms in repetitive routines. In such instances students were given pre-specified knowledge, ranging from simple facts and information to more complex concepts. Often this involves knowledge being conveyed to students through readings, worksheets, lectures or another direct teaching medium.”
I intended to leave the matter there but I found myself in discussion with an Australian teacher educator on Twitter who cited productive pedagogies as his framework for effective teaching. That’s serious. Stating that it is a framework for ‘effective’ teaching is a testable claim about cause-and-effect. So let’s examine the evidence to support this proposition.
Firstly, the origin of productive pedagogies is not Australian. According to the Queensland School Reform Longitudinal Study (QSRLS), it is based on work originating in the U.S. and carried out by Newmann and associates:
“Productive pedagogies is a model of classroom teaching and learning practices developed by the QSRLS, as reported in earlier reports. In the first instance, the development of the concept of productive pedagogies was derived from Newmann’s construct of authentic pedagogy. Productive pedagogies also developed from a variety of other educational research literatures concerned with explaining student outcomes. These included sociology of school knowledge, school effectiveness, and ethnographies of classroom discourse.”
The QSRLS researchers then surveyed teachers to see how well they fitted the productive pedagogies model. You would think that they would then correlate these responses with the students’ performance on standardised assessments of some kind. But they didn’t do that. Instead, based upon their reading of constructivist learning theory, they decided that desirable intellectual outcomes could be characterised as ‘productive performances’:
“We claim that there is a set of what we call productive performances that demonstrate students’ achievement of academic and social outcomes from schooling. Within the conceptual framework that underpins the QSRLS, these outcomes are affected by the kinds of pedagogies and assessment tasks students’ experience.”
Funnily enough, these productive performances look a lot like the sorts of things that teachers who scored highly on the productive pedagogies scale would ask students to do. For instance, a productive pedagogies teacher might ask students to hypothesise. Yet hypothesising is also a productive performance. It therefore seems likely that use of productive pedagogies will correlate with productive performances. This is indeed what the researchers found.
I am quite prepared to accept that asking students to write a hypothesis is a more effective way of getting them to write a hypothesis than asking them to do something else. What I need to know is whether practices like this lead to better learning of knowledge and skills.
Perhaps a randomised controlled trial (RCT) would sort this out. RCTs, after all, are supposed to be the gold standard of causal research and therefore most likely to convince someone like me. I think this is why I was passed the findings of a trial from New South Wales. This trial doesn’t explicitly mention productive pedagogies but it uses a ‘Quality Teaching Framework’ which is again based upon the ‘authentic pedagogy’ work of Newmann.
‘Quality Teaching’ is defined as teaching that measured up to this framework. The researchers found that teachers who participated in ‘Quality Teaching Rounds’, where they did readings and observed lessons, were more likely to then be observed using ‘Quality Teaching’, which, again, does not seem surprising. What we don’t know is whether this had any effect on what students in these classes learnt as a result.
We now know, from the MET project, that the correlation between lesson observation scores and student learning gains is very weak. In order to get any kind of relationship at all, MET project researchers had to organise multiple observations of each teacher by different observers. Crucially, the teachers did not know which framework they were being observed against. Even then, the observations were worse at predicting the future student gains of particular teachers than the same teacher’s previous student gains.
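The logic of that finding can be illustrated with a toy simulation (all the numbers here are invented for illustration, not MET data): if an observation score is a much noisier measure of underlying teacher quality than a year of student gains, then last year’s gains will predict next year’s gains better than the observation score does.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # hypothetical teachers

quality = rng.normal(0, 1, n)               # unobserved 'true' teacher quality
obs_score = quality + rng.normal(0, 2, n)   # observation score: quality plus heavy rater noise
gain_prev = quality + rng.normal(0, 1, n)   # last year's student gains
gain_next = quality + rng.normal(0, 1, n)   # next year's student gains

r_obs = np.corrcoef(obs_score, gain_next)[0, 1]
r_prev = np.corrcoef(gain_prev, gain_next)[0, 1]

print(f"observation score vs future gains: r = {r_obs:.2f}")
print(f"previous gains    vs future gains: r = {r_prev:.2f}")
```

Under these assumed noise levels the expected correlations are roughly 0.3 for the observation score and 0.5 for prior gains, mirroring the pattern the MET researchers reported: the noisier the measure, the weaker its predictive power.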
So, again, nothing has been proved.
It wouldn’t be too difficult to work out the effectiveness of productive pedagogies. It’s the sort of experiment that the new Evidence for Learning group could conduct. My suggestion would be to randomise teachers into one of three conditions and give standardised English, maths or science tests to each teacher’s students at the start and end of the study. The first group of teachers would receive no intervention, the second group would receive a training intervention based upon productive pedagogies and the third group would receive an equivalent training intervention based upon, for example, teacher effectiveness research. The reason for having the third group would be to isolate the effect of teachers simply being involved in an intervention and spending more time thinking about teaching. If productive pedagogies is effective then it would produce the largest pre-test to post-test gain of the three groups.
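Before running such a trial, the design could be simulated to check what a positive result would look like. A minimal sketch, with entirely invented arm labels, effect sizes and noise levels:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical three-arm trial: teachers are randomised to a condition and
# their students sit a standardised test before and after the intervention.
arms = ["control", "productive_pedagogies", "teacher_effectiveness"]
assumed_effect = {"control": 0.0,              # invented effect sizes, in test points,
                  "productive_pedagogies": 2.0,  # chosen only to illustrate the analysis
                  "teacher_effectiveness": 2.0}
students_per_arm = 200

mean_gain = {}
for arm in arms:
    pre = rng.normal(50, 10, students_per_arm)  # pre-test scores
    # post-test = pre-test + baseline growth + assumed arm effect + noise
    post = pre + 3.0 + assumed_effect[arm] + rng.normal(0, 5, students_per_arm)
    mean_gain[arm] = (post - pre).mean()

for arm in arms:
    print(f"{arm}: mean pre-to-post gain = {mean_gain[arm]:.2f}")
```

The point of the third arm shows up directly in the comparison: any benefit of simply taking part in an intervention appears in both trained groups, so productive pedagogies would only be supported if its arm beat the control group and the alternative-training group.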