Quality Teaching Rounds (QTR) has featured before on this blog. As far as I can gather, the story goes something like this: In the 1990s, Fred Newman and colleagues developed an approach in the United States known as ‘authentic pedagogy’ or ‘authentic achievement’. This approach then informed the Queensland School Reform Longitudinal Study, a correlational study that took place around the turn of the century and gave rise to a framework known as ‘Productive Pedagogies’. Productive Pedagogies then took a road trip down to New South Wales and became known as ‘Quality Teaching Rounds’.
Up until now, QTR has been most notable for an extraordinary randomised controlled trial. I have not been trained in QTR and so do not understand the subtleties, but it revolves around a teaching framework derived from Newman’s work. The ’rounds’ involve teachers working together in a group. On the same day, each group conducts a reading discussion, observes a member of the group teaching and then codes these observations against the framework. The QTR framework is apparently superior to other teaching frameworks:
“While there is growing advocacy for pedagogical frameworks to guide the improvement of teaching, the QT framework differs in several respects from other widely used frameworks… First, the QT framework offers a comprehensive account of teaching, addressing matters of curriculum, student engagement, and social justice, as well as pedagogical practice (Gore, 2007). In this way, it avoids reducing the complex, multi-dimensional enterprise of teaching (Jackson, 1968) to a set of teaching skills or practices. On the contrary, the QT framework is more about a conception of ‘the practice of teaching'”
The randomised controlled trial was extraordinary due to its outcome measure. Rather than do the obvious and judge the effect of QTR professional development on student outcomes, it judged the effect on teaching quality as measured on – you’ve guessed it – the QTR framework. So essentially, teachers who were trained to teach in a way that scores highly on the QTR framework scored more highly on the QTR framework than those who were not.
This unremarkable and possibly tautological finding came with an interesting spin: QTR apparently improved the ‘quality of teaching’.
The obvious next step – which should have perhaps been the first step – was to assess the effects of QTR on what students actually learn. A study has now taken place and apparently found that QTR boosts learning in maths by 25%. At last, some direct evidence for the approach!
Or is it?
QTR has a flashy new website where you can learn more about it and book a workshop. The 25% figure features repeatedly. If you trace this back to a source, you land on this document, which repeats the claim and provides a little, but not much, more detail. A diagram suggests that, in a year, the effect size for the control group was about 0.45, about 0.52 for an ‘alternate’ group and nearly 0.60 for the QTR group.
I’m not entirely sure how you get from that to 25%, whether the difference is statistically significant, whether there were baseline differences, what the study’s methodology was and so on – all the kinds of issues to consider when evaluating a trial. However, when you look for a reference, you get, “Gore, J., Miller, A., Fray, L., Harris, J., & Prieto, E. (under review). Improving student achievement through professional development: Results from a randomised controlled trial of Quality Teaching Rounds.”
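For what it’s worth, here is some purely speculative arithmetic on how a figure like 25% could be manufactured from those effect sizes. The 0.45 and 0.60 values are my readings of a diagram, and nothing in the document states which calculation, if either, was actually used:

```python
# Speculative arithmetic only: two candidate ways a "25%" headline might
# be derived from the diagram's approximate effect sizes. These inputs are
# read off a chart, not taken from any published methodology.

control_es = 0.45  # approximate yearly effect size, control group
qtr_es = 0.60      # approximate yearly effect size, QTR group

# Candidate 1: the extra gain expressed as a share of the QTR group's growth.
share_of_qtr = (qtr_es - control_es) / qtr_es          # 0.15 / 0.60 = 0.25

# Candidate 2: the extra gain relative to the control group's growth.
relative_to_control = (qtr_es - control_es) / control_es  # 0.15 / 0.45 ≈ 0.33

print(f"{share_of_qtr:.0%}, {relative_to_control:.0%}")  # prints "25%, 33%"
```

The first candidate happens to land on exactly 25%, but without the paper there is no way to tell whether this, the second, or some other calculation entirely lies behind the claim.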
I cannot find this paper by Googling it, which is not surprising if it is still under review. Nevertheless, it strikes me as quite wrong to be making claims of this kind and launching flashy websites when this claim has not yet been peer-reviewed. This wouldn’t be as much of an issue if we could examine a pre-print version of the paper ourselves, but this information appears to be unavailable. I can only hope this is an oversight and that the paper is due to be released very soon.
If you do try Googling the trial name, the main search result is a protocol for conducting a randomised controlled trial of QTR that includes three of the same authors, i.e. a plan for a study to be conducted in the future. This may be the same trial that we now have results for and that is under review. If so, the protocol includes three main outcome measures, in Mathematics, Reading and Science, as well as a student questionnaire. What happened to those outcomes?
All I can do is urge caution to any school that is impressed by the 25% claim and is thinking of jumping in at this point. My advice is to wait at least until the full paper is published.