We saw yesterday that PISA defines good teaching as involving a “student-orientated classroom climate”. In 2012, PISA asked students completing its tests four questions to gauge how student-orientated their lessons actually were.
How often do these things happen in your mathematics lessons?
• The teacher gives different work to classmates who have difficulties learning and/or to those who can advance faster.
• The teacher assigns projects that require at least one week to complete.
• The teacher has us work in small groups to come up with joint solutions to a problem or task.
• The teacher asks us to help plan classroom activities or topics.
I am not entirely convinced by this construct, or by the process of comparing different education systems in this way. However, PISA do this themselves and draw quite bold conclusions from the process. So, following their lead, I decided to compare the level of student orientation with PISA maths scores from the same year.
The measure of student orientation that I have used is the one PISA used in their recent report to calculate a ratio of student-orientated to teacher-directed instruction. It is simply the average score when students answer the four student-orientation questions above. I’m not quite sure how the scoring works, but at least this is better than the memorisation construct, because the student-orientation construct includes a scaled response to each question: “never”, “occasionally”, “frequently”, and so on.
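To make the idea concrete, here is a minimal sketch of how such an index could be built. The response labels and the 1–4 coding below are assumptions for illustration; PISA’s actual scaling of these items is more sophisticated than a simple mean of coded responses.

```python
# Hypothetical coding of the response scale for the four items.
# The labels and the 1-4 values are assumptions, not PISA's actual scaling.
RESPONSE_SCALE = {
    "never or hardly ever": 1,
    "in some lessons": 2,
    "in most lessons": 3,
    "in every lesson": 4,
}

def orientation_index(responses):
    """Average the coded responses to the four student-orientation items."""
    return sum(RESPONSE_SCALE[r] for r in responses) / len(responses)

# One hypothetical student's answers to the four questions above:
student = [
    "in some lessons",       # differentiated work
    "never or hardly ever",  # week-long projects
    "in most lessons",       # small-group work
    "never or hardly ever",  # helping plan activities
]
print(orientation_index(student))  # → 1.75
```

Averaging each student’s index across a country then gives the country-level figure used in the comparisons below.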
We have already seen the negative correlation between this measure of student-orientation and mean PISA 2012 maths scores:
PISA also report the percentage of low achievers and the percentage of high achievers in each country, so I thought I would examine how these compare with student-orientation. Given the graph above, you might expect that increased student-orientation correlates with increased numbers of low achievers:
However, what about high achievers? You might expect them to benefit from more open-ended work. Apparently not:
This last correlation is the weakest of the three, and there is some variation at the low end. If we don’t simply dismiss this kind of analysis as nonsense, there are many reasons why this might happen. I would suggest that the degree of alignment between each country’s curriculum and the types of questions used in PISA would account for at least some of the variation.
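The country-level comparison behind these graphs can be sketched as a simple Pearson correlation between each system’s mean orientation index and its mean maths score. The figures below are invented purely for illustration and are not real PISA data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented country-level values for illustration only (not PISA figures):
orientation = [1.6, 1.9, 2.1, 2.4, 2.7]   # hypothetical mean indices
maths_score = [560, 520, 500, 470, 440]   # hypothetical mean maths scores

print(round(pearson_r(orientation, maths_score), 3))  # → -0.998
```

A strongly negative coefficient like this would correspond to the downward-sloping pattern in the first graph; the weaker correlation noted above would show up as a value closer to zero.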
The key message is that PISA’s own data suggest that a “student-orientated classroom climate” – at least in the way that they measure it – does not appear to be a feature of good maths teaching. So they are wrong about that. I wonder whether we will hear this from them any time soon.