PISA provides troubling evidence on feedback


If you are reading this post as a form of procrastination while a set of books stares back at you from the corner of your desk, then you might want to pay close attention.

I have analysed quite a lot of Programme for International Student Assessment (PISA) data recently, both from 2012 and 2015. It seems that I am something of a PISA pragmatist. Whilst I accept that the only way to truly determine whether one thing causes another is a randomised controlled trial, I don’t dismiss correlational data from programmes like PISA. I think that such evidence can have a role, especially given the problems with running experimental trials. Provided that we exercise caution about possible biases and unseen factors, correlations can provide a window into long-term trends and how policy performs in the wild. We should look to triangulate between correlations, experiments and underlying theory. When we find a set that all seem to reinforce each other, then I think we have the closest thing to knowledge in education research.

PISA 2015 provides troubling evidence for fans of feedback. PISA surveyed students about the level of feedback they received in science lessons. They were asked how often they received the following forms of feedback:

  • The teacher tells me how I am performing in this course
  • The teacher gives me feedback on my strengths in this <school science> subject
  • The teacher tells me in which areas I can still improve
  • The teacher tells me how I can improve my performance
  • The teacher advises me on how to reach my learning goals

The statisticians at PISA developed an index based upon this and they matched it to PISA science scores. One of the strongest correlations in the 2015 PISA data is a negative association between students’ perceptions of receiving feedback in science lessons and their science achievement. The more feedback they seem to get, the worse their science scores:

[Figure: index of perceived feedback in science lessons plotted against PISA science scores]
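To make the analysis concrete, here is a minimal sketch in Python of how an index like this could be built from the five survey items and correlated with science scores. This is not PISA’s actual procedure (the real index uses item response theory scaling and plausible values), and all of the column names and data values below are hypothetical:

```python
import pandas as pd

# Hypothetical data: each row is a student, and the five feedback items
# are coded 1 ("never or almost never") to 4 ("every lesson or almost
# every lesson"). Names and values are illustrative only.
df = pd.DataFrame({
    "fb_performance":   [1, 4, 2, 3, 4, 1],
    "fb_strengths":     [2, 4, 1, 3, 4, 1],
    "fb_improve_areas": [1, 3, 2, 4, 4, 2],
    "fb_improve_how":   [1, 4, 1, 3, 3, 1],
    "fb_goals":         [2, 4, 2, 3, 4, 1],
    "science_score":    [560, 430, 540, 470, 410, 575],
})

item_cols = [c for c in df.columns if c.startswith("fb_")]

# Crude index: the mean of the five items, standardised to mean 0, sd 1.
raw = df[item_cols].mean(axis=1)
df["feedback_index"] = (raw - raw.mean()) / raw.std()

# Pearson correlation between the perceived-feedback index and scores.
print(df["feedback_index"].corr(df["science_score"]))
```

On these made-up numbers the correlation comes out negative, mirroring the pattern in the chart above.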

This is likely to be because students who are struggling in science attract the most teacher feedback. So it is their difficulty with science that causes both the lower scores and the increased amount of feedback. But why do they receive more? Should more able students also be receiving feedback on how to improve?
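To see why a negative correlation can arise even if feedback itself does no harm, here is a small simulation under assumed numbers: underlying ability drives both the amount of feedback and the scores, and feedback is given a small positive effect in the model, yet the raw correlation still comes out negative. All coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Assumed model: underlying ability in science (higher = more able).
ability = rng.normal(size=n)

# Weaker students attract more feedback...
feedback = -0.7 * ability + rng.normal(scale=0.7, size=n)

# ...and, in this model, feedback itself has a small *positive* effect.
score = 500 + 80 * ability + 5 * feedback + rng.normal(scale=30, size=n)

# The raw feedback-score correlation is still clearly negative.
print(np.corrcoef(feedback, score)[0, 1])
```

The confound (ability) swamps the modelled benefit of feedback, which is exactly the caution the PISA correlation demands.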

The role of feedback has taken on a high profile in education following John Hattie’s 2009 meta-analysis findings that showed it to have one of the largest effect sizes. Yet it has been known for some time that the potential effects of feedback are mixed. For instance, imagine we inform a student that she is not doing particularly well in science: she may decide to work harder to make up for this deficit, or she may decide that she doesn’t like science anyway and it’s stupid. Feedback provides information on levels of performance that cannot easily be untangled from its emotional impact. By constantly reflecting back to students that they are not doing well, we might be making them do even worse.

The other point that is often overlooked – a point stressed by Hattie – is that feedback to the teacher is one of the most powerful forms. Imagine you are teaching science and you ask students to answer a question on a piece of paper which they submit to you at the end of the lesson. The traditional response is to take up these answers and write a lot of comments on them in order to provide feedback to your students. Most teachers will have been in the position of writing the same thing, over and over again. This is inefficient. The key feedback is the feedback that you receive on the effectiveness of your teaching by simply reading the answers. You can then tailor the next lesson to address any misconceptions. You don’t have to target individuals in this process – although you might ask students to share particularly good answers – and so the unpredictable emotional content of negative feedback might be avoided.

Feedback is complex. Marks and comments seem to have different effects. Many of the possible responses to feedback are negative. I am starting to think that ‘feedback’ covers too many things to be a useful term.

The PISA correlation should prompt us to pause and reflect.


5 thoughts on “PISA provides troubling evidence on feedback”

  1. The feedback to the teacher also has a potential cumulative long-term effect. A good teacher will notice the things which are not getting through and the kinds of things which have to be done to correct this, and will plan future lessons for the same students, and next year’s lessons on the same topic, better, so that there is better first-time learning. That is, the feedback can challenge the teacher’s perspective on the best way to teach things, because it tells the teacher what has and has not been effective. The teacher will also notice over time those topics which seem to take more time than expected before pupils “get” them, and will start planning accordingly.

  2. Iain Murphy says:

    Hi Greg

    I think the problem with feedback is the example you have given: “inform a student they are not doing particularly well” sounds like a great way to turn them off. A lot of Hattie’s discussion of feedback involves the idea that a grade is feedback, and usually bad feedback, compared with explaining what was good and what needs improving. Add to that Carol Dweck’s research on feedback about talent vs effort and it becomes clear that feedback is complex.

    With that in mind, how can a scale of 1-5 undertaken by students give a good measure? And when the questions are then manipulated, it is even more worrying.

    Why no post about the connection between scores and student gender?
    The issue of absence from school and/or class?
    The problem of homework, which seems to be worse than enquiry-based learning initiatives?
