An interview with Dylan Wiliam

Dylan Wiliam is a world authority on formative assessment and Emeritus Professor of Educational Assessment at the UCL Institute of Education in London. His popular book on formative assessment, Embedded Formative Assessment, was recently released as a revised edition, and his latest book, Creating the Schools our Children Need, critically examines the ways we could seek to improve education at a system level. Following the recent trial of a professional development approach to formative assessment conducted by the Education Endowment Foundation in the UK, I thought it would be good to catch up with Wiliam and seek his thoughts.

1. The Education Endowment Foundation in the UK (EEF) recently published the findings of its trial of the Embedding Formative Assessment professional development programme. How would you summarise these findings?

I think the first thing to say about the EEF trial of the Embedding Formative Assessment (EFA) professional development programme is that it was what is called in medical research an “intention to treat” study. In other words, the study did not just look at the schools that implemented the programme faithfully. Rather, it recruited 140 schools, divided them into two groups, gave half the schools DVDs with the training materials (the experimental group), and gave the other half just the cash equivalent (the control group). The other difference is that representatives of the experimental schools got one day’s in-service training, and minimal support over the two years of the project. We know that many of the schools given the materials did not implement them as intended, and it appears that teachers found the ideas more applicable to the teaching of younger children (11 to 14 year olds) than to the students whose achievement was assessed in the project (14 to 16 year olds). The evaluation therefore measured the effect of just giving the materials to schools, and so gives us a good idea of what would happen if the programme was implemented at scale.

After the project started, the researchers realised that a number of the schools recruited (12 of the experimental schools, 4 of the control group schools) had already been involved in similar work through the Teacher Effectiveness Enhancement Programme (TEEP), which drew on many of the ideas in the EFA programme (originally developed in 2007). Since these schools had already been exposed to the ideas of the programme, the evaluators decided to analyse the impact of the pack on just the schools that had not been involved in TEEP.

Two years later, the performance of students in their school leaving examinations (GCSE) in the experimental group and control group schools was compared, and those in the experimental group scored 0.13 standard deviations higher in their average grade across eight school subjects (a statistically significant difference). One year’s learning for students of this age is around 0.3 standard deviations, the students take their GCSE exams halfway through the summer term (so the second year amounts to about five-sixths of a year of teaching), and we have to factor in the fact that students forget stuff—say 10%—from one year to the next. This means that over the course of the research, the students could be expected to increase achievement by 0.52 standard deviations (0.3 × 0.90 + 0.3 × 5/6). The students in the experimental group improved by 0.13 sd more than this, equivalent to a 25% increase in the rate of learning. Given that the cost of the programme is around $2 per student per year, it is a highly cost-effective intervention.
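Wiliam’s arithmetic here can be checked directly. A minimal sketch, using only the figures quoted above (0.3 sd of learning per year, 10% forgetting between years, and GCSEs sat five-sixths of the way through the second year):

```python
# Expected growth over the two-year study, per the figures in the interview.
sd_per_year = 0.3        # one year's learning for students of this age, in sd
retention = 0.90         # students forget about 10% from one year to the next
fraction_of_year_2 = 5 / 6  # exams fall halfway through the summer term

# Year 1 learning (discounted for forgetting) plus the partial second year.
expected_gain = sd_per_year * retention + sd_per_year * fraction_of_year_2
extra_gain = 0.13        # advantage of the experimental group, in sd

print(round(expected_gain, 2))               # 0.52
print(round(extra_gain / expected_gain, 2))  # 0.25, i.e. a ~25% faster rate
```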

2. I qualified as a teacher in the UK in 1998. I first learnt about the principles of formative assessment by reading the publication you authored with Paul Black, Inside the Black Box, as did many of my generation of teachers. Later, I drew links with ideas like the ‘curse of knowledge’. What changes, if any, have occurred since 1998 in terms of what we know about formative assessment?

I don’t think Paul and I have changed our fundamental ideas about formative assessment very much since we did the research on “Inside the Black Box” 20 years ago. The basic ideas are simple. First, teachers need evidence about what their students are thinking in order to make good decisions, and the quality of that evidence is often poor. Second, students and their peers have insights into their own learning that are often not used in classrooms. And third, the way we use assessment affects, both positively and negatively, students’ attitudes and motivation. What has changed is that we now know that when teachers develop their practice of formative assessment, their students learn more, even when learning is measured in terms of scores on externally mandated tests and exams. This was suggested by the research we reviewed, but we did not know that this was true in real, messy educational settings and implementable at scale. We also know how teachers can incorporate these ideas into their practice at minimal cost, through the use of self-help school-based “teacher learning communities”. We have also clarified our ideas somewhat—so now we talk about the terms “formative” and “summative” as descriptions of the inferences that are made on the basis of assessment results, rather than as descriptions of the assessments themselves.

Looking back, it seems to me that the biggest mistake we made was to start with the idea of formative assessment as being mainly concerned with feedback, for example by highlighting the negative impact that scores and grades can have on learning. Giving students individual feedback is extremely expensive—after all, it’s effectively one-to-one tuition done in a way that means that students often ignore what is being said. I now think it might have been more productive to start with formative assessment as being responsive teaching. In other words, because students do not learn what we teach, we had better find out what they did learn before we teach them anything else, and we cannot rely on the responses given by confident articulate students as being representative of the thinking of other students in the class.

3. It sounds like your ideas on feedback have evolved. Feedback is a big deal in schools, perhaps partly as a result of the assessment for learning research and partly as a result of the work of John Hattie. Would you therefore like to expand a little on how you now see the role of feedback?

I don’t think my views about feedback have changed that much—rather what changed was the realization that, in many countries, this is not a particularly smart place to start the conversation, since teachers feel—often wrongly in my view—that they have little room for manoeuvre. I also think that a lot of what schools are doing in focusing on feedback is ill-conceived. Kluger and DeNisi, in their 1996 review of research, found that in approximately 38% of well-designed studies, feedback actually lowered performance. Without some understanding of when feedback improves achievement and when it does not, blanket prescriptions about “doing more feedback” are at best risky, and potentially very harmful, for example if teachers start giving more of the feedback that lowers achievement.

However, there is a much more important point about feedback that those who have sought to quantify the effects of feedback, like Hattie, have missed. In the conclusion of their review, Kluger and DeNisi pointed out that feedback interventions that showed large positive effects on learning should not be implemented if they resulted in the learner becoming more dependent on the feedback. They argued that we should stop trying to figure out how much feedback improves learning and instead look at what feedback does to students. After all, the only good feedback is that which is used. This is why I think we need to look much more at what psychologists call “recipience processes” in feedback—getting students to understand why we are giving them feedback and how they can use it. David Yeager and his colleagues have shown that just telling students they are being given feedback because the teacher has high standards and believes the student can reach them makes students more willing to use the feedback and re-submit work.

4. There is an ongoing debate on this blog and on social media more generally about different teaching methods and curricula; skills versus knowledge and explicit instruction versus inquiry learning. One strength of formative assessment may be that it is pedagogically neutral – whatever and however you want to teach, formative assessment will help you achieve your aims. Perhaps it is the one strategy we can all agree on. What are your thoughts?

The really important thing for me is that formative assessment is neutral with respect to curriculum (what we want students to learn) and pedagogy (how we get students to learn). The big idea—what psychologist David Ausubel called the most important idea in educational psychology—is that any teaching should start from what the learner already knows, and that teachers should ascertain this, and teach accordingly. The problem is that even with a new and unfamiliar topic, after 20 minutes teaching, students will have different understandings of the material, which the teacher needs to know about. What you call the curse of knowledge is part of that—we assume something is easier if we know it—but even if we avoid that trap, we still have no idea what is happening in the heads of our students unless we get some evidence, and if we only get evidence from confident articulate students, then we cannot possibly be making decisions that meet the learning needs of a diverse group of learners. Now of course, the fact that students can do something now does not mean that they will know it in six weeks’ time—we have known for almost 100 years that learning is different from performance—but if they do not know it now, then it is highly unlikely that they will know it in six weeks’ time.

Perhaps more surprisingly, formative assessment does not even entail any view of psychology (what happens when learning takes place). If you’re a behaviorist, you need to know if a student has sufficient reinforcement to make strong links between stimulus and response. If you’re a constructivist, you need to know that the learner has formed reasonably adequate ideas about the material at hand, and does not have any important misconceptions. If you emphasize the situated nature of cognition, you need to know the extent to which a learner’s attunements to constraints and affordances in a particular learning environment are likely to allow them to apply their learning in different contexts. The reasons are different, depending on your view of what happens when learning takes place, but you still need to know what is going on in students’ heads to teach effectively.

5. Finally, if a school leadership team decided to prioritise formative assessment, where should they start?

The recent results of the Educational Endowment Foundation evaluation of the Embedding Formative Assessment professional development pack make that a very easy question to answer. Buy the pack, and use as directed. Organize teachers into groups of 8 to 14, led by a practising teacher (not someone with a formal leadership responsibility), and ask each member of the group to choose one formative assessment technique to try out in their classroom, possibly after some modification. The groups should then meet monthly, for at least 75 minutes, to hold each other accountable, and to give each other support. Allow each teacher to spend as long as they want to work on the same technique until it is “second nature” before suggesting that they move on to something else.

This could be supplemented by some school-wide growth mindset interventions—the effects aren’t huge, but they take up little time. Apart from that, the job of leaders is to ensure that teachers in their schools are getting better at the things that have the biggest benefit for students. Given what we know about the impact of classroom formative assessment, any school leader who encourages teachers to work on unproven ideas like educational neuroscience, lesson study, grit, or differentiated instruction is, in effect, lowering student achievement. We need to stop looking for the next big thing, and instead do the last big thing properly.

A big thank you to Dylan Wiliam for giving up his time for this interview.

12 thoughts on “An interview with Dylan Wiliam”

  2. Tom Burkard says:

    ‘Inside the Black Box’ was published when schools were struggling to cope with the increasingly complex National Curriculum and the newly-launched National Literacy Strategy. In the school where I taught, many teachers made no attempt whatever to ascertain what their pupils knew or whether they had learned anything from their lessons; teachers wrote lessons on the board, and pupils copied them into their books. Some SEN pupils had TAs to copy the lesson for them; that was it–job done. The pupil had become an optional part of the learning process. When I took on new pupils in September, after introducing myself, I would say “Today is Tuesday”–then I’d pause for a few seconds before asking a pupil “What day is it?” Invariably, they’d be so shocked that they couldn’t answer. The closed question was a foreign country, and teacher talk drifted harmlessly over their heads.

    In this sense, AfL was a welcome reminder that teaching and learning are not one and the same. But even now, the message that “teaching should start from what the learner already knows, and that teachers should ascertain this, and teach accordingly” simply has not been taken on board. We recently tested the maths skills of 353 Yr 7 and Yr 8 pupils in a ‘Good’ school serving a disadvantaged population, and only 42.2% could solve 438 x 63. It was far worse with percentages and decimals.

    Our test comprised 12 questions very similar to those from the last two KS2 Arithmetic papers, and it took only 15 minutes to administer and perhaps another half-hour of the teacher’s time to score, yet it unveiled a picture that should have had alarm bells ringing in these children’s primary schools long ago. At least this secondary school has resolved to test this September’s intake in order to know where to start.

  3. We teach floral studies and really enjoy your discussion. We do lots of practical work and reinforcement skill building, but we also need to stimulate a creative environment so that students can become creatively independent. Thank you for this opportunity to reconsider our learning strategy.

  4. Wiliam: “you still need to know what is going on in students’ heads to teach effectively.” Why? We can’t know what is going on in students’ heads; even students themselves can’t. (In a scientific way we can model it, e.g. http://act-r.psy.cmu.edu/ .) Why not simply use assessment primarily as exercise, and stop talking about feedback as something quite different from exercise?
