Differentiation fails another test
Posted: February 12, 2018
Differentiation can mean a lot of things in teaching. If you define it widely enough, many of my own teaching practices could be described this way. However, the common meaning of the term refers to taking a mixed class of students and dividing it up into different groups that have different goals and complete different tasks. I am deeply skeptical about this practice for reasons I’ve outlined before.
A new study set out to measure the impact of giving serving teachers training in within-class differentiation of this kind. As the authors, Prast et al., explain:
“With the current movement towards inclusion of children with special educational needs in general education classrooms, the range of ability and achievement levels is continuously increasing, as are the specific educational needs associated with these. Differentiation, i.e. the adaptation of instruction to students’ different educational needs, is often promoted as a solution for responding to this diversity.”
Prast et al. chose to focus on mathematics. This makes a lot of sense because it enabled the training materials to be placed in context and the effect of the intervention to be measured. They also used an interesting design. The study lasted for two years and schools were randomly assigned to one of three groups: Cohort 1 received training in year one of the study, Cohort 2 received it in year two and Cohort 3 received no training at all (but were offered it after the study ended).
What did they find?
There was a gain in student achievement in the first year of the study for students in Cohort 1 – whose teachers were the first to be trained – relative to students in the other cohorts. The gain was statistically significant but small (Prast et al. use a measure of effect size that I’m unfamiliar with, but they state it is a small effect). The gain was similar for all groups of students, i.e. it benefited more advanced and less advanced students by similar amounts. However, in the second year of the study, students in Cohort 1 progressed no better than students in the other cohorts. Surprisingly, perhaps, students in Cohort 2, whose teachers received the training in the second year of the study, also did no better in year two than students in the other cohorts.
The authors suggest that teachers in Cohort 1 may have been more motivated about the training than those in Cohort 2 who had to wait a year. Yet, if anything, this suggests an explanation for the positive result for Cohort 1 in the first year. It may have been due to an expectation effect – these teachers expected the intervention to make a difference and probably communicated this enthusiasm to their students. Such effects are common in education research because we cannot easily blind trials – subjects know whether they are in the intervention or not. Teachers were also likely to have spent more time thinking about their maths teaching during that year. This would explain why the effect washed out by the second year.
This is not the first study to find a null or negative result for differentiation. I have commented on other studies before. The evidence supporting differentiation is highly elusive, but this has not stopped it becoming an article of faith among education bureaucrats and researchers. When I write about differentiation, I can expect strong, negative responses. I think that, as Prast et al. suggest, differentiation has become conflated with views about inclusion, and so its advocates interpret criticism as an attack on children with special educational needs. If that’s true, we should find some evidence showing that within-class differentiation helps children with these needs to learn. Where is that evidence?