If at first you don’t succeed, punt another £850k on it


All I knew about the next session of the interview was that it was called “a discussion with the headteacher on a topic of his choosing”. It was my first or second interview for the job of deputy headteacher (vice principal) in London, so I was not sure whether this was normal.

The headteacher was sat at a desk and I was invited to sit opposite him. A number of school governors observed. The headteacher asked me what I would do if a head of subject requested to use ‘ability grouping’, i.e. to place students into separate maths, science, history etc. classes based upon their prior level of attainment. I gave a pragmatic answer. At that time, I was not well versed in education research, but I did know that the research on ability grouping was inconclusive: whether you group by ability or not, it doesn’t make much difference. I suggested that I would probe and question to understand the head of subject’s position and ensure that it had been thought through, but I would ultimately be guided by their preference as the subject expert.

This was the wrong answer. The headteacher replied with something of an ideological rant on the evils of ability grouping. I held my pragmatic line.

At the end of the session, I sought out the chair of governors, withdrew from the interview and went back to school to teach my classes.

The evidence available then is much as it is now. Ability grouping versus mixed ability teaching really doesn’t seem to make much difference. If anything, there may be small gains for the most able students and small losses for the least able. If so, never has such an inconsequential position been held with such furious passion by so many.

However, this evidence is not exactly gold standard and is the subject of debate. Most of the evidence has been correlational or quasi-experimental, with very little coming from randomised controlled trials. That makes it hard to isolate the effect of grouping itself from other factors. For instance, in the high-profile Kulik and Kulik (1982) meta-analysis, only 13 out of the 51 studies used random assignment. In the wild, it is also possible that the effect of ability grouping could be complicated by practices that might lead, for example, to lower ability groups being assigned a disproportionate number of new and inexperienced teachers. That is why I welcomed the initiation, in 2014, of a large-scale project to evaluate the effect of ability grouping using randomised controlled trials. ‘Best practice in grouping students’ was headed by Professor Becky Francis of the UCL Institute of Education, funded by the UK’s Education Endowment Foundation, and aimed to test the effectiveness of the best possible versions of ability grouping and mixed ability teaching.
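That confounding problem is easy to illustrate. Below is a minimal, hypothetical simulation (my own made-up numbers, not data from any study): grouping itself is given zero effect on progress, but lower sets are more likely to be assigned inexperienced teachers, and a naive observational comparison still shows a gap.

```python
# Hypothetical illustration of confounding in non-randomised comparisons.
# Grouping itself is assigned zero effect; only teacher experience matters.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

prior_attainment = rng.normal(0, 1, n)
lower_set = prior_attainment < 0                               # set by prior attainment
inexperienced = rng.random(n) < np.where(lower_set, 0.6, 0.2)  # confounder: staffing

# Progress is unrelated to set membership, but an inexperienced teacher
# costs 0.2 SD of progress (an invented figure, purely for illustration).
progress = rng.normal(0, 1, n) - 0.2 * inexperienced

gap = progress[lower_set].mean() - progress[~lower_set].mean()
print(f"Apparent 'effect' of being in a lower set: {gap:.2f} SD")
# Expect roughly -0.08 SD, even though grouping was given no effect at all.
```

Random assignment breaks the link between set membership and staffing, which is exactly what a trial design is supposed to buy us.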

My enthusiasm dimmed when I realised that mixed ability teaching and ability grouping were not going to be compared directly with each other. Instead, the ‘best’ versions of each were to be compared with a control.

My enthusiasm was finally extinguished when, prior to the release of the effectiveness data, the research team published one of those ubiquitous French-philosopher-inspired papers labelling ability grouping as ‘symbolic violence’. How could any results now be regarded as impartial?

When the results finally did arrive, they consisted of two nulls: there was no effect of the ‘best’ version of ability grouping compared to the control and no effect of the ‘best’ version of mixed ability teaching compared to the control. So we are back to square one, provided you accept the data. The total cost of the project was £1,184,349.
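A caveat on reading much into two nulls, whichever way your sympathies lie: if the true effects of grouping really are as small as the older literature hints (say, somewhere in the region of 0.05 to 0.1 standard deviations either way, figures I am using purely for illustration), then detecting them needs a very large sample. Here is a rough, hypothetical power calculation for a simple pupil-level, two-arm comparison; it is not the project’s own calculation, and it ignores the school-level clustering that would inflate the numbers further.

```python
# Rough sample sizes needed to detect small standardised effects at 80% power
# in a simple two-arm, pupil-level comparison (two-sided test, alpha = 0.05).
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for d in (0.05, 0.10, 0.20):
    n_per_arm = solver.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"Effect size d = {d:.2f}: about {n_per_arm:,.0f} pupils per arm")
```

For effects around 0.1 standard deviations, that works out at well over a thousand pupils per arm before clustering is accounted for, which is one reason a null result is not the same as evidence of no difference.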

In a truly extraordinary move, the Education Endowment Foundation have committed £850,000 to having another go. Becky Francis is still involved, although she is not the project lead, and this time it looks like they will eschew randomised controlled trials in favour of a quasi-experimental study, an odd choice for an organisation committed to randomised controlled trials.

This is a reversal of the normal situation. Usually, a quasi-experimental design of this kind might be conducted as a pilot study prior to going to a randomised controlled trial. It would be small-scale and presumably cost far less than £850,000. To progress from a randomised controlled trial to a quasi-experimental study seems eccentric. Mind you, the weaker trial design is perhaps more likely to lead to a result one way or the other.

I am wondering whether, this time, the result will favour mixed ability teaching. What are the chances?


4 thoughts on “If at first you don’t succeed, punt another £850k on it”

  1. Of all the many articles of faith (and that is the right word) among the lecturers/tutors when I was doing my Dip.Ed., mixed-ability grouping was the most fervently held. To say that the lecturers were intolerant of any ideas to the contrary would be a considerable understatement. And in many ways, it’s the perfect position for a certain type of Ed academic to hold, given that it’s (a) contrary to all common sense, (b) superficially egalitarian.

    I personally believe that the increasingly rigid stratification of schools in NSW is partly due, perversely, to the widespread refusal to countenance the idea of judicious (i.e. not universal) stratification within schools.

  2. Two problems which confound research on ability grouping are differentiation and discovery learning. In primary schools, ability grouping leads to more differentiation, whereas in secondary it reduces it. This factor might not make much difference when teachers don’t use much direct instruction. After all, if teacher-pupil interaction isn’t a major element of teaching, then ability grouping doesn’t have the same impact.

    I carried ability grouping to its logical extreme when teaching remedial literacy skills in a secondary school. Our feeder schools all had doctrinaire whole language policies, and despite average social indicators and slightly above-average scores on a non-verbal reasoning test, over a third of our pupils were two or more years behind on Young’s Parallel Spelling Test or the county-mandated NFER-Nelson group reading test. At GCSE we languished very near the bottom of Norfolk’s league tables.

    I had the full support of our Senco and most HoDs, so I was able not only to differentiate on the basis of test scores but also, in order to get the closest possible match for ability, to mix pupils from Years 7, 8 & 9 in the same groups. One teacher was horrified; she told me, “You can’t do that!” But otherwise I had no difficulty whatever with pupils, parents or teachers. Since the major elements of my teaching were the scripted SRA Spelling Mastery and Corrective Spelling programmes, getting the closest possible match for spelling ability was indeed essential.

    The results were outstanding, with all pupils making vastly greater gains in spelling ability than they had previously; some pupils made as much as five years’ gain in a single academic year. The pupils who were behind in reading also made excellent gains in reading with little if any specific reading instruction. The results were published in the 1998 Dyslexia Review, and the TAs I trained to use SRA continued the programme long after I left the school.

  3. Greg, it would be good if you could explain the issue with “Instead, the ‘best’ versions of each were to be compared with a control”.
