Group work and the Education Endowment Foundation


What’s the difference between the Education Endowment Foundation (EEF) toolkit and a wind-chime? A wind-chime turns wind into noise whereas the EEF toolkit turns noise into wind.

Harsh? Maybe.

But there are genuine and serious reasons to be concerned about the kind of meta-analysis that the EEF curates. Following my last post, someone pointed me to the EEF toolkit, which includes a review of collaborative learning. At face value, it looks like adding collaborative learning to whatever you are doing already will mean that your students make an extra five months of progress. How extraordinary!

When you dive into the detail, there are a few caveats. You can’t simply implement any old group work in order to obtain the advantages. You need to “get the detail right” and use “structured approaches with well-designed tasks.” So you can throw out any plans you had of implementing unstructured group work with poorly designed tasks. The EEF also note that “There is some evidence that collaboration can be supported with competition between groups, but this is not always necessary, and can lead to learners focusing on the competition rather than the learning it aims to support.”

This point is really interesting. I was surprised when I looked into the group work evidence to find that at least some of it is based upon experiments that manipulate different systems of rewards (e.g. here). In these studies, rewards are handed out either individually or to groups, and the evidence suggests that the students earning group rewards learnt the most (at least from completing assignments that seem to involve little direct teaching). This is probably why Robert Slavin emphasises the need for group goals in order for collaborative learning to be successful. It seems like the EEF are aware of this but don’t really like the finding.

I haven’t examined the papers in the EEF’s technical appendix but I note that some of them appear to be similar to Slavin’s sources. If you are interested in looking into these papers then I suggest examining the methods closely. What are the comparison groups? Is collaborative learning being tested against individually studying from a worksheet or book or is there actual teaching involved? What are the age ranges of the students involved? What are the outcomes?

Such an analysis is not the purpose of this post. According to the EEF, five months of progress equates to an effect size of about d=0.44 which, even if you still believe in effect sizes, is neither here nor there (see my take on effect sizes here). Moreover, the EEF state that “The effects vary… with pooled effects between 0.09 and 0.78 but no clear explanation of what explains the variation.” Which is also neither here nor there.
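For readers who want to unpack that number, this is the textbook definition of Cohen’s d; a sketch of the standard formula rather than anything taken from the EEF’s technical appendix (the exact way the standard deviation is pooled varies from study to study):

$$
d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}, \qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
$$

In other words, d=0.44 says the average student in the collaborative condition scored 0.44 standard deviations above the average control student, on whatever outcome measure each underlying study happened to use. Translating that into ‘months of progress’ is a further step with its own assumptions.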

And what of the absurdity of treating group work in this generic way? Are we meant to assume that we would gain five months of progress by adding group work to drama lessons? It’s hard to think of drama lessons without it. Are the findings equally valid for science and maths and reading and art?

Nevertheless, the influence of the EEF toolkit is expanding. We now have an Australian version based on the same data set. Busy school leaders will reach for it as a source of ‘evidence’.

What’s the harm? Well, imagine an excellent maths teacher who instructs a whole class in an interactive way. Imagine that teacher being told to incorporate more group work because the evidence supports it. That’s the potential harm.


6 thoughts on “Group work and the Education Endowment Foundation”

  1. Brian says:

“Well, imagine an excellent maths teacher who instructs a whole class in an interactive way. Imagine that teacher being told to incorporate more group work because the evidence supports it. That’s the potential harm.”

Any teacher who would change activities designed to be completed individually into activities completed in groups, simply because someone told them that the meta-analysis supported the approach, would not be accurately described as an “excellent maths teacher”.

Even if your analysis of the analyses of the analyses bears any relation to the real world, which I feel it likely does not, an excellent maths teacher is an excellent maths teacher because they exercise professional skill and judgement in the design and delivery of activities to suit the learner(s), content, environment and themselves.

For me this stuff is navel-gazing and of interest to real-life teachers as a thought experiment, but other than that it is best avoided like the plague.

  2. Arthur Pendrill says:

    Take a look at the recent paper by Simpson in the Journal of Education Policy (http://bit.ly/2sfXsUr). It notes that the whole toolkit enterprise is fatally flawed. The EEF mistake a measure of experimental clarity for one of educational importance.

  3. Pingback: Educational Reader’s Digest | Friday 9th June – Friday 16th June – Douglas Wise

  4. Pingback: Educational Jargon | History Lover
