Mere bloggers of the world unite


I often write blog posts in which I reflect on research evidence. I think it is important for teachers to engage with research, and I will explain why.

Old Andrew, a UK blogger, recently wrote a post about the UK’s Education Endowment Foundation (EEF) and the evidence it presents on setting and streaming (between-class ability grouping). It was an interesting piece, critical of the EEF’s approach.

The EEF presents itself as an authority to which teachers and school leaders can go to find the best available evidence on what works in education. The EEF data and model have been adopted by Evidence for Learning (E4L) in Australia, and this comes at a time when we are just about to hear the results of the ‘Gonski 2.0’ review, which will presumably offer proposals to make education more evidence-informed.

So this represents a moment of potential shift, where we might start to embrace the use of evidence. However, if that places power in the hands of a few curators of evidence such as the EEF, E4L and John Hattie, then there is also a potential risk.

All of these approaches use meta-analysis, a process that generates overall effect sizes for particular interventions and then ranks those interventions by effect size. Plenty can go wrong here. Effect sizes don’t just vary by type of intervention; they also vary with the age of the students, the range of student abilities, whether students take standardised or experimenter-designed tests, and the design of the study (see e.g. the discussion here).
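
To see why the range of abilities and the choice of test matter, here is a minimal sketch with made-up numbers (not data from any of these studies). Cohen’s d divides the raw difference between groups by a standard deviation, so the very same raw gain looks far larger in a sample with a narrow spread of scores, such as a streamed class or a short experimenter-designed test.

```python
# Minimal sketch with invented numbers: the same raw gain yields very
# different standardised effect sizes depending on the spread of scores.

def cohens_d(treatment_mean, control_mean, pooled_sd):
    """Standardised mean difference: raw difference divided by the pooled SD."""
    return (treatment_mean - control_mean) / pooled_sd

# A 5-point raw gain in a broad-ability sample (large spread of scores).
print(round(cohens_d(55.0, 50.0, pooled_sd=15.0), 2))  # 0.33

# The identical 5-point gain in a narrow-ability sample (small spread).
print(round(cohens_d(55.0, 50.0, pooled_sd=6.0), 2))   # 0.83
```

Rankings built by averaging such numbers across very different samples and tests are therefore on shaky ground.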

In addition, curators have a choice in how to group studies and this may reflect implicit biases. Education is a contested field and ideology runs deep. For instance, I have questioned whether the EEF’s category of ‘metacognition and self-regulation’ is justified.

Which is why we need teacher-bloggers more than ever. We have a different take on what works and, like Old Andrew, we are well placed to ask difficult and necessary questions.

If you’re not convinced, then the reaction to Old Andrew should give you pause for thought. His blog post was tweeted by Nick Gibb, a UK schools minister, causing an outpouring of rage on Twitter. One education academic asked why Gibb was listening to a ‘mere blogger’.

Now imagine a lawyer or a doctor or an engineer writing a blog post about research in their area of professional practice and imagine an academic dismissing them as a ‘mere blogger’. It seems unlikely, and this tells us something about the relative status of teaching.

Perhaps it is a shock because it is so unusual. But we need more teacher engagement of this kind if we are going to benefit from the push for evidence.


9 thoughts on “Mere bloggers of the world unite”

  1. Such attitudes seem increasingly common in the British education world at the moment. The cult of management, evidence and ‘research’ has gained such traction that anyone who lays claim to any other kind of knowledge is routinely trashed. After all, what can those who in many cases have spent years of their life in the classroom (often far more than managers have) know about what they are doing?

    1. I’d go so far as to argue that all research in education is problematic because of the difficulty of anticipating all the possible variables that might affect the outcomes. The more complex the objective of the study, the more room there is for confusion; this was something that Andrew Old picked up on, yet it went right over the heads of the EEF. Of course, this isn’t to say that teachers are infallible guides to what works; teachers almost always implement new ideas in the light of their previous training and experience. They’d be less than human if they didn’t. However, researchers often have some pretty strong prejudices, too. This is why blogs play such an important role: they provide a forum in which the results of research can be discussed in the light of the experience of those who must inevitably implement change. As we wrote in our last policy paper,

      “For the last 30 years, England’s schools have been adventure playgrounds for ambitious education professionals. Many of their initiatives have worked well in tightly-controlled pilots, but when they are implemented on a larger scale they disappear without a trace. It is never very long until another exciting innovation comes along to displace the last one. Many of these ideas are too complex and too time-consuming to succeed, but the biggest problem is that teachers—that is, the ones who actually spend a good part of their day in front of a room full of squirming pupils—are not key players in the process of change. They are seldom consulted prior to the introduction of a new initiative or policy.”

  2. It is amazing that Kevan Collins, Chief Executive of the Education Endowment Foundation (EEF), said,
    ‘if you’re not using evidence to inform your decisions, you must be using prejudice.’ This is similar to Hattie’s simple-minded ‘statements without evidence are just opinions’. It would be great if the prejudice and conflict of interest were turned back on them.

    I’ve had a look at Hattie’s evidence on ability grouping here – https://visablelearning.blogspot.com.au/p/ability-grouping.html

    I’ve found the same problems – mistakes, misrepresentation, selective exclusion of studies that don’t suit, etc…

    E.g., in the Accelerated influence, Hattie reports an effect size of -0.02, but the study found +0.02. In fact, the study calculated two effect sizes. The 0.02 was found using the advanced year level as the control group, but the authors also found an effect size of 0.87 when the accelerated students were compared with their original year level. Hattie does not report the 0.87! (A rough numerical sketch of this point appears after this thread.)

    1. To be fair, a within-subjects standardised effect size is clearly not comparable to a between-subjects standardised effect size (the SDs refer to quite different types of variation), so the 0.87 there is fairly uninterpretable. The conclusion you should draw here is that any result based on comparing standardised effect sizes is pretty dubious unless the studies have used identical outcome measures, in which case there’s no point in standardising.

      1. Luke, I totally agree with you that you can’t compare effect sizes across different studies. Greg provides a link to Adrian Simpson’s article, which shows you can calculate effect sizes from the SAME data that range from 0 to infinity depending on how they are calculated.

        But the example I gave approached the problem of comparing effect sizes from another angle: the bias of the researcher.

        The authors found two effect sizes, 0.02 and 0.87. Hattie reports only the first, and incorrectly, as -0.02.

        I think that both the 0.02 and the 0.87 are interpretable within the context of this study alone; the authors, Kulik and Kulik, conclude:

        ‘talented youngsters who were accelerated into higher grades performed as well as the talented older students already in those grades. Second, in the subjects in which they were accelerated, talented accelerates showed almost a year’s advancement over talented same-age nonaccelerates.’

        But I’ve probably missed the point of Greg’s blog – that mere bloggers of the world should unite.
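
To make the exchange above concrete, here is a minimal sketch with invented numbers (not Kulik and Kulik’s data). The same accelerated group can yield a near-zero or a large standardised effect size depending purely on which comparison group is chosen.

```python
# Invented numbers for illustration only (not Kulik and Kulik's data):
# the same treatment group produces two very different effect sizes
# depending on which comparison group it is measured against.

def cohens_d(treatment_mean, control_mean, pooled_sd):
    return (treatment_mean - control_mean) / pooled_sd

accelerated = 72.0   # accelerated students, tested in the higher grade
older_peers = 71.8   # older students already in that grade
same_age    = 63.3   # same-age students who were not accelerated
pooled_sd   = 10.0

# Compared with the older students they now sit beside: essentially no difference.
print(round(cohens_d(accelerated, older_peers, pooled_sd), 2))  # 0.02

# Compared with same-age non-accelerated peers: a large apparent effect.
print(round(cohens_d(accelerated, same_age, pooled_sd), 2))     # 0.87
```

Neither number is ‘the’ effect of acceleration; which one a curator chooses to report is exactly the kind of judgement call being pointed to here.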

    2. I have a top-of-curve child that the (English) state system inevitably struggles to accommodate, and you can’t avoid taking an interest. It was years ago, but my memory says an early version of the EEF page once had a little box/section tentatively saying some gifted programs might add one year.

      The current EEF page is now rather vague about gifted programs but does have some long twaddle about the negative effects on others if you do anything for these kids. That seems to be the default angle of attack now; e.g., a few years ago some educational speakers were telling SLTs that labelling and doing anything for these kids would give them paralysing fixed mindsets.

      Having watched events and views around ability grouping for a while, I definitely do mean ‘attack’. It feels like the anti-intellectuals of the world have united, because so much has looked rotten in both this and the broader ability-grouping area. So thanks for taking the time on an honest attempt to add a bit more to the story.

    3. The bit that always amuses me about Hattie is that he goes on about ‘passionate teachers’ and the effect they have – and I am left wondering how passionate is passionate *enough* for his effect sizes to work….

  3. Greg, I’d love to see more teacher bloggers too. I’d likely disagree with most, but having something to productively disagree with is far better than having nothing at all.

    The problem, I would think, is time. My observation of teachers, and my experience in tertiary teaching (TAFE), is that teaching consumes inordinate amounts of time in preparation, marking and meeting students. Many would have no time to blog, irrespective of desire, regrettably.
