Education has a problem with reality (and this isn’t easy to fix)


So the Productivity Commission in Australia today released its draft report into the evidence base for education. It contains some interesting findings: although spending on education has increased, “National and international assessments of student achievement in Australia show little improvement and in some areas standards of achievement have dropped.” This cannot be rectified simply by collecting more data and creating competition between schools. Instead, we need better evaluation of policies, programs and teaching practices.

The body of the report recommends the use of randomised controlled trials (RCTs) over the more nebulous kind of research that is commonplace in education.

This strikes me as naive. Do the authors really believe that the education sector will read this report and go, “These new-fangled RCTs seem like a good idea. Just look at this case study about teaching assistants in England! Why didn’t we think of this before?”

No, the education sector won’t do that. And it is important to understand why.

The first problem we have is a monumental disdain for outcomes that can be measured. RCTs rely on these outcomes. Do you have a reading intervention? Then we will need to assess its impact with a reading test. But tests are bad! There is more to life than tests! What are you, some kind of monster who thinks a thing doesn’t exist unless it’s on an exam paper?!

Instead, we need to attend to the whole child and develop skills such as critical thinking, collaboration and creativity. The Productivity Commission takes its lead from the OECD here and massively misses the point by suggesting that we need better ways of measuring these kinds of outcomes. The whole appeal of such outcomes is that they are hard to measure and so free people to claim pretty much anything.

When quantitative research is conducted, it often falls short of the standards of a good-quality RCT. If you want to produce positive results for an educational intervention then there are plenty of ways that you can do this. You can:

  • Enthuse teachers and students and make it obvious which students are in the experimental group (i.e. getting the intervention) and which are not. This will then create a placebo effect.
  • Deliver the intervention to the experimental group using self-selected, enthusiastic teachers and then compare the results with standard practice elsewhere. This will produce a similar expectation effect perhaps conflated with a teacher effectiveness effect.
  • Measure things that are addressed in the experimental group but not in standard practice and then report these measures. A good example might be a problem-based learning (PBL) trial in medical education where students in the PBL condition meet lots of patients as part of the approach and those in the standard condition do not. You then report students’ ability to relate to patients as a key outcome.
  • Take this last approach even further and research a tautology. For instance, you can redefine your outcome to effectively mean ‘engage with the intervention’. There is a great example of this in an experiment where ‘curiosity’ is essentially defined as ‘to engage in discovery learning’ and then a discovery learning condition is found to promote curiosity.
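The first two bullet points are easy to demonstrate numerically. Here is a toy simulation (a hypothetical sketch with made-up numbers, not anything from the report): an intervention with zero true effect, plus a modest expectation boost for an unblinded experimental group, produces a respectable-looking effect size that vanishes under proper blinding.

```python
import random
import statistics

random.seed(1)

def run_trial(n=500, true_effect=0.0, expectation_boost=0.3):
    """Simulate a reading test (scores roughly N(100, 15)) for an
    intervention with no real effect. In an unblinded trial, the
    experimental group gets a small expectation/placebo boost,
    expressed in standard-deviation units."""
    control = [random.gauss(100, 15) for _ in range(n)]
    treated = [random.gauss(100 + 15 * (true_effect + expectation_boost), 15)
               for _ in range(n)]
    # Effect size (Cohen's d): difference in means over pooled SD
    pooled_sd = statistics.pstdev(control + treated)
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

# Unblinded: a worthless intervention looks moderately effective
print("unblinded d =", round(run_trial(expectation_boost=0.3), 2))
# Properly blinded: the apparent effect disappears
print("blinded d   =", round(run_trial(expectation_boost=0.0), 2))
```

An effect size around 0.3 would be reported as a meaningful win for the intervention, even though, by construction, the intervention does nothing at all.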

And these are just the problems with people doing quantitative experiments. There are many in education who reject this kind of research entirely: Education is different. Everything is socially constructed. Instead of relying on fallible data, we should follow a theory that someone like Freire once wrote about. Let us therefore talk to three practitioners through this lens, record our thoughts about this and call it ‘research’.

I have never been able to understand why this is an advance. Brian D. Earp suggests – borrowing from Churchill – that the scientific method “is perhaps the worst tool, except for all the rest.” Why would a text written by a fallible person be any better at establishing the truth in a complex area than an experiment?

My final question about the Productivity Commission’s report is whether there is a purpose to all of these RCTs evaluating policies and teaching practices. To some extent, don’t we already know? I am sure that there are many nuances worthy of teasing out, yet we can’t even accept some of the most basic findings of previous education research. Studies produced over decades suggest that if you want to teach something new to a group of students, you should do so explicitly, in small steps at first, asking lots of questions and then structuring plenty of practice. That seems pretty close to the reality of the situation.

But in education, we’re not so fussed by reality.


5 thoughts on “Education has a problem with reality (and this isn’t easy to fix)”

  1. Mike says:

    Your first dot point there is spot on (as are the others, but that’s a particularly crucial one). As I mentioned in a comment to your previous post, I saw this actually occurring during a desperately flawed study at my previous school.

    Even if all the usual pitfalls (or tricks) you mention above are scrupulously avoided, the basic problem with quantitative research in education remains: proper double-blinding is next to impossible. Does this mean one simply throws up one’s hands with quantitative research? Not quite, I would suggest, but perhaps the approach needs to be different. Rather than measuring interventions, I think it would be more helpful to measure (as far as is practicable) the results of disparate methods that are ALREADY very much a part of the culture of two or more different schools. This way, in most cases you would have teachers who are confident with the method and the materials and believe in what they are doing, so the double-blinding problem is largely avoided, or at least mitigated.

  2. Iain Murphy says:

I think your dot points are spot on, especially the 2nd one, linking with previous comments you have made about PBL and socioeconomic factors.

Could I suggest an additional point for problems with studies in education?

That the only way to demonstrate knowledge, competence and/or understanding is by completing a test/exam (most likely multiple choice, because anything else adds too much scope for variation) under test conditions. Probably then taking an average of results.

  3. Pingback: Evidence for Project Based Learning | Filling the pail

  4. KenS says:

    Greg, will you please slow down in your writing so I can get caught up on past posts?

    I’m kidding. Keep up the outstanding work.
