Placebos in education research

A friend drew my attention to a blog post, which led me to track down the study that the post discusses. I think this study is of importance to those of us with an interest in education research.

Trials of new drugs tend to be double-blinded. Patients are either given the test drug or a placebo – a simple sugar-pill which looks the same. The patients do not know which of these they have been given and neither do their doctors (hence the ‘double’ blinding). Part of the reason for such secrecy is to reduce expectation effects – patients will all have an equal expectation of getting better.

This is why it can be hard to evaluate complementary therapies such as acupuncture – you tend to know if someone has stuck a needle in you. If we compare acupuncture with, say, massage then people might believe that the acupuncture will help them more than the massage and this may affect the results.

In education, we often refer to an expectation effect known as the ‘Hawthorne Effect’ where the knowledge of being the subject of study somehow affects the results – perhaps teachers are more enthusiastic or perhaps students work a little harder. Again, this is an expectation effect and it is one of the reasons why we should be skeptical about research. If teachers opt in to teaching the new, shiny program and students are aware that they are in this program – or even self-select into it as has been the case with some university-level trials of problem-based learning – then should we really be surprised that there is an effect when compared to business-as-usual?

In fact, a new, shiny program would involve not only a Hawthorne effect – the knowledge that you are the subject of study – but also a placebo effect – a new method or piece of equipment or construct of some kind that may cause you to expect a favourable outcome.

Should we worry about these effects? After all, expecting to do better won’t teach us how to read and it seems unlikely that it could have an effect on something as fundamental as our underlying intelligence.

This is where the new experiment comes in. The researchers were investigating the possibility of a placebo effect in ‘brain-training’ activities. They recruited participants to the study using two different posters. The first was neutral and asked students to “participate in a study” in order to gain course credits, whereas the other poster – the placebo condition – mentioned “cognitive enhancement” and the potential to “increase fluid intelligence”.

The intervention was then the same – a brain-training game. So it wasn’t the intervention that was manipulated but the recruits’ expectations of it. It’s a bit like giving acupuncture to two groups of patients but promoting the potential benefits with only one of the groups.

There is a possibility of a systematic difference between the two groups, given that assignment was by self-selection rather than random. Even so, performance on the brain-training task was similar, and yet students who self-selected into the placebo group saw the equivalent of a 5–10 point increase on a standard IQ test.
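To make the design concrete, here is a minimal sketch (not the authors’ analysis) of how one might compare gains between a ‘placebo’ recruitment group and a neutral group. All of the numbers are hypothetical and chosen only to illustrate the comparison; with self-selected groups, any such test is suggestive at best.

```python
# Hypothetical simulation of the two-poster design: same intervention,
# different recruitment message. Numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 25  # hypothetical participants per group

# Simulated pre/post scores on an IQ-like scale (mean 100, sd 15)
pre_neutral  = rng.normal(100, 15, n)
pre_placebo  = rng.normal(100, 15, n)
post_neutral = pre_neutral + rng.normal(0, 5, n)   # no expectation boost
post_placebo = pre_placebo + rng.normal(7, 5, n)   # hypothetical ~5-10 point boost

gain_neutral = post_neutral - pre_neutral
gain_placebo = post_placebo - pre_placebo

# Welch's t-test on the gain scores; self-selection means baseline
# differences are not controlled by randomisation.
t, p = stats.ttest_ind(gain_placebo, gain_neutral, equal_var=False)
print(f"mean gain (placebo): {gain_placebo.mean():.1f}")
print(f"mean gain (neutral): {gain_neutral.mean():.1f}")
print(f"Welch t = {t:.2f}, p = {p:.3f}")
```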

This feeds the ongoing debate about whether brain-training achieves much at all and, if it does, whether this transfers out of the narrow range of skills that the training addresses. But it should also give us pause to reflect on educational research. Has anyone ever run a trial of an educational placebo?


7 Comments on “Placebos in education research”

  1. Reply to your closing sentence:
    The Dr Fox effect https://en.wikipedia.org/wiki/Dr._Fox_effect

It is no joke. Pupil ratings of teacher effectiveness are vulnerable to the ‘Dr Fox effect’.

  2. Pamela Snow says:

    An interesting and important post Greg, thanks. However I think you are incorrectly conflating two methodologies: the double-blind trial and the placebo. In medical trials, the intervention group receives the “active” or “experimental” intervention, but the control arm typically receives a business-as-usual intervention – and in so doing provides “control” against over-attribution of beneficial effects of the new treatment. If you are researching depression treatments, for example, and you recruit a pool of people who are all deemed on validated measures to be depressed, it would be unethical to knowingly give half of them a “sugar pill”. Instead, the control group receives business-as-usual, while the intervention group receives the experimental intervention (whether a drug or a form of psychotherapy). The same can apply in education, where the children/classes/schools randomised to the intervention arm receive the new XYZ intervention, while standard classroom practice continues in the control settings (provided you can minimise/avoid contamination across arms). This is not a placebo. A placebo would be putting in place something that “looks like” an educational intervention but has no efficacy as such (e.g. getting students to cut pictures out of magazines about a particular topic – sadly such activities are not hard to find in education).
    In addition to the Hawthorne Effect, you might also want to learn about the Rosenthal Effect – http://psych.wisc.edu/braun/281/Intelligence/LabellingEffects.htm

    Cheers, Pam

  3. Iain Murphy says:

    Have a look for Jared Horvath’s work on brain games, which shows that while they improve the thing you are practising, the transfer potential is very low. It shows that repetition of a skill can be helpful if that skill is needed, but limited if the brain isn’t helped to see contexts and links between skills. Lots to consider in the teaching of Maths.

  4. Mark Bennet says:

    If the placebo effect has a positive impact on learning, someone should study how best to deploy it as a low cost intervention. I have an untested hypothesis that the gains made by people using the latest ‘thing’ are often the product of enthusiasm (of the adopter), and there may be a need to change the ‘thing’ every so often to maintain the enthusiasm. Any non-negative ‘thing’ would do. The problem is that if this is a placebo effect, then telling the practitioner may dilute the effect without replacing it with something more effective.

  5. Ann in L.A. says:

    This study:

    https://www.statnews.com/2016/07/24/brain-training-cuts-dementia-risk

    is interesting, and is one of the first, if not the first, to show long-term improved outcomes from brain training. It’s still at a preliminary stage and hasn’t gone through peer review, but they had a sizeable group of seniors (over 2,500) to work with, were able to include occasional follow-ups over the course of 10 years, and saw a diminished rate of dementia in the study group that had the most intensive brain training.

