Confounded experiments

There is a fundamental problem with confounds in educational psychology experiments. Briefly, a confound is something that violates the key principle that scientific investigations should vary only one thing at a time. If you vary more than one thing and then measure an effect, you don’t know which of the things you varied caused that effect.
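
To make this concrete, here is a toy simulation. It is entirely hypothetical: the effect sizes are invented for illustration. A ‘treatment’ group differs from a control group on two factors at once, a new teaching method and extra practice time, so the resulting gap cannot be attributed to either factor alone:

    import random

    random.seed(0)

    def outcome(new_method, extra_time):
        # Invented data-generating process: in a real experiment we would
        # not know these effect sizes. That is the whole problem.
        base = 50
        effect = (4 if new_method else 0) + (6 if extra_time else 0)
        return base + effect + random.gauss(0, 5)

    # A confounded design: the treatment group receives BOTH the new
    # method and the extra practice time; the control group gets neither.
    control = [outcome(False, False) for _ in range(100)]
    treatment = [outcome(True, True) for _ in range(100)]

    gap = sum(treatment) / len(treatment) - sum(control) / len(control)
    print(f"Observed gap: {gap:.1f} points")

The simulated gap is real, but nothing in the measured data can tell us how much of it comes from the method and how much from the extra time.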

The importance of avoiding confounds seems obvious, but the reality is more subtle. What is a ‘thing’? Is it a whole package of items bundled together as an intervention? This is often the approach taken by large-scale randomised controlled trials. If we observe an effect in such a trial, we can only conclude that it was due to the whole package. There may be elements of that package that are pointless or that actually have the opposite effect. We can’t really know.

Educational psychology takes a more reductive approach, conducting smaller, more narrowly focused experiments. You would therefore expect these studies to avoid confounds, but they often don’t. There’s a good reason for this.

Imagine that you wanted to compare an explicit approach with a problem-solving approach whilst varying only one thing. Can you characterise either method with a single variable? It’s easy to end up comparing good problem solving to an impoverished form of explicit instruction, or vice versa. It is hard to do both justice while avoiding confounds.

An area of particular interest to me is the order of instruction. A number of papers have been published in recent years in which this order is manipulated: students either work through problem solving followed by explicit instruction, or receive explicit instruction followed by problem solving. You would think that this would be just the sort of experiment that could easily avoid confounds. All you have to do is change the order while keeping everything else the same.
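
In sketch form, an unconfounded version of this design holds the materials constant and permutes only the sequence. The placeholder strings below stand in for real materials; they are not taken from any actual study:

    # Identical phase materials in both conditions; only the order differs.
    problem_phase = "problem description + data"        # placeholder content
    instruction_phase = "explicit content + prompts"    # placeholder content

    ps_i = [problem_phase, instruction_phase]  # problem solving, then instruction
    i_ps = [instruction_phase, problem_phase]  # instruction, then problem solving

    # The two conditions contain exactly the same elements...
    assert sorted(ps_i) == sorted(i_ps)
    # ...and differ only in their order.
    assert ps_i != i_ps

Anything beyond the swap, such as different prompts or extra worked examples in one condition, reintroduces a confound.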

Oddly, this has been hard to achieve. A new paper in Educational Psychology Review seeks to provide an overview of experiments in the field and to draw some inferences. The authors list 20 studies that meet their criteria (and then handily tabulate them for those of us with a literature review to work on). They also examine these studies to see whether they are unconfounded. Specifically, they look for evidence that the same problem description and data are provided during the problem-solving phase and that the same content and prompts are provided during the explicit instruction phase. Out of the 20 studies, they find eight that are confounded and six that are not. Presumably, the authors did not have enough information to decide on the other six.
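
Their test can be read as a simple decision rule applied to each study’s reported materials. The field names in this sketch are my own invention, not the authors’:

    def classify(study):
        # A study is unconfounded only if both phases use identical
        # materials across conditions; a missing report leaves it unclear.
        checks = [study.get("same_problem_and_data"),
                  study.get("same_content_and_prompts")]
        if None in checks:
            return "unclear"
        return "unconfounded" if all(checks) else "confounded"

    # For example, a study that varied the instruction-phase prompts:
    print(classify({"same_problem_and_data": True,
                    "same_content_and_prompts": False}))  # -> confounded

Applied to the 20 tabulated studies, this rule gives the reported split: eight confounded, six unconfounded and six unclear.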

Comments on “Confounded experiments”

  1. […] resources than those in the I-PS conditions. This difference confounded those outcomes (see Greg’s post on this) and, unsurprisingly, added 15 plusses, 7 equals, and just 1 negative to the overall […]

  2. […] Not properly control your trial so that students in the experimental group don’t just get the stated intervention but also get better quality materials or more time or the students in the comparison group get a degraded version of the standard alternative. RCTs of Reading Recovery tend to confound the Reading Recovery strategies with one-to-one tuition so you can’t know which of these two factors is having the effect. […]

