Has cognitive load theory been debunked? In short, no. However, you might get that impression if you saw this tweet by @Research_Tim and the subsequent discussion:
The tweet references a paper by Naismith and Cavalcanti (2015) that reviewed efforts to directly measure cognitive load. This has been attempted in a number of ways, including simply asking people how much mental effort they are expending and asking people to complete secondary tasks to see how much working memory capacity is being used. The authors suggest that these measures are not very good.
The findings in this paper lend support to cognitive load theory rather than debunk it. To understand why, we need to be clear about what cognitive load theory is.
Cognitive load theory posits a relatively simple model* of the mind. On the basis of this model, it makes predictions about different instructional procedures (teaching methods). Verifying or falsifying these predictions therefore requires us to run tests in which one instructional procedure is compared with another. This process of attempted falsification is important because it causes us to refine or perhaps even set aside our theories.
For instance, cognitive load theory predicts that for relatively complex tasks, such as solving a physics problem or composing a paragraph, novices will learn more by studying a worked example than by problem solving. So you could falsify cognitive load theory by showing the opposite result or a null result. Interestingly, this has already happened. Early in the development of the theory, experiments were run on geometry and physics problems that involved the use of diagrams. The worked examples were no more effective than problem solving. However, by redesigning the worked examples so that relevant information was placed directly on the diagram rather than in a key at the bottom, researchers again found them to be more effective than problem solving.
This is the origin of the ‘split-attention’ effect and it added an interesting component to cognitive load theory with practical significance for the design of worked examples. It also directly linked to the explanatory mechanism in the model – the need to integrate information from two different places needlessly increased cognitive load.
Measuring cognitive load directly is therefore not necessary for the development of the theory (for instance, I haven’t attempted it yet in my own research). Yet it is clearly an avenue worth pursuing because it might shed further light on how cognitive load varies for different tasks and therefore offers the prospect for further refinement.
Unfortunately, as the Naismith and Cavalcanti paper describes, these direct measures of cognitive load are not as valid as we would like, which is unsurprising when you examine the details of how they are conducted. When they reviewed the literature, they found that the basic idea of cognitive load theory – that higher cognitive load would impede learning – was not always present in the data. However, given that the measures of cognitive load lacked validity, it was hard to draw conclusions from this.
However, they did find that, “Studies reporting greater validity evidence were more likely to report that high CL [cognitive load] impaired learning.” And that is in line with the predictions of cognitive load theory.
So no debunking today. Critics will need to wait for that.
*I have written an FAQ on models that addresses a common issue people raise when discussing cognitive science.