When quizzing fails

There is still a lot to figure out about the nature of learning. When exploring the edge of any field of knowledge, it is useful to look for the contradictions. These generate the most heated debates – so there’s lots of noise – but they also perhaps herald the transition to a new, more complete theory.

Two seemingly contradictory effects are that of the generation effect and the worked example effect.

If I were to ask you to name the capital of Texas, you might struggle. Perhaps you would suggest that it is Dallas or Houston. If I then informed you that it is Austin, the research suggests that you are likely to remember this better than if I had simply told you that the capital of Texas is Austin. This is the ‘generation effect’ and you could also characterise it as a ‘productive failure’ effect: you failed, but your failure was productive in helping you recall the correct answer at a later time.

Yet the worked example effect seems to show something opposite to this. When novice algebra students are given a worked example to study they learn more than if they try to solve the problem themselves.

One way of resolving the contradiction might be to do with feedback. In the Texas example, the answer is given after the failed attempt. Productive failure research attempts to do the same for mathematical problems – after a period of problem solving, students are given direct instruction in the standard solution method. There is some evidence to support this approach but it is far from overwhelming and there have been questions about the experimental designs that have been used.

A competing way of resolving the contradiction between learner-generation and explicit guidance has been put forward by John Sweller and others. He defines something known as ‘element interactivity’. This is essentially a measure of how dependent different elements of a problem are upon other elements of the problem. In the case of the capital of Texas, there is pretty much no element interactivity at all. It is a single, bald fact to recall. The same is true for what might, on the surface, seem quite complicated items, such as the name of a chemical element. However, in an algebra problem, ‘a’ relates in a specific way to ‘y’ which relates in a specific way to ‘x’. Manipulate one and this has consequences for the others, and so you need to be able to handle both the elements and the relationships between them. Furthermore, it is suggested that as you become more expert, this element interactivity decreases because you simply ‘know’ many of these relationships – the solution patterns are stored in your long-term memory.

Cognitive load theory would predict that too much element interactivity would lead to reduced learning and a reversal of the generation effect.

The idea of element interactivity has been quite roundly criticised, notably by Jeffrey Karpicke. And this is fair enough. Even Sweller will acknowledge that it is hard to define or quantify element interactivity in many situations.

Karpicke’s interest comes from his research into retrieval practice. This is the idea that testing or quizzing is a more effective learning activity than restudying or rereading – you’ve probably heard of this. There is plenty of evidence to support this effect, much of which comes from well-designed lab-based studies. Retrieval practice is similar to the generation effect except that it comes after an initial phase of teaching rather than at the very start. However, it seems likely that retrieval and generation share some characteristics and work in similar ways.

The differences between the Sweller and Karpicke camps came to a head in a recent special issue of the journal ‘Educational Psychology Review’. Sweller and Tamara van Gog produced evidence that the effect of retrieval practice declines with increased element interactivity, bolstering the case based upon Cognitive Load Theory. Karpicke and William Aue responded by criticising the construct of element interactivity, the experiments that Sweller and van Gog had chosen to include in their study, and the fact that these experiments didn’t manipulate element interactivity – i.e. it was said to be high in some experiments and low in others, but in no single experiment was it varied.

And so it was with interest that I read a new paper from a different set of researchers who sought to test retrieval practice in the wild rather than the lab. They studied students enrolled in an undergraduate biology course and their acquisition of course content regarding reproduction. The researchers compared the task of copying out a definition with constructing a definition (a form of retrieval practice). They found that low-performing students benefited most from copying but that high-performing students benefited most from retrieval. Perhaps wisely, they avoided any discussion of element interactivity, but the findings are consistent with the argument of Sweller and van Gog and, crucially, ‘element interactivity’ could be said to have been manipulated within the one experiment in the form of student expertise.

As an interesting aside, they also asked students to predict their test scores following both kinds of training and found a Dunning-Kruger effect, with low-performing students tending to overestimate their performance.

By New York World-Telegram and the Sun staff photographer: Orlando Fernandez [Public domain], via Wikimedia Commons
