There is a muscular form of the learning sciences that will be familiar to those on Twitter. Emanating from undergraduate seminar rooms, particularly in the United States, this is the strand that promotes retrieval practice, distributed practice, interleaving and dual coding while dismissing the folly of learning styles. I broadly approve, but I don’t think this is a complete agenda. Think about the big, controversial issues we have to face in schools – managing behaviour, the teaching of reading, the teaching of maths. How are these addressed? And while we are discussing maths, I think we need to point to an issue with maths textbooks.
The textbooks my students use follow a formula. A chapter will introduce a new concept and related procedures. At the end of the chapter will be a number of questions related to the concept for students to answer. Drunk neat, this is not interleaving.
Interleaving is often confused with distributed practice, but the two are different. Interleaving involves completing one type of problem followed by a different type of problem then a further type before circling back to the first. The idea is that this presents ‘desirable difficulties’ that impair performance in the short term but enhance it in the long term. If my students’ textbooks incorporated interleaving then they would look very different. The implications of research on interleaving are therefore quite profound and could lead to teachers ripping up their textbooks.
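To make the contrast concrete, here is a minimal sketch of the difference between the two practice schedules. The problem labels are invented for illustration; the point is only the ordering: a blocked chapter presents all problems of one type before moving on, while an interleaved one cycles through the types.

```python
from itertools import zip_longest

def blocked(problem_sets):
    # Blocked practice: all problems of one type, then the next (AAA BBB CCC).
    return [p for problems in problem_sets for p in problems]

def interleaved(problem_sets):
    # Interleaved practice: cycle round the types (ABC ABC ABC).
    return [p for group in zip_longest(*problem_sets)
            for p in group if p is not None]

# Hypothetical exercise sets: fractions, decimals, percentages.
fractions = ["f1", "f2", "f3"]
decimals = ["d1", "d2", "d3"]
percents = ["p1", "p2", "p3"]

print(blocked([fractions, decimals, percents]))
# ['f1', 'f2', 'f3', 'd1', 'd2', 'd3', 'p1', 'p2', 'p3']
print(interleaved([fractions, decimals, percents]))
# ['f1', 'd1', 'p1', 'f2', 'd2', 'p2', 'f3', 'd3', 'p3']
```

The end-of-chapter exercise described above corresponds to the blocked ordering; an interleaved textbook would distribute each chapter's questions through later chapters instead.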
Yet I am not so sure that the textbooks have this wrong. The reason is related to a concept from Cognitive Load Theory that has clear, practical implications for teachers and that should be a lens through which teachers interpret all such advice about desirable difficulties. It is that important.
This is element interactivity, a concept that I have known some scientists to mock. It is certainly misunderstood. For instance, in a recent paper published in the Journal of Educational Psychology, element interactivity is conflated with the complexity of the learning materials. This leads the authors to suggest that learning to solve a problem from a worked example and recalling details of that worked example are tasks with equivalent levels of complexity because the learning materials are the same.
I can understand why we might want to view complexity in this way – it seems an objective measure – but clearly, these tasks are not equivalent in complexity. Complexity does not just relate to the materials, but the task. Recalling details of a worked example seems to be a less complex task than having to remember and apply the steps in a particular order.
Complexity also, inevitably, depends on the student. For instance, imagine I asked you to learn two words. You have a minute to study each of them and then, after five minutes, I will ask you to recall them. Here they are:

[The two words, shown as images in the original post, do not reproduce here: the first is written in Cyrillic, the second in English.]
Both words contain the same number of letters and so, by any objective measure, the materials represent the same level of complexity. However, the second word is written in the Roman alphabet used for English. Given that you can read this blog, you will be able to read the word and automatically associate it with meaning retrieved from your long-term memory.
If you know Russian then you can probably also read the first word. If you do not, then you will have to try and hold on to all those symbols in working memory and you will not have access to the word’s meaning, closing down one easy route for remembering it. Clearly, this would be a more complex task with far more for you to attend to.
So, although it would be good to be able to measure complexity entirely objectively, we cannot. If you do not like element interactivity then that is fine, but you will need to find some other way of capturing the complexity of a learning task, and this cannot be done without reference to the human being doing the learning and what they already know.
However, you might ask: why do we need a measure of complexity at all?
There is growing evidence that the effectiveness of retrieval practice, distributed practice and all those other ways of introducing desirable difficulties depends on element interactivity.
The initial experiments in many of these areas were conducted using relatively simple tasks such as learning lists of words. However, when we move to more complex tasks, we start to see a difference emerge between relative novices and relative experts. In a recent experiment, variation of task was found to be beneficial for relative experts but not for novices.
It may be the case that the textbooks have it right after all. When first meeting a complex concept, there may be enough difficulties for students to attend to without introducing supposedly desirable ones.
If so, the concept of element interactivity, far from being a rarefied, theoretical pursuit, is critical to the practical decisions that teachers make every day.
7 thoughts on “One idea that teachers probably need to know about”
I still don’t get what element interactivity is. English isn’t my first language, however. Is it variation of task on the same material?
It refers to how many different interconnected things must be understood. An assignment with low element interactivity would be recalling the symbols for different elements of the periodic table, whereas an assignment with high element interactivity would be recalling what happens in Act II of Hamlet or solving a physics problem that calls for several equations to be applied to get to the final answer.
Your examples of high element interactivity would only be true for relative novices. An expert in physics can draw on complex schemas, so the physics problem would have low element interactivity for her.
Thank you for the reply. But now I don’t see the difference from task complexity. Is it that the more complex the task, the higher its element interactivity?