It is a cold Saturday morning in winter. You have lots to do but the covers are warm and soft and your bed is welcoming. Your children are quiet – they’re probably watching something inappropriate on TV but, crucially, they’re not bothering you. A few thoughts flit through your mind: you need to pay the water bill; you need to post a birthday present to your great-aunt Penelope before the post office shuts at midday; you need to get your car cleaned so that you don’t have to wash your hands every time you open the boot; you need to get out there and do something significant with your life before you die.
Your better half enters the bedroom with a steaming mug of milky tea: your favourite. ‘How agreeable!’ you think, but before you can voice your appreciation, your partner speaks.
“The kitchen sink is blocked and it smells like rotten cabbage.”
According to many proponents of problem-based learning, you should now immediately leap out of bed crying, ‘thank you, thank you,’ whilst skipping to the kitchen to have a sniff of it yourself. Why? Well, being presented with problems is intrinsically motivating, and authentic, real-life problems – such as a blocked sink that smells of putrefying brassicas – are doubly so. No, you don’t want simply to be told a solution.
I think this myth warrants specific challenge because it represents the single wheel on a barrow that some people have been pushing for far too long.
It signals a retreat from claiming that problem-based learning produces greater academic gains. It seems that it does not. And so the ground has shifted: the claim is now that problem-based learning is more intrinsically motivating instead.
I have noted before that this is an odd claim. Increased motivation should result in increased application to learning tasks and therefore increased academic gains. Although it might be true that these won’t show up in short studies, they should appear in longitudinal and correlational studies.
So I offer two more pieces of evidence. In the first study, by Tornare et al. (2015), we have students who feel more negative after being asked to solve mathematical problems than beforehand. The most important factor seems to be how well they feel they did on the problems, and this is related to self-concept, i.e. how good they think they are at maths. Actual performance was not related to their emotional state (apart, perhaps, from feelings of ‘hopelessness’). It seems evident to me, and probably to many others without needing to read research papers on the subject, that we need to set children up for success by teaching them how to solve the problems. This will make them feel more competent and will positively affect how they feel about problem-solving.
Instead, we have a narrative where children need to struggle because it’s good for them and if they don’t like it then that’s because they’ve got a fixed mindset or something and so we need to put up some motivational posters. I paraphrase.
I dislike talking about definitions but it is necessary to highlight here that the concept of problem-based learning tends to overlap with the idea of inquiry learning; the two share many features. In medical training and mathematics teaching, educators tend to refer to quite well-defined problems, whereas science teaching tends to adopt the language of ‘inquiry’ because people link this to the scientific method – indeed, many courses describe learning about the scientific method as developing scientific inquiry skills. However, both approaches involve presenting a problem or question to resolve without giving instruction on a solution, or the solution to a similar task, upfront.
I therefore want to mention a second study, by Hushman and Marley (2015), which I think has something to tell us. It is similar to the famous Klahr and Nigam experiment in which students were either explicitly taught the control of variables strategy (CVS) or were facilitated in discovering the principle for themselves (I always find this an ironic subject to choose given that so many educational studies are badly controlled). However, in this case, the researchers had three instructional conditions which are worth examining.
The first condition was called “direct instruction” and is worth quoting in detail:
“The experimenter read a definition to the students of three types of variables (independent, dependent, and control variables). The relationship between variables was verbally illustrated using an example of two levels of ramp steepness as the independent variable, the distance the ball rolled as the dependent variable, and the surface area as the control variable. The explanation was delivered without soliciting responses from the participant. Next, two examples were given… During the presentation of the examples, each type of variable was highlighted by the experimenter. After each example, participants were asked if they could clearly tell the effect of steepness on the distance the ball rolled to induce cognitive engagement (Klahr & Nigam, 2004). After the student answered, regardless of their answer, an explanation as to why the example was or was not unconfounded was given by the experimenter.”
If you are a regular reader of my blog then you will know that I prefer the term ‘explicit instruction’ to ‘direct instruction’ to avoid confusion with Engelmann’s ‘Direct Instruction’ programs. However, I do not recognise the above as a description of explicit instruction because it is profoundly non-interactive. Rosenshine unpacks the various confusions around this term well in an article that you really should read if you have the patience. In short, I think this represents a worst-case kind of direct instruction.
The other two conditions were a minimally guided, student-centred inquiry condition called ‘minimal instruction’, similar to that in the Klahr and Nigam study, and a ‘guided instruction’ condition which, confusingly, reads a lot like my understanding of explicit instruction and seems to involve no student problem-solving or inquiry prior to the presentation of examples (unless we count some initial prompt-to-reflection questions, something that I often use in my own teaching).
“Guided instruction was delivered through the use of leading questions prompting reflection during the example phase of the session (Mayer, 2004). Students in this treatment received the same instruction on the type of variables as those students in the direct instruction treatment. While the same experimental examples were used, the participants in the guided instruction treatment were asked questions prompting explanation of the parts of the experiment and whether the experiment was unconfounded. They were asked to verbalize what the independent variable was in the example and to elaborate on how they knew. Then they were asked the same questions regarding the dependent variable and the control variables. When wrong answers were given, the facilitator encouraged the participant to try again. Finally, the student was asked if he or she could clearly tell the effect of steepness on the distance the ball rolled, followed by questions asking the student to provide an explanation as to why he or she could or could not clearly make a conclusion. In contrast to the direct instruction condition, the facilitator offered no explanations during the presentation of the examples.”
Why have I gone to so much trouble to describe an experiment of this kind in a post about motivation? Well, sometimes you have to go digging for gold. Without perhaps realising it, these experimenters have run a trial of interactive explicit instruction against student-led inquiry. And what makes this interesting is that they measured the students’ self-efficacy.
Before I get to that, it is also worth remarking that both ‘direct instruction’ and ‘guided instruction’ outperformed ‘minimal instruction’ on most learning measures. Interestingly, although these differences were significant, there was often no significant difference between the ‘direct instruction’ condition and the ‘guided instruction’ condition. I would have expected more differences in favour of the latter, given that it involved more student interaction.
When students’ feelings about their success in science – their self-efficacy – were examined, the pattern changed. Gains in self-efficacy were significantly greater for ‘guided instruction’ than for ‘direct instruction’ and ‘minimal instruction’, with the latter two not being significantly different from each other.
So what does this show? If you explicitly teach students stuff and ask them questions while you’re doing it then they will learn more and feel like they’re better at the subject than if you just lecture them or let them solve problems without much guidance.
And I suspect this is going to be motivating.