Another decade, another article in The Conversation illustrating the irony that advocates of critical thinking seldom think critically about it.
There are two interlinked but distinct concepts that we have to consider about critical thinking. Firstly, how general are these skills and, secondly, how transferable are they? They may sound like the same thing, but they are not. And they both have real, practical consequences.
Firstly, let’s consider generality. Those who tend to buy into the view that critical thinking is a general skill (or a ‘general capability’ as it is termed in the Australian Curriculum) can fall into two main traps. The first is to view the contexts for thinking critically as interchangeable. This is what we see when science lessons become projects involving cardboard boxes and LEDs. The precise science covered is not considered important because students are developing their skills of creativity, critical thinking, problem solving and so on. However, if you view these skills (or ‘virtues’ as Carl Bereiter calls them) as being highly domain specific, then these students have merely developed the ability to think critically, creatively and so on in the domain of cardboard boxes and LEDs. The value of this to the student then becomes debatable, especially when contrasted with actually learning some science.
The second trap is to think that, because these skills are general, you can teach a discrete critical thinking course or bolt a critical thinking module onto the curriculum that deals with these strategies in the abstract or in model contexts.
When we start to consider the kinds of strategies that may be more generally applicable, we often alight on maxims such as ‘look at the problem from different perspectives’ or the kinds of rules-of-thumb embedded in logical fallacies. I tend to think that there is a trade-off: the more widely applicable such a strategy or rule-of-thumb is, the less useful it tends to be. One example is perhaps the maxim that ‘correlation is not causation’:
This maxim is applicable to a number of fields where we seek correlational evidence, from the sciences, through social science and into the humanities. There are even philosophers who will go on about it a bit. And you may think it has high utility. But let’s imagine you wade into a discussion between two public health officials about the health impacts of smoking with your pithy observation that ‘correlation is not causation’. You are likely to get short shrift. Why? Because they have a lot more contextual, domain-relevant knowledge. They can probably discuss the extent of the correlational evidence and the tipping point that signifies acceptance of a causal relationship. If you try to apply this maxim when reading an article written by one of these officials, it may lead you into error.
Now consider someone making claims that school exclusions cause knife crime based upon statistics showing rising levels of both. Here, the maxim that ‘correlation is not causation’ may point you in the right direction. It is plausible, for instance, that the same thing that causes students to be excluded from school is causing the knife crime. Or that knife crime causes students to be excluded from school. But how can you tell the difference? Armed only with your maxim, how will you know when to apply it and when not to apply it? The only way you can figure that out is by learning lots about the evidence behind smoking or knife crime. In other words, the way you ultimately establish a reasoned position is by learning about the domain in question. The maxim alone could lead you as often into error as to the truth.
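The confounding scenario above can be sketched with a toy simulation. All of the numbers here are invented for illustration and have no empirical basis: we simply posit some hypothetical underlying factor that independently drives both exclusions and knife crime, and check that the two still end up correlated even though neither causes the other.

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical confounder: some underlying factor in each of 1000
# imaginary areas that independently drives both outcomes.
areas = 1000
confounder = [random.gauss(0, 1) for _ in range(areas)]

# Neither variable depends on the other -- each is the confounder
# plus its own independent noise.
exclusions = [c + random.gauss(0, 0.5) for c in confounder]
knife_crime = [c + random.gauss(0, 0.5) for c in confounder]

r = pearson(exclusions, knife_crime)
print(f"correlation despite zero causal link: {r:.2f}")
```

The correlation comes out strongly positive (around 0.8 under these made-up parameters), so anyone inspecting only the two outcome series would see exactly the pattern in the knife crime statistics. The maxim alone cannot tell you which of the three causal stories is true; only domain knowledge can.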
So what of transfer?
Even if we accept that ‘correlation is not causation’ has some general utility, it is not certain that people will apply it generally. Odd as it sounds, there is quite a lot of evidence that when people learn strategies that do work across different domains, the strategy somehow becomes locked to the domain in which they learnt it and so people fail to apply it to other situations where it would also add value. Dan Willingham highlights this in his discussion of two problems that involve a tumour and a fortress.
Latterly, proponents of critical thinking appear to have become more aware of the common criticisms and will make statements such as, ‘of course, knowledge is important to critical thinking’. However, you cannot have it both ways. If critical thinking simply represents the highest levels of performance within a traditional subject discipline, then we do not need critical thinking courses or a special focus on critical thinking. We just need to teach our subjects really well. However, if we believe that critical thinking deserves special treatment, why would we believe that unless it has a more general impact?
Peter Ellerton, the author of the piece in The Conversation, proposes that we consider the example of Philosophy for Children, a ‘program that involves teaching the methodology of argument and focuses on thinking skills.’ So it is a discrete critical thinking course. Ellerton also thinks it will have a general impact because he states that, ‘Studies involving a Philosophy for Children approach show children experience cognitive gains, as measured by improved academic outcomes, for several years after having weekly classes for a year compared to their peers.’ In other words, deliver Philosophy for Children and you will produce gains in wider academic outcomes. I am not convinced by the study Ellerton cites. It is not randomised and there is high attrition in the experimental versus control groups (28% versus 9%).
A perhaps more thorough study of the benefits of Philosophy for Children is the Education Endowment Foundation randomised controlled trial that found that children who had the intervention made greater gains in reading than those who did not. However, this was an odd trial, which I have discussed at great length on this blog. The scale-up study is scheduled to report early next year and then we will see more evidence supporting or refuting the extraordinary idea that discussing whether it is OK to hit your teddy bear impacts on reading ability.
Until then, proponents of critical thinking, although they may be welcomed at conferences, have a credibility gap. They need to demonstrate two things: firstly, that these skills are generally useful and, secondly, that students will transfer their learning of these skills to the various domains where they apply. In the meantime, we should ask, as Carl Bereiter does:
“Nobel Laureates, captains of industry, cabinet ministers, school superintendents – any one of them is likely to end a commencement address or a discourse on the current crisis by declaring that schools have got to ‘teach students how to think’. The words roll easily off the tongue and the speakers show not the slightest doubt that the words mean something. But do they?”