Irreducible complexity

The summer before I started university, I was given a reading list. On this list was Richard Dawkins’ “The Blind Watchmaker”. It had a major impact on me. Prior to this point, I had implicitly accepted a concept known as ‘irreducible complexity’, which can be encapsulated by the question: what use is half an eye? Basically, proponents of irreducible complexity doubt whether something as complex as an eye could evolve through a series of infinitesimal steps, each of which confers a small but significant advantage on an organism, in the way that Darwinian evolution suggests. Instead, they argue, such features must emerge fully formed. In The Blind Watchmaker, Dawkins patiently describes each tiny step in the evolution of the eye: how a patch of light-sensitive cells confers an advantage over organisms which cannot sense light, how having these cells in a depression in the skin gives some sense of the direction of the light and how this is an advantage over just having the cells on the surface, and so on. Dawkins proceeds to build a whole eye in such steps. He even notes that eyes have evolved independently in a range of different organisms.

Image: Matticus78 at the English language Wikipedia [GFDL or CC BY-SA 3.0], via Wikimedia Commons
Most of those who accept irreducible complexity are motivated by religion. It is a key argument put forward for pseudoscientific theories of intelligent design which reject evolution in favour of some form of creator. I had a more idiosyncratic view, holding that evolution did occur but must have proceeded via great bursts of mutations, probably caused by radiation from space. Whatever I had thought, Dawkins changed my mind.

However, I think that we can draw some useful parallels here for education. Consider an intervention, teaching method or innovative approach. Must it be adopted wholesale? Will implementing parts of it give some of the claimed advantages? Of course, it is entirely possible that there are interventions that are greater than the sum of their parts, but if this is the case:

1. Why is this so? What mechanism causes it to work only if all components are present?

2. Is it really going to be possible to implement this intervention at scale if it’s only going to work if everyone adopts it faithfully?

The problem with accepting the irreducible complexity of a particular approach is that, if and when it fails to produce results, proponents of the intervention can always claim that it wasn’t done properly. This can even approach the ‘no true Scotsman’ fallacy, whereby the very fact that it didn’t work proves that it wasn’t implemented properly. In scientific terms, we have an unfalsifiable claim.

How can we be sure something is not irreducibly complex? If we examine Engelmann’s Direct Instruction, then each of its components should confer some advantage over not having it in place: explicit instruction on concepts and procedures, immediate corrective feedback, practice, a systematic approach (over a serendipitous one) and scripted lessons. I think the evidence supports the first four of these, with some caveats around how and when feedback should be immediate. I am less sure about the scripting of lessons because I know of few other examples where this is used. On balance, I am happy to conclude that such an approach does not display irreducible complexity and is therefore likely to work at scale.

Try applying this test to a different intervention.


9 thoughts on “Irreducible complexity”

  1. chrismwparsons says:

    What about synthetic phonics? Arguably each phoneme-grapheme correspondence which a child learns confers some advantage to them, but purists would argue mixing some phonics with ‘whole word’ learning undermines the system, as pupils will not commit to decoding properly.

    Overall I’m totally with you – and I think so much IBL vs DI – either/or talk is unnecessary.

    • Ultimately, phonics is really for pupils who wouldn’t learn to read any other way. Its value for both reading and spelling phonetically is important, but in my experience whole word learning works for fewer children. One of the most important aspects of phonics teaching is that – yes, we need to fix how to get them to spell words correctly – but I have seen a sharp downturn in the number of children in KS2 who claimed they could not spell: a quick ‘use your phonics’ would get them on track. It is better than before.

    • “purists would argue mixing some phonics with ‘whole word’ learning undermines the system, as pupils will not commit to decoding properly.”

      No, I would say phonics advocates argue that mixing some phonics with whole word learning means that SOME pupils do not commit to decoding properly. However, it’s not so much learning a few whole words that is the problem, it is combining that with ALSO being told to look at the picture, or the first letter of the word, and guess, etc. And also not being corrected if the word ‘pony’ is guessed when the word on the page is actually ‘horse’ (making meaning is paramount in this system). These are the Whole Language (which is, confusingly, different from ‘whole word’) strategies that tell children to use various clues or ‘cues’ to identify words, with phonic “cues” being seen as a last resort.

      This method was enforced by the Literacy Strategy, and so is still being taught in many classrooms, alongside phonics. Reading scheme books were specially designed to facilitate such guessing, and many schools are still using those books. Some children are simply overwhelmed by being offered so many strategies at once, particularly those with lower working memory or SEN. These children turn up in remedial reading sessions in year 7 with their eyes darting all over the page for “clues”. For such children, the key is to get them to look at the word from left to right, taking in each letter/grapheme. Obviously, if they are not doing this, they will get little advantage from a theoretical knowledge of some correspondences.

      The other problem is that children at risk of reading failure often need a very large amount of practice in decoding to establish all the correspondences they need firmly. The more ‘guessing’ they do, the less practice they get.

      Here’s a link to a good analysis of the Whole Language approach, including how it informed the Literacy Strategy:

      • chrismwparsons says:

        Thank you Chris – I don’t have to teach phonics, so I haven’t explored the nuances of the debate as closely as you. What you say is interesting.

        … Taking this further however… could it not then be argued by advocates, that ANY teaching strategy needs to be kept ‘wholesale’, ‘pure’ – whatever – for SOME recipients…? Whether DI or whatever? That no strategy claims irreducible complexity for all people at all times, but that there will always be some pupils who won’t benefit from it if you don’t follow the full procedure? I can see that this seems to open the door to possible uses of the ‘no true Scotsman’ fallacy in some way, but it seems a feasible argument that could be made by some.

        … And does this then put this post into the context of just about any other educational idea we wrestle with – that it depends on the specifics of the children at hand as to whether it’s valid or not…?

  2. ChrisM

    “could it not then be argued by advocates, that ANY teaching strategy needs to be kept ‘wholesale’, ‘pure’ – whatever – for SOME recipients…? Whether DI or whatever?”

    I don’t think that necessarily follows. The thing about “other strategies” and phonics is that problems arise mainly because the other strategies can actually PREVENT the child looking at words “from left to right all through the word”, as recommended by phonics. The key point I was trying to make is that when this happens, they are not ‘supplementing’ phonics, they are interfering with it – rather in the way that mixing medications can interfere with their effectiveness. I don’t think a bit of enquiry learning would interfere with learning through DI in the same way, so the cases wouldn’t be analogous.

    I think you would have to look at each case on its merits. For example, you are unlikely to learn touch typing if you are continually looking at the keyboard, because this interferes with the necessary internalizing of the position of the keys.

    • chrismwparsons says:

      This is a great reflection ChrisM, and it has helped clarify my own thinking greatly 🙂

      I guess the key thing which now occurs to me here then is that reading through decoding and touch typing are both discrete, specialised skills, and hence the successful teaching of them is more likely to rely on a specific approach. However, there is no single skill or kind of knowledge which DI is setting itself up as specifically trying to achieve.

      Perhaps then, the “irreducible complexity” idea might only apply to techniques used to teach highly specific skills. Even then, there can be many routes to the learning of some skills, and our brain draws on all sorts of routes to the same goal sometimes. Learning times-tables comes to mind as something which could seem to be a highly specific skill, but I think can be mastered through a whole patchwork of approaches.

      Thanks again.

  3. Pingback: What is progressive education? | Filling the pail
