From the outset, concerns were raised about a programme launched in New South Wales to teach early literacy, known as “Language, Learning and Literacy” or simply “L3”.
In Winter 2015, Learning Difficulties Australia published a feature article in their magazine about the programme. According to this article:
“The L3 guidelines involve small group teaching within the whole classroom, with the classroom teacher focussing on groups of two or three children for short periods at a time in what is called the ‘engine room’, while the rest of the class are hopefully engaged in literacy-related self-directed activities.”
If we bear in mind that L3 covers the first year of schooling then these arrangements sound entirely impractical. Furthermore, L3 seems to be based upon dodgy ideas about reading instruction:
“The words, letters and sounds chosen for explicit lessons in Word Work are drawn from the language of the text. There is not a predetermined sequence to follow.”
So it is not a systematic approach then. Systematic approaches cycle students through grapheme (sets of letters) and phoneme (the sounds encoded by these letters) correspondences in a logical order rather than one dependent upon a particular text. In their groundbreaking 1998 review of the available research for the U.S. government, Catherine Snow and colleagues found that ‘embedded phonics’ programmes of this kind were less effective than systematic ‘direct code’ programmes, where texts were selected to practise the correspondences that had recently been learnt rather than the other way around.
L3 also seems to be teaching ‘multi-cuing’ strategies. These are where children are encouraged to guess words from context – such as the pictures in a picture book – rather than sound them out. In his 2005 review for the U.K. government, Jim Rose ended up devoting an entire appendix of the report to such strategies, highlighting that they do not represent the methods used by skilled readers and are at odds with the science of reading. Teaching these strategies not only wastes time that could be spent on actual decoding but it also encourages students to rely on them, something that will work less and less effectively as text complexity increases.
It seems odd that a brand new programme in New South Wales would have such fundamental flaws.
You may have noticed that I have used the word ‘seems’ a great deal. This is because the actual L3 materials are shrouded in immense secrecy and so it is hard to make any definitive statements. So it was with interest that I recently found myself chatting to a few teachers with first-hand experience of L3. All had negative tales to tell. It was hard for them to get past the sheer classroom management madness of the ‘engine room’ approach in which most of the class are unproductively occupied for most of the L3 session. However, when they did talk about the theoretical basis, it was clear that this was not a scientifically sound phonics programme.
If you are a teacher in NSW who is starting to question L3 then I suggest doing some reading on scientifically based literacy instruction. The Snow report above is a good place to start. The 2005 review of reading for the Australian government is also a good source.
L3 will undoubtedly collapse at some point and so NSW teachers might therefore decide to cut their losses sooner rather than later. If you are looking for an alternative to L3 then I can recommend the U.K. government’s Letters and Sounds programme. It is not perfect but it is based on better evidence and is free to use (the U.K. government obviously saw no reason to keep it secret). There are also a variety of strong commercial programmes available* and there are a number of people who are working on evidence-based alternatives to the kind of mixed methods approach typified by L3. So there are plenty of folks out there who can offer you a hand.
What this debacle proves is that state education ministers can proclaim a commitment to using evidence but unless they have control of their departments then such proclamations are empty.
*if you have a sound SSP programme to pitch then please feel free to do so in the comments
Ten years ago a randomised controlled trial (RCT) took place in the U.S. The trial pitted four early years maths programs against each other. Two of these programs, Saxon Math and Math Expressions, used explicit instruction. The other two programs, Investigations in Number, Data and Space (commonly known as ‘TERC’ after its developers) and Scott Foresman-Addison Wesley Mathematics, did not, preferring a ‘constructivist’ approach. For instance, TERC encourages students to ‘develop their own strategies for solving problems and engage in discussion about their reasoning and ideas’. This stands in contrast to teachers explicitly teaching students how to solve problems.
The RCT found that the explicit programs were more effective than the constructivist ones. This is hardly surprising given the wealth of evidence we have in favour of explicit instruction generally, as well as specific experiments that have found that explicit maths is superior to constructivist maths.
Eric Taylor, an assistant professor of education at Harvard, has now reviewed the data from the original study and completed an additional analysis that seems to show something pretty interesting (thanks to @Smithre5 for pointing this article out to me).
The teachers in the study were all assessed on something called the “Mathematical Knowledge for Teaching” test (MKT). This essentially assesses teachers’ maths knowledge but through the lens of teaching maths. For instance, some of the questions provide sample student responses and then ask questions about those responses.
Taylor found evidence that when teachers had a low score on the MKT test, it did not really matter whether they used the explicit or constructivist maths programs. Instead, it was for teachers who scored more highly that a difference emerged in favour of the explicit approach.
There is a common sense explanation for this. In a program where the teacher has to stand up and actually teach maths, their maths skills matter, but when the students have to figure things out for themselves then the more skilled teachers have no way of making use of their greater skill level.
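To make the shape of that interaction concrete, here is a toy sketch in Python. Everything in it – the function, the coefficients, the scores – is invented for illustration and is not drawn from Taylor’s analysis:

```python
# Toy model of an interaction effect (all coefficients are invented):
# teacher maths knowledge (MKT score) only pays off in a program where
# the teacher does the explaining.
def expected_gain(mkt_score, program):
    base = 0.10  # hypothetical baseline learning gain
    if program == "explicit":
        # The teacher is teaching, so greater knowledge translates
        # into greater student gains.
        return base + 0.05 * mkt_score
    # Constructivist program: students figure things out themselves,
    # so teacher knowledge contributes comparatively little.
    return base + 0.01 * mkt_score

low, high = 1, 5  # stand-ins for low- and high-scoring teachers
gap_low = expected_gain(low, "explicit") - expected_gain(low, "constructivist")
gap_high = expected_gain(high, "explicit") - expected_gain(high, "constructivist")
print(gap_low, gap_high)  # the explicit advantage grows with teacher knowledge
```

In this made-up model, low scorers see almost no difference between programs, while high scorers see a clear advantage for the explicit one – which is the pattern Taylor describes.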
If this finding stands across other studies then I think it has three implications:
- Primary teachers must pass a maths skills test if they are to teach mathematics (schools could perhaps reorganise so that maths was taught by specialists to get around the problem of getting all teachers to this level)
- Primary teachers who lack maths skills should be given training in this area
- Explicit programs for teaching maths should be adopted in primary schools
We already have masses of evidence for point three and it seems that education systems might be waking up to points one and two.
This finding perhaps explains other interesting results. For instance, the literature is full of studies that seem to tell contradictory stories about the effect of the level of teacher education on student results. This might be expected if bad teaching methods cancel out any gains from having better qualified teachers.
I used to be into clubbing. I don’t mean the kind that involves repeatedly striking something with a blunt object. Instead, I am thinking about Fabric, Turnmills, The End and holidays in Ibiza. These were the days of superstar DJs and long lie-ins.
But we would often start the night at the Tattershall Castle, a place I’m not sure still exists.
The Tattershall Castle was a boat moored on the Thames that served as a pub. We would start our evenings there for a good reason – the Houses of Parliament looked out over us and this gave a sense of connection to something bigger. It was a potent feeling. We were part of something; a story.
I felt that tonight when I walked down a road, turned a corner and there, before me, was Sydney Harbour Bridge. I was somewhere, connecting to something bigger than me; something that mattered.
This is what a knowledge rich education does. It connects us to something bigger and gives us a sense of where we are, who we are, and how we got here. It places us in the world and so it is a wonderful gift to give to our children.
I have been blogging about education for close to five years. When I first started, I never had an audience in mind (to be fair, I didn’t have an audience). It was more of a sandbox for setting out my ideas. I think that putting your ideas down in writing helps clarify your thinking and we don’t always have the opportunity to do that in real life. So blogging was an exercise that helped.
After a while, I began to receive feedback from readers. Much of this was positive and from people chewing over the same ideas as me. I started to realise that I was providing a useful resource. I would share papers and articles that I had read and they found this helpful.
Others would disagree with me. Again, I saw this as a way of testing out my thinking. Some of the arguments in the comments or on Twitter would go back and forth for days.
Then an odd thing started to happen. People would give me advice. “If you want to persuade me…” they would write before making comments about my tone or style. I genuinely found it astonishing that they thought I was trying to persuade anyone of anything. I now understand a little more about how writing is taught at school. It is conceptualised as writing for a purpose: narrative, persuasive, informative. It makes sense that someone would interpret the articulation of my views as an attempt to persuade.
Some of this advice seemed pretty restrictive: I should hedge everything; I should write more in the style of an academic paper; I should avoid saying anything that might upset anyone; I should be less abrupt on Twitter.
I never paid much attention to this advice because I wasn’t trying to persuade. I now realise that it’s almost impossible to persuade the people who strongly disagree with me and I wonder why they were offering me this advice. There are some people I’ve given up arguing with because they seem impervious to reason. More darkly, there have been attempts to shut me down with threats, complaints and abuse.
However, I now realise that there is a third constituency out there. These are the decent, hardworking teachers who are not ideologically committed to educational progressivism. They may have been taught this ideology during their training – without it ever being labelled as such – and had it promoted to them through professional development. But, as pragmatists, they are aware that it doesn’t work particularly well; it is impractical. I think it is no coincidence that the strongest progressive voices are those who don’t have to teach classes every day: consultants, academics, managers and commentators from other fields.
And I’ve realised that it is ordinary teachers who benefit from the airing of these opposing views. They see bloggers like me making a reasoned case and they see us attacked for our tone, insulted or labelled as fascists. They wonder if there is any rational argument hidden away behind those accusations and they suspect that they are made so vociferously because such an argument is lacking.
So I think this debate is good for teachers. As a profession, we can only gain from it. That’s what keeps me blogging.
I noticed some discussion recently about Edward de Bono’s silly thinking hats. So I thought it might be worth re-posting this old piece that first appeared on the websofsubstance blog back in September 2013. Apologies for the fact that I’ve since reused the voice projection anecdote on this blog.
When I was a young pup, during my first year of teaching, I had to attend a special training session each week with the other young pups and our professional tutor. One week, this session was led by a drama teacher and the subject was the proper projection of the voice. We were all stood in a line and asked to say the word, “Now,” in turn. Apparently, we were to do this from our stomachs – which oddly seemed to be located in our intestines – and not from our throats. I failed. So, the instructor asked me to jog on the spot and say, “Now.” Begrudgingly, I did this but it seems that this exertion was in vain because I was still utilising my throat in the process.
Not to be discouraged, the instructor had another idea. I should run from one end of the room to another saying, “Now,” repeatedly as I did so.
“No,” I said throatily, “I won’t be doing that,” and I sat down.
My professional tutor was embarrassed and there was something of a flap before we all agreed that it was perfectly fine for me to sit out the rest of the activity.
This highlights the importance of seeing things from multiple perspectives. What the instructor perceived to be playful and constructive, I perceived to be pointless and demeaning.
Imagine, therefore, that someone were to ask me – literally or metaphorically – to don Edward de Bono’s red thinking hat and declare my emotional reaction to a proposal to alter the end-of-term reporting criteria. On a good day, I may confect something trite in order to move the discussion on to the next step. On a bad day, I might just refuse to play.
Further, imagine it is 2006 and the boss of a big bank is conducting a thinking hats session around the proposal to take over a profitable sub-prime mortgage provider. Imagine an underling is given the job of performing some ‘black hat’ thinking in a meeting, divining the potential problems. Which of the following scenarios do you think would be most likely?
1. The underling plays “It’s the End of the World as We Know It (And I Feel Fine)” by R.E.M. on the boardroom sound system whilst swaying rhythmically and issuing dire prognostications about the death of the bank, a global financial crisis and huge sovereign debts accrued in bailing out a banking system deemed too big to fail.
2. The underling notes some branding differences between the two banks that will need to be overcome.
One of the largest risks we face is hubris. Just in the last decade, we have had the Iraq war and the banking collapse. Whatever you think about the moral case for the Iraq war, there is no denying that it was badly thought through, largely due to hubris. The banking crisis is a monument to hubris. Could it have been avoided with thinking hats? Probably not. What is worse, such strategies have the potential to provide a veneer of proper analysis where no such analysis exists. They replicate the form of different types of thinking without necessarily replicating their substance. The confusion of form with substance, the idea that by adopting a form you can short-cut the need to engage in the substance, is a significant error of reason.
Simply donning a white hat does not give you the knowledge – known as ‘information’ in the thinking hats schema – that you need to make a good decision. Yet, this is where the majority of the work is to be done in the majority of cases: the collation, evaluation and comprehension of sufficient domain knowledge.
I first came across thinking hats when I picked up de Bono’s book as a Penguin Classic a few years ago. It was a cheap, impulse buy. I assumed that it would contain psychological insights based upon, well, the science of psychology. What I found was a sequence of assertions and a description of a method, plus lots of testimonials. I soon tired of this, declaring the whole thing ‘silly’ and not paying it further attention.
My next encounter was quite recently, in the book ‘How Mumbo-Jumbo Conquered the World’ by Francis Wheen. To my astonishment, I found that the Blair government had actually been a big fan of de Bono and his thinking hats. Wheen explains:
“When Blair entered Downing Street, several executives from Andersen – and McKinseys, the other leading management consultancy – were seconded to Whitehall with a brief to practise ‘blue skies thinking’. Soon afterwards, in perhaps the most remarkable manifestation of New Labour’s guru-worship, they were joined by Dr Edward de Bono, whose task was ‘to develop bright ideas on schools and jobs.’
In the autumn of 1998 more than 200 officials from the Department of Education were treated to a lecture from de Bono on his ‘Six Thinking Hats system’ of decision making… ‘Without wishing to boast,’ he added, ‘this is the first new way of thinking to be developed for 2,400 years since the days of Plato, Socrates and Aristotle.’”
Francis Wheen’s book was a real eye-opener and I recommend it. He goes on to explain that the warning signs around de Bono’s judgement were already there for the Blair government to see:
“In his 1985 book… Edward de Bono offered the lessons that might be learned from a number of people… The millionaires he extolled included US hotelier Harry Helmsley, later convicted of massive tax evasion, and Robert Maxwell, subsequently exposed as one of the most outrageous fraudsters in British history.”
So I knew a little about Edward de Bono and his thinking hats but I hadn’t been aware that this approach had made it into schools until I read Tom Bennett’s excellent book, ‘Teacher Proof’ – another recommended read. It seems that some teachers are using the six thinking hats in class to develop thinking amongst their students.
I sometimes offend people when I criticise forms of pedagogy. Let me be clear; it is perfectly valid to criticise or even mock a teaching approach. This is not a personal attack. However, some people choose to see it as such: I am attacking something that they do and so they see it as an attack on them personally. It’s as if claiming that the England team’s tactics are unsound is a personal attack on the integrity of the manager. It is not. Such claims are fair in a free society. But this convenient line of reasoning is often effective at shutting down legitimate debate in education.
So here are my reservations about the thinking hats:
1. As I have mentioned, adopting the form of a certain type of thinking is not a short-cut to the substance. Many responses are likely to be lazy, platitudinous and uninformed. Pretending to be a wizard doesn’t make you a wizard.
2. The role of knowledge is diminished. The white hat (information) is just one of a total of five active hats that are shepherded by the blue managerial hat. In real decisions, knowledge plays a much more central role and is critical to any success or failure.
3. It relies on a proposition; something open-ended to be discussed. This is not necessarily bad in and of itself, but open-endedness is fetishised in some quarters in education at the expense of the transmission of knowledge. Such strategies fit this agenda.
4. It is silly.
Does this mean that you should never touch the hats? Actually, no, it does not. I don’t care for them but I can see that they could break up a lesson in an interesting way. They could represent a fun way of having a classroom discussion. Even if we discovered the most efficient, optimal form of teaching, we wouldn’t want to use it all of the time; students would become tired because thinking is hard and then the strategy would be suboptimal. There is something to be said for mixing things up a bit. I just don’t think that thinking hats should be taken too seriously.
There’s another reason why I wouldn’t ban the hats. I find Debra Kidd’s defence of thinking hats to be lucid, detailed and convincing, although not convincing enough to change my mind just yet. I believe that if and when Debra uses this approach then she and her students find it to be effective. This may be because of a placebo effect. It may be because Debra integrates a lot of her experience and wisdom into its application – like the man who made soup from a stone. Or, it may well be that I am completely wrong. I’m not sure that there is enough evidence to decide it one way or the other.
What I would be dead against is a whole school ‘thinking hats’ policy where begrudging, rueful teachers are forced to apply thinking hats in a tokenistic way. I’ve been there with Building Learning Power and it’s a bad place.
Can you imagine; all those forlorn faces sitting underneath those brightly coloured hats…
I am actually slightly more interested in what to teach than how to teach. However, teaching methods are more amenable to experiment than curriculum content and so I find myself discussing them more often.
The reason why the effect of our choice of content is not easy to test highlights an important flaw in many experimental designs. Think about it: what will you test students on at the end of your experiment? If this content was taught in one condition but not in the other then I can tell you the outcome already. So any fair test of content has to involve a transfer of understanding from one context to another. This is hard to achieve and relies on an element of chance.
So, setting the question of content aside, what are the best teaching methods?
Teacher-led is better
In the words of Jeanne Chall:
“The methods with the highest positive effects on learning are those for which the teacher assumes direction, for example, letting students know what is to be learned and explaining how to learn it, concentrating on tasks, correcting errors, and rewarding of activities – characteristics found in traditional, teacher-centered education… Quite consistently, when results were analysed by socioeconomic status, it was the more traditional education that produced the better academic achievement among children from low-income families.”
There is no great mystery here. If you want a child to learn something then it is more effective to teach it to them than to try to create the conditions through which the child will come to understand that something for themselves. Any teacher who is well versed in formative assessment routines will be aware of just how difficult it is to convey the subtleties of an academic subject while avoiding key misconceptions, even with constant, minute-by-minute attention. The idea that students receiving less teacher input will somehow do better is quite far-fetched.
For instance, what would you predict to be more effective: teaching children how to write or just asking them to do lots and lots of writing? The evidence is clear that explicit writing instruction is superior.
So why is there experimental evidence for alternatives to teacher-led instruction?
The reason why sensible people stray from this fairly obvious position is perhaps related to the way much education research is conducted. If you want to show that your pet approach works then there are plenty of ways to go about this. Firstly, you can try manipulating content. For example, imagine an experiment where one group receives teacher-led instruction about the rate of chemical reactions and the other group conducts experiments. You then give students a test that is all about conducting experiments; the group that learnt through experiments does better, and so you conclude that this is more effective than teacher-led instruction.
You could also run your well-resourced and heavily hyped intervention against a poor-quality version of the alternative or perhaps against no alternative at all. There are plenty of experiments where doing something is compared to doing nothing. The Education Endowment Foundation (EEF) seem keen to fund such studies and it is a major reason why I have argued for more ABC designs where two competing interventions are compared against each other and a control.
I suppose the EEF studies do offer us something: if you can’t get your intervention to work under such favourable conditions then it really is a dead duck. The EEF trials of Project-Based Learning and Let’s Think Secondary Science would seem to fit this bill.
This leaves us with a landscape where, as Professor John Hattie is famous for saying, “everything works”. Hattie’s solution is to corral similar studies together using the tool of meta-analysis and then only look for interventions that have an ‘effect size’ that is greater than a certain value (0.40 standard deviations). I am no longer convinced about this solution – it seems arbitrary and takes no account of the quality of the studies that have been fed into the meta-analysis sausage machine.
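For readers unfamiliar with the metric, an ‘effect size’ here is a standardised mean difference (Cohen’s d): the gap between two group means divided by their pooled standard deviation. A minimal sketch, using made-up scores rather than data from any real trial:

```python
# Cohen's d: the standardised mean difference that meta-analyses aggregate.
import statistics

def cohens_d(treatment, control):
    """Difference in means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical test scores for two classes (invented for illustration)
intervention = [72, 75, 80, 68, 77, 74, 81, 70]
comparison = [65, 70, 72, 60, 68, 66, 73, 64]
d = cohens_d(intervention, comparison)
print(round(d, 2))  # anything above 0.40 clears Hattie's 'hinge point'
```

The point of the sketch is that d is just an arithmetic summary of two groups of scores; it says nothing about whether those scores came from a well-controlled study or a shoddy one, which is exactly the weakness of a bare 0.40 threshold.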
Well-designed experiments with good controls do tend to consistently show evidence in favour of explicit, teacher-led instruction and so do natural experiments and correlations (see the links here or Rosenshine’s article). The superiority of teacher-led approaches jumps out of the two most recent rounds of PISA data. Yes, these are only correlations but they are highly suggestive and suffer far less from potential experimenter bias. They also tell us about what happens in real-world classrooms.
All explicit, all the time?
If you are going to argue that alternatives to explicit instruction are more effective then I will disagree. Similarly, if you want to argue that they are more motivating, I will still disagree. One major component in long-term motivation is the feeling of getting better at something – explicit instruction can deliver this feeling because it is effective.
However, this does not mean – in the words of one critic who dubbed me an ‘extremist’ – that I favour, ‘all explicit, all the time’. All models of explicit instruction include the gradual release of responsibility to the student. Once students have a good grounding in a topic then it is possible for them to do more open-ended and investigative work. For students who have reached a certain level of expertise, this will be more effective than redundantly listening to explanations of concepts that they already understand.
There is also an argument for variety. I don’t think explicit instruction is demotivating but I do think that doing the same thing all the time could definitely be demotivating. We might decide to trade efficiency for variety. A research project may result in less learning overall for the time invested but we might decide that we want to give students that experience. I’m fine with that provided that we do it with our eyes open.
Nevertheless, the evidence is clear. The best way to teach academic content is with explicit instruction.
It was a Saturday morning and I was stood in a field in the North of England, holding a shotgun. I had just missed five clay pigeons in a row while being gently mocked by the instructor. It was my friend’s stag weekend. The night before had been a late one and I was feeling a little depleted.
I gestured to a grassy bank and said that I was going to sit down. “I’ll just watch from now on,” I explained. The instructor, feeling a little guilty, tried to talk me around; he wanted me to get my money’s worth. But I was having none of it. Clay pigeon shooting and me were over.
This is how maths must feel to many students, except that we rarely let them sit it out – perhaps with the exception of some kinds of group work. Instead, we keep throwing them into the struggle and making them confront their own shortcomings. It can’t feel good. Maths is unforgiving in this regard. You can write a sentence full of spelling mistakes but, at the end of it, you’ve still written a sentence. If you’ve failed to solve a maths problem then you’ve achieved nothing.
Progressive maths educators recognise this issue. For them, it is even more prominent because of their view that learning maths should be mainly about problem solving. They valorise open-ended problems. They think these are motivating. But the motivation is fragile. It can be easily shot down.
So their solution is to change the personalities of maths students. Instead of being bummed-out by failure, students should see this as a good thing. I don’t think it’s true to claim that every time you make a mistake, your brain grows, but this kind of statement might serve a useful purpose in shaping mindsets. We should teach students to value struggle.
Such a strategy might even work for a short period of time. We might be able to psych our students up for the struggle so that they feel positive about it. But learning maths is a long term process. It will include times when our students are feeling depleted. It seems a stretch to think that we can change their attitudes in such a radical and persistent way.
There is an alternative, of course. Progressive educators want to take traditional maths teaching and revolutionise it. Explicit instruction, on the other hand, enhances traditional maths teaching with research-based practices. It works with the grain of how teachers naturally teach maths.
One such enhancement is to tailor the teaching to obtain a high success rate of around 80% or more. This forms a key part of the gradual release of responsibility that starts with the teacher fully explaining and demonstrating and that ends with students independently solving complex problems. One tactic that I use to obtain success early in this process is the use of example-problem pairs: a fully worked example placed next to an almost identical problem for the students to solve.
This builds motivation because success builds motivation. Hitting four out of five clay pigeons is far more motivating than hitting none.
So you have a choice. Which option seems the best bet to you? Should you seek to cause a long-term change to your students’ personalities or should you enhance your teaching to ensure a higher success rate?