The problem we face in education across much of the anglophone world is quite clear: silly ideas. But if this is the problem, then what is the solution?
I am keen on the role of argument. I believe that if we point out the flawed logic underpinning popular conceptions then, over time, we might start to change the terms of the discussion. Clearly, there are those within the education establishment who still refuse to recognise that there is a debate to be had. You can tell by the way they react to criticism: as if they’d just seen a wombat reciting Sylvia Plath.
I think we’re getting there. We can’t be ignored any longer. And nasty, irrational attacks just make our case seem more reasonable. But there’s a long way to go.
I can’t help noticing that many of those who are shaping the global discussion about teaching methods do so from a position of association with free schools and academies in the UK and Charter schools in the US. This is unsurprising; such schools provide a space where unconventional thinking is allowed and encouraged. And unconventional is what we need.
I have not been convinced up to this point that these schools lead to overall system improvement. I would certainly commend some of them. But I also believe that many schools will market themselves as much on silly ideas as they do on sound ones – just look at how independent schools tend to brand themselves at present.
The Centre for Independent Studies (CIS), a thinktank that prioritises liberty and free-enterprise, today released a report authored by Trisha Jha and Jennifer Buckingham. It is a measured report, short on hyperbole. It weighs the evidence and just about finds a positive effect in favour of Charter Schools and their equivalents. Now, you might expect this from such an organisation but I would challenge you to read the report before dismissing it.
You see, I don’t think this is necessarily a left-right issue. If such schools were allowed to make profits then it might be – something that the report does not rule out. However, essentially, we are talking about alternative ways to deliver a public service that will remain free to users.
The attraction is that this would extend choice to those who cannot presently afford it. Independent schools in Australia are directly subsidised by government but remain out of reach for many Australians, who are instead locked into a local school through residential zoning. So perhaps Australian Charters could add something to the mix. I am certainly interested in their potential to provide proof-of-concept for schools organised on very different lines to standard government schools.
Immediately after the CIS report was published, a counterargument was presented in The Conversation. I’m not sure that it is the strongest possible attack on the idea of Australian Charters. One passage stood out among the others. Discussing Charter Schools, Dean Ashenden observes:
“Their record in innovation is similarly mixed. Some do use their freedom from the usual rules and regulations to innovate, but most pitch to parents in the same way as Australia’s independent schools. They sell on “traditional” values, curriculum, teaching methods and discipline.”
I suspect Ashenden’s ‘innovations’ are pretty much equivalent to my ‘silly ideas’. And I reckon that there are a lot of parents out there who would be keen on a Charter School that sold itself on traditional values, curriculum, teaching methods and discipline. Are these parents wrong? In fact, one of Jha and Buckingham’s main points is that it is exactly this type of Charter School that is the most effective.
There might be something in this after all.
Yesterday, Doug Holton dismissed me as a troll. I had written a blog post that was highly critical of an article that I had read in The Age. Holton effectively told one of the journalists that I was not worth bothering with.
It seems that some people define ‘troll’ to mean ‘someone who says things that I don’t like’. I think this debases us as an online community. It appears to be an attempt to shut down robust debate.
Moving past the trolling issue, Holton’s complaint seems to be that I don’t pay attention to counter-evidence. The link that Holton provides is to the IES evaluation of Direct Instruction. I don’t actually advocate Direct Instruction programs, although I find them interesting and I refer to the evidence in favour of them when advancing my case for explicit instruction more generally. And so it is highly relevant to this case that IES finds pretty much no effect for these approaches.
There is a caveat.
IES has extremely high standards for the evidence that it will accept when producing its reports. It is a comment on the quality of educational research that IES often rejects more studies than it accepts. For its overall evaluation of Direct Instruction – a program that has now been developed across maths, literacy and other domains and upwards through the different grade levels – IES is able to make use of a single study; one which it accepts ‘with reservations’:
“The study was classified as “meets evidence standards with reservations” due to severe overall attrition. Based on the number of classes and children in the original study, the sample size at assignment was 368 children with disabilities [Cole et al. (1993) stated that the full sample included just 206 children]. However, the analysis sample was 164 children. Based upon the inconsistency between the figures at assignment, the study was downgraded for severe overall attrition.”
The study is interesting – it compares early years special education students who receive Direct Instruction with those who receive something called “Mediated Learning” which is based upon the work of Feuerstein. I’d not heard of this before and it is difficult to form a picture of exactly what is involved from the research paper. The result is certainly not what I would expect.
Oddly, the IES has a separate report on ‘Reading Mastery’ which shows significant gains for that particular program. Reading Mastery is a Direct Instruction program. Again, the report ultimately rests on a single study: a paper from 2000 that meets the required standards. I’m sure that there is logic to this, but why would that study meet the standards for evaluating Reading Mastery and yet not be included in the report evaluating Direct Instruction?
I have interacted with Doug Holton before but I would have to suggest that the communication is a little one-way. He wrote a piece that claims that people like me get our views from Wikipedia and listed a whole load of evidence to support his ideas. I commented on this and raised an important point. Much of the evidence that Holton presents is of college-level studies that compare supposedly ‘active’ learning with traditional lectures. I’ve read a number of these papers now and it is often hard to pin down exactly what the active learning condition consists of. In some cases, we are clearly comparing straight lectures with lectures where students interact via clickers.
I would predict that the interactive lectures would be more effective than the non-interactive ones. Firstly, you have the fact that these will be something of a novelty and are likely to generate a Hawthorne effect. Secondly, I actually promote interactivity during explicit instruction because I think it helps maintain attention. I suggest that students should be regularly called upon to answer questions and that these students should not self-select. Indeed, a key feature of Direct Instruction programs is that they are highly interactive.
So I’m not sure that all of Holton’s evidence is actually relevant to the question of explicit classroom instruction versus constructivist-inspired approaches, i.e. the questions addressed in the Kirschner, Sweller and Clark paper (below) that he criticises.
Although I have now made this point a number of times, Holton has never addressed it in my interactions with him. I invite him to do so.
What is this all about? I am broadly in favour of explicit instruction in K-12 education rather than inquiry learning and the like. Below, I list some evidence that I believe supports my own position. I have copied this evidence from a previous post.
2. Barak Rosenshine reviewed the evidence from process-product research and found that more effective teachers used approaches that he called ‘direct instruction’ and which I would call ‘explicit instruction’ in order to distinguish it from the more scripted Direct Instruction programmes developed by Engelmann and others (such as DISTAR). Most of this is paywalled but he did write a piece for American Educator.
3. Project Follow Through, the largest experiment in the history of education, is generally considered to have demonstrated the superiority of Engelmann’s Direct Instruction (DI) programmes to other methods, including those based upon constructivism. It is important to note that DI was not just the best on tests of basic skills but it performed at, or near, the top on problem solving, reading comprehension and for improving self-esteem.
6. A meta-analysis found a small effect size for ‘guided discovery learning’ over business-as-usual conditions and a negative effect size for pure discovery over explicit instruction. Whilst this might be seen as evidence for guided discovery learning, it is worth bearing in mind that the studies included were not generally RCTs and so the experimental conditions would have favoured the intervention (which is why Hattie sets a cut-off effect size of d=0.40). The definition of guided discovery learning also included the use of worked examples which are generally considered to be characteristic of explicit instruction.
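For readers unfamiliar with effect sizes: the d values discussed above are standardised mean differences. A minimal sketch of the calculation, with invented group statistics purely for illustration:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d: the difference between treatment and control means,
    divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical data: treatment scores mean 55 (sd 10, n 30)
# versus control mean 51 (sd 10, n 30)
d = cohens_d(55, 51, 10, 10, 30, 30)  # d = 0.4, exactly Hattie's cut-off
```

A d of 0.4 here means the treatment group's average sits 0.4 standard deviations above the control group's, which is why a cut-off at that level screens out the small effects that weakly controlled designs tend to inflate.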
11. Findings on the best way to teach cognitive strategies (such as reading comprehension) also echo the findings of the process-product research i.e. that an explicit approach is more effective. (You may, as I do, still question the value of teaching such strategies or, at least, the time devoted to it). [Paywalled]
12. Classic worked-example studies show the superiority of learning from studying worked examples over learning by solving problems for novice learners. Worked examples are a feature of explicit instruction whereas problem solving (without prior instruction) is a feature of constructivist approaches.
[There are others – I’ll add to this list as I remember them]
This post originally appeared on a different forum a couple of years ago:
Since my last post, I have been involved in lots of discussions on Twitter about group work. I have begun to wonder whether some people think that I actually coined the term ‘social loafing’. Quite the contrary; social loafing is a well investigated phenomenon. However, few teachers know about it.
In fact, nobody ever taught me about social loafing. Group work was always encouraged as a ‘good in itself’ when I was training although I did notice how scarce it was in the classrooms of effective teachers in my placement schools. No, I only found out about social loafing research about a year ago. And I came to it by thinking about meetings.
I know that you wouldn’t want your mum, friend or partner to say this, but I can get away with it because I am a teacher myself: teachers have an extraordinary ability to spew forth platitudes in meetings, and at great length. You know the sort of thing: “We need to put the children at the centre of what we do.” Really? I was going to go with jam. “We need to develop agency.” How, exactly? And my favourite deepity of all: “We need to engage our learners in learning how to learn.”
So it was in the midst of one such meeting that I began to wonder whether there was any research on the effectiveness of, well, meetings. At home, I took to Google Scholar and found that there was.
There is a quite wonderful experiment that has been conducted many times in different contexts. You give people a brainstorming task to do. There are certain rules that are followed; for instance, there should be no evaluation of ideas in case this causes people to withhold. You then ask some participants to brainstorm alone whereas you place others in groups. The results are quite clear; more unique ideas are generated by four people working individually than by a group of four. The disparity increases with group size. All of the obvious variations have been performed and the obvious questions tested, such as whether group ideas are better. But the findings are robust; groups do less useful work than the same number of individuals.
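The comparison in these studies is between ‘real’ groups and ‘nominal’ groups: the ideas of four people who worked alone are pooled, de-duplicated and counted against the output of four people who worked together. A toy sketch of that scoring, using idea lists I have made up:

```python
def nominal_group_score(individual_idea_lists):
    """Pool the ideas of people who brainstormed alone and count the
    distinct ideas, as in nominal-group comparisons."""
    pooled = set()
    for ideas in individual_idea_lists:
        pooled.update(ideas)
    return len(pooled)

# Invented data: four individuals brainstorming energy sources alone,
# versus one real group of four doing the same task together.
individuals = [
    {"solar", "wind", "tidal"},
    {"wind", "geothermal"},
    {"solar", "biomass", "hydro"},
    {"nuclear", "wind"},
]
real_group = {"solar", "wind", "hydro", "geothermal"}

nominal = nominal_group_score(individuals)  # 7 distinct ideas
actual = len(real_group)                    # 4 distinct ideas
```

The robust finding is that the nominal count reliably beats the real-group count, and the gap widens as group size grows.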
This is easy to understand. Thinking is difficult and we do whatever we can to avoid it. I am a maths teacher but if the calculator is out on my desk it is hard for me to not use it. Groups provide us with cover to slack off. This is social loafing. Most research papers are concerned with how to mitigate this effect; making individual group members personally responsible, for instance, seems to help. However, it apparently does this by making group work more like individual work.
Slavin does something similar when he writes about group work. He is undoubtedly of the view that group work is effective and that there is research evidence to support this. However, he also spends a lot of time on how to do it right; how to mitigate the well-known problems.
But – here’s a thought – perhaps we can avoid the negative effects of group work by simply not doing group work. What would we lose? I suspect we would not lose a great deal. Collaboration can be very powerful but, if this is what you want, I would suggest experimenting with short, controlled periods of paired work. There is also a lot of talk about learning certain skills through group work such as ‘tolerance’. Perhaps, in a group-work-free world, children would not develop such skills?
Although you might be able to instruct students in a few basic heuristics such as ‘wait your turn’, tolerance is not essentially a skill. It is a judgement based upon knowledge and it is not always a good thing. Tolerance of racism from one of your group-mates, for instance, would be a bad thing as far as I am concerned. How can we educate children about this? We teach them history and science so that they can form their own judgements. And we teach these subjects as effectively as we can. Trying to teach tolerance as a ‘skill’ implies that the teacher makes these judgements on the students’ behalf.
It’s not even as if social loafing is the only problem associated with group work. What about the problems of having to orchestrate the class with all this group work going on? The teacher’s time is split between groups so he or she will often have to keep repeating the same things. Also, how do we know that the students collaborating in a group at the back of the room are not cementing and reinforcing common errors and misconceptions? We probably won’t find out quickly and so undoing these issues will make the learning less efficient.
Of course, that nice, small class of eighteen-year-olds who have opted to study Philosophy may well seem to take to group work. In fact, the learning may be almost as effective as in a more didactic style. Perhaps the variation that it offers may even, through increased motivation, make up for its other shortcomings. Perhaps.
I remain to be convinced.
It can be easy to be negative about education. Just this week, I felt the need to respond to some poor ideas that had been publicised in The Age newspaper, provoking one of the journalists to exclaim, “I’m glad I’m not one of your students.” That’s the sort of thing people feel free to say to teachers. It’s because we’ve all been in the classroom and so we all imagine ourselves sitting in the back row when a teacher is pontificating.
So what would that look like in my classroom? As an advocate of explicit teaching, I thought it might be worth sharing three ideas, based upon a mixture of research evidence and craft knowledge, that I use in my own teaching. I teach maths and science but some of my suggestions have broader applications.
Have a robust lesson framework
When students enter my classroom, there is a box on the whiteboard into which they may write the numbers of any homework questions that they found difficult. They then take their seats and begin a starter activity that’s on the screen. After this, we discuss the starter activity. It is usually related to the previous lesson and similar to the homework so, at this point, some of the homework questions get rubbed off the board as the students’ problems are resolved. I then set the new homework and go over any remaining questions from the previous one. Sometimes, if I sense that only a few students had problems, I will leave this to the end of the lesson when other students are working independently.
I then introduce the new material, making use of worked examples when I can (see below). The final phase of the lesson involves students working independently on questions.
I use a set of PowerPoint slides to frame all of this. They are not just there to display notes – although I do use slides for this purpose and I usually print out these notes for the students. In addition, I have a slide that reminds me to take the register; we have an electronic registration system and I easily forget about it when in full flow. Also, the fact that there is a slide near the start of the PowerPoint template that I use labelled ‘Homework’ means that I rarely forget about it and am actually forced to think about this prior to developing the rest of the lesson. This gives focus.
The PowerPoint then becomes an object that can be reviewed and changed. It is particularly powerful if the construction of resources like this can be shared across a teaching team. If curriculum authorities can avoid the temptation to fiddle with the syllabus every five minutes then having a framework like this acts as a ratchet rather than a wheel; when teaching the unit for the second time, you can start where you left off the first time. It doesn’t have to be a PowerPoint, of course; it’s the mechanism that counts.
Optimise use of worked examples
Worked examples are powerful learning tools. They reduce the cognitive load involved in solving problems and so they allow attention to focus on the salient features of a process. However, I doubt whether many of us use them optimally. In fact, probably the worst way to use worked examples is to present a whole series of different ones before giving students an exercise to complete. Yet, this is something that I have often done.
As teachers, we suffer from the curse of knowledge. We can make relatively large conceptual leaps between different examples and we assume that our students can do the same. However, our students are novices and they generally need to proceed in smaller steps, particularly those students who are struggling.
With this in mind, I have started to structure things differently. I will give a worked example and then ask the students to complete a question, straight away, that is very similar to this example. In maths, you can often achieve this by asking pretty much the same question with different numbers in it.
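Because I teach with slides anyway, this near-duplication is easy to automate. A throwaway sketch of the idea – the template string and helper function are my own invention, not any established tool:

```python
import random

def make_variants(template, n, low=2, high=12, seed=0):
    """Generate n practice questions with the same structure as a
    worked example, varying only the numbers.  A fixed seed keeps the
    question set reproducible between lessons."""
    rng = random.Random(seed)
    return [
        template.format(a=rng.randint(low, high), b=rng.randint(low, high))
        for _ in range(n)
    ]

# If the worked example were "Solve 3x + 5 = 26", these follow-up
# questions keep the structure and change only the coefficients.
questions = make_variants("Solve {a}x + {b} = 26", n=3)
```

The point is pedagogical rather than technical: the first practice question should differ from the worked example in as few surface features as possible.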
In classic worked-example experiments, students are simply presented with the example. As teachers, it is instinctive to want to work the example in front of the students, explaining our thinking as we go. However, we need to be careful not to provide too much for our students to attend to and thus increase cognitive load. Communication needs to be focused on the features of the example.
Interestingly, although much of the early research on worked examples was completed in the area of maths, similar effects have now been found across a range of subjects. A ‘worked example’ effect has been found for an annotated section of a Shakespeare play, for instance. Similarly, it is quite reasonable to assume that the construction of a paragraph would act as a worked example.
Another possibility is to ask students to complete some of the steps in a worked example. This makes use of the completion effect – it lowers the cognitive load compared to solving the entire problem independently but can aid retention of the example.
We should also be mindful of the expertise reversal effect – studying worked examples is not effective when we already have a lot of expertise in the area. I have some highly talented maths students who are better served by solving problems themselves than following through my worked examples. And so this is what they do.
Minimise your marking
I rarely mark homework. Students have the numerical answers – these are in the back of the textbook and I provide them for any other questions that I set. I then focus on ensuring the homework is completed, that solutions are worked in full (otherwise they could just be copied from the answers) and that they have been checked against the answers. I am therefore able to perform such a check every lesson in about five minutes while my students are completing a starter activity.
The problem with homework is that you can never be sure of the conditions in which it was completed. Students might have had help. They will put in differential amounts of effort.
However, barely a week will pass without me setting some sort of test or quiz in class. I take this up and mark it. I try to set quizzes a couple of weeks after I have taught the concepts in order to disrupt the process of forgetting. I am able to control the conditions and gain more realistic feedback on the progress of my students.
Of course, waiting two weeks to discover anything about what my students know would be far too long and this is why I ask a lot of questions in class and why I am quite a fan of mini-whiteboards.
Marking can quickly grow out-of-control when you couple unrealistic expectations and policies with teachers’ own sense of guilt. Complex pieces such as essays can be particularly time-consuming. I would recommend a reductionist approach – you don’t check whether a student has read a book by asking them to write an essay where you then correct all of the grammar. That’s too circuitous. You check whether a student has read a book by setting a quick multiple-choice quiz. Focus on the thing you want to assess.
Yesterday, The Age published an article that represents much of what is wrong about discussions of maths teaching in Australia. The only consolation, as far as I am concerned, is that I am often told that people don’t actually think these things. Here, we have documentary evidence.
The piece starts by lamenting Australia’s stagnation in PISA maths. It is debatable as to how much attention we should pay to this measure but, given this opening gambit, it makes an interesting lens through which to examine what is being proposed.
The article encourages us to ditch textbooks. Simon Pryor, executive director of the Mathematics Association of Victoria, and clearly someone who should know better, is quoted as saying, “We advocate throwing away textbooks … teaching to a textbook should not be the sole thing that the teacher is doing.”
Firstly, this is a non sequitur. You can possess textbooks without teaching to the textbook being the sole thing that you do. Secondly, there is plenty of evidence that high performing countries, as measured by PISA, consistently make use of good quality textbooks. If anything, they make more use of them than we do. Aided by a stable curriculum, these texts can be refined over time, adding to a level of curriculum coherence.
Tim Oates makes these very points in an important paper for Cambridge Assessment.
A textbook-free maths department is not a nirvana of personalisation. It is a department with a large photocopying bill where teachers are all scrabbling around at the last minute for resources that are loosely relevant.
Apparently, we should be moving away from ‘rote’ learning multiplication tables. I suspect that this is not what they are doing in Shanghai. Again, we seem to be presented with a false choice. Either children learn their tables through singing a song and cannot tell you what 6 x 8 is without singing the whole thing, or they don’t learn their tables.
Has nobody ever heard of times-tables grids where children are quizzed on their recall in a random order? Why is it not possible to both memorise the answer to 6 x 8 and to know what it means? Indeed, this is important.
If a child is busy trying to work out a simple multiplication like this from first principles then she cannot also attend to other aspects of a question. This leads to something known as ‘cognitive overload’ and is a key reason why a lack of knowledge of basic maths facts impairs performance on more complex problems.
The ‘real’ world
Why is maths held to a standard that no other subject must meet? We never talk of how to solve mundane, everyday problems with knowledge of Shakespeare or the history of the Australian Federation. We see these things as worth knowing in their own right. However, when it comes to maths, it’s only any good if we can directly apply it to a contrived problem about how much paint we need to buy in order to cover the garden shed or something like that.
I do not buy the argument that this leads to greater motivation, particularly in the long term. I also like the quote attributed to Jim Rohn, “Motivation alone is not enough. If you have an idiot and you motivate him, now you have a motivated idiot.”
The Australian future does not need motivated mathematicians; it needs competent ones. And becoming more competent at something can indeed be motivating. This is why the children taught maths explicitly in Project Follow Through saw the greatest growth in self-concept. I suggest that we first teach maths well and then see what happens.
The anti-formula rhetoric of the article is pure constructivist dogma. I suspect that most maths teachers don’t simply teach formulas, they also explain where these come from. The idea that students should discover fundamental theories of mathematics by themselves, whether by folding pieces of paper or otherwise, is a recurring and damaging theme in the history of education. Let’s face it; this stuff took professional mathematicians a lot of time to work out. This paper by Kirschner, Sweller and Clark is probably the best on the subject.
Where is the evidence that this is what they’re doing in Singapore?
Reading this nonsense, any aspiring maths teacher could be forgiven for thinking that the future involves personalised learning in mixed ability classrooms where children play with plastic blocks rather than learn the basics. This is highly damaging and, if implemented wholesale, would likely lead to further declines on international tests.
If we really wanted to learn the lessons of PISA then we would be encouraging whole-class explicit instruction enriched with quality textbooks.
Leaf blowers are annoying, right? They make noise and wake people up on Sundays and even Thursdays. They emit harmful carbon dioxide, are heavy to carry and are hard work for the user. And you have to wear ear defenders which look kinda dorky.
Leaves are what make a leaf blower worthwhile. Only when you see the ease with which a leaf blower can gracefully and efficiently corral leaves does it start to make sense as a piece of garden machinery. Better still, ask someone to try to gather leaves with a simple table fork. After a few hours of this, see how readily they will accept the need for a leaf blower.
This whole metaphor – for it is a metaphor for something and we’ll see what that is in a minute – hit me the other day whilst I was watching a cool and zeitgeisty TV show. I immediately realized…
I have just had an amazing idea that I reckon will help a lot of folks out.
At present, if you wish to promote a faddish teaching method then you have one of two options:
- Just tough it out. Make lots of colourful diagrams, generously lace your presentations with jargon and simply ignore the fact that there is no evidence to support your position.
- Conduct a badly controlled experiment. Vary more than one factor at a time in such a way that you can imply that the experiment shows evidence for one factor – the method you wish to promote – even though any difference between experimental and control groups is likely to be due to one of the other factors. Of course, you could just not have a control group.
However, it strikes me that there is a highly attractive alternative; the use of thought experiments. Thought experiments are potentially limitless; you can run them as many times as you want in order to generate whatever statistical power you wish. This means that you will no longer have to rely on using tortured relativist logic to try to explain why quantitative studies are for losers.
Moreover, thought experiments are unconstrained by the capriciousness of reality. Once you have decided what you want a thought experiment to demonstrate then it’s just a simple case of making-up some sort of description of it and a set of results. We could even farm this out to the internet. The Foundation could harness the disruptive technology of distributed collaboration. Busy consultants can offer up a research question to the network and a host of cognitive empiricists could set to work pretty much straight away.
[Post inspired by @sblakey]