Regular readers will know that I often link to a paper by Kirschner, Sweller and Clark to support my argument for explicit instruction. It’s a great paper but it is sometimes dismissed by critics due to its name, ‘Why minimal guidance doesn’t work.’ It turns out that nobody will own the concept of ‘minimal guidance’. They don’t recognise it in their own approach which they always insist contains loads of guidance.
This is a shame because the argument in the paper actually sets out the need for full guidance by providing worked examples or other forms of explicit instruction. Perhaps this is why, when they rewrote their article for an audience of teachers, the authors discussed the case for ‘fully guided’ instruction.
Many teachers and academics are against full guidance and so the argument applies to their methods. For instance, a form of mathematics instruction known as ‘Cognitively Guided Instruction’ withholds explanations:
“…teachers would not show the children how to solve these problems. In fact, teachers who use CGI usually tell the children to solve the problems any way they can…
These teachers know that children are able to solve story problems without direct instruction on strategies, because children naturally direct model story situations about which they have informal knowledge.”
Interestingly, a number of CGI fans on Twitter have been arguing that this is not a form of discovery learning. If CGI is not a form of discovery learning then I don’t know what is. I think this indicates the strength of the argument against discovery learning: people would rather pretend it doesn’t apply to their methods than address this argument directly.
We tend to be attracted to discovery learning because we think it somehow leads to students learning things better. We imagine deeper kinds of learning. This idea was tested in one of the most misunderstood experiments in this field. Klahr and Nigam taught students a key scientific principle – the control of variables – either by explicit instruction or discovery learning. More of the students in the explicit instruction condition learnt the principle. But this is not the point. Of those who did learn the principle, students who learnt it by discovery were no better at evaluating science fair posters for the control of variables than those who learnt by explicit instruction.
Following the publication of the Kirschner, Sweller, Clark paper and the fallout from it, Richard Clark wrote a chapter where he identified what he sees as the key difference in the way people view guidance:
“Guidance advocates suggest that learners must be provided with a complete demonstration of how to perform all aspects of a task that they have not learned and automated previously. So even if a learner could solve a problem with adequate mental effort, guidance advocates provide evidence that it is more effective and efficient to provide a complete description of “when and how”.”
Clark contrasts this position with that of those who would only provide guidance if it becomes clear that a student cannot solve a problem unaided.
I agree with Clark that there is evidence to support the position held by guidance advocates. So let’s debate that contention rather than the meanings of ‘minimal’ and ‘guidance’.
Since my initial post on this topic, ACARA have added a note to their website to explain the changes. The main change to the NAPLAN numeracy assessment involves moving from two papers consisting of 32 questions each, one of which was a non-calculator paper, to a single paper with 48 questions. This single paper has a non-calculator section that only contains eight questions.
According to ACARA:
“…the test continues to cover all sub-domains of numeracy, allowing students to demonstrate performance across a range of numeracy skills. The reduction will not affect either the reliability or validity of the test.
Students in Years 7 and 9 will continue to answer calculator and non-calculator questions, and the number of questions requiring mental calculation (without the aid of a calculator) remains the same as in previous years – there is no reduction in the number of questions of this type.”
You may ask how it is possible that there has been no reduction in the number of ‘mental calculation’ questions when we have gone down from a 32-question non-calculator paper to just eight questions. Well, there is some logic to this. A proportion of the questions on the non-calculator paper involved things like mentally rotating shapes, constructing expressions or reading graphs. A calculator would be of no benefit for these questions. However, it seemed unlikely to me that such items would constitute 24 of the 32 questions.
So I did a check. I looked at the 2016 Year 7 non-calculator paper. I was able to identify 18* questions out of 32 that involved some form of calculation that a student could complete with a calculator if it were available. That’s more than eight. It also represents 28% of the total whereas 8 questions out of 48 represents 17%.
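The arithmetic behind this comparison is simple enough to check. A short sketch, using only the question counts quoted above:

```python
# Old format: two papers of 32 questions each (64 in total);
# by my count, 18 of the 32 non-calculator questions involved
# a calculation that a calculator could have performed.
old_total = 32 + 32
old_calculation_items = 18

# New format: a single 48-question paper with an
# eight-question non-calculator section.
new_total = 48
new_calculation_items = 8

print(f"Old: {old_calculation_items / old_total:.0%}")  # Old: 28%
print(f"New: {new_calculation_items / new_total:.0%}")  # New: 17%
```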
Personally, I don’t think 28% of questions requiring a mental or pen-and-paper calculation is enough, particularly given the widespread concern about Australia’s continued decline in international assessments such as TIMSS and PISA and specifically in the science and maths subject areas.
Arguing – as I am sure the maths subject associations would – that there is no need for students to be able to do manual calculations in an age of calculators misses a number of key points. Maths is not purely functional – it’s not just about getting a result. The functional argument is like arguing that we shouldn’t teach children how to draw because we have cameras. As well as consolidating knowledge of maths facts, mental arithmetic is likely to support all sorts of activities such as proportional reasoning, factorisation and so on that lead into higher levels of maths. Even if we did accept the functional argument, a calculator user with no mental arithmetic skills will struggle to spot when he or she has made an error.
It’s worth pointing out that the suite of NAPLAN papers consists of five assessments. Only one of these is a numeracy assessment, with the other four assessing different aspects of literacy. Now, the numeracy element is going to be reduced in size and contain a smaller proportion of non-calculator questions.
*Questions 2, 3, 9, 11, 13, 14, 17, 19, 20, 23, 24, 25, 26, 27, 28, 29, 30, 31
Last week, I attended a physics conference organised by the Science Teachers Association of Victoria (STAV). It was a curious affair – as all physics teacher conferences are – and I was, as ever, left with the feeling that I had just attended a revival meeting for a religion I don’t quite believe in.
Don’t get me wrong, I love physics. And that’s the problem.
You see, Victoria has recently rewritten its senior physics curriculum to make it groovier and funkier. We no longer have unit titles that describe what the unit is about. Instead, we have facile questions such as, “How do things move without contact?” or, “How fast can things go?” [my emphasis].
The logic of such a change is obvious. Physics is really dull, right? So by changing the titles of units into questions we’ll make it more engaging. Students will find the learning irresistible. It’s all about inquiry. Drama and history classrooms will be mothballed as kids flock to a new, funkier, mutton-dressed-as-lamb physics.
You see, to take this attitude, you have to have both a pretty low view of physics and a predilection for constructivist teaching methods. Both stances are wrong.
Which is why it was such a breath of fresh air to read what Tom Alegounarias of the New South Wales Education Standards Authority had to say about the new physics syllabus that has just been published in that state. According to the Sydney Morning Herald:
“He [Alegounarias] said there would be more focus on the topic rather than the context. Instead of studying ‘moving about’ in Physics, students would learn ‘kinematics and dynamics’.”
Three cheers for that man.
You see, I distinctly remember the suggestion that one reason for the changes now impacting Victoria was to make physics more like the now defunct New South Wales course with its emphasis on ethics and the social impact of physics. So this new turn is most welcome.
While at the physics conference, we also spent much time discussing the requirements of the new practical investigation.
Physics students in Victoria have always had to complete an investigation but it seems as if some schools may have been squeezing this requirement in order to teach the students more physics.
So the latest Victorian physics syllabus beefs up this requirement with a series of more explicit regulations and insists that students must design and conduct an experiment themselves.
This is at odds with what we know from cognitive science – Year 12 students don’t have the expertise to make this a worthwhile activity – and is essentially an imposition of inquiry learning on all schools.
Today, we heard through a Victorian Curriculum and Assessment Authority bulletin of changes to the format of national NAPLAN numeracy tests for Years 7 and 9.
Previously, there were two papers – a calculator paper and a non-calculator paper – worth a total of 64 marks. Now there is only going to be one paper worth 48 marks. Only 8 of these marks are non-calculator. This seems incredible.
Yet I can’t find any information online at present from ACARA, the authority in charge of NAPLAN. And I also don’t recall any consultation.
On the surface of it, it certainly looks like dumbing-down: a victory for the who-needs-to-know-maths-because-calculators party.
Watch this space.
My grandmother was a singer. She suffered from nerves and had a way of coping with this fear that she shared with me whenever I was involved in a school performance. “I sang for the mayor,” she would explain, “and I told myself that he’s only a man in a pair of trousers.”
This is the key insight of adulthood. Adam and Eve didn’t grow up because they became aware of their nakedness, they grew up because they became aware of their own, and everyone else’s, flaws. As children, we idolise our parents. As teenagers, we see their imperfections. This is why we like movies with heroes in them. They send us back to childhood.
Teachers live in this twilight. Students are inclined to believe us. Why? Well, the past generations of children who accepted an adult’s assertion that putting their hands in the fire was a bad idea are the ones that grew up to have children of their own.
And this provides temptation. Should we shape future minds? Should we ensure that children grow up to be right-thinking?
As a science teacher, I often hit upon existential problems. Personally, I find evolution and the Big Bang far easier to accept than the sheer immensity of the universe, but it is these first two that cause the trouble.
Sure, I have ideas. And I think the world would be a better place if everyone else shared them. I have a wholly eccentric view of quantum mechanics that I think should be more mainstream. But it isn’t. So I could use my position to push these views.
I don’t, of course. That would be wrong. If kids ask me what I think, I’ll tell them. But I’ll also let them know that my opinions aren’t fact and that they should seek a range of views from their parents and the people around them.
You see, everyone I admire has been wrong about something. Newton was an alchemist. Einstein couldn’t accept the uncertainty of quantum physics. Why should I be different? Propagating my own views is a fairly limited aim. Passing on the enabling knowledge to allow students to critique views and form their own opinions is a thing worth pursuing.
If I were a humanities teacher, I wouldn’t impose my politics on students, implicitly or explicitly. If asked, I’d say what I thought but that’s about it. I may, and I do, have strong views and opinions. But the ends of promoting my political goals do not justify the means of preying on a student’s tendency to trust their teacher, because I might be wrong.
Every day, school staff across Australia experience the kind of working conditions that they should not have to tolerate. School Principals are assaulted by students and parents. Violent attacks on teachers have spiked over the last four years in New South Wales. You might think there would be a coordinated effort to address this crisis.
Unfortunately, this is structurally difficult. There are approaches to tackling poor behaviour that are backed by evidence. These generally seek to deal with low-level issues before they escalate and lead to the kinds of major incidents highlighted in newspaper headlines. They emphasise positive reinforcement for good behaviour and retain the possibility of negative consequences for poor behaviour. Systematic programmes have been developed that tier the level of support available to individual students depending on their level of need.
However, these approaches are unfashionable. Two connected ideas dominate the landscape. Firstly, children are not responsible for their behaviour. In some instances, this is the right judgement to make. A small minority of children suffer from neurological disorders that cause them to behave in unusual ways and these children need teachers with specialist training – something that many regular schools can’t provide.
Yet many learning difficulties and disabilities are diagnosed on the basis of the behaviour itself. In other words, the logic is something like this:
- Child X behaves badly in class
- This is because Child X has a disability
- We know Child X had a disability because he behaves badly in class
There is no room within this circular logic for the idea that, like adults, children often have choices about how to behave. If we believe that bad behaviour is always the result of some kind of learning difficulty or disability then we might try to work around this rather than address it. We would not expect a child in a wheelchair to climb stairs so we should not expect a child with a behavioural disability to behave.
The solutions that are often presented are therefore workarounds. Children will behave – so the theory goes – if we provide them with engaging enough work targeted at their individual needs. Some would argue that we should use Universal Design for Learning and its spurious brain diagrams as a way of differentiating activities to meet this diversity.
For instance, if a student struggles with writing then she probably won’t enjoy writing and so we might ask her to draw a picture instead. This might initially work to engage her in an activity. However, her writing difficulty will not be addressed by avoiding writing and so, as she falls further behind her peers in academic work, she will become increasingly disengaged from the whole project of school. It is at this stage, typically in secondary school, when we start to look for alternatives such as vocational programs.
This is a form of systematic pessimism where students are defined by arbitrary labels that cannot be changed. All we can do is exhort teachers to differentiate more and tolerate more so that students don’t get excluded from school.
People go into teaching for a number of reasons. Often, they want to make a positive difference to the world. We are not going to attract and retain the best of these teachers if they learn that they are powerless to make a difference and that violence is an occupational hazard.
I’ve had an idea.
In education, we are surrounded by pseudoscience. There are spurious diagrams of brains or eccentric research approaches. And all of it gets wrapped up in politics. Nobody would claim that it’s somehow ‘right-wing’ to dismiss homeopathy – indeed, alternative medicine is often associated with the privileged classes. Yet if you challenge alternative education then you can expect to attract this label.
Medicine is not perfect, but the reason it has made more progress than education is that it has a sounder evidence base. So that’s what we need. Unfortunately, this is where we hit a major problem: Everything works.
It was John Hattie who made this claim in his 2009 book, Visible Learning. All education interventions appear to work due to the inherent problems with designing studies. It is very hard to design an educational experiment where the participants are blind to the fact that they are receiving the intervention. So this will affect expectations. The teacher or students might try a little harder or simply think about subject content a little more.
It was also Hattie who proposed a way around this. If everything works then let’s look at the size of the effect. By comparing effect sizes, we can see what works best. These will be the interventions where the effect size is large enough that it is unlikely to have arisen due to the subjects’ expectations. Hattie set an arbitrary cut-off for effect sizes of 0.4 of a standard deviation.
The trouble is that you can’t really do this. Effect sizes from different experiments aren’t really comparable in this way. For instance, the effect size will be larger with small children or with a selective cohort of students.
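To see why, it helps to look at how an effect size is calculated. Cohen’s d divides the raw difference between groups by the standard deviation of scores, so the same raw gain produces a larger effect size in a cohort with a narrower spread. A minimal sketch, where the cohort standard deviations are purely hypothetical values chosen for illustration:

```python
# Cohen's d: mean gain divided by the standard deviation of scores.
def cohens_d(mean_gain, standard_deviation):
    return mean_gain / standard_deviation

raw_gain = 5  # the same raw test-score gain in both cohorts

# Hypothetical spreads: a broad-ability cohort scores more
# variably than a selective one.
broad_cohort_sd = 15
selective_cohort_sd = 5

print(cohens_d(raw_gain, broad_cohort_sd))      # 0.3333333333333333
print(cohens_d(raw_gain, selective_cohort_sd))  # 1.0
```

The first result falls below Hattie’s 0.4 cut-off and the second sails past it, even though nothing about the intervention has changed between the two cases.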
So here’s the idea. Let’s mobilise the resources of groups like the Education Endowment Foundation in the U.K. and Evidence for Learning in Australia to run a different kind of trial; a trial that follows the model of one of my favourite papers.
Instead of having a control group and an intervention group – an AB design – we should run trials with one control group and two competing intervention groups – an ABC design. Both interventions would need to be supported by researchers who are committed to them and both would need equal resources. We could then see which of the two interventions works best. Comparison would be fair because it would be within one experiment.
Good candidates might include running Reading Recovery against a systematic synthetic phonics programme or running ‘productive pedagogies’ against a programme rooted in teacher effectiveness research.
None of this would completely fix the problem of pseudoscience. You’d still see eccentric articles in The Guardian and the proponents of alternative education would rant and rave about ‘positivism’ and politics. But we would start to build an evidence base that could be drawn upon by reasonable teachers and policy makers who haven’t yet hitched themselves to the wagon of woo. Slowly and quietly, we could edge towards a more evidence-based profession.