Teacher Toolkit: Helping you survive your first five years – A Review

There is something endearing about Ross Morrison McGill’s new book. He starts with a tale of his own upbringing: His parents were members of The Salvation Army and so the family moved around a lot. Their longest stay in a single place was three years towards the end of McGill’s schooling. I tend to agree with him that this is likely to have been a cause of his poor school grades. I was reminded of E D Hirsch’s argument in favour of a common curriculum. Hirsch’s point is that it is usually the disadvantaged that move schools most frequently and so it is they who suffer most from a lack of curriculum coherence.

The book is structured around an interesting conceit. It is divided into five sections, each of which represents one of the first five years of a teacher’s career, with a theme on which to focus in each year. This comes from McGill’s notion of the ‘Vitruvian’ teacher and the five qualities that such a teacher possesses. For instance, McGill thinks that resilience is a quality of good teachers and so he structures his advice for the first year around this concept. This makes a lot of sense.

When it comes to the advice itself, it is hard to find the meat. Much of it is akin to advising a comedian to be funny. For instance, we learn that new teachers should turn up for duty, do as they are told and follow school policy. There is nothing intrinsically wrong with following school policy but you don’t need a book to tell you to do that. We read that, “Every time you add something to your own or another person’s workload, you should also commit to taking something away.” Fair enough, but what should we take away and how should we do this? It’s not clear.

Many new teachers struggle with behaviour management and therefore could do with some insights on effective approaches. McGill advises:

“When you do progress to feeling more confident about pushing your behaviour management boundaries, or even now as you decide on adapting rules for your classroom, ensure you are consistent and never detrimental to whole school policy…”

He then borrows some tips from Paul Dix that include, “Limit the amount of time that you demand the attention of the whole class. Too much Teacher talk promotes low level disruption.” I’m not sure that this is the right advice. It might be better to use strategies that improve students’ ability to pay attention rather than to adapt to a low level of attentiveness. Some widely known classroom management techniques are simply not mentioned.

The ‘5 minute lesson plan’ makes an appearance, as you might expect given that McGill is famous for promoting this on his blog. I can’t imagine using this to plan a lesson and I think it would be of limited value to someone new to the post. It’s basically a collection of buzzwords in tiny boxes. Even if you were to believe that a particular buzzword should be addressed in your lesson plan – e.g. ‘numeracy provision’ – then there wouldn’t be the space to write anything significant about it.

And there are other ‘5 minute’ plans. For instance, there is one for marking. At first I thought this must be a plan for completing your marking in five minutes. Which would be huge, if true. However, it is not this but rather a way of potentially planning your marking in five minutes. I can’t really see much value in this because I know of few teachers who spend much time at all planning their marking. And if you follow McGill’s approach then you are likely to plan quite elaborate and time-consuming marking.

Marking is something of a theme of the book – “Mark, plan, teach (repeat)”. Although McGill discusses verbal feedback, it is clear that he mainly equates feedback with marking books and therefore sees feedback mainly as something that teachers provide to students, missing out on a discussion of the potentially powerful effect of the feedback that students provide to teachers. There is a suggestion about drawing yellow boxes around portions of the work that you intend to mark and then empty yellow boxes in which the students respond and/or redraft. You can see the thinking here: It is intended to be a mechanism by which teachers don’t have to mark every single thing that a student produces. However, I can think of easier ways of doing this. The main advantage seems to be in providing evidence to managers of what you are doing.

I also think that McGill was let down quite badly by his editor. At times, it is difficult to understand his message. I wonder if it’s McGill’s desire to provide a balanced view that leads him into flatly contradicting himself. We read that:

“Whatever your views on strike action, it must be made clear to you that if you are part of a union you will be obligated to strike. But this does not mean that you have to do it!”

“The laziest form of differentiation that exists is going into every lesson, setting a whole class task, waiting for students to produce an outcome, and then simply differentiating a follow-up task of feedback. Avoid this at all costs… However, in all honesty, occasionally we have to resort to this…”

And there are other occasions that are more reminiscent of Alan Partridge:

“I am proud to say – even though it makes me cringe at times when I look back – that the three form classes I had (two were over five years apiece) were a work of art.”

In short, this is a potentially very useful book that has not been executed particularly well. I cannot think of anything else that sets out to achieve what McGill has tried to do here and so I cannot recommend an alternative. Yet despite touching on some key issues for new teachers, ‘Teacher Toolkit: Helping you survive your first five years’ has little of real substance to say about them. I therefore would not recommend it to someone starting out in a teaching career.

I am grateful for being sent a copy of this book to review.


Dunning-Kruger and the curse of knowledge


The Dunning-Kruger effect is a cognitive bias where a relatively unskilled individual overestimates his level of skill within a particular area. It is thought to arise because the individual lacks the knowledge needed for accurate calibration. If you don’t know what an expert performance looks like then you won’t realise that you are lacking that expertise.

The curse of knowledge is a different and yet related cognitive bias and is effectively a failure of empathy. If you know something then this thing that you know seems transparent. The ease with which you can retrieve this knowledge leads you to think that everyone must know it; that it’s obvious.

What happens when teachers suffering from the curse of knowledge teach students suffering from the Dunning-Kruger effect? I suspect you get an unwitting conspiracy where everyone thinks the situation is fine: the teacher thinks her students understand and the students think they understand. Yet we should see a major impact on the performance of the lower-achieving, Dunning-Kruger affected individuals.

What’s the solution? Teachers need to be regularly confronted with what the students don’t know. This needs to be systematised so that it is not up to the teacher to decide when and if to check. The curse of knowledge is powerful: In my own teaching, I make assumptions about what students have learnt that are overly optimistic and I repeatedly make these assumptions even though I am aware of the cognitive bias.

Students also need this information. We must not protect them from the truth about their performance because this only feeds the delusion.


Neo-traddie nu-blob trolls – dontyoujusthatethem?

One of the interesting components of the great education debate is the names that get bandied around. When I criticise aspects of ‘progressive education’, rather than recognising that this is a well-defined movement with its roots in romanticism and its flowering in the 20th century, many assume that it is a neologism created as a term of abuse.

Schools Week’s recent progressive-educator-of-the-week, Gerald Haigh, whilst complaining about a ‘rather vulgar world of offensive tweets and ill-humoured blogs’ suggested that ‘child-centered’ has become a term of abuse and longed for the era of a better sort of ‘trad’ (is that a term of abuse?) before concluding that ‘Yah-boo debate gets us nowhere. Children need teachers, but do not become educated through the learning of facts’. Which is kinda wrong.

Michael Gove bears much responsibility for this paroxysm of name-calling, having coined the metaphor ‘the blob’ to describe the educational establishment in the UK. Whatever the merits of this as a metaphor, it is plainly insulting and I have tried to avoid using the term.

But there is a double standard. The same people who complain about a lack of civility and the debasement of discussion are the first to jump to terms of abuse. As far as I am aware, it was Michael Merrick who coined the ludicrous, oxymoronic moniker of ‘NeoTraddie’. When asked to identify an example of a NeoTraddie, he chose the writings on my old blog. The irony, of course, is that he coined such a term in order to shrilly berate the likes of me for being shrill.

And we all remember Guy Claxton having a go at ‘angry trolls’ who are ‘not very bright’ because they don’t like thinking or learning, only winning arguments.

In an otherwise aimless editorial for last week’s TES, Ed Dorrell writes of the ‘neo-trad nu-blob’ and claims that they worship a saintly Michael Gove (which is all a bit random given that Gove is now in charge of the legal system). Dorrell does this for a reason, of course. He is judging that it is what the people who read the editorial will like. No doubt many of those who complain about the state of debate, the negativity and unkindness, will cheer this othering without noticing the irony.


Claxton’s Character Education

I once moved to a new school to be confronted with Guy Claxton’s “Building Learning Power” (BLP) programme. This was such a bizarre scheme that I struggled to understand what it was all about. For those of you who are unfamiliar with BLP, it is a cross between a growth mindset intervention and a learning-to-learn programme. Students are told that they possess ‘learning muscles’ which can be trained. For some reason, these all begin with the letter ‘r’: resilience, resourcefulness, reflectiveness and reciprocity. Staff and students are even shown a diagram of the brain that is divided into these ‘muscles’.

At my new school, teachers were supposed to take time at the end of each lesson to reflect with the students on which learning muscles they had used that day. There was a box on the lesson observation proforma to ensure compliance. A large minority of teachers didn’t bother with it unless they were being observed. However, the school was totally committed. A keen Assistant Headteacher was a true believer and pumped resources into training teachers and students. The students would get whole days on it and yet there was little discernible impact. It was all a classic example of the sunk cost fallacy.

My first foray into evidence-based teaching was when I tried to look for research evidence on BLP. At the time, I recall that the BLP website had a tab called ‘research’ which linked to a few action research projects conducted at Bristol University (it doesn’t have a research tab now). I had never heard of action research prior to this and I quickly grasped that it was incapable of objectively evaluating a programme such as this. I went to the Assistant Headteacher and tried to convince him to move away from BLP. He was having none of it.

Since then, I have heard horror stories from other schools about attempts to implement BLP. The worst of these happened to a relative of mine who was training to be a teacher: In a placement school, a child was tormenting another child. The one who was being tormented was then berated by the teacher for not exercising his ‘managing distractions’ muscle.

The problem is that there is no single entity such as ‘reciprocity’ or ‘resourcefulness’ that can be directly trained in this way. Claxton has reified these concepts. Resourcefulness looks different in different contexts. A child may be resourceful in one area and completely unresourceful in another. The key variable is domain-specific knowledge and skills. This is why attempts to teach ‘learning how to learn’ in this way are doomed to frustration and failure. Instead, we should be teaching worthwhile content in a rigorous way.

So this is why I am skeptical when Guy Claxton pops up from time to time to explain to us what we are all missing:

“Cognitive science is moving very much in this direction – seeing mental capacity as something that is, in itself, educable. We can teach people to become more intelligent, to become better at learning, to persevere, to become better collaborators, to use their imaginations more effectively. The scientific underpinning of that is strong.”

Sorry, but I really don’t think that the scientific underpinning of this idea is strong at all. Where’s the evidence?

The brain is like a muscle. Geddit? [Häggström, Mikael. “Medical gallery of Mikael Häggström 2014”. Wikiversity Journal of Medicine 1 (2). DOI:10.15347/wjm/2014.008. ISSN 2001-8762]


There are many ways to mess things up

Imagine you have an educational programme for teaching literacy or a numeracy intervention. Imagine that it has five key elements, that you’ve trialled the programme extensively in small studies and you now want to scale it up across lots of classrooms. How’s that going to work?

Well, there is only one way that teachers can implement all five elements. However, there are five ways in which they could miss one of the elements; they could miss element 1, 2, 3, 4 or 5. There are 10 ways that they could miss two of them; 1 & 2, 1 & 3 and so on. If you continue with this, you find that there are 31 different ways of not fully implementing the five elements – every possible combination of elements apart from the full set (2⁵ − 1 = 31).

However, these different ways are not all equally likely. Imagine that you run some excellent training for staff so that they largely understand what the programme is trying to achieve. They end up with an 80% chance of implementing each element. This means that the chance of implementing all five elements becomes 0.8 x 0.8 x 0.8 x 0.8 x 0.8 or approximately 33%. In other words, your programme will be implemented in full in about a third of classrooms.
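For anyone who wants to check the arithmetic, here is a minimal sketch in Python. The figures (five elements, an 80% chance per element) are the hypothetical ones from the paragraphs above, not data from any real programme.

```python
from math import comb

n_elements = 5      # hypothetical programme with five key elements
p_implement = 0.8   # assumed chance a teacher implements any given element

# Ways of NOT fully implementing: every non-empty set of missed elements.
ways_to_fall_short = sum(comb(n_elements, k) for k in range(1, n_elements + 1))
print(ways_to_fall_short)  # 31, i.e. 2**5 - 1

# Chance that a given classroom implements all five elements.
p_full = p_implement ** n_elements
print(round(p_full, 3))  # 0.328 -- roughly a third of classrooms
```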

I think that there are two clear implications of this. Firstly, as I have argued before, a good programme needs to have a positive effect if only some of the elements are faithfully implemented. If the impact is negative unless implemented in full then we should probably steer well clear.

Secondly, this sheds some light on the scripting of lessons that is a feature of Engelmann-style Direct Instruction programmes. Apparently, Engelmann did not set out to do this but found himself on this path when teachers struggled to implement the programme fully. We can see why this might arise even if teachers have a pretty good understanding of what we are trying to achieve. We can also perhaps imagine how scripting was therefore a key factor in the success of DI in Project Follow Through.

I can’t help comparing this to Atul Gawande’s checklist approach. Yes, we can expect doctors to understand the importance of washing their hands but if we put this on a checklist and insist on use of the checklist then the chance of this happening consistently will be much higher.


Ian Chubb shows lack of knowledge of science. Again.

Earlier this week, the Australian government released its plans to promote innovation. The statements made about education were fairly bland but this hasn’t stopped a host of commentators from filling in the gaps. For instance, when Tim Palmer of ABC radio asked Australia’s chief scientist, Ian Chubb, how to arrest the decline in students studying science, Chubb made the following suggestion:

“Well I think you teach – I basically think you teach it, with respect to science particularly, teach it how it is practised. You think about the evidence that is available, you construct a hypothesis, you design an experiment to test it. 

You learn a lot from the ones that work and you learn at least as much from the ones that don’t, and then once you’ve done your experiment you’ve got time to sit back and reflect on it as you design the next one. And we don’t allow enough time for that.”

This is fundamentally misconceived. There is a vast amount of research on the differences between experts and novices and one conclusion is inescapable: the way that experts extend a field of knowledge is not the best way for novices to learn the basic principles of the subject.

When a professional scientist generates a hypothesis, she has huge stores of relevant knowledge to draw upon. She might know of the results of similar experiments or of a theoretical framework into which a particular study might fit. She will not know exactly how her experiment will turn out but she’ll have a good idea and certainly some concept of the various possibilities.

A science student, on the other hand, will have to start with a blank piece of paper. He will need to do research and yet the kind of things he will read about will be conceptually demanding. He will also need to attend to issues such as how to design an experiment to collect data and how to ensure a fair test. Because he has little background knowledge to draw upon, he will need to process all of this at the same time. The result is that he will be overloaded and little learning will take place, particularly of the underlying science.

Paul Kirschner, a Dutch educational psychologist, makes the point that:

“…how to learn or be taught in a domain is quite different from how to perform or ‘do’ in a domain (i.e., learning science vs. doing science)… experimentation and discovery should be a part of any curriculum aimed at “producing” future scientists. But this does not mean that experimentation and discovery should also be the basis for curriculum organization and learning-environment designing.”

But perhaps I’m missing the point. Chubb’s view was expressed in response to a question about increasing uptake. Perhaps learning through scientific inquiry is not an optimal way of learning science but that’s not the point. It might be a more motivating way to learn.

This idea suffers from a number of problems. Firstly, being frustrated and overloaded is not motivating. We can reduce the conceptual demands if we choose our investigations wisely, but is it really motivating to conduct investigations into which brand of paper towel is strongest when wet? Contrast this with the scientific ideas that children are often interested in: dinosaurs, the possibility of alien life, explosions, space. In comparison, the paper towels seem rather mundane.

There is also a problem with the assumption that increased motivation will cause greater science performance and lead to a new generation of scientists. Motivation and ability are certainly associated with each other but it’s not necessarily the case that this is because motivation causes increased performance. It could be the case that increased levels of performance lead to motivation. In fact, this is exactly what Canadian researchers found when they examined the relationship between motivation and achievement for primary school maths students.

If this is the case then we might expect a funky intervention to perhaps increase motivation initially, but for this to wash out in time. In contrast, if we want to increase long-term motivation then we had better focus on those approaches that deliver the greatest gains in achievement. And as Melbourne Professor of Education John Hattie has found, inquiry approaches do not do this as effectively as methods that explicitly teach content in an organised and coherent way.


Attack of the Maths Zombies!

Be afraid. Be very afraid. For hiding out in a classroom near you there is a maths zombie! You will be able to identify this supernatural algorithm-cruncher by the fact that she can answer complex mathematics questions yet has no understanding whatsoever of what she is doing.

You might however object that ‘understanding’ is latent and cannot be measured. You may in fact suggest that our best guide to mathematical understanding is the ability to answer maths questions. And our zombie has this ability. So what should we do? How can we know?

Well, it turns out that there is a way to make the distinction. Maths zombies will not be very good at the sorts of tasks beloved of constructivists and progressive educators. You know the sort of thing: explaining a mathematical method in words, solving a mundane problem several different ways, making a poster, composing an interpretive dance.

But wait; you may not yet be convinced. Perhaps you might wonder why these other things display a better understanding of maths than the ability to do complex maths does. Surely, a student could be as easily trained to deliver rote explanations or multiple methods without understanding as he could be drilled in the mindless application of a formula? And can we really conclude that a student who struggles to explain his thinking does not ‘understand’? He might just struggle with communication. What about English language learners?

And hang on a minute; exactly how would a traditional, instructivist teacher demonstrate that her students had understanding using her usual methods? She couldn’t. And yet by simply getting students to do the stuff that a constructivist teacher would prioritise, we will demonstrate that those students do have understanding; at least the ones that participate. Ergo, a priori and without any further need for investigation, constructivism wins. It’s about understanding, dude. Deep.

Hmmm… I suppose this is all predicated on the assumption that zombies exist…

"Just give us a formula!" (By Joel Friesen [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons)

“Just give us a formula!”
(By Joel Friesen CC BY 2.0 (http://creativecommons.org/licenses/by/2.0), via Wikimedia Commons)


Givers should also be receivers 

I wonder whether one of the reasons that school leaders pursue highly prescriptive marking policies is that they misunderstand some of the research around feedback. Yes, feedback has powerful effects. However, some of these can be negative and, as John Hattie suggests in “Visible Learning”:

“It was only when I discovered that feedback was most powerful when it is from the students to the teachers that I started to understand it better.”

Writing a comment on a piece of work is not the same thing as feedback. Instead, it is one potential way of providing feedback to students. Comments in exercise books don’t even act as feedback if they are not received and the message certainly won’t be received if we try to convey too many points at once or if a grade or score is also present. I suppose this is why the surreal practice has evolved whereby students then have to write comments about the comments in order to demonstrate that they have read them.

Given these widespread misunderstandings, I decided that I’d tell you about what I have been involved with to improve feedback at my school. It is part of a wider whole-school initiative to get smarter about our use of data (although it wouldn’t look the same in all areas – a critical point). I have invented the statistics that I am going to show you, both for purposes of clarity – there has been a lot of tinkering as we have gone along – and also because I have not sought permission to use real data.

Imagine two classes sit a maths test. In this case, I teach one of the classes and a different teacher has the other class. He is more experienced than me at this level and has a strong track record. Let’s have a look at a crude measure, the average scores on each question:

[Chart: average test mark per question for the two classes]

The test was on probability and question 2 was the only question on the binomial theorem. It was a non-calculator test and students had to calculate some relatively simple combinations by hand. This might not mean anything to you but the key point is this; it looks like my group have underperformed on question 2 given that they get similar scores to the other group on the rest of the questions.
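To make the kind of comparison I mean concrete, here is a minimal sketch in Python. The scores are invented, as with everything else in this example, and the threshold for flagging a question is an arbitrary one chosen for illustration.

```python
# Hypothetical average mark per question, in question order.
my_class    = [4.1, 1.2, 3.8, 4.5, 3.9]
other_class = [4.2, 3.1, 3.7, 4.4, 4.0]

# How far ahead is the other class on each question?
gaps = [theirs - mine for mine, theirs in zip(my_class, other_class)]
typical_gap = sorted(gaps)[len(gaps) // 2]  # median gap across the test

# Flag any question where the gap is much larger than is typical.
for question, gap in enumerate(gaps, start=1):
    if gap - typical_gap > 1.0:  # arbitrary threshold for illustration
        print(f"Question {question}: gap {gap:.1f} vs typical {typical_gap:.1f}")
# -> Question 2 stands out, which is what prompts the conversation below.
```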

Our curriculum is tight; we are both using the same materials in class and are ostensibly following the same lesson plan. This evidence is not research-standard – there are too many variables – but it does strongly imply that there is some meaningful difference in the way that we taught this concept. Given that 90%+ of what we did must have been the same, it should be pretty easy to figure out what that was.

And it is easy. Following a discussion, I learn that my experienced colleague had expanded on a particular example and started a discussion about the use of Pascal’s triangle to find simple combinations. So I go away and teach that strategy to my own students in time for the exam and we write it into the curriculum so that everyone will teach it next year.

I’ve been through a cycle like this a few times now. It’s not always straightforward and it can be hard to tease out the differences between teachers who think they’ve all done the same thing. It usually turns on the use of a particular example. In one instance, we discovered that a teacher had used an additional step which made a procedure more explicit. She hadn’t even realised this until we asked her to show us what she did and we noticed the difference. Now, everyone does it this way.

We also sometimes find ourselves side-tracked into discussions about things that are not the cause of the difference. For instance, in the example above it is tempting to discuss whether my class has less able students. Yet if this were true, why doesn’t it show up on the other questions? So there is a discipline in focusing on what is important. Teachers also need to have a certain level of expertise – a necessary requirement for effective forms of inquiry.

You don’t even have to have groups that are operating at the same level to use an approach like this – if a generally lower performing group scores more highly than a higher performing group on a particular question then that tells you something interesting.

But please don’t get the wrong idea. We do not analyse everything in this way. A mania for doing that would probably be as unhelpful as a prescriptive marking policy.


Standardised tests can work

One of the most baffling currents in the education debate is the one about testing. We know that frequent testing is the best way of ensuring that students retain what they have learnt and yet prominent educationalists advise against it. Testing may cause anxiety, this is true. But it’s most likely to cause anxiety for those students who cannot answer the questions. Perhaps we should level up and ensure that more students can succeed by teaching them better. This seems preferable to dumbing down and removing a powerful learning tool.

Standardised testing takes things a stage further. To detractors, tests set at arm’s length by state authorities are ‘neoliberal’ – a wholly surprising way to characterise a ‘big government’ initiative.

However, fans of standardised tests would point to the fact that they are less affected by biases. The horror of ‘teaching to the test’ is facilitated by knowing exactly what is on the test. Standardisation offers the chance of giving tests to kids that their teachers haven’t seen. The fact that they are standardised also means that you can compare the performance of students in your own school with students elsewhere. You may think you’re doing an excellent job but if everyone in the state can add fractions and your students can’t then it might be time to review how this is being taught.

Of course, you need to have a strategy. I think the best criticism of standardised tests is that, alone, they don’t offer any solutions. Teachers also need to know how to improve. Current educational trends might convince some teachers to adopt inquiry learning as their improvement strategy and yet this is unlikely to work.

Yet if you set your face against standardised tests then you are really suggesting that you should not be accountable. The most common argument is that standardised tests don’t measure everything – a non sequitur that was trotted out this week to discredit the new Australian tests for prospective teachers. So what if they don’t measure everything? The stuff that they do measure is important and it’s worth knowing whether students have learnt it or not.

It’s a bit like a football coach arguing that she should not be judged on the results of games or a car salesman suggesting he should not be held accountable for his sales figures. “It’s too reductive,” they might argue. “This work involves people and people are incredibly complex. There’s so much to this job that is simply not captured in bald results.”

In this context, it is interesting to note a new research article published by EducationNext (and brought to my attention on Twitter by @JohnYoung18, who is worth a follow). It may be true that testing is good for highlighting differences in instruction, such as who is better at teaching fractions, but this might not matter all that much for students’ later fortunes. The authors therefore set out to discover whether the future life chances of students were affected by being exposed to a standardised testing regime. The design is quite clever and I am going to quote two paragraphs that I think have wide-reaching implications for education:

“Our analysis reveals that pressure on schools to avoid a low performance rating led low-scoring students to score significantly higher on a high-stakes math exam in 10th grade. These students were also more likely to accumulate significantly more math credits and to graduate from high school on time. Later in life, they were more likely to attend and graduate from a four-year college, and they had higher earnings at age 25.

Those positive outcomes are not observed, however, among students in schools facing a different kind of accountability pressure. Higher-performing schools facing pressure to achieve favorable recognition appear to have responded primarily by finding ways to exempt their low-scoring students from counting toward the school’s results. Years later, these students were less likely to have completed college and they earned less.”

Texas has now closed the loophole that allowed higher-performing schools to exempt low-scoring students by classifying them as eligible for special education.


Dismissed as an ‘ideologue’

I’ve just read a kind of homage to Dan Meyer. It is typical of the genre and similar to Meyer’s own writing. The key features are the hubristic claims – that Meyer will ‘save’ maths – coupled with absolutely no evidence at all to support them. As ever, we are supposed to simply feel that the argument is right. We are meant to evaluate it on its truthiness.

Critics – and Meyer has acknowledged that I am one of them – are dismissed by Meyer as ‘ideologues’. So that’s OK then. Game over. Except that it’s not. Much as it serves a rhetorical purpose to paint me as a crotchety old has-been who just doesn’t like change, this won’t wash. Call me ‘Gradgrind’ or the Child Catcher from Chitty Chitty Bang Bang; call me any names you want, but this won’t alter the fact that I have absolutely loads of evidence to support my position.

Meyer likes to paint my argument as an arcane point about when to explain: He just wants to change the order around a bit and do some problem solving before explicit instruction. What’s wrong with that? There is some evidence to support an approach such as this known as ‘productive failure’. However, it is not strong, with most studies being poorly controlled. This is probably the best study and yet if you read the method you’re likely to spot the problem: The kids who get direct instruction first then have to spend a whole hour solving a single problem that they already know how to solve. I wouldn’t call that optimal.

However, the broader point is that this is a bit disingenuous: Meyer simply does not value explanation in the way that I do. In his TED talk he discusses taking away the scaffolding of a textbook problem so that students have to work out more for themselves. This is clearly going to increase cognitive load and particularly disadvantage students with the least knowledge to draw upon. It will frustrate them.

And Meyer’s explanations must come after this struggle. Mathematical principles are only there to help solve mundane problems about basketballs or whatever. Ironically, this leads to an impoverished kind of maths where kids are given formulas they might need just-in-time, rather than those relationships being built in a systematic and concept-driven way. It is small wonder then that, despite feeling right and being truthy for about a hundred years, proponents of this kind of maths cannot point to any hard evidence to support its use.

In Meyer’s latest Desmos venture, it seems that even this limited use of explanations is set aside in what sounds very much like full-fat, 100%, unashamed discovery:

“In the parking lot lesson, students draw and redraw their dividers, getting immediate feedback as cars try to pull into their spaces; only gradually do they begin to work with numbers and variables. Other modules ask students to share their models with the class, which allows them to revise their thinking based on the ideas of their peers.”

Almost everyone agrees that this kind of thing doesn’t work. So why would we buy it? What’s the evidence? 

When pressed, Meyer and his advocates will suggest it is all about motivation. What’s the point in having the most effective way of teaching maths if it turns kids off the subject? Surely, the biggest issue we face is apathetic students who don’t want to engage?

I think this theory of motivation is wrong. Evidence is starting to build that it doesn’t work this way around. Rather than motivation causing students to engage in maths and achieve, it seems that achievement in maths causes students to feel motivated. This simply confirms that our main priority must be to teach maths well. This can best be done by breaking it down into digestible pieces that are fully explained to students. This will give them a sense of success.

So I have thought about this stuff a bit. It’s not just ideology. And, crucially, I can point to evidence to support my claims.

Meyer would do well to read this seminal blog post by Paul Graham on how to disagree. If he does, he will see that calling people names is the lowest level of argument.
