I got a bit of a mention

In case you missed it, I was recently mentioned in an essay by UK Schools Minister Nick Gibb:


The essay comes from a collection produced by the think-tank Policy Exchange. It was put together to mark a lecture given by E D Hirsch Jr. The essay collection is well worth reading.

If you are looking for a blog post in which I try to popularise Hirsch’s ideas then this one is a good place to start.


Two Roads

“Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.”

Robert Frost

Frost’s famous poem is often interpreted as a celebration of free-thinking; of not following the crowd. However, critics contend it has quite a different meaning; of how foolish it is to spend our time dwelling on past decisions and attributing blame – “If only I had…”. There will always be something that you missed on the other path; something of value. Such is life.

I am going to inhabit that ambiguity a little as I suggest the two paths that we face as educators who wish to do a better job.

We can all probably recognise the current state of affairs. Early years and primary school might be different, particularly with regard to reading and maths instruction, but I suspect that most secondary school teaching is a kind of suboptimal explicit instruction. This is not a harsh criticism. You can learn a lot this way but we should always strive to do better because we can always do better. This is human.

If you are keen to improve your teaching then there are two roads that you may choose to travel.

The first road has had many names but it basically involves the teacher talking less and the students doing more. But the things the children are doing are important. They can’t generally be completing long lists of sums or spelling drills. Their activity needs to look more like what people do in the real world. So science classes should involve students doing the sorts of things that professional scientists do. Mathematics must involve solving novel problems.

The idea is that this is both more motivating for students and that it leads to a deeper kind of learning. Rather than just knowing stuff, students are said to process that knowledge more deeply; this is often described in terms of the generic skills they will develop.

This first road represents something of a revolution. It involves a breaking-down prior to any building back up. It is widely encouraged.

The second road is quite different. This is about taking what is already there and making it better. One simple way to improve everyday explicit instruction is to look at how feedback is structured; both to the teacher and to the students. There is nothing more pointless than taking up exercise books every few weeks and marking them. By the time you read what your students have written, the moment has passed and you are onto something else, with no capacity to do anything about it. And your students have little capacity to respond.

It is better to gain regular feedback from students as you go along in order to avoid the ignorance deal; that state where students and teachers avoid feedback in order to collude in the idea that learning has taken place; a deal to avoid cognitive dissonance.

If you ask questions of all students, not just a few, you will get a better idea of what they really understand. To manage this, you will need good classroom behaviour and there are ways to work on that, provided that schools don’t adopt policies that militate against this. Short, regular tests can give good information to teachers, can give the students a clear idea of what they do and do not know and can also consolidate learning through retrieval practice. Space them out and return regularly to important concepts and you can disrupt the forgetting process.

I would also add that good quality curriculum materials that can be refined over time take away some of the chance element of learning; that tweak you made last year which seemed to result in an improved set of test scores can be carried over to this year.

It is clear which road I favour and where I think the evidence lies, so let’s now return to the sense of Frost’s poem. Does it really matter all that much? Will we miss out on something whichever path we choose to follow? Perhaps. And I will return to this another time.

For now, I will leave it for you to think about.

William Bartlett [CC BY-SA 2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons



Five questions to ask an education guru

Imagine that you are a teacher in a school and you have just sat through a presentation by an education guru of some sort. Let’s call her ‘Marion’. If it is safe to do so, what questions should you ask Marion at the end of the presentation? I have a few suggestions.

1. What are you actually suggesting that we should do?

A typical education guru will say plenty of things that don’t actually lead to concrete proposals. She may spin anecdotes about a fantastic school that she recently visited where everything was marvellous.

Alternatively, she may construct terms and then set about defining these terms. You may find yourself hearing about ‘Productive Change Capacity’, how this is made up of constructive professional dialogue, openness to the views of others etc. and how this can be contrasted with ‘Resistive Change Capacity’, which is made up of restricted professional dialogue, closed-mindedness and so on. I’ve just made up these two terms but education literature is full of such tautologies that lead us nowhere in particular.

Instead, if Marion really has anything to offer then she should be able to describe what might change as a result of taking on her ideas. Of course, you don’t expect Marion to know everything about your context and so she might need to ask a few questions too. However, she clearly should have something to offer or there really is no point.

2. What problems do your proposals solve?

Presumably, implementing Marion’s ideas will make things better in some way. Otherwise, why would we bother? So it seems reasonable to ask what current or typical problems these ideas will solve. This also places the ideas on a testable basis. If they solve a particular problem then what will that look like? How will we know that the problem is solved? Perhaps our students will read more at home or they might become better mathematical problem solvers. Perhaps they will feel better about school. Some of these things are easier to measure than others but any meaningful change should have observable consequences.

3. What would convince you that you are wrong?

Testability leads to a key principle of science that is largely absent from education discussions; the idea of falsifiability. In fact, it is so absent that you are likely to need to persist in order to get an answer to this question. I would predict that Marion’s first response would be to explain why her ideas are not wrong, why they are well grounded in theory or research or whatever. So you’ll probably need to clarify with a follow-up question.

The problem is that many supposed educational theories can explain all possible sets of circumstances. We will see an example shortly. If Marion really cannot think of anything that would convince her that her ideas are wrong then we have something more akin to an unshakable belief than something based on evidence.

Strong theories are always falsifiable. Their strength comes from the fact that, despite this, nobody has managed to demonstrate that they are wrong. There is a common story that tells of how the evolutionary theorist, J. B. S. Haldane, was asked what would falsify the theory of evolution and he answered that finding fossilised rabbits in Precambrian rocks would do it.

4. Does adopting part of the approach give part of the benefit?

One way that a proposed intervention can become unfalsifiable is when it only works if implemented fully and with 100% fidelity. If it works, great. If it doesn’t work then that is because you didn’t do it properly. Either way, the rightness of the original intervention remains unchallenged.

Something like this happened with a huge differentiated instruction study in the U.S. It didn’t work but the authors concluded that this was because the intervention was not implemented correctly, leaving the principle of differentiated instruction unchallenged.

Now it may be the case that some approaches will work with a perfect implementation and will not work or will cause harm if implemented with anything less than this. If this is true then ask yourself how much practical value there would be in adopting this course of action. It seems unlikely in the extreme that you will get any team of teachers anywhere to implement something with complete faithfulness to the originators’ intentions.

However, if implementing part of the program delivers part of the benefits then this seems a much better prospect. You can imagine different teachers having strengths in different elements, at least to begin with. And as the benefits accrue, you might start to win over the sceptics.

5. What are the negative effects?

If a consultant cannot describe the negative effects of their proposed initiative then this is because either there aren’t any, the consultant is badly informed, the consultant is dishonest, or the initiative has never been attempted before and you are the guinea pigs.

I cannot think of any intervention, even one that I would recommend, that has no negative consequences. For instance, a push for explicit instruction would meet with some teacher resistance that would have to be effectively managed.

No questions asked

Of course, you will not need to ask any of these questions if they are answered in the presentation. Some of the best educationalists that I have seen will preempt most of these points. Dylan Wiliam, for instance, has spent a lot of time thinking about the negative impacts of attempts to embed more formative assessment and has tried to develop programs that provide benefits if implemented only in part. I have heard him talk of teachers aiming to embed one new practice per year.

And we shouldn’t be too harsh. Just because a consultant tells an anecdote, it does not mean that she is wrong. Anecdotes enliven presentations and often make them more bearable. A consultant promoting a genuinely effective approach might not have been challenged by such questions before; the education community is often too polite and credulous.

However, if after a thoughtful pause, you don’t get a straight answer then I’d be keen to investigate further before plunging into the next whole-school initiative.

“We’ll need to have a think about that, Marion.”

If you are interested in reading more about the evaluation of educational initiatives and ideas in a way that is accessible to teachers then I recommend Dan Willingham’s book, “When can you trust the experts?”


Four tips for reducing maths anxiety

Some might argue that there should be room for a little anxiety in school life. We don’t want to wrap students up in cotton wool because the real world is not like that. Perhaps a little anxiety helps lead to better coping strategies; more resilience. Perhaps.

By GRPH3B18 (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons


However, I think it is true that anxiety can disrupt learning and so we probably want to reduce unnecessary anxiety if we want to maximise learning.

In my earlier post on Jo Boaler’s remarks about multiplication tables, I noted that improvements in competence in a subject lead to improvements in self-concept; how students feel about their academic abilities. So, if we wish to reduce students’ anxiety about mathematics it would seem reasonable to try to increase their self-concept by teaching them in such a way that they become better at maths. I have used this principle, along with others from cognitive psychology, logic and experience, to suggest the following four tips to reduce maths anxiety. Please feel free to add your own in the comments.

1. Have frequent low-stakes tests

We know that retrieval practice is effective at supporting learning. However, if we test students infrequently then they are likely to see these tests as more of an event and therefore as something to worry about. Instead, we should build frequent, short-duration, low-stakes testing into our classroom routines. Not only will this make testing more familiar, it will increase competence when students tackle any high-stakes testing that is mandated by states or districts and will thus reduce anxiety on these assessments too.

2. Value routine competence in assessment

If you were to spend your time reading maths teaching blogs then you might think that the only kind of maths performance of value is when students can creatively transfer something that they have learnt to solve a novel, non-routine problem. This is not the case. Routine competence is also of great value in mathematics. There is a lot to be said for being able to reliably change the subject of an equation.

If we communicate to students that it is only non-routine problem-solving that matters then we are likely to make them feel inadequate. We can send such a message explicitly or we can send it implicitly by setting large numbers of non-routine problems and making these the focus of assessment.

Non-routine problems are great for avoiding ceiling effects on tests and enabling some of the most talented students to shine. However, assessment should also include a large amount of routine problem solving to show that this is also valued. As a general rule, I would advocate a gradual move from routine to non-routine.

3. Avoid ‘productive failure’ and problem-based learning

Similarly, some educators advocate framing lessons by setting students problems that they do not yet know how to solve in the belief that this will make them keen to develop their own solution methods or receptive to learning from the teacher. Some children might find this motivating but others – and particularly those with a low maths self-concept – are likely to feel threatened. Motivational posters will not help.

It is true that some studies seem to show that this kind of approach leads to improvements in learning. However, these are often poorly designed, with more than one factor being varied at a time (see discussion here). And it is a matter of degree. In the comments on this blog post, Barry Garelick suggested asking students to factorise quadratics with negative coefficients once they have been taught how to factorise ones with positive coefficients. This still requires a little leap but it is far less of a jump than asking students to develop their own measure of spread from scratch, as in the experiments of Manu Kapur.
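To make that contrast concrete, here is the kind of step I have in mind (my own illustrative quadratics, not Garelick’s):

Taught routine: x² + 5x + 6 = (x + 2)(x + 3), where both numbers in the factors are positive.

Small leap: x² + x - 6 = (x + 3)(x - 2), where the negative constant term forces the two numbers to have opposite signs.

The second problem is genuinely new to the student, but everything needed to solve it is already in place.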

Given that there is a wealth of evidence in favour of explicit instruction, where concepts and procedures are fully explained to students, it seems that productive failure is risky and could backfire through its interaction with self-concept.

4. Build robust schema

It is true that you can survive without knowing your multiplication tables. You can survive without knowing most of the things that students learn in school. If you just have a particular gap in your knowledge then you can develop workarounds.

The question is; why would you want to? Knowing common multiplications by heart makes mathematics easier to do because it is one less thing to process. Building and valuing such basic knowledge is both a way of generating little successes for students to experience and a way of aiding the process of more complex problem solving. I think that this is one of the reasons why the ‘basic skills’ models in Project Follow Through were so successful at generating gains in more complex problem-solving.

A guiding principle

In reducing maths anxiety, we should focus primarily on teaching approaches that are likely to make students better at maths. Increase maths competence to reduce maths anxiety.


Jo Boaler is wrong about multiplication tables 

The TES has quoted maths education professor Jo Boaler as stating that the increased focus on memorising times-tables in England is “terrible”:

“I have never memorised my times tables. I still have not memorised my times tables. It has never held me back, even though I work with maths every day.

“It is not terrible to remember maths facts; what is terrible is sending kids away to memorise them and giving them tests on them which will set up this maths anxiety.”

Boaler is obviously alluding to some research here although it’s not clear what this is. What is clear is that she is wrong.

Tables help

Knowing maths facts such as times tables is incredibly useful in mathematics. When we solve problems, we have to use our working memory which is extremely limited and can only cope with processing a few items at a time.

If we know our tables then we can simply draw on these answers from our long-term memory when required. If we do not then we have to use our limited working memory to figure them out, leaving less processing power for the rest of the problem and causing ‘cognitive overload’; an unpleasant feeling of frustration that is far from motivating.

An example would be trying to factorise a quadratic expression; tables knowledge makes the process much easier.
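To give a sense of the mechanism (my own illustration, not an example drawn from Boaler or anyone else), consider factorising:

x² + 7x + 12 = (x + 3)(x + 4)

A student who knows that 3 × 4 = 12 and 3 + 4 = 7 retrieves the pair almost instantly from long-term memory. A student who has to generate and test the factor pairs of 12 is spending scarce working memory on arithmetic before the factorising has even begun.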

The fact that Boaler never uses times tables as a maths education professor tells us something but I’m not sure it tells us much about the value of tables in solving maths problems.

You can read the cognitive load argument here.

Anxiety 

I am sure that testing can induce anxiety but it certainly does not have to. Skilful maths teachers will communicate with their students and let them know that the tests are a low stakes part of the learning process.

Tests are an extremely effective way of helping students learn, particularly for relatively straightforward items such as multiplication tables and so, appropriately used, they should be encouraged.

We also know that how students feel about their ability – their self-concept – is related to proficiency and that it is likely that proficiency comes first, i.e. proficiency causes increased self-concept.

With this in mind, if we want students to feel good about maths and reduce maths anxiety in the medium to long term then we need to adopt strategies that improve their ability to solve problems.

Learning multiplication tables is exactly such a strategy.

copyright Greg Ashman 2015


What if everything you knew about education was wrong? – A Review

I have finally had the opportunity to read David Didau’s latest tome. I am mentioned in the acknowledgements and a graph of mine is included in the section on differentiation. So you might think it standard for me to now write a few paragraphs about the excellent ideas the book contains or the personable and lively way that it is written. All of this is true. But I’m not going to write that sort of review. Instead, I am going to write about something from the book that made me think and something that I disagreed with.


Liminality of learning

Didau discusses the fact that learning is not performance and that the two have a troublesome relationship. Learning is liminal, Didau claims. It exists in the twilight; on the edge of what is known and what is unknown; where dragons be. Some knowledge is ‘troublesome knowledge’ that is hard to come to terms with. Sometimes, we can help students past the liminal zone by repetition but troublesome knowledge is more of a challenge because it often requires us to revise ideas that we previously accepted. This is the point that constructivists often make before prescribing strategies that don’t solve the problem.

This concept of liminality is key. I tend to see learning through the theory of cognitive load; an interaction between the environment, the working memory and long-term memory; a system mediated and constrained by the limited capacity of the working memory. And a question arises here; should we therefore reduce the cognitive load in tasks to an absolute minimum? The most troublesome aspect of cognitive load theory is the concept of germane cognitive load; the load that leads to learning. It is troublesome because it makes the theory unfalsifiable and so John Sweller now recommends avoiding it in explanations.

It is interesting that germane cognitive load sits in the liminal shadows; precisely the point that Didau would wish us to focus on.

My own view is that we usually underestimate the cognitive load in the tasks that we present to novices and so, as a general rule, reducing it is a good idea. I am not a fan of the notion that children should struggle – I think this unnecessarily increases cognitive load, interacts a great deal with self-concept and can lead to negative attitudes towards a subject. However, I also tend to agree with the quote that Didau attributes to Robert Coe: “learning happens when people have to think hard.” So there is an uncertain space here that I find interesting.

Learning needs transfer

I disagree with Didau’s definition of learning:

“The ability to retain skills and knowledge over the long term and to be able to transfer them to new contexts.”

The problem is the inclusion of transfer in the definition. It sets the bar too high for learning and implies that anything that does not lead to transfer is not true learning. This idea has been used by educationalists to argue that traditional ‘transmission’ teaching does not lead to ‘deep’ learning and that we need other methods instead. There is usually little evidence supplied that these alternative methods do actually lead to greater transfer but the assertion gets a lot of currency nonetheless.

Transfer is difficult and not even required in many situations. Who regularly solves novel problems? Professional problem-solvers – engineers, plumbers, statisticians – are usually solving variations on well-known problems (thanks to Barry Garelick for shaping my thinking on this). The elevation of transfer tends to do what Didau cautions us against when pursuing taxonomies such as Bloom’s; it devalues ‘lower’ kinds of objectives and makes learning the basics of a subject seem prosaic and unworthy.

I still believe that Willingham’s take on transfer is worth reading.

An odd review

You may think this an odd way to review a book. However, I am hoping that David Didau will be pleased. He spends many pages explaining just how our cognitive biases arise and how they trap us in flawed thinking. I imagine that he would want to get me thinking and to provoke a response to his ideas, even if that response is disagreement.

I spend a lot of time arguing about education and much of this is an ugly parade of fallacy and emotional responses. Didau asks us to take a different path; to accept uncertainty and the fact that we are likely wrong about at least some of what we think. I suggest that this is a noble call. Let us make our disagreements more agreeable.

And get yourself a copy of Didau’s excellent book.


David Klahr writes

My recent post on the value of constructivist teaching sparked something of a debate in the comments. Dan Meyer questioned the Klahr and Nigam study that I referred to. He suggested that the condition described as ‘direct instruction’ is not really direct instruction at all. This is because the students were asked a Yes/No question at the outset of each demonstration and explanation. The key passage in the study is the following:

“Children in the direct-instruction condition observed as the experimenter designed several additional experiments—some confounded, and some unconfounded—to determine the effects of steepness and run length. For each experiment, the instructor asked the children whether or not they thought the design would allow them to ‘tell for sure’ whether a variable had an effect on the outcome. Then the instructor explained why each of the unconfounded experiments uniquely identified the factor that affected the outcome, and why each confounded experiment did not.”

This, to Meyer, suggests that this condition is more like problem-based learning because the students are asked to solve a problem prior to being explicitly instructed in it. I contended that this was nothing like problem-based learning. It is more like stating a learning intention; a way of making clear the point of the exercise. The interaction would also ensure student attention.

I therefore decided to put this to David Klahr, one of the researchers involved. The email exchange is below:

From: Greg Ashman

To: David Klahr

Re: The equivalence of learning paths in early science instruction

Dear David Klahr

I am a part-time PhD student of John Sweller and Slava Kalyuga. I am also a physics and mathematics teacher and I write a blog about education.

I recently wrote a blog post about constructivism in which I referred to your paper with Milena Nigam. You can see it here:

https://gregashman.wordpress.com/2015/09/13/if-constructivist-teaching-is-the-aspirin-then-what-exactly-is-the-headache/

You will notice that prominent maths educationalist, Dan Meyer, has challenged my interpretation of your experiment in the comments. He suggests that the direct instruction condition is similar to problem based learning because students were posed a question at the outset.

I do not wish to misrepresent your work and so I wonder if I could have a little more detail on this condition such as for how long students were left with this question prior to instruction and whether they attempted to answer this themselves.

Of course, if you wished to comment on the blog then that would be excellent. If not, I would like to be able to quote your reply to this email. If you wish not to be quoted then please let me know.

Kind regards

Greg Ashman

David Klahr was gracious enough to reply and to give permission for me to reproduce his response:

From: David Klahr

To: Greg Ashman

Re: The equivalence of learning paths in early science instruction

Greg: thanks for the invitation to join the blogging about this issue.  I’m not a much of a blogger, but if you want to you can post a link to this paper, which puts the Klahr & Nigam paper in a larger context, and which addresses, in depth, several of the core issues in the blog you mention below, feel free to do so.  Here’s the link:

http://www.psy.cmu.edu/~klahr/pdf/What%20do%20we%20mean%20PNAS%20paper.pdf

If you do post it, let me know when its been up for a while and I’ll assume my position as a fly on the wall and peruse the responses.

David Klahr

PS: perhaps the reason that I don’t blog is that I’m too long winded to master the succinctness necessary for the medium.  Its clear that you are a natural:  your 4 sentence summary of Klahr & Nigam is spot on, and shorter than any other version I’ve seen!

PPS: and you can quote this email on your blog  … and I hope you leave in the “PS”, so that your readers will appreciate your summarizing  skills even more.

The linked paper is very interesting on many levels and gives more detail on the experimental method used. It also demonstrates that people have mounted Dan Meyer’s argument before. As Klahr explains:

“…it was suggested that, although the “direct instruction” label is acceptable for an approach in which the teacher designs and summarizes the experiment (as in our type A instruction), that label should not be used in a situation that also includes probe questions (and student replies) as in our type A instruction (Fig. 1). Critics argued that because such interactive engagement with students begins to move from the “talking head” approach often associated with direct instruction toward a type of guided discovery, our type A instruction involves more engagement with the student than is commonly allowed in “pure direct instruction.””

Klahr then goes on to warn us to avoid advocating for general ‘approaches’ and to try to be as specific as possible in describing the conditions that we favour.

I would make two points:

  • Notwithstanding Klahr’s caution about the idea of ‘approaches’, I find it bizarre to suggest that the Klahr and Nigam ‘direct instruction’ condition represents a constructivist approach to teaching. Of course, if we take constructivism as a theory of learning then any type of instruction is constructivist. However, common understandings of constructivist teaching would not include a teacher setting up, demonstrating and fully explaining a procedure. I have explored these issues in an FAQ post.
  • It is convenient for constructivists to wish to only allow us to test ‘direct instruction’ conditions that are completely non-interactive ‘talking head’ approaches. I don’t suppose any K-12 teachers actually teach like this and it reminds me of Rosenshine’s fifth type of direct instruction; “Instruction where direct instruction is portrayed in negative terms such as settings where the teacher lectures and the students sit passively.” Indeed, Meyer ventures, “My opinion is that a better test of direct instruction would have had the instructor explicitly instruct students in those four example experiments. Nothing more.” The lack of interaction will mean that student attention is not assured. This would provide a large advantage to any constructivist condition that it is compared with. I suppose that’s the idea.

Meyer was quite insistent that I answer his questions on this issue, even though I dispute the assumptions implicit in them. And so I would now like to draw a contrast. As a proponent of explicit instruction, I have provided plenty of evidence to support my position. Discussions of the nature of this evidence are interesting and important but there is certainly no lack of it.

Dan Meyer, however, has offered no such evidence to support his widely espoused views on maths teaching.


Messing about on the internet

I am generally not a fan of OECD education reports. The conclusions often reach beyond the strength of the evidence. I will never forget watching a video-linked Andreas Schleicher suggesting that PISA data supports the notion of more personalisation. What, really? Would that be evidence from Shanghai or evidence from South Korea? However, this time, they have released a report that confirms my own prejudices. So I won’t look too hard for the flaws.

The OECD seems to have found that merely giving computers to students does not improve their performance in reading, maths or science and that frequent use is likely to be linked to lower results. This is not a surprise.

There is a useful idea that keeps cropping up in education and almost forms a model for lots of education research. Essentially, students learn what they are taught and don’t learn what they are not taught. I intend to write one of my more technical blog posts about it at some point but Dan Willingham frames it well in his book, “Why don’t students like school?”. He discusses a teacher who teaches students about a key historical event known as the ‘underground railroad’; a network that helped enslaved African Americans escape from slave states. The teacher gets the students baking the sorts of biscuits that the escapees would have eaten. And so the students would have been thinking about flour and eggs and baking rather than history.

This is exactly what happens when computers are gratuitously introduced into classrooms. A teacher thinks, ‘These students need to know definitions of key words; I’ll get them to find the definitions on the internet and make a PowerPoint.’ It sounds like a good idea until you analyse what the students will be thinking about. They will be thinking about animations and clip art and how to work PowerPoint.

This might be worthwhile if you have identified these things as stuff that is worth learning. But you cannot possibly justify it as a way of learning the definitions. At best it would be extremely inefficient and at worst the students would learn little of what is intended.

And yet, technology is beguiling because these students would look busy in a thoroughly modern kind of way. They would probably quite enjoy making PowerPoints – just as many generations have enjoyed making posters – and so are unlikely to become argumentative and difficult. Add to this leaders who are encouraging teachers to embed technology throughout the curriculum and you have a recipe for lots of self-defeating busywork.


If constructivist teaching is the aspirin then what exactly is the headache?

It seems that many people see constructivist teaching approaches as simply good teaching. Certainly, I used to believe that there was strong research evidence to support their use and didn’t really question this. After all, pretty much everyone seemed to agree. I had fallen for a mix of argument from popularity and argument from the authority of those who promote constructivism through education schools or as consultants.

Constructivism also possesses an element of truthiness. We all know that old-fashioned, lecture-style teaching is, well, old fashioned. And that has to be a bad thing, right?

In this post, I intend to chase the constructivist rabbit back down its hole. Brazenly mixing my metaphors, I am going to ask, “If constructivist teaching is the aspirin then what exactly is the headache?”


Headache: Poor levels of achievement

We might naively think that if constructivist teaching is simply good teaching then it should lead to better test scores. However, there is little evidence of this. Anywhere.

Instead, a whole culture of rationalisation has grown up around constructivism and tests. The story goes that, to do well on tests, students merely need to regurgitate disconnected facts learnt by rote. Such regurgitation is, of course, useless and so we can dismiss evidence from test scores; evidence that tends to show the superiority of explicit instruction.

Educationalists often take their lead from America and it seems that the U.S. makes far more extensive use of multiple choice bubble tests than the rest of the world. Even so, it’s a little hard to swallow this rhetoric about tests only ever assessing rote memorisation. Even a multiple choice test, if well designed, can assess understanding; simply make one of the choices a common misconception. If the questions are then withheld until the date of the test – an advantage of much-maligned standardised tests – we can determine if students really do understand the principle rather than simply having learnt the right answer.
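As a crude illustration of that principle (an item I have made up for the purpose, not one taken from any actual test), consider:

Which of these numbers is the largest? (a) 0.7 (b) 0.25 (c) 0.300 (d) 0.09

A student holding the common ‘longer decimals are bigger’ misconception will be drawn to (c) or (b). The item cannot be answered by recalling a rehearsed response; it probes whether place value is actually understood.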

So I think this dismissal of tests is a convenient excuse.

Headache: Poor understanding

If we do accept the premise that most tests only assess recall then perhaps the advantage of constructivist teaching is that it enhances the non-assessed understanding.

Apparently, back in olden times, students learnt all sorts of things at school that they simply did not understand. In history, they were taught to parrot dates, oblivious to their significance. In mathematics, they were forced to memorise standard algorithms without ever understanding how these algorithms work. Presumably, the same must be true for Shanghai, Hong Kong, Korea and Singapore today, whose students don’t actually understand the maths that they so skilfully apply.

I disagree that knowledge and understanding can be dichotomised in this way as qualitatively different things. Although a useful concept, understanding basically consists of more and better knowledge. To use the psychological jargon, it results from well-developed schema. If this is the case, it is unlikely that some forms of instruction will be good for improving knowledge yet quite different ones are required for developing understanding.

A key experiment by Klahr and Nigam tested this notion directly. They taught science students about the principle of controlling variables either explicitly or through the students performing their own investigations. As you might expect, far fewer in the latter group learnt the principle. But those who did learn it this way were no better at later evaluating science fair posters. Their understanding was not superior.

Headache: Poor motivation

So perhaps constructivism does not directly lead to higher test scores or greater understanding. On the other hand, everyone knows that traditional school is awful and boring; Ken Robinson says so. Perhaps a constructivist teaching approach is more motivating for students? Ultimately, adoption of constructivist methods will lead to a great leap forward as more students develop a passion for science, maths and history. The initial dip in test scores will be far outweighed by the greater uptake of conceptually demanding electives.

You will find studies that seem to show increased motivation from constructivist approaches. But this is usually pretty easy to explain by the fact that enthusiastic teachers implementing a new program that they believe in will pass that enthusiasm on to their students, particularly if the approach is compared with business-as-usual.

Once you think about constructivism for a little while, the proposition that these methods are more intrinsically motivating becomes deeply implausible. Is it really motivating to be presented with novel problems that you have no idea how to solve? Indeed, the students in Project Follow Through who were exposed to a program that systematically built the skills needed for problem-solving not only outperformed other students in tests of problem-solving, but also had greater increases in self-concept when compared to more constructivist models.

I can see some logic in the motivational power of projects because students can follow their interests (if allowed). But what if a science project takes us away from learning any actual science? A few years of making posters about monster trucks might be fun but once the inevitable confrontation with reality occurs, and students realise that they don’t know any science, you are unlikely to see a huge uptake in science electives.

Constructivists also sink their own argument when they insist on designing learning experiences around the mundane and commonplace in the name of ‘relevance’. The idea that students will be more motivated by a project to discover how their local community disposes of its waste than a topic about dinosaurs is obviously absurd. There is also a whiff of chauvinism; kids like these couldn’t possibly appreciate great art or poetry or the abstract beauty of mathematics because they cannot see any farther than the end of the street.

Bait-and-switch

So, what is it? What is the headache that constructivism cures? When you investigate, it seems that the idea that constructivism represents good teaching is a classic bait-and-switch.

I think that lots of people just like the idea of it.


What’s wrong with Dan Meyer’s TED talk?

I originally posted the following on my old blog back in 2013. I thought it was worth reposting given recent discussions. 

For those of you who don’t know, Dan Meyer is a bit of a phenomenon. On the basis of having taught maths for six years in California, he now travels the world, holding professional development sessions on his approach to teaching. He has even given a TED Talk.

I was recently made aware of Onalytica’s list of the most influential education blogs. I’m not clear how Onalytica rates influence and it seems to throw up some rather odd results. However, Dan Meyer’s blog is rated number 1 by this approach and it has been for some time.

I think it interesting that this should be the case because:

1. I disagree with Dan Meyer’s analysis of the teaching of mathematics

2. Dan Meyer presents little evidence to support his case

Dan Meyer has an aphorism that sits beneath the title of his website; “less helpful.” This really summarises his approach. For instance, in his TED talk, he discusses a textbook question about a water tank. The original question has a lot of structure. It is split into multiple parts that lead students through the question in a particular order. This is not accidental. The textbook will have been designed by writers who have an implicit or explicit understanding of cognitive load. Novice learners need this structure because the capacity of the working memory is limited and so it enables novices to focus on a few salient points at a time.

However, Meyer says, “The question is: How long will it take you to fill it up? First things first, we eliminate all the sub-steps. Students have to develop those, they have to formulate those. And then notice that all the information written on there is stuff you’ll need. None of it’s a distractor, so we lose that. Students need to decide, “All right, well, does the height matter? Does the side of it matter? Does the color of the valve matter? What matters here?” Such an under-represented question in math curriculum. So now we have a water tank. How long will it take you to fill it up? And that’s it.”

This is quite poor advice. It is conceivable that some students who possess more mathematical knowledge will be able to cope – this is known as the expertise reversal effect. But, for novice learners, this is going to lead to confusion and frustration. I am sure that Meyer is a talented and inspirational teacher who has strategies for mitigating these issues but a less experienced teacher following this advice will find it a struggle. Little learning will take place. Given that conventional means are more consistent with both cognitive science and the findings of educational research, it is odd that a strategy would be suggested that throws up such problems.

My second point about the TED talk in particular – and Meyer’s approach more generally – is the odd nature of the discourse. Meyer does not seem to feel the need to justify any of his claims with reference to any relevant evidence. At one point in the TED talk, we hear a reference to something said by a TV producer but that’s about it. Are we to assume that this whole edifice is constructed solely upon what Dan Meyer reckons?

An interesting contrast would be with Dylan Wiliam. He is one of the most respected and influential educationalists in the world right now, having been deputy director of the Institute of Education in London. Yet, he never presents anything without discussing the research evidence behind it. He doesn’t expect us to accept an argument from authority, even from him.

I do not claim that all assertions are unacceptable. Some points in an argument are obvious and it would be tedious to reference every statement that is made. Some points are clearly a matter of opinion (e.g. my own assertion that thinking hats are silly) and should be taken as such. However, to build an entire methodology around assertions seems a little bit much. Most posts on Meyer’s blog are descriptions of the application of his methods with the central tenets seemingly assumed. Some plucky individuals have made reference to cognitive science in the forums but these are treated largely as an interesting curiosity.

I don’t actually hold Meyer responsible for any of this. Teachers have views which they are entitled to share. However, it is an indictment of our profession that assertions can elevate you to such dizzy heights, even when they appear at odds with the available evidence.
