I got a bit of a mention

In case you missed it, I was recently mentioned in an essay by UK Schools Minister Nick Gibb:


The essay comes from a collection produced by the think-tank Policy Exchange. It was put together to mark a lecture given by E D Hirsch Jr. The essay collection is well worth reading.

If you are looking for a blog post in which I try to popularise Hirsch’s ideas then this one is a good place to start.


Two Roads

“Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.”

Robert Frost

Frost’s famous poem is often interpreted as a celebration of free-thinking, of not following the crowd. However, critics contend it has quite a different meaning: how foolish it is to spend our time dwelling on past decisions and attributing blame – “If only I had…”. There will always be something that you missed on the other path; something of value. Such is life.

I am going to inhabit that ambiguity a little as I suggest the two paths that we face as educators who wish to do a better job.

We can all probably recognise the current state of affairs. Early years and primary school might be different, particularly with regard to reading and maths instruction, but I suspect that most secondary school teaching is a kind of suboptimal explicit instruction. This is not a harsh criticism. You can learn a lot this way but we should always strive to do better because we can always do better. This is human.

If you are keen to improve your teaching then there are two roads that you may choose to travel.

The first road has had many names but it basically involves the teacher talking less and the students doing more. But the things the children are doing are important. They can’t generally be completing long lists of sums or spelling drills. Their activity needs to look more like what people do in the real world. So science classes should involve students doing the sorts of things that professional scientists do. Mathematics must involve solving novel problems.

The idea is that this is both more motivating for students and that it leads to a deeper kind of learning. Rather than just knowing stuff, students can process knowledge better. This might be described in terms of generic skills that they will develop.

This first road represents something of a revolution. It involves a breaking-down prior to any building back up. It is widely encouraged.

The second road is quite different. This is about taking what is already there and making it better. One simple way to improve everyday explicit instruction is to look at how feedback is structured, both to the teacher and to the students. There is nothing more pointless than taking up exercise books every few weeks and marking them. By the time you read what your students have written, the moment has passed and you are onto something else, with no capacity to do anything about it. And your students have little capacity to respond.

It is better to gain regular feedback from students as you go along in order to avoid the ignorance deal; that state where students and teachers avoid feedback in order to collude in the idea that learning has taken place; a deal to avoid cognitive dissonance.

If you ask questions of all students, not just a few, you will get a better idea of what they really understand. To manage this, you will need good classroom behaviour and there are ways to work on that, provided that schools don’t adopt policies that militate against this. Short, regular tests can give good information to teachers, can give the students a clear idea of what they do and do not know and can also consolidate learning through retrieval practice. Space them out and return regularly to important concepts and you can disrupt the forgetting process.

I would also add that good quality curriculum materials that can be refined over time take away some of the chance element of learning; that tweak you made last year which seemed to result in an improved set of test scores can be carried over to this year.

It is clear which road I favour and where I think the evidence lies, so let’s now return to the sense of Frost’s poem. Does it really matter all that much? Will we miss out on something whichever path we choose to follow? Perhaps. And I will return to this another time.

For now, I will leave it for you to think about.

William Bartlett [CC BY-SA 2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons



Five questions to ask an education guru

Imagine that you are a teacher in a school and you have just sat through a presentation by an education guru of some sort. Let’s call her ‘Marion’. If it is safe to do so, what questions should you ask Marion at the end of the presentation? I have a few suggestions.

1. What are you actually suggesting that we should do?

A typical education guru will say plenty of things that don’t actually lead to concrete proposals. She may spin anecdotes about a fantastic school that she recently visited where everything was marvellous.

Alternatively, she may construct terms and then set about defining these terms. You may find yourself hearing about ‘Productive Change Capacity’, how this is made up of constructive professional dialogue, openness to the views of others etc. and how this can be contrasted with ‘Resistive Change Capacity’, which is made up of restricted professional dialogue, closed-mindedness and so on. I’ve just made up these two terms but education literature is full of such tautologies that lead us nowhere in particular.

Instead, if Marion really has anything to offer then she should be able to describe what might change as a result of taking on her ideas. Of course, you don’t expect Marion to know everything about your context and so she might need to ask a few questions too. However, she clearly should have something to offer or there really is no point.

2. What problems do your proposals solve?

Presumably, implementing Marion’s ideas will make things better in some way. Otherwise, why would we bother? So it seems reasonable to ask what current or typical problems these ideas will solve. This also places the ideas on a testable basis. If they solve a particular problem then what will that look like? How will we know that the problem is solved? Perhaps our students will read more at home or they might become better mathematical problem solvers. Perhaps they will feel better about school. Some of these things are easier to measure than others but any meaningful change should have observable consequences.

3. What would convince you that you are wrong?

Testability leads to a key principle of science that is largely absent from education discussions; the idea of falsifiability. In fact, it is so absent that you are likely to need to persist in order to get an answer to this question. I would predict that Marion’s first response would be to explain why her ideas are not wrong, why they are well grounded in theory or research or whatever. So you’ll probably need to clarify with a follow-up question.

The problem is that many supposed educational theories can explain all possible sets of circumstances. We will see an example shortly. If Marion really cannot think of anything that would convince her that her ideas are wrong then we have something more akin to an unshakable belief than something based on evidence.

Strong theories are always falsifiable. Their strength comes from the fact that, despite this, nobody has managed to demonstrate that they are wrong. There is a common story that tells of how the evolutionary theorist, J. B. S. Haldane, was asked what would falsify the theory of evolution and he answered that finding fossilised rabbits in Precambrian rocks would do it.

4. Does adopting part of the approach give part of the benefit?

One way that a proposed intervention can become unfalsifiable is when it only works if implemented fully and with 100% fidelity. If it works, great. If it doesn’t work then that is because you didn’t do it properly. Either way, the rightness of the original intervention remains unchallenged.

Something like this happened with a huge differentiated instruction study in the U.S. It didn’t work but the authors concluded that this was because the intervention was not implemented correctly, leaving the principle of differentiated instruction unchallenged.

Now it may be the case that some approaches will work with a perfect implementation and will not work or will cause harm if implemented with anything less than this. If this is true then ask yourself how much practical value there would be in adopting this course of action. It seems unlikely in the extreme that you will get any team of teachers anywhere to implement something with complete faithfulness to the originators’ intentions.

However, if implementing part of the program delivers part of the benefits then this seems a much better prospect. You can imagine different teachers having strengths in different elements, at least to begin with. And as the benefits accrue, you might start to win over the sceptics.

5. What are the negative effects?

If a consultant cannot describe the negative effects of their proposed initiative then this is because there aren’t any, because the consultant is badly informed, because the consultant is dishonest, or because it has never been attempted before and you are the guinea pigs.

I cannot think of any intervention, even among those that I would recommend, that has no negative consequences. For instance, a push for explicit instruction would meet with some teacher resistance that would have to be effectively managed.

No questions asked

Of course, you will not need to ask any of these questions if they are answered in the presentation. Some of the best educationalists that I have seen will preempt most of these points. Dylan Wiliam, for instance, has spent a lot of time thinking about the negative impacts of attempts to embed more formative assessment and has tried to develop programs that provide benefits if implemented only in part. I have heard him talk of teachers aiming to embed one new practice per year.

And we shouldn’t be too harsh. Just because a consultant tells an anecdote, it does not mean that she is wrong. Anecdotes enliven presentations and often make them more bearable. A consultant promoting a genuinely effective approach might not have been challenged by such questions before; the education community is often too polite and credulous.

However, if after a thoughtful pause, you don’t get a straight answer then I’d be keen to investigate further before plunging into the next whole-school initiative.

“We’ll need to have a think about that, Marion.”

If you are interested in reading more about the evaluation of educational initiatives and ideas in a way that is accessible to teachers then I recommend Dan Willingham’s book, “When can you trust the experts?”


Four tips for reducing maths anxiety

Some might argue that there should be room for a little anxiety in school life. We don’t want to wrap students up in cotton wool because the real world is not like that. Perhaps a little anxiety helps lead to better coping strategies; more resilience. Perhaps.

By GRPH3B18 (Own work) [CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons


However, I think it is true that anxiety can disrupt learning and so we probably want to reduce unnecessary anxiety if we want to maximise learning.

In my earlier post on Jo Boaler’s remarks about multiplication tables, I noted that improvements in competence in a subject lead to improvements in self-concept; how students feel about their academic abilities. So, if we wish to reduce students’ anxiety about mathematics, it would seem reasonable to try to increase their self-concept by teaching them in such a way that they become better at maths. I have used this principle, along with others from cognitive psychology, logic and experience, to suggest the following four tips to reduce maths anxiety. Please feel free to add your own in the comments.

1. Have frequent low-stakes tests

We know that retrieval practice is effective at supporting learning. However, if we test students infrequently then they are likely to see these tests as more of an event and therefore as something to worry about. Instead, we should build frequent, short-duration, low-stakes testing into our classroom routines. Not only will this make testing more familiar, it will increase competence when students tackle any high-stakes testing that is mandated by states or districts and will thus reduce anxiety on these assessments too.

2. Value routine competence in assessment

If you were to spend your time reading maths teaching blogs then you might think that the only kind of maths performance of value is when students can creatively transfer something that they have learnt to solve a novel, non-routine problem. This is not the case. Routine competence is also of great value in mathematics. There is a lot to be said for being able to reliably change the subject of equations.
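To give a concrete illustration of the kind of routine competence I mean, consider rearranging the standard kinematics formula v = u + at to make t the subject:

v = u + at
v - u = at
t = (v - u) / a

There is nothing novel to solve here, but being able to carry out this sort of rearrangement quickly and reliably is exactly the kind of performance that deserves a visible place in our assessments.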

If we communicate to students that it is only non-routine problem-solving that matters then we are likely to make them feel inadequate. We can send such a message explicitly or we can send it implicitly by setting large numbers of non-routine problems and making these the focus of assessment.

Non-routine problems are great for avoiding ceiling effects on tests and enabling some of the most talented students to shine. However, assessment should also include a large amount of routine problem solving to show that this is also valued. As a general rule, I would advocate a gradual move from routine to non-routine.

3. Avoid ‘productive failure’ and problem-based learning

Similarly, some educators advocate framing lessons by setting students problems that they do not yet know how to solve in the belief that this will make them keen to develop their own solution methods or receptive to learning from the teacher. Some children might find this motivating but others – and particularly those with a low maths self-concept – are likely to feel threatened. Motivational posters will not help.

It is true that some studies seem to show that this kind of approach leads to improvements in learning. However, these are often poorly designed, with more than one factor being varied at a time (see discussion here). And it is a matter of degree. In the comments on this blog post, Barry Garelick suggested asking students to factorise quadratics with negative coefficients once they have been taught how to factorise ones with positive coefficients. This still requires a little leap but it is far less of a jump than asking students to develop their own measure of spread from scratch, as in the experiments of Manu Kapur.
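To make the contrast concrete, the kind of small step Garelick describes might look like moving from factorising

x² + 7x + 12 = (x + 3)(x + 4)

to factorising

x² + x - 12 = (x + 4)(x - 3)

The second example introduces negative terms but still relies on the same procedure – find a factor pair of the constant term whose sum is the middle coefficient – whereas inventing a measure of spread from first principles gives students almost nothing familiar to build on.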

Given that there is a wealth of evidence in favour of explicit instruction, where concepts and procedures are fully explained to students, it seems that productive failure is risky and could backfire through its interaction with self-concept.

4. Build robust schema

It is true that you can survive without knowing your multiplication tables. You can survive without knowing most of the things that students learn in school. If you just have a particular gap in your knowledge then you can develop workarounds.

The question is: why would you want to? Knowing common multiplications by heart makes mathematics easier to do because it is one less thing to process. Building and valuing such basic knowledge is both a way of generating little successes for students to experience and a way of aiding the process of more complex problem solving. I think that this is one of the reasons why the ‘basic skills’ models in Project Follow Through were so successful at generating gains in more complex problem-solving.

A guiding principle

In reducing maths anxiety, we should focus primarily on teaching approaches that are likely to make students better at maths. Increase maths competence to reduce maths anxiety.


Jo Boaler is wrong about multiplication tables 

The TES has quoted maths education professor Jo Boaler as stating that the increased focus on memorising times-tables in England is “terrible”:

“I have never memorised my times tables. I still have not memorised my times tables. It has never held me back, even though I work with maths every day.

“It is not terrible to remember maths facts; what is terrible is sending kids away to memorise them and giving them tests on them which will set up this maths anxiety.”

Boaler is obviously alluding to some research here although it’s not clear what this is. What is clear is that she is wrong.

Tables help

Knowing maths facts such as times tables is incredibly useful in mathematics. When we solve problems, we have to use our working memory, which is extremely limited and can only cope with processing a few items at a time.

If we know our tables then we can simply draw on these answers from our long-term memory when required. If we do not then we have to use our limited working memory to figure them out, leaving less processing power for the rest of the problem and causing ‘cognitive overload’; an unpleasant feeling of frustration that is far from motivating.

An example would be trying to factorise a quadratic expression; tables knowledge makes the process much easier.
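For example, a student factorising x² + 13x + 42 who knows at once that 6 × 7 = 42 and 6 + 7 = 13 can simply write down (x + 6)(x + 7). A student who has to grind through the factor pairs of 42 while also holding the sum condition in mind is spending precious working memory on arithmetic rather than on the structure of the problem.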

The fact that Boaler never uses times tables as a maths education professor tells us something but I’m not sure it tells us much about the value of tables in solving maths problems.

You can read the cognitive load argument here.

Anxiety 

I am sure that testing can induce anxiety but it certainly does not have to. Skilful maths teachers will communicate with their students and let them know that the tests are a low stakes part of the learning process.

Tests are an extremely effective way of helping students learn, particularly for relatively straightforward items such as multiplication tables and so, appropriately used, they should be encouraged.

We also know that how students feel about their ability – their self-concept – is related to proficiency and that it is likely that proficiency comes first; that is, proficiency causes increased self-concept.

With this in mind, if we want students to feel good about maths and reduce maths anxiety in the medium to long term then we need to adopt strategies that improve their ability to solve problems.

Learning multiplication tables is exactly such a strategy.

copyright Greg Ashman 2015


What if everything you knew about education was wrong? – A Review

I have finally had the opportunity to read David Didau’s latest tome. I am mentioned in the acknowledgements and a graph of mine is included in the section on differentiation. So you might think it standard for me to now write a few paragraphs about the excellent ideas the book contains or the personable and lively way that it is written. All of this is true. But I’m not going to write that sort of review. Instead, I am going to write about something from the book that made me think and something that I disagreed with.


Liminality of learning

Didau discusses the fact that learning is not performance and that the two have a troublesome relationship. Learning is liminal, Didau claims. It exists in the twilight; on the edge of what is known and what is unknown; where dragons be. Some knowledge is ‘troublesome knowledge’ that is hard to come to terms with. Sometimes, we can help students past the liminal zone by repetition but troublesome knowledge is more of a challenge because it often requires us to revise ideas that we previously accepted. This is what constructivists often suggest before prescribing strategies that don’t solve the problem.

This concept of liminality is key. I tend to see learning through the theory of cognitive load; an interaction between the environment, the working memory and long-term memory; a system mediated and constrained by the limited capacity of the working memory. And a question arises here; should we therefore reduce the cognitive load in tasks to an absolute minimum? The most troublesome aspect of cognitive load theory is the concept of germane cognitive load; the load that leads to learning. It is troublesome because it makes the theory unfalsifiable and so John Sweller now recommends avoiding it in explanations.

It is interesting that germane cognitive load sits in the liminal shadows; precisely the point that Didau would wish us to focus on.

My own view is that we usually underestimate the cognitive load in the tasks that we present to novices and so, as a general rule, reducing it is a good idea. I am not a fan of the notion that children should struggle – I think this unnecessarily increases cognitive load, interacts a great deal with self-concept and can lead to negative attitudes towards a subject. However, I also tend to agree with the quote that Didau attributes to Robert Coe that, “learning happens when people have to think hard.” So there is an uncertain space here that I find interesting.

Learning needs transfer

I disagree with Didau’s definition of learning:

“The ability to retain skills and knowledge over the long term and to be able to transfer them to new contexts.”

The problem is the inclusion of transfer in the definition. It sets the bar too high for learning and implies that anything that does not lead to transfer is not true learning. This idea has been used by educationalists to argue that traditional ‘transmission’ teaching does not lead to ‘deep’ learning and that we need other methods instead. There is usually little evidence supplied that these alternative methods do actually lead to greater transfer but the assertion gets a lot of currency nonetheless.

Transfer is difficult and not even required in many situations. Who regularly solves novel problems? Professional problem-solvers – engineers, plumbers, statisticians – are usually solving variations on well-known problems (thanks to Barry Garelick for shaping my thinking on this). The elevation of transfer tends to do what Didau cautions us against when pursuing taxonomies such as Bloom’s; it devalues ‘lower’ kinds of objectives and makes learning the basics of a subject seem prosaic and unworthy.

I still believe that Willingham’s take on transfer is worth reading.

An odd review

You may think this an odd way to review a book. However, I am hoping that David Didau will be pleased. He spends many pages explaining just how our cognitive biases arise and how they trap us in flawed thinking. I imagine that he would want to get me thinking and to provoke a response to his ideas, even if that response is disagreement.

I spend a lot of time arguing about education and much of this is an ugly parade of fallacy and emotional responses. Didau asks us to take a different path; to accept uncertainty and the fact that we are likely wrong about at least some of what we think. I suggest that this is a noble call. Let us make our disagreements more agreeable.

And get yourself a copy of Didau’s excellent book.


David Klahr writes

My recent post on the value of constructivist teaching sparked something of a debate in the comments. Dan Meyer questioned the Klahr and Nigam study that I referred to. He suggested that the condition described as ‘direct instruction’ is not really direct instruction at all. This is because the students were asked a Yes/No question at the outset of each demonstration and explanation. The key passage in the study is the following:

“Children in the direct-instruction condition observed as the experimenter designed several additional experiments—some confounded, and some unconfounded—to determine the effects of steepness and run length. For each experiment, the instructor asked the children whether or not they thought the design would allow them to ‘tell for sure’ whether a variable had an effect on the outcome. Then the instructor explained why each of the unconfounded experiments uniquely identified the factor that affected the outcome, and why each confounded experiment did not.”

This, to Meyer, suggests that this condition is more like problem-based learning because the students are asked to solve a problem prior to being explicitly instructed in it. I contended that this was nothing like problem-based learning. It is more like stating a learning intention; a way of making clear the point of the exercise. The interaction would also ensure student attention.

I therefore decided to put this to David Klahr, one of the researchers involved. The email exchange is below:

From: Greg Ashman

To: David Klahr

Re: The equivalence of learning paths in early science instruction

Dear David Klahr

I am a part-time PhD student of John Sweller and Slava Kalyuga. I am also a physics and mathematics teacher and I write a blog about education.

I recently wrote a blog post about constructivism in which I referred to your paper with Milena Nigam. You can see it here:

https://gregashman.wordpress.com/2015/09/13/if-constructivist-teaching-is-the-aspirin-then-what-exactly-is-the-headache/

You will notice that prominent maths educationalist, Dan Meyer, has challenged my interpretation of your experiment in the comments. He suggests that the direct instruction condition is similar to problem-based learning because students were posed a question at the outset.

I do not wish to misrepresent your work and so I wonder if I could have a little more detail on this condition such as for how long students were left with this question prior to instruction and whether they attempted to answer this themselves.

Of course, if you wished to comment on the blog then that would be excellent. If not, I would like to be able to quote your reply to this email. If you wish not to be quoted then please let me know.

Kind regards

Greg Ashman

David Klahr was gracious enough to reply and to give permission for me to reproduce his response:

From: David Klahr

To: Greg Ashman

Re: The equivalence of learning paths in early science instruction

Greg: thanks for the invitation to join the blogging about this issue.  I’m not a much of a blogger, but if you want to you can post a link to this paper, which puts the Klahr & Nigam paper in a larger context, and which addresses, in depth, several of the core issues in the blog you mention below, feel free to do so.  Here’s the link:

http://www.psy.cmu.edu/~klahr/pdf/What%20do%20we%20mean%20PNAS%20paper.pdf

If you do post it, let me know when its been up for a while and I’ll assume my position as a fly on the wall and peruse the responses.

David Klahr

PS: perhaps the reason that I don’t blog is that I’m too long winded to master the succinctness necessary for the medium.  Its clear that you are a natural:  your 4 sentence summary of Klahr & Nigam is spot on, and shorter than any other version I’ve seen!

PPS: and you can quote this email on your blog  … and I hope you leave in the “PS”, so that your readers will appreciate your summarizing  skills even more.

The linked paper is very interesting on many levels and gives more detail on the experimental method used. It also demonstrates that people have mounted Dan Meyer’s argument before. As Klahr explains:

“…it was suggested that, although the “direct instruction” label is acceptable for an approach in which the teacher designs and summarizes the experiment (as in our type A instruction), that label should not be used in a situation that also includes probe questions (and student replies) as in our type A instruction (Fig. 1). Critics argued that because such interactive engagement with students begins to move from the “talking head” approach often associated with direct instruction toward a type of guided discovery, our type A instruction involves more engagement with the student than is commonly allowed in “pure direct instruction.””

Klahr then goes on to warn us to avoid advocating for general ‘approaches’ and to try to be as specific as possible in describing the conditions that we favour.

I would make two points:

  • Notwithstanding Klahr’s caution about the idea of ‘approaches’, I find it bizarre to suggest that the Klahr and Nigam ‘direct instruction’ condition represents a constructivist approach to teaching. Of course, if we take constructivism as a theory of learning then any type of instruction is constructivist. However, common understandings of constructivist teaching would not include a teacher setting-up, demonstrating and fully explaining a procedure. I have explored these issues in an FAQ post.
  • It is convenient for constructivists to wish to only allow us to test ‘direct instruction’ conditions that are completely non-interactive ‘talking head’ approaches. I don’t suppose any K-12 teachers actually teach like this and it reminds me of Rosenshine’s fifth type of direct instruction; “Instruction where direct instruction is portrayed in negative terms such as settings where the teacher lectures and the students sit passively.” Indeed, Meyer ventures, “My opinion is that a better test of direct instruction would have had the instructor explicitly instruct students in those four example experiments. Nothing more.” The lack of interaction will mean that student attention is not assured. This would provide a large advantage to any constructivist condition that it is compared with. I suppose that’s the idea.

Meyer was quite insistent that I answer his questions on this issue, even though I dispute the assumptions implicit in them. And so I would now like to draw a contrast. As a proponent of explicit instruction, I have provided plenty of evidence to support my position. Discussions of the nature of this evidence are interesting and important but there is certainly no lack of it.

Dan Meyer, however, has offered no such evidence to support his widely espoused views on maths teaching.