This is the homepage of Greg Ashman, a teacher living and working in Australia. Nothing that I write or that I link to necessarily reflects the view of my school.

Read about my ebook, “Ouroboros” here.


I have written for Spiked magazine:

Educationalists: Teaching bad ideas

Teachers have lost their mojo

I have written for The Conversation:

Ignore the fads

Why students make silly mistakes

Some of my writing is also on the researchED website workingoutwhatworks.com

Briefing: Meta-cognition

Your own personal PISA

I used to write articles for the TES. These now appear to have been paywalled. I will probably make them available on my blog at some point. If you have access then you can find them here:

Create waves of learning

Master the mysterious art of explanation

Taking a critical look at praise

Behaviour

Great Scott! Let’s push the brain to its limits

The science fiction that doing is best

Make them sit up and take notice

For great rewards, sweat the small stuff

Where the grass is browner

Stand-out teaching – minus differentiation


Why the Scientific American article on maths education doesn’t add up

PISA recently released a report about the data that they have collected on maths teaching and learning strategies. I analysed some of this data and related it to the claims that PISA made. The report was quickly followed by an article in Scientific American.

The Scientific American article focused on one area of the PISA report in particular – the rate at which students report using “memorisation” strategies. In the working paper used as a basis for this report, the measure used to quantify memorisation is explained. Students were asked the following questions:

“For each group of three items, please choose the item that best describes your approach to mathematics.

Labels (not shown in the questionnaire): (m) memorisation (e) elaboration (c) control

a) Please tick only one of the following three boxes.

1 When I study for a mathematics test, I try to work out what the most important parts to learn are. (c)

2 When I study for a mathematics test, I try to understand new concepts by relating them to things I already know. (e)

3 When I study for a mathematics test, I learn as much as I can off by heart. (m)

b) Please tick only one of the following three boxes.

1 When I study mathematics, I try to figure out which concepts I still have not understood properly. (c)

2 When I study mathematics, I think of new ways to get the answer. (e)

3 When I study mathematics, I make myself check to see if I remember the work I have already done. (m)

c) Please tick only one of the following three boxes.

1 When I study mathematics, I try to relate the work to things I have learnt in other subjects. (e)

2 When I study mathematics, I start by working out exactly what I need to learn. (c)

3 When I study mathematics, I go over some problems so often that I feel as if I could solve them in my sleep. (m)

d) Please tick only one of the following three boxes.

1 In order to remember the method for solving a mathematics problem, I go through examples again and again. (m)

2 I think about how the mathematics I have learnt can be used in everyday life. (e)

3 When I cannot understand something in mathematics, I always search for more information to clarify the problem. (c)”

I am not convinced that these memorisation options represent actual memorisation strategies. Also, the questions are asked in a way that forces a discrete choice. The accepted practice in psychology is to use a scale of agreement with any given statement (e.g. a Likert scale). Without this, we have a validity and reliability problem. For instance, a student might partly agree with all three responses to question a) but when they are forced to select one response then this will be recorded as 100% agreement with that option and 0% agreement with the alternatives. This is the same reason why the Myers-Briggs personality test is invalid and unreliable.
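To make the problem concrete, here is a minimal sketch in Python of how the forced-choice format collapses partial agreement (the agreement levels are entirely hypothetical and just for illustration):

```python
# Hypothetical student who partly agrees with all three responses to question a),
# expressed here on a 1-5 Likert-style scale (illustrative numbers only).
likert_agreement = {"control": 4, "elaboration": 4, "memorisation": 3}

# A Likert-style instrument would record all three values.
# The forced choice records only a single option...
forced_choice = max(likert_agreement, key=likert_agreement.get)

# ...which is then effectively treated as 100% agreement with that option
# and 0% agreement with the two alternatives.
recorded = {label: (1 if label == forced_choice else 0) for label in likert_agreement}

print(forced_choice)  # 'control'
print(recorded)       # {'control': 1, 'elaboration': 0, 'memorisation': 0}
```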

It is therefore hardly surprising that I could find no correlation between the “index of memorisation” that PISA derive from these responses and a country’s PISA mean maths score. These questions probably do not reliably measure the use of memorisation.

Yet the Scientific American article makes a number of claims about memorisation on the basis of this data. Unfortunately, the authors provide no references and they seem to be in possession of data that is not presented in the PISA report (if either author reads this post then I would be grateful for this data). Nevertheless, I think some of these claims are highly unlikely and I wonder whether the authors may have made an error.

I will list these claims below and then comment on them.

1. In every country, the memorizers turned out to be the lowest achievers, and countries with high numbers of them—the U.S. was in the top third—also had the highest proportion of teens doing poorly on the PISA math assessment.

I cannot tell from the PISA report how a “memoriser” is defined. For instance, is it a student who gives an (m)-class response to all four of the questions above, or to three of them, or to two? Similarly, data on the number of such memorisers in each country is not provided.

I would not be surprised to find out that, in any given country, these memorisers are the lowest achievers but I am not sure what this would tell us. As Robert Craigen points out in a comment on a previous post, memorisers might have resorted to some of these strategies due to poor teaching. They may also have less understanding of, or interest in, the survey questions.

However, I find it highly unlikely that countries with high numbers of memorisers correlate with teens doing poorly on the PISA math assessment. Presumably, countries with higher numbers of memorisers will have a higher overall index of memorisation. If not, this would require the remaining non-memorisers to use far fewer memorisation strategies than the overall mean. If you plot percentage of maths low achievers against index of memorisation then there is no correlation.


2. Further analysis showed that memorizers were approximately half a year behind students who used relational and self-monitoring strategies.

3. In no country were memorizers in the highest-achieving group, and in some high-achieving economies, the differences between memorizers and other students were substantial.

Again, I would like to see the data here but I can believe it.

4. In France and Japan, for example, pupils who combined self-monitoring and relational strategies outscored students using memorization by more than a year’s worth of schooling.

Why select just two countries like this? Again, I don’t have the underlying data but, if I did, it wouldn’t tell us much. It is fraught enough to try to make comparisons across many education systems of different sizes and with different cultures. At least if we include all of them then we might pick up some general trends. I’m sure it would be possible to prove almost anything with just two examples.

5. The U.S. actually had more memorizers than South Korea, long thought to be the paradigm of rote learning.

Again, we would need to know the definition of a “memoriser”.

6. Unfortunately, most elementary classrooms ask students to memorize times tables and other number facts, often under time pressure, which research shows can seed math anxiety. It can actually hinder the development of number sense.

I would love to see this research. Victoria Simms recently reviewed a book by one of the authors of the Scientific American article and found a similar claim:

“Boaler suggests that reducing timed assessment in education would increase children’s growth mindsets and in turn improve mathematical learning; she thus emphasises that education should not be focused on the fast processing of information but on conceptual understanding. In addition, she discusses a purported causal connection between drill practice and long-term mathematical anxiety, a claim for which she provides no evidence, beyond a reference to “Boaler (2014c)” (p. 38). After due investigation it appears that this reference is an online article which repeats the same claim, this time referencing “Boaler (2014)”, an article which does not appear in the reference list, or on Boaler’s website. Referencing works that are not easily accessible, or perhaps unpublished, makes investigating claims and assessing the quality of evidence very difficult.”

7. In 2005 psychologist Margarete Delazer of Medical University of Innsbruck in Austria and her colleagues took functional MRI scans of students learning math facts in two ways: some were encouraged to memorize and others to work those facts out, considering various strategies. The scans revealed that these two approaches involved completely different brain pathways. The study also found that the subjects who did not memorize learned their math facts more securely and were more adept at applying them. Memorizing some mathematics is useful, but the researchers’ conclusions were clear: an automatic command of times tables or other facts should be reached through “understanding of the underlying numerical relations.”

This claim does at least provide a clue as to where to find the evidence, although it is a little odd. The neuroscience part of the claim is essentially irrelevant to teachers – why care what ‘brain pathways’ are used? Teachers generally have no opinion on this. We need to focus instead on the quality of learning.

I think I have found the paper. Unusually, it includes both a neuroscience imaging study and a behavioural study of the quality of learning, as suggested in the Scientific American claim. The participants were 16 university students or graduates. They completed a series of trials in which they were given two numbers, A and B. In the ‘strategy’ condition, participants were given a formula to apply, such as ((B - A) + 1) + B = C, in order to work out the answer, C. In the ‘drill’ condition, they were simply given A, B and the response, C, to memorise. Surprisingly, the memorisers did pretty well on a later test but, wholly unsurprisingly, they could not extend this to transfer tasks involving new values for A and B. This is entirely consistent with the findings of cognitive load theory, where problem solving so occupies our attention that we cannot infer the underlying rule. The strategy condition is much more like following a worked example.
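To illustrate the difference between the two conditions, here is a rough sketch in Python (the formula matches the kind described above; the number pairs are my own illustrative values, not data from the study):

```python
# 'Strategy' condition: participants apply a given rule to the two numbers A and B.
def strategy(a, b):
    return ((b - a) + 1) + b  # the sort of arbitrary formula used in the study

# 'Drill' condition: participants simply memorise specific (A, B) -> C pairings.
drill_memory = {(3, 8): 14, (2, 5): 9, (4, 11): 19}  # illustrative pairs only

# On trained items, both approaches produce the answer.
print(strategy(3, 8))        # 14, derived from the rule
print(drill_memory[(3, 8)])  # 14, recalled from memory

# On transfer items with new values of A and B, the rule still works
# but a memorised lookup has nothing to retrieve.
print(strategy(6, 10))            # 15
print(drill_memory.get((6, 10)))  # None - no stored answer to fall back on
```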

However, none of this bears much relationship to memorisation strategies in the PISA report. Is anyone attempting to teach students all of the possible questions that they might be asked, together with all of the possible numerical answers? In fact, the use of formulas like the one in the “strategy” condition above is often criticised as the “rote” learning of formulas, and I imagine that this is what maths memorisers – if the term is well defined – would actually be trying to memorise.

This research does not seem to apply to the learning of basic maths facts such as multiplication tables. Teachers attempt to teach these to the point of memorisation but the underlying rule is not withheld. Tables are built up from counting patterns, arguments about groups of the same size and so on. Patterns are highlighted, like those in the 9 and 11 times tables, and further facts, such as 7 x 8 = 56, are committed to memory through practice. But these are very simple operations and nothing like the contrivance ((B - A) + 1) + B = C. In fact, the benefit of knowing simple multiplication results ‘by heart’ is that you can then attend to the other elements of a complex operation.

8. Timed tests impair working memory in students of all backgrounds and achievement levels, and they contribute to math anxiety, especially among girls.

This is partially a repeat of claim 6 but also adds the claim that timed tests impair working memory. Again, it would be good to see the evidence to support this.

The PISA definition of good teaching 

In my last three posts, I took a look at the PISA data sitting behind the recent “Ten questions…” report.

In the comments, Christian Bokhove provided a link to the working paper that the report drew upon. A number of things stood out. 

For instance, on page 74 there is a box explaining fully how “index of memorisation” was calculated. After reading the “Ten questions…” report, I was concerned about what the alternatives were to the memorisation responses and I was right to be. I think this is an extremely weak construct.

The most stunning thing that I have found, however, is PISA’s definition of good teaching:

“In its Analytical Framework (OECD, 2013), PISA defines the three dimensions of good teaching as: clear, well-structured classroom management; supportive, student-oriented classroom climate; and cognitive activation with challenging content (Klieme et al, 2009; Baumert et al, 2010; Lipowsky et al, 2009; Kunter et al 2008).” My emphasis.

It may now seem less surprising that PISA haven’t highlighted the following relationship in their own data, even though they have pointed out weaker correlations in the same data set:

PISA data contains a positive correlation

This is Part 3 of a sequence of posts. You can find Part 1 here and Part 2 here.

I was prompted to start this investigation of the recent PISA “Ten questions…” report by the graph below:


Specifically, I wondered what had been plotted on the axes for memorisation and teacher-directed instruction so that I could compare them with PISA 2012 maths scores. It would have been really helpful if PISA had labelled the axes in reports like this – it is one of the first things we teach children to do in school science.

I first plotted PISA’s “index of memorisation” against 2012 mean maths scores but the ranking was different to the y-axis above. I then tried plotting a different measure from the report data that was based upon the percentages of memorisation-type responses to the questions that PISA asked (although we must treat this construct with caution – see my first post). Yet again, this did not reproduce the ranking. I have now figured out that PISA used a ratio. They have plotted the percentage of memorisation-type responses to questions divided by the percentage of elaboration-type responses. This now explains why the axis goes from “more memorisation” to “more elaboration”.
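For anyone who wants to check this against the published data set, a minimal sketch of the calculation is below (written in Python with scipy assumed to be available; the country rows are placeholders to show the shape of the data, not the real figures):

```python
import numpy as np
from scipy.stats import linregress

# Placeholder rows - replace with every country from the PISA data set.
# Each entry: (% memorisation-type responses, % elaboration-type responses, 2012 mean maths score)
data = {
    "Country A": (35.0, 20.0, 500.0),
    "Country B": (25.0, 30.0, 520.0),
    "Country C": (30.0, 25.0, 490.0),
}

# The y-axis of the PISA graph appears to be this ratio, running from
# "more memorisation" (high values) to "more elaboration" (low values).
ratios = np.array([mem / elab for mem, elab, _ in data.values()])
scores = np.array([score for _, _, score in data.values()])

# With only placeholder rows the result is meaningless; the point is the calculation.
result = linregress(ratios, scores)
print(f"R-squared = {result.rvalue ** 2:.4f}")
```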

If you plot this ratio against PISA 2012 mean maths score there is, again, little correlation:


The x-axis on the PISA graph is similarly a ratio of teacher-directed responses to student-orientation responses. This is where we find our first positive correlation with PISA 2012 maths scores:


So there you have it. Correlations do not necessarily imply causal relationships but clearly a higher ratio of teacher-directed activity to student-orientation is associated with better PISA maths performance. Again, it would be necessary to look at what these constructs are.

Here it is with the PISA axis superimposed on it:


The PISA report does nothing to highlight this relationship.

Note: You can find all of the data for the three posts here.

This is Part 3 of a sequence of posts. You can find Part 1 here and Part 2 here.

PISA data gets curiouser and curiouser

This is Part 2 of a sequence of posts. You can find Part 1 here and Part 3 here.

Yesterday’s post on the PISA ‘Ten questions…’ report left me with a nagging worry. I had plotted the PISA mean maths scores against “index of memorisation” rather than the “average percentage across 4 questions” measure that I thought PISA used in the graph that went across the internet (I now no longer think they used this*). It didn’t seem likely, but perhaps there was more of a correlation if we used this latter measure instead.



The R-squared for “index of memorisation” was actually marginally higher at 0.0095.

The data set that contained the average percentage measure also contained a number of other measures that were mentioned in the report. So I thought I would run these too. The first was the “average percentage across five questions” measure of whether teaching is “teacher-directed”. This seems to show a real, negative correlation with some interesting anomalies:


I haven’t examined the questions that are used to establish this construct (or the others below) in the way that I have done for memorisation so I am not sure exactly what this means.

However, one issue stands out as very odd indeed. PISA suggest that the opposite of teacher-directed is a “student-orientation” but look what happens if you plot that:


This is by far the largest correlation that I have found. And yet it doesn’t seem to get much of a mention in the PISA report. It is extraordinary that the authors highlight memorisation and teacher-directedness when this elephant is occupying the parlour, stamping its feet and trumpeting the tune of La Marseillaise.

And look at Ireland. It’s a massive outlier in terms of student-orientation but just a little below average on teacher-direction. So what exactly is happening in Irish maths classes? Who is in charge? What do they do?

For completeness, below is a graph of the other measure available – the use of elaboration:


So it seems that a higher score on any of these constructs – except for memorisation – is a bad thing, with a higher score on student-orientation being by far the worst.

Note: The data is available here.

*I now think they have used the ratio of teacher-directed to student-orientation. Some axis labels would be really helpful. This ratio is plotted in Part 3.

This is Part 2 of a sequence of posts. You can find Part 1 here and Part 3 here.

PISA data on maths memorisation

This is Part 1 of a sequence of posts. You can find Part 2 here and Part 3 here.

I recently had the following graph Tweeted into my timeline:


It is from the new PISA report, “Ten questions for mathematics teachers… and how PISA can help answer them.” It is an interesting report containing links to the data set.

The first thing that strikes me about the graph is that it says very little. There is not much correlation between the two measures and neither of them is a measure of maths performance. So what are we meant to conclude?

PISA asked students a number of questions and then developed an “index of memorisation”. For some reason, that’s not quite what has been plotted on the y-axis of their own graph (more later) but I decided to plot 2012 Maths PISA scores against this index. This is what I found:


This is not a strong correlation, implying that either the degree of memorisation does not affect maths outcomes (maybe something like teacher quality swamps it) or that the construct that PISA have used – their “index of memorisation” – is flawed. Have a look at how they calculated it:

“To calculate how often students use memorisation strategies, they were asked which statement best describes their approach to mathematics using four questions with three mutually exclusive responses to each: one corresponding to a memorisation strategy, one to an elaboration strategy (such as using analogies and examples, or looking for alternative ways of finding solutions) and one to a control strategy (such as creating a study plan or monitoring progress towards understanding). The index of memorisation, with values ranging from 0 to 4, reflects the number of times a student chose the following memorisation-related statements about how they learn mathematics:

a) When I study for a mathematics test, I learn as much as I can by heart.

b) When I study mathematics, I make myself check to see if I remember the work I have already done.

c) When I study mathematics, I go over some problems so often that I feel as if I could solve them in my sleep.

d) In order to remember the method for solving a mathematics problem, I go through examples again and again.

Statement a) assesses how much students use rote learning, or learning without paying attention to meaning. The remaining three statements come close to the ideas of drill, practice and repetitive learning.”

The values that you get from this index range between about 0.9 and 1.6 out of 4. So that doesn’t strike me as a huge variation. I am not at all sure that questions a) to d) represent my personal concept of memorisation. And remember, students had to pick one of three options to each question. So they might not have been that enthusiastic about the one they chose. We don’t know exactly what the alternatives looked like but it might have been a case of rejecting those alternatives rather than positively selecting statements a) to d). [It would probably have been better to use questions with the stem, “To what extent..” and a Likert scale – the PISA approach reminds me of one of the flaws in the Myers-Briggs personality test].
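As I read the description quoted above, the index for an individual student is simply a count of memorisation-type picks across the four questions. A minimal sketch of that reading (in Python; the response labels and example answers are mine):

```python
# One student's forced-choice answers to questions a) to d),
# labelled (m) memorisation, (e) elaboration and (c) control as in the working paper.
responses = {"a": "m", "b": "c", "c": "e", "d": "m"}  # illustrative answers only

# Index of memorisation: the number of memorisation-type statements chosen (0 to 4).
index_of_memorisation = sum(1 for choice in responses.values() if choice == "m")
print(index_of_memorisation)  # 2

# A country-level figure is then presumably an average over students,
# which is why the reported values sit between roughly 0.9 and 1.6 out of 4.
```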

We also have to bear in mind that these questions would have been translated into many different languages. The phrase “I learn as much as I can by heart” is quite idiomatic. It has positive connotations in English but do those translate? Was a different idiom used?

Instead of plotting their own “index of memorisation”, PISA seem to have plotted the percentage average of how often a) to d) were selected by students*. This has the effect of changing the rankings quite a lot. Four of the top five on their plot are countries where students will have received these questions in English, which further supports the hypothesis that language might have something to do with it.

PISA further slice and dice their analysis by examining question a) only, i.e. the question about learning ‘by heart’. So I plotted this against PISA 2012 Mean Maths Score:


The R-squared value is about 0.00009. Which is low.

In their report, the PISA authors attempt to construct the narrative that countries in the Far East are not as teacher-directed or reliant on memorisation as we might have assumed. There’s a little vignette about Japan and its “zest for living” reforms. This is interesting because I predict that when the PISA 2015 results are released in December, we will see a further entrenchment of Far East nations at the top of the table. Until now, educationalists in places like Australia have tended to be selectively blind to this, preferring to point to the examples of Finland and Canada – the Far East has been dismissed due to cultural differences.

Yet, Canada and Finland have both embarked upon reforms since 2000 that, in my view, have already impacted negatively on their international performance. If this trend continues in PISA 2015, as seems likely, then it may be harder to ignore the performance of the Far East. So perhaps we’re being subjected to a little early spin.

Note: Clicking on the graphs makes them larger and easier to see. You can find the source data here.

Update: after publishing this blog post I noticed that this data set is being used in an article in Scientific American, alongside some interesting neuroscience claims.

*this is not the case – see the later posts

This is Part 1 of a sequence of posts. You can find Part 2 here and Part 3 here.

New teachers’ book list

Southern Hemisphere teachers who begin teaching in January will be starting to prepare. Therefore, I thought it might be worth putting together a list of books worth reading over the summer.

1. Why don’t students like school? – Daniel T Willingham [Link]

If you are going to read just one of the books in this list then this is the one. In a chatty and accessible way, Willingham covers the major findings of cognitive psychology that relate to teaching. What does it mean to ‘think like a scientist’? Why should we be wary of engaging kids with a flashy hook?

Willingham has also written a lot that is freely available online. I often refer to this piece on critical thinking which will give you a sense of his writing style.

2. The academic achievement challenge – Jeanne Chall [Link]

This is not full of practical applications in the same way as Willingham’s book so it might be a luxury for a time-poor trainee. However, I think many teacher education experiences lack any context of the debates that have raged across education over the past century and this perspective is essential if you want to critically analyse new ideas.

Chall draws on her own experience and gives a masterful summary of the evidence supporting teacher-led instruction, as well as touching on the difficulty of getting people to accept this evidence.

3. Seven Myths about Education – Daisy Christodoulou [Link]

Seven Myths is a seminal book that has been responsible for many a change of mind. It sets out to chart seven myths that Christodoulou thinks she has identified and that are prevalent within the education system. This is difficult to do without setting up straw men. Yet she is up to the task, meticulously documenting evidence that people believe these myths and why these myths are wrong. The resulting discussion has clear implications for classroom teaching.

The evidence-base is a little parochial, situated as it is within the English system of Ofsted inspections. However, I am pretty certain that most teachers from Australia and New Zealand will recognise these ideas immediately.

4. What every teacher needs to know about psychology – David Didau and Nick Rose [Link]

This is a great little book that gives a briefing on the psychological research that applies to education, both in terms of academic learning and motivation and behaviour. Each chapter finishes with a summary of the main ideas as a series of bullet-points, enabling a new teacher to get a handle on some quite complex and diverse ideas.

The one problem is that it doesn’t seem to have been very well proofed so you have to tolerate a few typos and missing words.

5. Behaviour Management Pocketbook – Peter Hook and Andy Vass [Link]

Classroom management is often overlooked in teacher education. There is a philosophy that if you plan good enough lessons then students will behave. This is not true. Neither is it true that good classroom management is something that good teachers are born with. You can work at it and, provided you are supported by your school with robust systems, you can improve at it. This is a great little book full of hints and tips on how to do that.

Tom Bennett has also written an extremely popular book on behaviour management but I haven’t read it so I can’t recommend it.

6. Teacher Proof – Tom Bennett [Link]

I have, however, read Bennett’s treatment of evidence in education. There are a number of good books on this topic (e.g. this and this) but this is quite fun and accessible. Again, I have put this on the list because I want trainee teachers to develop a healthy amount of scepticism and raise an eyebrow whenever anyone tells them that, “research shows…”

7. Ouroboros – me [Link]

Once you have read all the books above then you might consider having a glance at my own ebook. It is relatively short at around 30,000 words and attempts to summarise many of the issues addressed by the books in this list. My contention is that education has a tendency to endlessly re-badge and then repeat fads of the past. If we know about these past fads then we have some chance of guarding against their reemergence in the future.

What’s not on the list

If you read all of these books then you will have a busy enough summer. But there are certainly other books that I would highly recommend as you continue to develop. I picked Daisy’s myth book but there is another book on myths that is well worth reading: “Urban Myths about Learning and Education,” by Pedro de Bruyckere, Paul Kirschner and Casper Hulshof. It lists many more potential myths than Daisy but deals with each more briefly.

Another very interesting book is Martin Robinson’s attempt to develop a curriculum that unites the best of traditionalism and progressivism: “Trivium 21C”. It’s probably not the first thing you need to read but keep it in mind when you start to ask questions about what should be taught in schools.

E D Hirsch is also a must-read on the curriculum. I am currently reading his new book, “Why knowledge matters,” and others include “The knowledge deficit”. You can get a feel for his ideas in this article.

If you are a teacher of early reading then Diane McGuinness’s book is worth considering. It’s quite chunky but provides important insights into how the English language works.

I would like to recommend a book for those who are new to teaching maths but, unfortunately, the books that I know on this subject are just awful. Maths, particularly in elementary school, attracts a lot of people pushing flawed ideas loosely based upon the theory of constructivism. The best I can offer as an antidote is this piece by Hung Hsi Wu from American Educator and the critical Kirschner, Sweller, Clark paper. Oh, and keep an eye on this blog.

Bill Lucas is wrong and Victoria is making a mistake

There is a piece by Henrietta Cook in The Age newspaper that reports that schools in the Australian state of Victoria are about to start measuring students’ performance in three non-cognitive capabilities: critical and creative thinking, personal and social abilities, and intercultural and ethical skills. This is apparently at the urging of the British educationalist, Bill Lucas.

This is deeply misguided for two reasons.

1. These capabilities are highly domain dependent

There is no one thing that we can label ‘critical thinking’ that can be trained and tested. As professor of cognitive psychology Dan Willingham points out, children can think critically about subjects they know a lot about and professional scientists can fail to think critically in areas outside of their expertise. Therefore a general score for ‘critical thinking’ is utterly meaningless. Instead, these capabilities need to be assessed within subject disciplines, which is exactly what a traditional curriculum already does.

Take the example of problem solving. There is little that is similar between solving a physics problem and solving the problem of how to deliver two daughters to two different birthday parties whilst still completing the shopping. The only thing they have in common is a strategy known as ‘means-end analysis’. Yet this strategy is something that we are all born with and doesn’t need to be taught. As one of my PhD supervisors, John Sweller, explained in his submission to the recent review of the Australian Curriculum:

“It is a waste of students’ time placing these skills in a curriculum because we have evolved to acquire them without tuition. While they are too important for us not to have evolved to acquire them, insufficient domain-specific knowledge will prevent us from using them. We cannot plan a solution to a mathematics problem if we are unfamiliar with the relevant mathematics. Once we know enough mathematics, then we can plan problem solutions. Attempting to teach us how to plan or how to solve generic problems will not teach us mathematics. It will waste our time.”

2. You cannot measure these capabilities reliably

As I started to read the article in The Age, I thought about Duckworth’s critique of attempts to measure non-cognitive skills. To her great credit, Cook later mentions this:

“But some experts, including University of Pennsylvania professor Angela Duckworth – who popularised the term “grit” in education circles – have warned that there is no trustworthy way of measuring these social-emotional skills.

Teachers can misinterpret student behaviour and students who self-assess by filling out questionnaires may provide desirable but inaccurate responses.”

This is absolutely spot-on. Attempts to measure many of these capabilities often involve the use of questionnaires. Students aren’t silly and they rapidly work out the sorts of answers that are required, whether or not those answers reflect what they actually believe. So we are at risk of convincing ourselves that we have trained students to have excellent ‘ethical skills’ when we have done no such thing. I am reminded of the question I once had to answer on an immigration form that went something like, “Are you entering the U.S.A. with the intention of committing a terrorist act?” It’s like that.

Other approaches involve setting up highly artificial environments – let’s go high-rope walking, for instance – and then trying to infer students’ resilience levels from their responses. But, again, this is likely to be highly domain specific. Resilience in one environment may not transfer at all to another. The body-confident student who enjoys high-rope walking may give up really quickly in maths class.

A lack of evidence

Finally, I would like to make my usual appeal to evidence. Where is it? Where has this approach been used successfully? What trials underpin the thinking?

If we have simply imported a guru and decided to do what he reckons then we have not only demonstrated a monumental lack of critical thinking ourselves, but we are also at risk of wasting large amounts of public money.