
Welcome

This is the homepage of Greg Ashman, a teacher, blogger and PhD candidate living and working in Australia. Everything that I write reflects my own personal opinion and does not necessarily represent the views of my employer or any other organisation.

Read about my ebook, “Ouroboros” here.

Watch my researchED talks here and here.

Here is a piece I wrote for The Age, a Melbourne newspaper:

Fads aside, the traditional VCE subjects remain the most valuable

Read a couple of articles I have written for The Spectator here:

A teacher tweets

School makes you smarter

Read my articles for The Conversation here:

Ignore the fads

Why students make silly mistakes


Slow motion problems: a teaching method I have learnt this year


In my maths department, all teachers use the same detailed lesson plan and resources. Each plan contains examples and problems, as well as worked solutions to these examples and problems. The solutions are necessary because maths teachers often use surprisingly diverse approaches.

We meet as a cohort team once we have some assessment data. We then trawl through it, looking for anomalies. If groups are evenly matched, we might look for questions where one class seemed to do much better than the others. In cohorts that are ability-grouped, we might look for questions where a less advanced group outperformed a more advanced group.
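For what it’s worth, the trawl itself is simple enough to sketch in code. Here is a minimal, hypothetical version, assuming a results file with one row per student, one score column per question (recorded as proportions of full marks) and a column identifying the class; all names are made up:

```python
import pandas as pd

# Hypothetical results file: one row per student, one column per question
# (scores as proportions of full marks), plus a 'class' column.
results = pd.read_csv("topic_test_results.csv")

# Mean score per question for each class.
by_class = results.groupby("class").mean(numeric_only=True)

# Flag questions where the best- and worst-performing classes differ by
# more than some threshold - these are the anomalies worth discussing.
gap = by_class.max() - by_class.min()
anomalies = gap[gap > 0.2].sort_values(ascending=False)
print(anomalies)
```

The threshold is arbitrary; the point is only to surface the handful of questions worth talking about in the meeting.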

We then ask the teacher whose group performed well to try to account for this difference. Even with a detailed plan, there will always be slight differences in how a teacher interprets something or improvises in response to a student’s question, for instance. Given that we have locked down everything else, once we surface these differences, we can make reasonable, if not scientific, inferences about their effects. Sometimes this process is hard because the teacher will insist they ‘just taught the lesson plan’. But there’s pretty much always something there, hiding. We often have the lesson plan documents open in the meeting so that we can make adjustments for next time, based on what we have learnt.

This approach to improvement is not a genius idea. It is a simple idea. But it is an idea that you would struggle to apply in many maths departments due to notions of teacher autonomy. I don’t want to be autonomous. If my colleague is teaching something better than I am, then I want to start doing it her way.

And that’s what has happened this year.

A colleague of mine, a fine teacher, has moved into one of the cohorts that I teach, and I have learnt lots.

I have used mini-whiteboards for many years. I think they work well because you can collect feedback from the entire class. My routine involves posing the question, giving the students enough time to complete the question – an essential point – and then going ‘3, 2, 1 and show’. At this point, I expect them all to hold up their boards, even if they are not finished or not sure they have the right answer. I train them on this routine early in the year to ensure that it is slick.

However, my colleague doesn’t do this with multipart questions. For instance, if the problem involves solving a trigonometric equation, she might ask the students to rearrange the equation – the first step – and hold up their mini-whiteboards as soon as they have done so. After this step, and once any errors have been corrected, she will ask them to do the same for the reference angle – the second step – and so on. Only later will students complete all steps independently.

When I heard about this, I wondered why I had not thought of it. It makes a lot of sense. There is little point continuing with a multipart problem if you’ve messed up the very first step. My colleague’s approach is entirely consistent with Rosenshine’s Principles of Instruction, a key document we use to aid our planning.

So I have started doing this too. I cannot prove that it has made me more effective, but I reckon it probably has.

As an aside, I used to accept the idea that mini-whiteboards are a maths and science thing. They work well in these subjects, but they don’t lend themselves to writing whole paragraphs, let alone essays, and so they are of limited use in English or history.

I’m not so sure any more because I have started to think we neglect sentence-level work, and sentences do lend themselves to mini-whiteboards. A paragraph is essentially a multipart problem made up of sentences. Why move on to the second sentence if the first one sucks?

Funny fonts may not lead to better learning


Back in 2011, there was a flurry of excitement when a group of researchers found that students recalled more information if it was presented in a hard-to-read font, such as something fancy in italics, than if it was presented in a traditional, easy-to-read font like Times New Roman.

The finding fed interest in the concept, developed by Robert Bjork and colleagues, of ‘desirable difficulties’. In short, the idea is that by having to engage more cognitive processes to read the font, we learn the material better.

This view is in apparent conflict with cognitive load theory, whose findings generally suggest that we should reduce any unnecessary cognitive load.

Since 2011, there have been numerous attempts to replicate this effect. Now, a meta-analysis has been published that attempts to draw this evidence together. The findings are mixed: ‘perceptual disfluency’, as researchers call it, has no overall effect on basic recall or transfer. However, it does pretty reliably increase learning time and reduce learners’ perceptions of their own learning.

It is tempting to leave the issue there as an effect that has failed replication, but I think it might be worth introducing another idea from cognitive load theory, that of element interactivity.

Element interactivity essentially describes the complexity of a learning task. Cognitive load theory effects have been demonstrated reliably when element interactivity is relatively high, such as when students are learning to solve algebra problems. On the other hand, desirable difficulties effects have mainly been demonstrated on tasks such as learning the names of state capital cities, something that would be relatively low in element interactivity/complexity. It would be interesting to see how effects for perceptual disfluency map against the complexity of the learning task.

Can you teach ‘wisdom’ in a general way?


A new study in Learning and Instruction reports on an attempt to teach wisdom in a general way.

The authors start by noting a couple of previous attempts to teach wisdom in this way. However, these attempts led to no apparent change in student wisdom levels, so you might think that was the end of the matter. But, no, these studies did not have a control group and this matters.

In the new study, there are three groups. The first group followed the ‘Wisdom 1’ course, the second group followed the ‘Wisdom 2’ course and the third group was a control. It does not appear that the college students involved were randomised into these conditions. Instead, it appears that they were selected into these courses in the same way that they would be for any of their other college courses. This lack of randomisation is a major issue and makes it hard to interpret the p-values that are later quoted, because those p-values assume random assignment to conditions.
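To see why this matters, here is a minimal simulation, entirely hypothetical and not drawn from the study, in which a course with zero effect on wisdom still produces a ‘significant’ p-value, simply because students who self-select into it were wiser to begin with:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# 200 hypothetical students with a baseline 'wisdom' score.
baseline = rng.normal(50, 10, size=200)

# Non-random selection: students with higher baseline wisdom are more
# likely to enrol in the wisdom course (self-selection, not randomisation).
enrol_prob = 1 / (1 + np.exp(-(baseline - 50) / 5))
in_course = rng.random(200) < enrol_prob

# The course has NO effect: post-course scores are baseline plus noise.
post = baseline + rng.normal(0, 5, size=200)

t, p = ttest_ind(post[in_course], post[~in_course])
print(f"p = {p:.4f}")  # often 'significant' despite a zero treatment effect
```

Randomisation breaks the link between who takes a course and what their scores would have been anyway; without it, a small p-value tells us little.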

The two ‘wisdom’ courses seem to have involved a mixture of reading literature, writing in journals and so on.

The intriguing part is how the authors supposedly measured the construct of wisdom. There is, apparently, a pre-existing ‘three-dimensional wisdom scale (3D-WS)’, the validity and reliability of which have been confirmed by previous research. I was not aware of this, so I was keen to find out what it involves.

Students rate how strongly they agree with a series of statements, or how true each statement is of them, on a Likert scale (1 = strongly agree, 5 = strongly disagree, and so on). The statements include the intriguing ‘ignorance is bliss’, for which I am not even sure what the ‘wise’ answer would be. The researchers are aware that students may give socially desirable rather than honest responses, and they attempt to counter this by wording some of the statements in the negative. I’m not sure why this would remove the problem.
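For illustration, negative wording is usually handled by reverse-keying at the scoring stage, something like the sketch below. The item text and scoring key here are hypothetical, not the actual 3D-WS instrument:

```python
# Minimal sketch of Likert scoring with reverse-keyed items (illustrative
# item text and key only; not the actual 3D-WS instrument).
NEGATIVELY_WORDED = {"Ignorance is bliss"}

def score(item: str, response: int, scale_max: int = 5) -> int:
    """Flip reverse-keyed items so higher scores always point the same way."""
    if item in NEGATIVELY_WORDED:
        return scale_max + 1 - response  # 1 -> 5, 2 -> 4, and so on
    return response

print(score("Ignorance is bliss", 2))  # -> 4
```

Reverse-keying makes careless straight-line responding detectable, but it is not obvious that it stops a student from working out the socially desirable answer and giving it anyway.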

The results are interesting, if difficult to interpret given the lack of randomisation. One of the wisdom courses led to no change in wisdom, the other led to an improvement and the control led to a decline! Commenting on this decline, the researchers suggest that a wisdom course that leads to no change in wisdom could actually be beneficial because it may be arresting a decline.

I think that’s a stretch.

Criticising learning styles is not sexist


The current scientific consensus on the idea of learning styles is clear: People often express a preference for learning in a particular way, and they may even have distinct ways of thinking, but the notion of differentiating teaching to match students’ learning styles is one that lacks supporting evidence. We can have a semantic argument about whether this means that learning styles ‘don’t exist’, but I don’t think that advances us very far, because it is this last meaning – learning styles as a guide to differentiation – that is commonly understood in the teaching profession.

The fact that differentiating according to learning styles is a practice that lacks evidence, coupled with the fact that differentiation is time-consuming for teachers, means that it is important to get the message out there. If you are differentiating your lessons to suit your students’ learning styles, then you are wasting your time.

On EduTwitter, it is easy to assume that everyone has received this message by now. But we all know that EduTwitter is not the real world and there is plenty of literature and training out there that still perpetuates the learning styles myth.

Now, one prominent proponent of learning styles, Carol Black, has written a blog post in defence of the idea. Black seems to think that the idea of matching teaching to learning styles is a ‘straw man’, i.e. nobody actually thinks that we should do this. However, it is not clear to me what she therefore thinks is the value of learning styles theories for teachers. But that does not seem to be the main point of the piece.

Instead, throughout her blog post, Black reproduces a number of Tweets debunking learning styles, all of them written by men. She comments:

“A disturbing feature of this discourse in education is the frequency with which it takes the form of male researchers and pundits telling female educators that their views on learning are cognitively childish and irrational and should therefore be disregarded. Cognitive psychologist Daniel Willingham, a prominent debunker, has shared some rather patronizing speculations as to why the vast majority of (mostly female) teachers persist in thinking their students have different learning styles (“I think learning styles theory is widely accepted because the idea is so appealing. It would be so nice if it were true.”) His paternal tone is especially disturbing since he makes his case by failing to mention the existence of legitimate competing views from respected scientists and education researchers.”

Is this really about sexism? Are learning styles debunkers really just a bunch of men having a go at what they perceive to be the silly beliefs of women?

No. It is about science, and men and women can access scientific truths equally well.

With very little effort, I managed to find the following Tweets from some pretty smart women. Hopefully, this will restore a little balance to the universe:

[Embedded tweets]

Tricks and dummies


I was never actually taught how to play football at school. We just played it. A lot. Our Physical Education teachers would throw us a ball and we would get on with it. Sometimes, they didn’t even bother to referee and so games would descend into arguments about whether someone had handled the ball and whether it was deliberate. I was never great at learning football by discovery but I did figure out one important thing: Keep your eye on the ball. My ability to keep looking at the ball when an attacker was trying to baffle me with tricks and dummies enabled me to become a competent defender. That was my niche.

This is good advice for Edutwitter: Keep your eye on the ball.

The central point of my recent Quillette article was that it is a mistake to think of literacy, numeracy, critical thinking and other constructs as learning progressions that are largely independent of context, in the way that the Gonski 2.0 review does. I explained where I thought this argument comes from and I advanced my alternative view on the basis of my understanding of cognitive science. I suggested that we need to pay much more attention to the knowledge that students learn.

There is plenty to refute here. A critic could attack my understanding of the cognitive science or provide evidence that critical thinking can be taught as a general capability. He or she could dispute my reading of Rousseau or his influence. This would be good for debate. I am under no illusions: It is highly likely that some of my arguments are incomplete, flawed or just plain wrong. So it would help if they were properly tested.

However, despite drawing criticism, I am not aware of anyone refuting the main idea in this way. Instead, I have seen people making comments to the effect that they do not like Quillette or they see Quillette or me as part of some movement they object to. Probably the strongest criticisms of my piece were plain contradictions – i.e. that I am wrong or that the piece was poorly researched – and the objection that I should have included more on the history of Australian education. Neither of these really leads anywhere unless a critic can explain why I am wrong or why my omissions affect my argument.

We are in quite dangerous times when it comes to reasoned debate. I started blogging six years ago. At that time, I don’t recall anyone having their argument dismissed on the basis of their skin colour or gender, and yet, in some quarters, that seems quite legitimate now. Ad hominem is still a logical fallacy but some commentators have embraced it as a badge of honour.

It is not emotional labour to explain why people are wrong. It is not good enough to claim that they have not done their homework. These are just tricks and dummies. If you have done the work and you know something that sheds light on why another person is wrong, then the human thing to do is to share that understanding and not to write people off as others who are somehow beyond redemption.

Look for these tactics in the education debate. Keep your eye on the ball.

Simon Jenkins done a thought burp


Simon Jenkins is a Guardian columnist and a former teacher. You may therefore expect him to have insightful comments to make about England’s education system.

Not so.

Instead, he has written a polemic seemingly based upon what a primary school child and a few teacher friends told him, and laced it with half-remembered facts.

Jenkins has ‘never seen the point of exams’. As a Guardian journalist, he should probably be more aware of the social justice argument that supports the use of exams. Yes, they are an imperfect measure, but the alternatives are worse. Without exams, entrance to university and careers would be based even more on privilege and connections, and on the ability to hire the right person to help you put your portfolio together.

Jenkins has a particular dislike of maths, taking a purely functional view of its value and suggesting that the only reason that the education system focuses on maths attainment is because it is easy to measure.

Really?

Writing is probably the hardest academic outcome to measure and yet we also focus a lot on that, so this claim simply makes no sense.

When Jenkins writes of, ‘All the maths a normal grown-up needs,’ I want to ask him: What is all the history a normal grown-up needs? What is all the literature a normal grown-up needs? What is all the music a normal grown-up needs?

Maths suffers from the misconception that it is just a tool for calculating change at the supermarket, a misconception perhaps promoted by well-meaning primary school teachers trying to motivate their students. But maths is of value in its own right. Its mundane uses are hardly the point. Jenkins may as well ask why we get children to write stories when adults don’t need to do that.

Unless, perhaps, you become a Guardian journalist and need to tell the story of the poor primary school child who ‘can handle counting and proportion, but he cannot access the world of complex numbers and algebra.’

Really?

The new English curriculum sounds amazing. I admire the primary school child in question because proportion is probably one of the most difficult concepts to learn. And in Australia, some primary school children might just touch on a little algebra, but complex numbers is a topic reserved only for Year 11 and 12 students studying the highest, most specialised level of mathematics. Are English primary schools teaching calculus, too? Where are they getting the teachers from?

Finally, Jenkins ends with a slur on South Korea:

“Britain is on its way to the purgatory of South Korea, where secondary-school children are made to cram for 14 hours a day to get into university, with suicidal consequences.”

Given the many factors that affect a country’s suicide rate, responsible journalism should avoid assigning it to a single cause such as the education system. There is something deeply unpleasant about reaching for such an argument.

When I last looked into it, the most recent data I could find was from the OECD. The youth suicide rate in South Korea is high, but it is higher in Finland and higher still in New Zealand. Is this also caused by their education systems? These countries are not known for their focus on exams.

Jenkins hints that schools should focus more on creativity, life skills and self-esteem, but they don’t because these are hard to measure. Well, quite. They are hard to measure for the same reason that they are hard to teach: they are vague, nebulous concepts.

I do not believe that the goal of social progress will be served by educating an illiterate and innumerate generation that is high in self-esteem and creativity. If anything, that sounds like a recipe for entrenching the privilege of those who can afford to opt for a proper academic education, either by going private or employing tutors.

Jenkins should have a little think about that.

What did you learn during your teacher education?


A few days ago, I tweeted out a horribly typo-ridden poll that gained quite a number of votes. It should have read, “During your time training as a teacher, were any ideas presented to you as facts that you now believe to be untrue?”

It is hardly scientific. A sample of people who follow me, or who follow people likely to retweet my poll, is not a representative sample of the teaching workforce. We are constantly being told that the preoccupations of educators on Twitter are not shared by the majority of teachers, and I am inclined to agree. Nevertheless, around 500 people who are at least pretending to be teachers agreed that they had essentially been taught falsehoods.

I wonder whether other professions feel this way about their professional training.

The quality of teacher education is a difficult issue to grasp because it is hard to research the totality of teacher education courses. If I point out, for instance, that a particular university was teaching learning styles in its teacher education courses, at least until very recently, a critic may reasonably suggest that this is a rare exception. We cannot know whether this is typical without wider research and yet it would be hard to collect, digest and synthesise the curriculum materials of lots of different education courses.

A more systematic way to evaluate the quality of teacher education courses is to survey the knowledge of trainee teachers. We should be able to infer something about the quality of education courses from what new teachers know.

I was reminded of a recent article by Jennifer Stephenson of Macquarie University, who sought to review papers published on the knowledge of Australian preservice teachers. Stephenson obtained data from 52 peer-reviewed articles, and it makes for depressing reading. The 52 studies identified a number of holes in content knowledge and knowledge of teaching. However, we should avoid drawing firm conclusions because the studies were often limited in scope and in some cases it was not clear that the sample of preservice teachers was representative.

This is an area that clearly needs more rigorous research.