This is the homepage of Greg Ashman, a teacher, blogger and PhD candidate living and working in Australia. Everything that I write reflects my own personal opinion and does not necessarily represent the views of my employer or any other organisation.

I have written two books:

The Truth about Teaching: An evidence informed guide for new teachers

Ouroboros – an ebook

Watch my researchED talks here and here

I have written for The Australian about inquiry learning (paywalled):

Inquiry-learning fashion has us running in a wheel

This is my take on the “Gonski 2.0” review of Australian education for Quillette:

The Tragedy of Australian Education

Here is a piece I wrote for The Age, a Melbourne newspaper:

Fads aside, the traditional VCE subjects remain the most valuable

Read a couple of articles I have written for The Spectator here:

A teacher tweets

School makes you smarter

Read my articles for the Conversation here:

Ignore the fads

Why students make silly mistakes

My most popular blog post is about Cognitive Load Theory:

Four ways cognitive load theory has changed my teaching

To commission an article, click here


ACT to redesign education system around tired cliches they probably heard at some conference

As Dan Willingham, Dylan Wiliam and anyone else who has examined the cognitive psychology behind education will testify, knowledge is absolutely critical to the success of an education system. Traditional subject disciplines, even if they can always be improved and refined, structure this knowledge in coherent units that probably mirror the way that we organise knowledge in our minds.

Without a broad knowledge of the world, students will struggle to understand texts, let alone be able to think critically about them. Critical thinking, in turn, requires knowledge to apply to the object of critical thought. If you want to see an issue from different perspectives then you will need to know what perspectives are likely to be relevant and what the issue is likely to look like when viewed from those perspectives.

Knowledge is central.

It has become routine on EduTwitter for a particular group of commentators to dismiss this as a truism. They deny the existence of any debate. They claim that knowledge has always been a priority in education and that nobody is in the business of claiming otherwise.

Well, they are in the Australian Capital Territory (ACT), the administrative region of Australia that includes the federal capital, Canberra.

Imagine taking an amalgam of the most cliched and trope-riddled education blog posts and conference talks – the ones that breathlessly soothsay about jobs that don’t exist yet – and marrying them to early 20th century educational progressivism in order to create a state education policy. According to the ABC, this appears to be happening in the ACT. Here is a quote from Kris Willis, acting school improvement director, about the new education overhaul.

“Facts and figures once held as paramount in classrooms, and knowing facts and figures, is no longer relevant in today’s society… It’s about critical thinking; complex problem solving.”

Apparently, “Future school timetables may not have maths or English classes, but rather ‘critical thinking’ and ‘creativity’.” Of course, there is also going to be a focus on personalised learning.

The struggle is real, people.

See you in London

I am currently on long service leave, one of the perks of working in Australia*. This means that I am able to attend the researchED National Conference at Harris Secondary Academy, St Johns Wood, on the 8th of September.

I will be presenting a session on Differentiation: An article of faith. Here’s the abstract:

Differentiation is an axiomatic concept for many teachers and teacher educators. Yet the term is poorly defined. Moreover, many practices that sit under the umbrella of differentiation either lack research evidence or, where research evidence is available, it points the other way. Why has the concept of differentiation assumed the role that it has and what does this tell us about the teaching profession and how to move it forward?

I will also be involved in a panel discussion about meta-analyses.

If you want to do some relevant pre-reading then these blog posts are a good starting point:

Where is the evidence to support differentiation?

The article that England’s Chartered College will not print.

*Contact me via this page if you are interested in an Australian teaching career

(Un)Desirable Difficulties


A new review paper has been published by Chen, Castro-Alonso, Paas and Sweller in the journal Frontiers in Psychology: Educational Psychology. The beauty of this journal is that it is open access and so you can read the whole thing without a subscription.

And I recommend reading it. It is perhaps a little technical, but it does contain an excellent summary of some key principles of cognitive load theory that I don’t think I’ve seen before in an open access article. However, this is an aside.

The aim of the paper is to try to understand the sometimes contradictory findings of research into ‘desirable difficulties’. These are strategies that temporarily make learning a little harder but that supposedly lead to better retention and transfer of learning. Desirable difficulties include the generation effect, the testing effect (or retrieval practice) and varying the conditions of practice. You are probably familiar with some of these strategies because they tend to be the ones promoted by evidence-based education blogs such as The Learning Scientists.

Unfortunately, we don’t always find a positive effect for introducing desirable difficulties. Chen et al. draw on evidence to suggest that this is because we need to take element interactivity into account.

Element interactivity is a controversial idea that has been introduced into the framework of cognitive load theory. In essence, the element interactivity of a task is the number of elements it requires a student to process in parallel in working memory. This will be affected by the complexity of the task itself, as well as the level of expertise of the student. A student who can draw on schema held in long-term memory to process elements will need to process fewer in working memory.

Chen et al. use the example of learning the word for ‘cat’ in a foreign language. Although this may be a difficult thing to do and retain, the element interactivity is low. There is just one item – the word for ‘cat’ – to process. Learning a list of such words would involve processing one item at a time in this way and so it would be a low element interactivity task.

In contrast, a novice trying to solve 2x + 5 = 3 for the first time must process the numbers and operators in parallel. Doing something to the 5, for instance, has implications for the rest of the equation. In this case, we would say element interactivity is quite high. It would not be high for a mathematical expert, however, because she can quickly and easily apply solution methods held in long-term memory.
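To make the contrast concrete, here is a sketch (my own illustration, not from the paper) of the interacting steps in solving 2x + 5 = 3. Each step depends on the others, which is what drives element interactivity up for a novice:

```python
# Illustrative only: the steps a solver of 2x + 5 = 3 must coordinate.
# Each line is one "element"; a novice must hold how they interact in
# working memory, while an expert retrieves the whole routine as a schema.
lhs_constant = 5
rhs = 3
coefficient = 2
rhs_after_subtracting = rhs - lhs_constant   # 3 - 5 = -2 (changing one side changes the other)
x = rhs_after_subtracting / coefficient      # -2 / 2 = -1
print(x)  # -1.0
```

The point is not the code itself but that every intermediate value above interacts with the others; for the expert, the whole sequence collapses into one retrieved procedure.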

Chen et al. draw on experimental evidence to suggest that we obtain desirable difficulty effects when element interactivity is low and we see the reverse effect when it is high. For instance, generation effect studies might ask students to either study pairs of opposite words e.g. “inside/outside” or generate the second word for themselves e.g. “inside/o_____”. This is a low element interactivity task and so there is working memory capacity available to do some extra work. In this case, this extra work probably helps by making meaningful (semantic) connections between the pairs of words and that’s why we see a gain for the students who generate the second word for themselves.

However, when element interactivity is high, such as when novices learn to solve algebra problems or to use a bus timetable, it is better to study completed worked examples than to try to generate solutions.

The implications for teachers are that we should be careful about how we introduce desirable difficulties. It is probably a good strategy to make use of the generation effect for learning labels and definitions from the very start of learning. However, if we want students to learn how to solve particular problems or implement procedures involving a number of interdependent steps, we should probably ensure that we have embedded this learning with full, explicit instruction, before we seek to make the process a little harder.

What about the boys?


There is much to like in Barbara Oakley’s recent opinion piece for the New York Times. Oakley is an engineering professor at Oakland University in the U.S. and her views on teaching are refreshingly well-informed. We will get to those later but we need to deal with something odd first. Here’s a quote from her article:

“A large body of research has revealed that boys and girls have, on average, similar abilities in math. But girls have a consistent advantage in reading and writing and are often relatively better at these than they are at math, even though their math skills are as good as the boys’.”

Having read that, you might think that the article is about how we might improve the reading and writing skills of boys, but it’s not. Instead, Oakley is concerned that girls’ relative ability in literacy will convince them that they are no good at maths, even though they are, and so they will avoid practising maths, not fulfil their mathematics potential and shut off future science, technology, engineering and maths (STEM) career options.

That is perhaps a fair point, but it does seem a little like there’s an elephant on the couch and nobody is offering it any biscuits. There are many articles and programmes addressing the idea of encouraging girls into STEM careers. Unlike Oakley’s piece, a lot of these efforts are misguided, but we cannot claim a lack of attention. And this attention may be paying off. In Australia and New Zealand, for instance, we now have more female medical students than male medical students.

In contrast, where are all those opinion pieces and programmes aimed at addressing the literacy gap between boys and girls?

Taking a purely instrumental view of education as preparation for future work, relatively few careers require scientific and mathematical knowledge when compared to the much greater number that require good literacy skills. So on the surface at least, it appears to be the bigger issue.

Even if, for whatever reason, you are not particularly interested in the career prospects of these boys, there is a wider societal impact. Economically, if we do not maximise the educational potential of our population then we can expect to be poorer as a society. And higher levels of education are associated with social benefits such as lower levels of violence. You may even be inclined to think that higher levels of education are good for democracy.

A clue as to why we might not be focusing on this particular gender gap may be found in some of the responses I received when I tweeted about Oakley’s article. It seems that people are quite ready to ascribe it to intrinsic qualities of boys. Perhaps boys are neurologically a bit slower in developing or perhaps they are naughtier and so learn less. There may be something in this. Boys, for instance, have worse fine motor skills than girls and it is possible to see how a negative feedback loop might develop: Boys are able to write less in the same amount of time, their writing is messier, they compare this to others and get negative feedback and so they think they are therefore bad at writing.

But if you think the gap is due to the innate qualities of boys and that they are simply naturally worse at literacy then you won’t feel the need to do anything about it. And that’s a problem.

Curiously, I cannot imagine many educators suggesting that a lack of female participation in STEM subjects might be caused by the intrinsic qualities of girls. It’s not actually a helpful attitude for teachers to have. Accurately making predictions about any individual’s academic trajectory is close to impossible and so the best option is to maintain high expectations for all. And personally, I just don’t buy the idea that boys and girls are so different.


When I studied for my postgraduate certificate in education, I completed a literature review task on the fact that boys tended to underachieve in the exams that 16-year-olds take in England. This was 1997 and, far from being a quirky topic to pick, it was part of the zeitgeist at the time. A lot of people were researching it and discussing it.

I remember writing that one possible strategy to motivate boys about academic work was to connect it to positive traits that are associated with masculinity and becoming a man. For instance, personal autonomy is something that young men were thought to aspire towards as a manly trait and so educators could emphasise the fact that academic achievement gives boys more choices and more freedom to act.

This argument was based on what I had read in my review. It was not my personal view and I am not making this case. I can already hear people noting that women are quite capable of being autonomous – which they obviously are – and I am not sure that there are any positive traits that we associate with masculinity these days and that would resonate with boys. Maybe this is a good thing because, by focusing on motivation, I suspect we are looking at this problem the wrong way around.

Feedback loops

Oakley is correct, in my view, to focus on feedback loops and deliberate practice. Simply trying to motivate boys about literacy will not work just as it does not work as a strategy to get girls into STEM.

There is nothing natural about any academic subject. These subjects are far too young to have been acted on by evolution and so learning them is effortful and slow. We therefore need to use some coercion to get students to engage in them. This can be positive and take the form of encouragement, or maybe turning aspects of practice into a game, but we can never remove the element of hard work.

Sometimes, we try and pull the wool over children’s eyes. We think we can motivate them by a fun demonstration or by giving them choices. But the long term effect of this works against our aim. If a child chooses to read wrestling magazines then that may be initially motivating. However, he will develop less vocabulary than the child who reads Tolkien and the gap between these two children will grow. Eventually, the magazine reader will work out that he is not as developed as the Tolkien reader, decide he is no good at reading and focus on other things that he thinks that he is good at.

When we teach in this way, schools operate as a talent selecting system. Eric Kalenze has a great analogy for this: The education system is a funnel. Children are meant to pass through this funnel to a destiny as a well-educated adult who is able to fully participate economically and democratically in society. Unfortunately, the funnel is upside down. Instead of reducing the achievement gaps between people in core skills, it exacerbates them. A proportion of the students who were already on the right course pass through the funnel. Many others simply bounce off the sides.

If we want children to develop strength in literacy then we have to make them. This does not need to be unpleasant but we do need to insist upon it. We need to help children learn to read and ensure that they read appropriately challenging texts. We need to make them practise their writing. For many, as they see themselves improve, they will gain intrinsic motivation and this will then feed into their future achievement. “I’m getting good at this,” they will think. There is no avoiding the hard work. As Oakley comments:

“All American students could benefit from more drilling: In the international PISA test, the United States ranks near the bottom among the 35 industrialized nations in math. But girls especially could benefit from some extra required practice, which would not only break the cycle of dislike-avoidance-further dislike, but build confidence and that sense of, “Yes, I can do this!” Practice with math can help close the gap between girls’ reading and math skills, making math seem like an equally good long-term study option.”

I agree with everything except the ‘especially’. This is exactly what we need to do to help boys with their literacy. What about them?

Stop worshipping conceptual understanding


There is a story often told about maths teaching. It is a story of how, in olden times, children were taught rote mathematical procedures. They were never taught conceptual understanding of the principles involved. These days, we have computers to perform mere procedures for us and so, instead, we should focus on conceptual understanding.

This is flawed logic.

Take the principle of equivalence. This is an idea that is often investigated in educational psychology experiments as an example of conceptual understanding. When children first meet mathematical equations, they are of the form 2 + 3 = ?. This means that they reasonably, but incorrectly, infer that an equals sign (=) is a command to write a correct answer. In fact, an equals sign means ‘the same as’ and a failure to grasp this may cause problems later when students have to solve problems of the form 2 + ? = 5.

The principle of equivalence is also important in solving simultaneous equations by elimination. The basic logic is as follows:

If A = B and C = D then A – C = B – D
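As a concrete illustration (my own example, not from the post), the rule can be applied to a pair of simultaneous equations and checked in a few lines:

```python
# Illustrative example: solving the pair
#   x + y = 7   (A = B)
#   x - y = 3   (C = D)
# by elimination. Because A = B and C = D, A - C = B - D must also hold:
#   (x + y) - (x - y) = 7 - 3  =>  2y = 4  =>  y = 2, then x = 5.

def solve_by_elimination(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by eliminating x.

    Assumes a2 != 0 and that the equations are independent.
    """
    # Scale the second equation so its x-coefficient matches the first,
    # then subtract: equals subtracted from equals remain equal.
    factor = a1 / a2
    b = b1 - factor * b2
    c = c1 - factor * c2
    y = c / b
    x = (c1 - b1 * y) / a1
    return x, y

x, y = solve_by_elimination(1, 1, 7, 1, -1, 3)
print(x, y)  # 5.0 2.0
```

Every subtraction in the routine is licensed by exactly the equivalence principle stated above; without grasping that principle, the move looks like magic.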

I remember being quite mystified by this seemingly magical move at school. This may be because nobody ever pointed out the logic and how it utilises the principle of equivalence, or it may be that they did do this but I have forgotten.

The fact is that a lot of these revered items of conceptual understanding are straightforward items of declarative knowledge. It is the application of these principles that is important and difficult to learn and that’s why maths lessons traditionally devote the majority of time to this.

In contrast, I could train a parrot to give me the correct dictionary definition of what the equals sign means. Does the parrot understand? Maths teachers who focus on conceptual understanding don’t tend to ask students to recite dictionary definitions, but they may as well do so. Making posters or having discussions about a relatively simple item of declarative knowledge is not much different.

The power of mathematics is the ability to move from simple, straightforward axioms, step by step towards something that is not at all obvious but incredibly useful to know.

Yes, it is important to make these axioms clear, and I am quite prepared to accept that maths teachers do not always do this or do not return to them often enough to reinforce them. But to focus on these axioms at the expense of procedures is to completely misunderstand what mathematics is and why it is so powerful.

An interview with Dylan Wiliam

Dylan Wiliam is a world authority on formative assessment and Emeritus Professor of Educational Assessment at the UCL Institute of Education in London. His popular book on formative assessment, Embedded Formative Assessment, was recently released as a revised edition and his latest book, Creating the Schools our Children Need, critically examines the ways we could seek to improve education at a system level. Following the recent trial of a professional development approach to formative assessment conducted by the Education Endowment Foundation in the UK, I thought it would be good to catch up with Wiliam and seek his thoughts.

1. The Education Endowment Foundation in the UK (EEF) recently published the findings of its trial of the Embedding Formative Assessment professional development programme. How would you summarise these findings?

I think the first thing to say about the EEF trial of the Embedding Formative Assessment (EFA) professional development programme is that it was what is called in medical research an “intention to treat” study. In other words, the study did not just look at the schools who implemented the programme faithfully. Rather it recruited 140 schools, divided them into two groups, and gave half the schools DVDs with the training materials (experimental group), and the other half just got the cash equivalent (control group). The other difference is that representatives of the school got one day’s inservice training, and minimal support over the two years of the project. We know that many of the schools given the materials did not implement them as intended, and it appears that teachers found the ideas more applicable to the teaching of younger children (11 to 14 year olds) than the students whose achievement was assessed in the project (14 to 16 year olds). The evaluation therefore measured the effect of just giving the materials to schools, and therefore gives us a good idea of what would happen if the programme was implemented at scale.

After the project started, the researchers realised that a number of the schools recruited (12 of the experimental schools, 4 of the control group schools) had already been involved in similar work through the Teacher Effectiveness Enhancement Programme (TEEP), which had used many of the ideas in the EFA programme, which was originally developed in 2007. Since these schools had already been exposed to the ideas of the programme, the evaluators decided to analyse the impact of the pack on just the schools that had not been involved in TEEP.

Two years later, the performance of students in their school leaving examinations (GCSE) in the experimental group and control group schools was compared, and those in the experimental group scored 0.13 standard deviations higher in their average grade across eight school subjects (a significant difference). One year’s learning for students of this age is around 0.3 standard deviations, the students take their GCSE exams halfway through the summer term, and we have to factor in the fact that students forget stuff—say 10%—from one year to the next. This means that over the course of the research, the students could be expected to increase achievement by 0.52 standard deviations (0.3 × 0.90 + 0.3 × 5/6). The students in the experimental group improved by 0.13 sd more than this, equivalent to a 25% increase in the rate of learning. Given that the cost of the programme is around $2 per student per year, it is a highly cost-effective intervention.
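Wiliam’s arithmetic can be followed step by step in a short sketch, using only the figures stated in his answer:

```python
# Reproducing the arithmetic in Wiliam's answer above.
yearly_gain = 0.3   # one year's learning for students of this age, in standard deviations
retention = 0.90    # students forget around 10% from one year to the next

# Year 1 (carried forward with forgetting) plus 5/6 of year 2,
# since GCSEs are sat partway through the final year:
expected_gain = yearly_gain * retention + yearly_gain * 5 / 6
print(round(expected_gain, 2))  # 0.52

effect = 0.13       # observed advantage of the experimental group
print(round(effect / expected_gain, 2))  # 0.25, i.e. a 25% faster rate of learning
```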

2. I qualified as a teacher in the UK in 1998. I first learnt about the principles of formative assessment by reading the publication you authored with Paul Black, Inside the Black Box, as did many of my generation of teachers. Later, I drew links with ideas like the ‘curse of knowledge‘. What changes, if any, have occurred since 1998 in terms of what we know about formative assessment?

I don’t think Paul and I have changed our fundamental ideas about formative assessment very much since we did the research on “Inside the Black Box” 20 years ago. The basic ideas are simple. First, teachers need evidence about what their students are thinking in order to make good decisions, and the quality of that evidence is often poor. Second, students and their peers have insights into their own learning that are often not used in classrooms. And third, the way we use assessment affects, both positively and negatively, students’ attitudes and motivation. What has changed is that we now know that when teachers develop their practice of formative assessment, their students learn more, even when learning is measured in terms of scores on externally mandated tests and exams. This was suggested by the research we reviewed, but we did not know that this was true in real, messy educational settings and implementable at scale. We also know how teachers can incorporate these ideas into their practice at minimal cost, through the use of self-help school-based “teacher learning communities”. We have also clarified our ideas somewhat—so now we talk about the terms “formative” and “summative” as descriptions of the inferences that are made on the basis of assessment results, rather than as descriptions of the assessments themselves.

Looking back, it seems to me that the biggest mistake we made was to start with the idea of formative assessment as being mainly concerned with feedback, for example by highlighting the negative impact that scores and grades can have on learning. Giving students individual feedback is extremely expensive—after all, it’s effectively one-to-one tuition done in a way that means that students often ignore what is being said. I now think it might have been more productive to start with formative assessment as being responsive teaching. In other words, because students do not learn what we teach, we had better find out what they did learn before we teach them anything else, and we cannot rely on the responses given by confident articulate students as being representative of the thinking of other students in the class.

3. It sounds like your ideas on feedback have evolved. Feedback is a big deal in schools, perhaps partly as a result of the assessment for learning research and partly as a result of the work of John Hattie. Would you therefore like to expand a little on how you now see the role of feedback?

I don’t think my views about feedback have changed that much—rather what changed was the realization that, in many countries, this is not a particularly smart place to start the conversation, since teachers feel—often wrongly in my view—that they have little room for manoeuvre. I also think that a lot of what schools are doing in focusing on feedback is ill-conceived. Kluger and DeNisi, in their 1996 review of research, found that in approximately 38% of well-designed studies, feedback actually lowered performance. Without some understanding of when feedback improves achievement and when it does not, blanket prescriptions about “doing more feedback” are at best risky, and potentially very harmful, for example if teachers start giving more of the feedback that lowers achievement.

However, there is a much more important point about feedback that those who have sought to quantify the effects of feedback, like Hattie, have missed. In the conclusion of their review, Kluger and DeNisi pointed out that feedback interventions that showed large positive effects on learning should not be implemented if they resulted in the learner becoming more dependent on the feedback. They argued that we should stop trying to figure out how much feedback improves learning and instead look at what feedback does to students. After all, the only good feedback is that which is used. This is why I think we need to look much more at what psychologists call “recipience processes” in feedback—getting students to understand why we are giving them feedback and how they can use it. David Yeager and his colleagues have shown that just telling students they are being given feedback because the teacher has high standards and believes the student can reach them makes students more willing to use the feedback and re-submit work.

4. There is an ongoing debate on this blog and on social media more generally about different teaching methods and curricula; skills versus knowledge and explicit instruction versus inquiry learning. One strength of formative assessment may be that it is pedagogically neutral – whatever and however you want to teach, formative assessment will help you achieve your aims. Perhaps it is the one strategy we can all agree on. What are your thoughts?

The really important thing for me is that formative assessment is neutral with respect to curriculum (what we want students to learn) and pedagogy (how we get students to learn). The big idea—what psychologist David Ausubel called the most important idea in educational psychology—is that any teaching should start from what the learner already knows, and that teachers should ascertain this, and teach accordingly. The problem is that even with a new and unfamiliar topic, after 20 minutes teaching, students will have different understandings of the material, which the teacher needs to know about. What you call the curse of knowledge is part of that—we assume something is easier if we know it—but even if we avoid that trap, we still have no idea what is happening in the heads of our students unless we get some evidence, and if we only get evidence from confident articulate students, then we cannot possibly be making decisions that meet the learning needs of a diverse group of learners. Now of course, the fact that students can do something now does not mean that they will know it in six weeks’ time—we have known for almost 100 years that learning is different from performance—but if they do not know it now, then it is highly unlikely that they will know it in six weeks’ time.

Perhaps more surprisingly, formative assessment does not even entail any view of psychology (what happens when learning takes place). If you’re a behaviorist, you need to know if a student has sufficient reinforcement to make strong links between stimulus and response. If you’re a constructivist, you need to know that the learner has formed reasonably adequate ideas about the material at hand, and does not have any important misconceptions. If you emphasize the situated nature of cognition, you need to know the extent to which a learner’s attunements to constraints and affordances in a particular learning environment are likely to allow them to apply their learning in different contexts. The reasons are different, depending on your view of what happens when learning takes place, but you still need to know what is going on in students’ heads to teach effectively.

5. Finally, if a school leadership team decided to prioritise formative assessment, where should they start?

The recent results of the Educational Endowment Foundation evaluation of the Embedding Formative Assessment professional development pack make that a very easy question to answer. Buy the pack, and use as directed. Organize teachers into groups of 8 to 14, led by a practising teacher (not someone with a formal leadership responsibility), and ask each member of the group to choose one formative assessment technique to try out in their classroom, possibly after some modification. The groups should then meet monthly, for at least 75 minutes, to hold each other accountable, and to give each other support. Allow each teacher to spend as long as they want to work on the same technique until it is “second nature” before suggesting that they move on to something else.

This could be supplemented by some school-wide growth mindset interventions—the effects aren’t huge, but they take up little time. Apart from that, the job of leaders is to ensure that teachers in their schools are getting better at the things that have the biggest benefit for students. Given what we know about the impact of classroom formative assessment, any school leader that encourages teachers to work on unproven ideas like educational neuroscience, lesson study, grit, or differentiated instruction is, in effect, lowering student achievement. We need to stop looking for the next big thing, and instead do the last big thing properly.

A big thank you to Dylan Wiliam for giving up his time for this interview.

The toxic ideological cocktail that poisoned Swedish schools


Kindly helpers have pointed me towards a new working paper from Magnus Henrekson and Johan Wennström. Henrekson is a professor of economics and heads the Research Institute of Industrial Economics in Sweden. Wennström is a journalist, former government adviser and PhD student. They are concerned with the state of Swedish education.

I have written about Swedish education before. No doubt, there has been a decline in standards, but it can be hard to figure out why. My knowledge of the system has been largely based on third person accounts, speculation and a newspaper article by a Swedish professor that I had translated using Google (and which now appears to be paywalled).

With their paper, Henrekson and Wennström have provided much needed detail and they have been kind enough to publish it in English. It is a compelling read.

Previously, there have been two arguments about Sweden. The first is that any decline in standards is entirely due to a set of neoliberal reforms that saw the introduction of parent choice and school vouchers. However, it seems equally legitimate to point, as I have done, to the highly progressivist view of education espoused in Sweden and wonder whether this is the main cause.

Henrekson and Wennström contend that the problem is a toxic mix of both market reforms and educational philosophy. Rather than identifying this philosophy as the tradition of educational progressivism, they see it as an enactment of the more recent ideologies of social constructivism and postmodernism. This makes a lot of sense given the remarkably explicit questioning of the nature of truth that is present in many of the official documents they draw upon. Progressivism and social constructivism have near-identical implications for education, so the distinction hardly matters, yet it does raise in my mind an interesting question about the impact of progressivism in Sweden prior to the Second World War.

A point the authors make that particularly resonated with me is about the delay between a change in governing philosophy and effects in the classroom. This delay became apparent when I researched the history of the phonics debate for The Truth About Teaching. Referring to the teaching of reading in the U.S., I wrote:

“Teachers generally work alone with groups of students and so it can be hard to determine exactly what approach is typical. It seems likely that teachers hold on to practices that they have found to be successful, long after experts have started advocating for an alternative; an effect that will always confound attempts to be definitive about how reading is being taught at any given time. What is clear is that by the middle of the 20th century, expert consensus had coalesced around a whole-word approach to reading.”

However, once the grizzled old teachers die off and there is nobody left who remembers the old ways, the revolution can really get going.

Similarly, after outlining the documentary evidence from Swedish education bills stretching back many years that shows official disdain for a knowledge-rich curriculum, subject disciplines and teacher authority, Henrekson and Wennström ask why Swedish education managed to remain strong for as long as it did.

“Against this background, one might wonder why a deterioration of knowledge among Swedish pupils cannot be detected before the 1990s… We argue that the main reason is that more senior teachers upheld a traditional teaching culture.”

We do not yet know what effect the policy levers pulled in the early 2000s in the UK, US and Australia will have on education in the years to come.

There are also strong resonances in the paper with the situation in England, where, until quite recently, Ofsted, the English schools inspectorate, used to enforce progressivist teaching methods:

“The Swedish Schools Inspectorate regularly expresses its disapproval of schools that teach in a traditional way and according to a classical view of knowledge.”

Sweden, however, takes this to the next level. Progressivism/constructivism opposes the idea of fixed bodies of knowledge, preferring a more nebulous concept of skill development. This logic leads to schools developing their own grading systems that are highly subjective and fluid, and yet no less important for students and their futures. It is notable that, whilst Sweden has slipped down international rankings, there has been rampant inflation of these school-assigned grades. Why would there not be? A competitive market creates the incentive for schools to give students and their parents what they want.

Where else do we see this weird mixture of free markets and educational progressivism? The authors contend that the U.S. could be heading down a similar path. It is certainly the direction I sense that many involved in edtech, personalised learning and the like would wish to take us, so there is plenty to think about.

Henrekson and Wennström have produced an excellent and enlightening paper. I urge you to read it.