This is the homepage of Greg Ashman, a teacher, blogger and PhD candidate living and working in Australia. Everything that I write reflects my own personal opinion and does not necessarily represent the views of my employer or any other organisation.
Read an article I have written for The Spectator here:
Read my articles for the Conversation here:
We all know the interests and enthusiasms of the education departments of Australian universities. If in any doubt, take a look at the programme for the upcoming AARE conference and you will find lots to do with the politics of identity, gender and equity and relatively little on reading*. All these issues are important, of course, but our education system is not really designed to effectively address them. Reading, on the other hand, is at the core of education and so we might hope for a little more interest. Yet in our schools of education, reading has largely morphed into ‘literacy’, a word that acts as a vessel to contain pretty much anything you like.
If someone criticises this focus, they are likely to be told that they don’t know anything about what happens in early reading classrooms. They are likely to be asked for evidence that reading is not being adequately taught.
It would be hard to survey enough Australian classrooms to generate evidence of this kind and that’s one of the reasons why I favour a phonics check like the one that exists in England. Even without reporting individual school results, which I think would be unnecessary, the broad figures would tell us something about the state of systematic synthetic phonics (SSP) teaching.
Until such a check takes place, it is worth looking at some proxies. One of these is the knowledge teachers possess around reading instruction. If a teacher knows the difference between a grapheme and a morpheme then that doesn’t necessarily mean that they are teaching SSP and teaching it well. However, if they don’t know the difference between a grapheme and a morpheme then it doesn’t seem plausible that they could be teaching SSP at all.
A new paper seeks to answer this and other questions by surveying the confidence and knowledge of final year education students at a range of Australian universities. It makes for pretty grim reading. Students highly rated their ability to teach reading while also displaying huge gaps in the required knowledge.
“As a group, preservice teachers demonstrated a substantial discrepancy between their general confidence to teach early reading and spelling, and their content knowledge of this area, leading to the conclusion that few preservice teachers had sufficient expertise to be effective teachers of early reading and spelling”
Despite a low return rate for the surveys, these findings are in line with previous, less ambitious studies (e.g. here).
I think it is reaching the point where universities need to start offering some proof of efficacy. For too long they have been relying on an absence of direct evidence in this area as evidence that they are doing a good job.
*The small number of references to reading are interesting. One session on the ‘simple view of reading’ looks quite good but a different session seems to be anti-phonics. There is also the launch of an anti-SSP book that was taken apart by Alison Clarke here.
Last week, Dame Alison Peacock of England’s Chartered College of Teaching (CoT) appeared in a programme broadcast on Russia Today – or “RT” as it is now known. RT’s mission is to acquaint, “international audiences with a Russian viewpoint on major global events.” Its adverts were banned last month by Twitter, apparently in order to, “protect the integrity of the user experience”. Intelligence services in the U.S. have branded RT a ‘propaganda outlet’ and part of Russia’s ‘state run propaganda machine’. RT is funded by the Russian government.
Clearly, it was inappropriate for Dame Peacock to appear on RT. To their credit, The Chartered College recognised this almost immediately and issued a press release stating, “When Alison took part in the interview, we were not aware that it would be syndicated on RTUK. Had we been aware of the syndication deal we would not have participated in the show.”
Although this largely put the matter to rest, I was still concerned about the Chartered College’s processes. The programme was made by a production company known as “Renegade Inc”. A brief trawl through its output would have confirmed that its TV shows are syndicated to RT. It would also have revealed a number of articles that take a line that is very similar to that of Russia’s foreign policy position. For instance, this Tweet promotes the virtues of a ‘secular and pluralist government under president Assad’:
Renegade Inc have also involved themselves in the debate about Russian influence in the recent U.S. election, with this article suggesting that, “Robert Mueller’s investigation will never – ever – find proof that Trump colluded with Russia to steal the 2016 election using hackers and propaganda.”
Clearly, it was a mistake for the College to partner with Renegade Inc. However, I could understand how it happened given that some prominent educationalists have close links. For instance, Graham Brown-Martin is a ‘Renegade’ and author at the Renegade Inc site. Ian Gilbert has a video posted there where he discusses the ‘neoliberal education model’.
Given the possibility of confusion, and without suggesting anything other than a well-intentioned mistake on their part, I asked the College to publicly distance themselves from Renegade Inc and not just RT. However, to my surprise, they issued a ‘clarification’ to their original press statement that seemed supportive of the production company:
“For avoidance of doubt, we wish to clarify that the statement above does not imply criticism of the content of the video produced by Renegade Inc., which is an interesting debate between two education professionals, and that we were not intentionally misled by Graham Brown-Martin or by Renegade Inc.”
Which is astonishing.
Today, it appears that the relationship between the College and Renegade Inc has broken down entirely, with Renegade Inc posting a lengthy critique of Dame Peacock and the College over the incident. One key claim made by Renegade Inc is that:
“Dame Peacock and her college were well aware that Renegade Inc. is broadcast on RTUK because Renegade Inc.’s co-founder and host Ross Ashcroft told her just before filming began in the studio. He even told her the broadcast times.”
So we either believe the College statement or Renegade Inc. They cannot both be true.
I know where my sympathies lie. I believe that Russia is inclined to act in the best interests of Russia and those interests do not necessarily align with the interests of the U.K. I cannot speculate on why RT have decided to broadcast stuff about U.K. education that draws from a particular perspective, but they will have their reasons.
And finally, I can’t help but mention a comment from the Renegade Inc story that does seem to be objectively true:
“Whilst all this needless hysterical noise was raging Graham Brown-Martin was in Kuala Lumpur as a guest of the British High Commission and British Council. The Renegade Inc. programme was shown numerous times there receiving nothing but positive responses. The programme has been shared widely in the region via the Foreign Commonwealth Office as part of their “Education is GREAT” campaign. Ironic isn’t it?”
Yes, it is ironic.
So you have found yourself under pressure to implement inquiry-based learning, project-based learning or something like that. How do you respond?
I tend to see things through the lens of cognitive load theory (CLT). A key finding is that learning new, complex concepts like algebra or writing is likely to overload working memory (but not learning very simple things like lists of words). Guided instruction reduces the load, and fully guided, explicit instruction is better still.
However, despite a few abortive attempts, cognitive load theory does not incorporate a theory of motivation. And another important finding is the ‘expertise reversal effect’ where, once sufficient expertise has been developed, experiments show that problem solving is better than explicit instruction. This may be because, at this stage, it adds to episodic knowledge – ‘ah, I’ve seen something like this before!’ It probably also aids transfer by cycling students through different kinds of problems and so helping them to apprehend the deep structure.
So my ideal sequence, the one I use in my own teaching, is explicit instruction that gradually fades to individual or perhaps group problem solving. The final few weeks of a Year 12 course involve me in very little teaching because students work independently, calling on my input only for clarification or when they get stuck.
However, explicit instruction followed by inquiry or project-based learning in the same domain could achieve much the same result and might have an additional benefit in terms of motivation (I say ‘might’ because I am not convinced by this). This is important. If we examine, for instance, Expeditionary Learning schools in the U.S., my understanding is that they have roughly 8 week units, the last third of which is an inquiry or project. If I was under a lot of pressure to use inquiry then this is the model I would try to adopt. To be honest, I’d struggle to do this in maths but I can imagine it working well in history or science.
If the decision has not yet been made and you are still able to influence the discussion then I would reach for some good papers and send these around to the parties involved. You can’t really go past American Educator for this kind of stuff. Teachers seem to find the articles readable and yet they are well-referenced and substantive.
The best two papers on explicit instruction are:
They complement each other well. Clark et al. mainly rely on cognitive science and specifically CLT, whereas Rosenshine takes a broader view and draws more on the teacher effectiveness research of the 1960s (which is mostly correlational). It’s actually this kind of triangulation that convinces me of the case for explicit instruction and so it is worth highlighting.
Gregory Yates also has a great little paper which draws on the teacher effectiveness research and that gets teachers thinking about research issues without hitting them over the head with polemic:
I have also written this for The Conversation. It is more forthright but it’s been quite well-received:
Project-Based Learning is shakier, in my view, than Inquiry Learning, but it is being pushed hard, particularly by the kind of consultants who are keen to speculate about the future of work. In this case, it’s worth looking at a review that Britain’s Education Endowment Foundation conducted as part of an RCT:
I mention the review rather than the findings of the RCT itself because these were compromised by so many schools dropping out of the Project-Based Learning condition, which also tells us something.
Inquiry Learning in maths tends to be synonymous with problem-based learning so I would probably return to cognitive science (this is why I can’t see it working at the end of a unit but I don’t have time to expand on that here). There is an interesting correlation between the decline of maths performance in Canada and a move to more inquiry-based curricula but I’m not sure I could point to much apart from op-eds, although the C.D. Howe Institute report is very interesting.
Inquiry in science usually involves asking students to design and conduct their own experiments. My favourite critique of this idea is by Paul Kirschner but it’s a little dense for an ordinary reader:
Interestingly, PISA 2012 found a negative correlation in all participating countries between a ‘student-orientation’ and maths results, with the construct roughly mapping onto some forms of inquiry. Even more strikingly, PISA 2015 found a negative association between increased use of ‘enquiry’ methods in science class and science performance. However, the OECD don’t seem keen to publicise these findings and so, sadly, the best sources tend to be blogs such as mine or this one (note that this piece was written about PISA 2012 and never mentioned these findings, even though they are the strongest correlation, focusing instead on memorisation, which the data suggest doesn’t correlate with anything).
Finally, there might be a content/pedagogy overlap. Teachers might privilege student control over learning. If so, the links in this blog post might help. I also find that when many people argue for inquiry learning, at root is some idea that it is superior for developing critical thinking skills, problem solving skills or creativity. These are all highly dependent on domain knowledge, which doesn’t necessarily refute inquiry as an approach, but it does disrupt the idea that these skills can be developed in a general sense and that inquiry will therefore deliver them. If someone points to critical thinking then I tend to use Dan Willingham’s American Educator article on the subject. If they point to problem solving then I might use this by Tricot and Sweller:
Unfortunately, it’s very technical. I don’t really have a ‘go to’ article on creativity but the same principle applies (and I would be keen for links to any in the comments).
The case of generic skills is partly why I have moved away from thinking about the reality of content and teaching methods as independent of each other, whether or not they are in principle.
Let me know if you encounter an argument that isn’t covered here.
This post is inspired by the logic of a post by Ollie Lovell.
Like most of you, I had come to believe that eating cake makes you fat. I was convinced by the evidence that cake is an energy dense food of little nutritional value and that, all other things being equal, eating lots of it would lead to weight gain.
I personally avoid eating much cake, preferring a diet rich in fruit and vegetables and so I had come to think of myself as being on one side of this debate.
However, a couple of things happened recently that made me reevaluate my position by rethinking the definition of ‘eating cake’.
First of all, it is quite possible to eat cake only rarely, while enjoying a diet full of healthy food and while pursuing an active lifestyle involving plenty of exercise. Such a combination is actually very good for you and won’t make you fat. In fact, it is probably one of the healthiest lifestyles out there. And yet, technically, someone following this balance of food and exercise could be described as someone who ‘eats cake’.
Nevertheless, I continued to be unsatisfied. The definition of ‘eating cake’ was still expressed only in behavioural terms. It didn’t seem anywhere near complicated enough.
So I read some papers by cake lovers in order to gain a more nuanced understanding which I will now share.
Let us think about what happens when you eat cake: You roll it around in your mouth, tasting it with your tongue and even smelling it. It is a sensation of flavour. We could therefore argue that any time we have this sensation – any time we taste any food – we are in essence ‘eating cake’.
With this new definition, there is clearly no relationship at all between eating cake and weight gain and so I think it has the potential to end this divisive debate, once and for all.
In my own PhD research, I run randomised controlled trials (RCTs). These involve setting up two or more experimental conditions, varying only one factor between them and then randomly assigning subjects – in this case, students – to each of the conditions. RCTs are considered the gold standard for working out if one thing causes another because you manipulate just that one thing and nothing else. By randomly assigning students, we know that there are no other systematic differences between the members of the groups that could account for any difference in outcomes.
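That random-assignment step can be sketched in a few lines of Python. This is a minimal illustration, not my actual trial code; the student labels and the two-condition design are assumptions made purely for the example:

```python
import random

def randomly_assign(students, conditions=("intervention", "control"), seed=42):
    """Shuffle the students, then deal them round-robin into conditions.

    Because the order is random, any pre-existing differences between
    students are spread across the groups by chance alone.
    """
    rng = random.Random(seed)  # fixed seed only so the example is repeatable
    shuffled = list(students)
    rng.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, student in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(student)
    return groups

# Thirty hypothetical students split evenly across the two conditions.
groups = randomly_assign([f"student_{n}" for n in range(30)])
print({condition: len(members) for condition, members in groups.items()})
```

With equal-sized groups and only the one manipulated factor differing between conditions, a difference in outcomes can be attributed to that factor rather than to who happened to be in which group.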
You may therefore expect me to be an evangelist for experiments. You might expect me to take a dim view of other ways of trying to establish cause and effect. But that’s not quite right.
I am also impressed by correlations and the evidence that correlations provide adds to the evidence we have in education. It is true that correlation does not equal causation, but this is the starting point for a discussion rather than the end point. I mentioned correlational evidence on Twitter recently and Dylan Wiliam responded with a link to this paper by Austin Bradford Hill, written in 1965 and addressing correlations in medicine; a field commonly known as ‘epidemiology’. It makes for an interesting read.
It’s worth outlining the key problem with correlations: We might find that as one thing changes, another thing also changes; it rises or falls. However, this does not necessarily mean that the first thing caused the change in the second thing. Three possibilities are worth illustrating:
- There is no relationship and it’s just chance that the two things correlate. I may, for instance, note a pattern where shorter teachers tend to have more pens in their pockets than taller teachers, but this may just be due to the particular teachers I have sampled. If we repeated it with a different group of teachers we may find no pattern or a reversal of the pattern.
- Changes in both things are actually caused by a third factor. An example of this might be the discovery that living in Florida correlates with an increased risk of dementia when compared to the U.S. average. In this case, living in Florida does not cause dementia. Instead, it would likely be the fact that Florida has a larger proportion of senior citizens when compared to other states, and that senior citizens are more likely to get dementia, that is the cause of Florida’s increased rate of dementia.
- The arrow of causation may point in a different direction to the one we assume. For instance, we might see a correlation between students’ motivation for mathematics and their maths ability. Perhaps motivation causes students to work harder and this increases their ability. Alternatively, being more able at mathematics might cause students to be more motivated. Both sound plausible and there may even be a causal arrow pointing in both directions; a virtuous circle.
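The first of these pitfalls, a sizeable correlation appearing by pure chance in a small sample, is easy to demonstrate with simulated data. The teacher-heights-and-pens numbers below are invented for illustration; the point is only that unrelated variables often look correlated when you sample just a handful of cases:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    denom = (var_x * var_y) ** 0.5
    return cov / denom if denom else 0.0

rng = random.Random(1)
trials, big_r = 1000, 0
for _ in range(trials):
    # Two genuinely unrelated variables, sampled for only ten teachers.
    heights = [rng.gauss(170, 8) for _ in range(10)]   # height in cm
    pens = [rng.randrange(0, 6) for _ in range(10)]    # pens in pocket
    if abs(pearson_r(heights, pens)) > 0.5:
        big_r += 1

print(f"{big_r} of {trials} small samples showed |r| > 0.5 by chance alone")
```

A non-trivial fraction of these ten-teacher samples produces what looks like a strong correlation, even though the two variables are independent by construction, which is exactly why a single small-sample correlation should be treated with caution.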
Given such issues, why not toss out correlational research altogether and simply conduct experiments?
The answer is that experiments are hard to do. Big, long experiments are particularly hard to do, so if we want to know the effect of a policy change in a state education system then running a true experiment is virtually impossible. This would not be such a problem if experiments were perfectly scalable; if small, short experiments just generated little versions of the results we get with bigger, longer ones. But there is much reason to doubt this. Lab-based findings rarely have a smooth path towards implementation as a long term policy change.
In contrast to the difficulty of running large experiments, it’s pretty easy to amass correlational data and it’s getting easier all the time in this data-rich age. Correlations can also circumvent some of the ethical issues with experiments, such as when one group of students perhaps has to miss out on a promising intervention in order to act as a control. You can also often have a sort-of control group for correlational data; a ‘quasi-experimental’ design. For instance, regression discontinuity is a technique where a small change causes an individual to flip from one category to another. Imagine two children, the first of whom is born on the 31st December and the second who is born on the 1st of January. If the cut-off date for school entry is the 1st of January then, although the two children have very similar ages, the January child will have a whole year more of schooling. A different kind of quasi-experiment might involve two neighbouring districts adopting the same policy change at different times, with the late adopter then acting as a control. These two examples are drawn from a paper by Stuart Ritchie and Elliot Tucker-Drob that analyses the effect of education on general intelligence.
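The birthday cut-off in that regression discontinuity example amounts to a simple grouping rule, sketched below. The cut-off date and the birthdays are invented for illustration:

```python
from datetime import date

CUTOFF = date(2017, 1, 1)  # hypothetical school-entry cut-off date

def school_cohort(birthday, cutoff=CUTOFF):
    """Assign a child to a school cohort based on the entry cut-off.

    Children either side of the cut-off are nearly identical in age,
    yet end up a whole school year apart, giving a natural comparison.
    """
    return "cohort_A" if birthday < cutoff else "cohort_B"

child_a = date(2016, 12, 31)  # born one day before the cut-off
child_b = date(2017, 1, 1)    # born one day after child_a
print(school_cohort(child_a), school_cohort(child_b))
```

The one-day age difference is trivial, so comparing outcomes for children near the boundary isolates the effect of the extra year of schooling rather than the effect of age.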
Correlations also have the advantage of testing real-world examples. In education, we are plagued by bad experiments where a gold-plated version of the favoured intervention is tested against a do-nothing or bog-standard control. It is probable that an inferior teaching method, delivered with lots of thought and plenty of commitment, will fare better than a mediocre enactment of a technically superior teaching method. Correlations can tell us something about everyday, ordinary examples of the two approaches under investigation e.g. this study of science teaching methods.
However, we are still left with the cause and effect problem. Bradford Hill offers some useful suggestions for evaluating correlational data but some of this is clearly most relevant to medicine and public health. I would like to focus on just a few things that I would look for when assessing the validity of inferring cause and effect from a correlation.
Key is what Bradford Hill refers to as ‘consistency’ and what we might also term ‘replication’. If we see this correlation in a range of different situations then we can probably rule out the idea that it’s a chance finding. For instance, if three quite different states adopt the same education policy at different times and, subsequently, maths scores rise in each of these states then that would seem to be telling us something. It is particularly convincing if we can take a correlation and replicate it in an experiment.
An example of this would be the process-product research of the 1960s and 1970s that sought to correlate various teacher behaviours with test score rises. A number of behaviours were identified that we might broadly term ‘explicit teaching’. However, these could just have been proxies; a particular teacher personality type, for instance, might have caused teachers to teach in a particular way and also have caused the test score gains. To try to figure this out, we could and should ask whether it is plausible that teacher behaviours cause student learning and of course it is – a plausibility test. However, that still doesn’t rule out a third factor.
Which is why a number of researchers set up experiments (e.g. here) where they taught teachers these behaviours and then looked to see if these teachers’ students performed better than a control group. We still have a problem if these experiments are badly designed but if we have a large number of correlations and reasonably well-designed experiments all pointing the same way then I think it is reasonable to infer a cause and effect relationship.
Ultimately, our inferences should depend upon triangulation. It is about more than exactly replicating an experimental finding. To be reasonably sure of a cause and effect relationship, we need to see similar effects in a range of different correlations of different designs, sizes and duration, ideally supported by experimental evidence. It’s a lot to ask for but I think we have the tools at our disposal to amass such evidence in a way that is relevant to common debates about teaching approaches.
So I noticed a tweet by Warwick Mansell, the U.K. journalist who is best known for shaming schools who enforce their school uniform policies. The purpose of the tweet, now deleted, was to draw our attention to a discussion between Dame Alison Peacock of the Chartered College Of Teaching and Graham Brown-Martin, an advocate of edtech, 21st century skills and that kind of thing.
The odd aspect of this was that the discussion was hosted on Russia Today, a known propaganda arm of The Kremlin. Why would a body in receipt of millions of pounds in U.K. government cash seek to prosecute its case via such a controversial medium? It doesn’t seem to make sense, particularly given today’s criticism by the British Prime Minister, Theresa May, of Russian propaganda tactics.
I couldn’t possibly speculate, but I did wonder if it was something to do with Graham Brown-Martin. The piece that was broadcast on Russia Today was made by Renegade Inc., the ‘talk show that allows us to think differently’. Despite being apparently produced in the U.K., Renegade Inc.’s output seems supportive of Russian policy and is quite dismissive of a link between Russia and any meddling in the 2016 U.S. election (this piece about Syria is also interesting).
Graham Brown-Martin has a profile page on Renegade Inc. If I were a betting man then I would wager that CoT have been drawn into this sorry affair through these links. It was probably a huge mistake by Peacock and the College of Teaching, but such naivety does not serve the teaching profession well. Propaganda relies on the use of friendly faces to convey its message.
Time, perhaps, for some critical thinking.
Update: Statement from the Chartered College of Teaching. Key point: “When Alison took part in the interview, we were not aware that it would be syndicated on RTUK. Had we been aware of the syndication deal we would not have participated in the show.”
If ever you have the misfortune to be on a night out with me, and if we are in the vicinity of a karaoke bar, then there is one inevitable outcome: I will end up singing the T-Rex classic ’20th Century Boy’ in what English comedians, Vic Reeves and Bob Mortimer, would term the ‘club style’. It’s not pretty but it works as a metaphor for the rest of this post so bear with me.
The education department in New South Wales has just organised a conference full of the usual sort of people to talk about the future of education. As you can imagine, the future will apparently be very different from the past and this includes the jobs that people will do. In order to prepare for this, we need some kind of revolution in education that involves tossing out solid stuff and replacing it with fluffy stuff.
Today, a piece was published in The Conversation that is abridged from a book produced to coincide with the conference. A number of claims are made that I would dispute, such as the idea that, “In this digital age, the need for children to learn and memorise facts is diminishing.” Instead, we apparently need to teach 21st century skills such as problem-solving, critical thinking and collaboration.
This contention is plainly wrong on a number of levels.
Firstly, our current understanding of cognitive science suggests that critical thinking skills cannot be uncoupled from knowledge. In order to think critically about something, we need to know a lot about it. The same is true for problem-solving; we generally solve problems by making use of strategies that we have learnt; that we know. When we solve novel problems for which we have no strategies, we use means-end analysis; an approach that nobody needs to be taught. Knowledge is important because it is something that we think with. We cannot think with knowledge that is sitting out there somewhere on the internet. If we diminish the need for knowledge then we will have the perverse effect of degrading students’ critical thinking and problem-solving skills. Is this what the education department in New South Wales wants?
What of collaboration? Can this be taught? There are clearly elements of cooperating with others that children pick up from early socialisation. But these are biologically primary skills that we have evolved to learn. There is no reason to think that working with three mates on a geography project in Year 8 will do anything to help prepare students to work in quite specifically designed teams when they start employment. If there is evidence to suggest otherwise then I would be interested to read it. Again, I suspect that the ability to collaborate in a particular area will depend upon knowledge of that area.
Yet there is no doubt that many big businesses call for students to be inculcated with these kinds of ’21st century skills’ in order to make them better employees. So what does that mean?
Firstly, people who run banks or sell washing machines are not experts on cognitive science and so they are just as likely to be mistaken about this as anyone else.
Secondly, I have to wonder whether big business is trying to duck out of a responsibility here. Imagine if the army said, “We really need schools to teach students how to fire guns – this is an essential skill that schools are just not delivering and that will be even more important in the future.” The likely reaction would be that it is not the business of schools to teach this; that schools have a broader purpose.
And that would be right. When did we reach the point when education became solely about meeting the wishes of future employers? It is certainly not about that for me. Education is about making life richer; about opening people’s horizons to see things and have experiences that would otherwise have been denied them.
Nobody can know the future. We hear confident predictions all the time about AI or jobs that don’t exist yet, but we have to bear in mind that these are simply guesses made by pundits who have no clairvoyant powers. The best guide to what will be useful and important in the future remains that which has been useful and important in the past; that which endures. If we attempt to revolutionise education at the behest of big business then there is a chance that we will gain nothing and lose a lot.
Finally, when did we decide that the ability to collaborate or think critically were uniquely 21st century skills? Then again, perhaps I’m just saying that because I’m a 20th century boy.