Welcome

This is the homepage of Greg Ashman, a teacher living and working in Australia. Nothing that I write or that I link to necessarily reflects the view of my school.

Read about my ebook, “Ouroboros” here.


I have written for Spiked magazine:

Educationalists: Teaching bad ideas

Teachers have lost their mojo

I have written for The Conversation:

Ignore the fads

Why students make silly mistakes

Some of my writing is also on the researchED website workingoutwhatworks.com:

Briefing: Meta-cognition

Your own personal PISA

I used to write articles for the TES. These now appear to have been paywalled. I will probably make them available on my blog at some point. If you have access then you can find them here:

  • Create waves of learning
  • Master the mysterious art of explanation
  • Taking a critical look at praise
  • Behaviour
  • Great Scott! Let’s push the brain to its limits
  • The science fiction that doing is best
  • Make them sit up and take notice
  • For great rewards, sweat the small stuff
  • Where the grass is browner
  • Stand-out teaching – minus differentiation

How to motivate your students


Despite writing rather a large number of posts on motivation, I sometimes get accused of ignoring motivation when I encourage the use of explicit instruction. It is suggested that explicit instruction may be effective for teaching kids things but perhaps other kinds of teaching are more motivating for students.

As teachers, we all have an implicit theory of motivation and so I thought I would try to codify mine a little. The most popular general theory of motivation is probably Deci and Ryan’s self-determination theory (SDT). This posits the need for people to feel autonomy, competence and relatedness in order to feel intrinsically motivated. However, there are two problems with applying SDT to education.

Firstly, expert employees may well desire a lot of autonomy but students are novices who often don’t enjoy the activities from which they will learn the most. This then sets an instructional task that will build one of the pillars of SDT – competence – at odds with one that will allow autonomy. How do we resolve this?

Secondly, there appears to be some evidence that competence (or mastery) is the main driver of long-term motivation.

My view is that ‘motivation’ is too broad a term, in much the same way that ‘feedback’ is. I think we need to chop it up a little, and ‘interest theory’ helps here. This is the idea that there is situational interest, which derives from some feature of the present context, and personal interest, which is roughly the same thing as intrinsic motivation – a self-generating interest in a topic.

The first thing we need to do is let go of personal interest. We can create the conditions in which it might occur and we can make these conditions as favourable as possible. Yet we can’t, whatever we do, ensure that everyone will develop a personal interest in maths or literature or oxbow lakes. Personal interest is not within our control.

Instead, as teachers we are limited to manipulating two things: situational interest and behaviourist carrots, sticks and routines. Situational interest includes attempts to relate the content to students’ everyday lives, and it can be risky. For instance, we might decide to engage students in Shakespeare by getting them to write a rap, but this rap may end up having very little to do with Shakespeare. If so, we haven’t made any progress: we have created situational interest, but not in the thing that we intended to teach.

Behaviourist measures are much maligned, and Deci and Ryan (and their populariser, Alfie Kohn) would argue that they can never lead to intrinsic motivation. Yet if we believe that competence may lead to intrinsic motivation, and that behaviourist measures can lead to competence, then there is a clear pathway for this to happen.

I reckon this is our best bet. If we insist on teaching our subject well and try to make it interesting without sacrificing the content then, who knows, our students may grow to love it.


The OECD’s seven principles of learning

There is a book on my shelf titled ‘The Nature of Learning’, edited by Dumont, Istance and Benavides for the Organisation for Economic Co-operation and Development (OECD). I sometimes dip into it because it contains a chapter by Dylan Wiliam on formative assessment that is a good summary of the arguments that he makes at length in his own books. There are also some chapters that I am less impressed with. For instance, there is a chapter on inquiry learning that doesn’t really mention the criticism that has been levelled at the approach.

In the final section, the editors summarise seven key conclusions that they have drawn from the different chapters. I hadn’t realised that these have taken on something of a life of their own and that some schools are using these seven principles to inform their teaching approach. There is a handy summary document on the OECD website that covers these seven principles and I want to address some of the claims that it makes.

The document suggests that the seven principles are drawn from the ‘learning sciences’ and yet I don’t recognise much science in the discussion. For instance, it claims that, “Today, the dominant concept [of how people learn] is socio-constructivist in which learning is understood to be importantly shaped by the context in which it is situated and is actively constructed through social negotiation with others. On this understanding, learning environments should be where: Constructive, self-regulated learning is fostered; The learning is sensitive to context; It will often be collaborative.”

This seems to be built mainly on Vygotsky’s theories and I would not personally describe these as settled or established science.

The seven principles are:

  1. Learners at the centre
  2. The social nature of learning
  3. Emotions are integral to learning
  4. Recognising individual differences
  5. Stretching all students
  6. Assessment for learning
  7. Building horizontal connections

Some of the claims made in discussion of these principles are fairly innocuous, others are potentially misleading and some are outright wrong.

The idea of placing ‘learners at the centre’ is a bit of a trap for those of us who advocate for whole-class explicit instruction. We advocate for these methods because we believe that they provide the best outcomes for all students. We have the students’ interests at the forefront of our minds when making these claims. However, here as elsewhere, placing learners at the centre is associated with specific teaching methods. In this case, if you place learners at the centre then you are obliged to follow a, “mix of pedagogies, which include guided and action approaches, as well as co-operative, inquiry-based, and service learning.” So an approach based on explicit instruction is essentially ruled out.

It’s not clear how guided the ‘guided and action’ approaches are, but we are told that learning activities must, “allow students to construct their learning through engagement and active exploration.” On most interpretations, this would rule out a teacher standing at the front, speaking and asking questions, because this would be seen as passive. As I have already mentioned, the evidence for the efficacy of inquiry learning is weak and so I don’t see why we need to include this strategy.

On the other hand, there is some evidence to support cooperative learning. However, as Slavin suggests in his chapter in ‘The Nature of Learning’ – and elsewhere – in order for it to be effective, you must have group goals and individual accountability in place. Very little group work that I have observed applies both of these conditions, and they are not stressed in the seven principles, meaning that teachers taking their cue from these principles may well implement ineffective group work. Moreover, the principles imply that cooperative learning is essential and state that, “Neuroscience confirms that we learn through social interaction – the organisation of learning should be highly social.” I don’t think this is true, and the appeal to neuroscience seems spurious (see Bowers on the issues surrounding the use of neuroscience to support education arguments). I think we can all learn perfectly well without any cooperative learning at all. Learning by reading a book is an obvious example.

We all recognise that emotions are important but it’s not clear what should be done about this. For some students, placing them in a group will create negative emotions. Others will be anxious about tests. But we know that testing is effective so if we allow students to opt out then their learning will suffer. I think this is where the art of teaching comes in – the ability to monitor a room for its emotional backstories – and I think this is why teachers won’t be replaced by robots any time soon.

The dodgiest of the seven principles is the one on recognising individual differences. The summary document mentions learning styles – an idea thoroughly debunked by science. And it can lead to the same vicious consequences as any other method that focuses on student differences: labelling and lower expectations for certain groups of students.

Stretching all students is a noble idea, albeit one that is at odds with the direction to recognise individual differences. Assessment for learning can certainly be an effective approach, but we have seen in England the way in which it turned into pointless marking policies and jargon. Finally, building horizontal connections is definitely a good idea – although hardly a principle. It might lead us into error if inquiry learning is assumed to be the only way to build such connections.

Any school designing its learning around these principles places itself at great risk of harming the education of its students and wasting the time of its teachers. For a truly scientific set of principles, I would recommend the Deans for Impact report instead.


How AITSL judges teaching

The Australian Institute of Teaching and School Leadership (AITSL) recently asked teachers to take part in a survey. I clicked the link and immediately noticed something a little odd. I was asked to answer multiple-choice questions about my teaching and the survey explained that, “The items in each question [the possible answers] are hierarchical with regard to expertise.” So if I chose the first response in each question then that is representative of the lowest level of expertise. This is an odd structure because people don’t like to think of themselves as lacking expertise and so this might bias the survey results.

I then started the survey. The exact set of questions you get depends upon what you enter as your birth month, so this will vary for different teachers. I found that I couldn’t select any of the answers for some questions. For others, I began to wonder what evidence was being used to decide that some teacher behaviours were characteristic of a higher degree of expertise than others.

For instance, question 22 was:

22. Engage in substantive conversation

○ I pose questions to the whole class and respond to individual student answers

○ I encourage interaction between students and between teacher and students about the ideas of the topic

○ I structure conversation to enable student talk to predominate over teacher talk

Huh. So the first response shows the lowest level of expertise and the third response the highest?

Teacher effectiveness research actually suggests that whole-class questioning is a strategy used by the most effective teachers, who also ensure that students maximise their time involved in academic learning. I am not aware of any research showing that more effective teachers ‘enable student talk to predominate over teacher talk’. It seems likely that this would reduce academic learning time.

The survey continued like this and so I stopped taking it seriously and started to simply record the questions. The questions and statements in the survey are based upon AITSL’s classroom practice continuum, the most striking feature of which is that it looks like a lesson observation rubric. And we all know that lesson observation is not really a valid way of assessing teacher performance, right? Perhaps not.

So I decided to contact AITSL about the classroom practice continuum through the contact page on its website. I asked if they would be able to send me information regarding the evidence used for producing the Classroom Practice Continuum. Specifically, I asked for the evidence that they had drawn upon to support the claim that the following statements are characteristics of teachers who have greater expertise:

– The teacher makes students responsible for establishing deliberate practice routines.

– They provide students with a choice of learning activities that apply discipline specific knowledge.

– The teacher facilitates processes for the students to select activities based on the agreed learning goals.

– The teacher supports the students to generate their own questions that lead to further inquiry.

– They negotiate assessment strategies with students.

Sue Buckley of AITSL responded, was really helpful and seems very nice. She wasn’t able to provide evidence for the specific points above and I wasn’t surprised by this, given that I suspect that there isn’t any. But she was able to provide information on the evidence base more generally.

Which is intriguing.

Firstly, Sue pointed me to section 2 of the ‘Looking at Classroom Practice’ document. This explains that an expert teacher group was convened in order to assist AITSL with developing a classroom practice continuum that aligned with the AITSL Standards. This was, “guided and informed by Professor [Patrick] Griffin’s methodology that is based on the learning theories of Rasch, Glaser, Vygotsky and Bruner.” In the validation process, the development of quality criteria was informed by additional learning theories that all use developmental models of learning, including the theories of Piaget, Bruner, Griffin and Callingham, Anderson and Krathwohl, Gagne, and Dreyfus and Dreyfus.

This seems a little odd. Not only is it based upon theory rather than teacher effectiveness research, but some of these theories are demonstrably flawed. Stage theories such as those of Piaget and Dreyfus and Dreyfus, and Bruner’s ideas on discovery learning have largely been debunked (e.g. here and here). Piaget and Vygotsky tend to be considered as the fathers of modern constructivism and yet, in 2011, John Hattie stated that, “We have a whole rhetoric about discovery learning, constructivism, about learning styles that has got zero evidence for them anywhere.”

I am inclined to agree with John Hattie’s frank assessment but he is now the Chair of AITSL. So at least some of the theories that AITSL have used to construct this continuum have been debunked by their own Chair. This strikes me as an eccentric position for an organisation to be in.

AITSL then managed a feat that seems nothing short of a miracle. I have to admit that I am not familiar with Rasch analysis but I think I am going to read more about it because of what it was able to achieve. In order to validate the newly minted criteria, the folks at AITSL wrote them into a set of survey questions and were able to get 2561 teachers to respond to the survey (it seems like the survey I attempted was a repeat of this process). They then used, “Rasch analysis to identify both teacher ability and the relative difficulty of the criteria,” thus validating the criteria statements. Yes, you read that right. They were able to identify teacher ability via a survey. This is astonishing. We have no more need for lesson observation. We can forget the tortuous attempts to determine teacher effectiveness via value-added analysis. All we have to do in order to work out who the best teachers are is give them a survey and do Rasch analysis.

Unless they used the teachers’ survey responses to the criteria statements to work out their ability. But that wouldn’t make any sense because they were trying to validate those very same statements. The logic would be circular:

  • Statement X is a good measure of teacher expertise.
  • How do we know?
  • Because the more expert teachers tend to select it.
  • How do we know that these teachers are more expert?
  • Because they selected Statement X.

Perhaps another proxy was used such as level of experience? But that would only couple loosely with teaching ability and might just demonstrate that more experienced teachers are better able to say the right things. I’m just not clear on this point and I am not sure that it provides any evidence for the validity of these criteria.
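For readers who, like me, are new to Rasch analysis, here is my understanding in the form of a toy sketch. The dichotomous Rasch model says that the probability of person n endorsing item i is exp(θ − b) / (1 + exp(θ − b)), where θ is the person’s ‘ability’ and b is the item’s ‘difficulty’, and both sets of parameters are estimated from the same response matrix. The code below uses entirely hypothetical data and crude estimators – it is not AITSL’s actual analysis – but it illustrates why a good fit is internally consistent rather than externally validating:

```python
import numpy as np

# Toy sketch of the dichotomous Rasch model (hypothetical data, not AITSL's
# analysis). Model: P(teacher n endorses criterion i)
#                   = exp(theta_n - b_i) / (1 + exp(theta_n - b_i))
rng = np.random.default_rng(0)

n_teachers, n_criteria = 2561, 10           # 2561 matches the survey sample size
theta = rng.normal(0, 1, n_teachers)        # latent 'expertise' (simulated)
b = np.linspace(-1.5, 1.5, n_criteria)      # criterion 'difficulties' (assumed)

# Simulate survey responses under the model
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
responses = rng.random((n_teachers, n_criteria)) < p

# Crude estimates recovered from the responses alone: total score stands in
# for theta; the negative logit of each criterion's endorsement rate for b.
rate = responses.mean(axis=0)
est_b = -np.log(rate / (1 - rate))
est_theta = responses.sum(axis=1)

print(np.corrcoef(est_b, b)[0, 1])          # high, but only because the data
print(np.corrcoef(est_theta, theta)[0, 1])  # were generated by the model itself
```

A good Rasch fit shows that the items hang together on some latent dimension; it cannot, by itself, show that the dimension is teaching expertise. That would require an external criterion.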

Apart from a discussion of some of Hattie’s own research – research that does not seem to be clearly reflected in the continuum – the only other empirical evidence is a comparison with a similar continuum developed in the U.S.

In her email to me, Sue Buckley mentioned that a literature review has now been completed that compared the practices within the continuum to the five lesson observations instruments used in the Measures of Effective Teaching (MET) project. This is interesting because it confirms that the intended purpose of the continuum is as a lesson observation tool. And yet it is evidence from the MET project that led to people like Rob Coe (linked above) questioning the validity of lesson observation as it is usually conducted.

In order to gain any kind of reliability, MET project teachers were observed teaching multiple lessons by multiple raters. Not only that, the raters viewed videos of the lessons rather than viewing them live and the teachers did not know the criteria on which they were being judged. This is important because it eliminates the effect of teachers trying to demonstrate what they believed the observers wished to see.

Even with all of these safeguards in place – ones that could not be practically replicated in schools – the resulting lesson observation scores were less accurate at predicting the future test score gains for any given teacher than were the prior test score gains of that teacher. In the end, the researchers settled on a measure that combined classroom observation scores with past test score gains. This was worse at predicting future standardised test score gains than prior test scores alone but was slightly better for predicting performance on teacher developed tests.
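To see how a composite can underperform its better component, here is a toy simulation. The numbers are entirely invented – I am assuming a fixed, equal-weight composite and an observation measure that is much noisier than prior test score gains – so this is an illustration of the statistical point, not a model of the actual MET analysis:

```python
import numpy as np

# Toy simulation (made-up numbers, not MET data): a fixed-weight composite of
# a cleaner predictor (prior gains) and a noisier one (observation scores)
# can predict future gains worse than the cleaner predictor alone.
rng = np.random.default_rng(1)

n = 5000
quality = rng.normal(0, 1, n)                    # latent teacher effectiveness
prior_gains = quality + rng.normal(0, 0.7, n)    # assumed to track quality closely
obs_scores = quality + rng.normal(0, 2.0, n)     # assumed to be much noisier
future_gains = quality + rng.normal(0, 0.7, n)

composite = 0.5 * prior_gains + 0.5 * obs_scores  # naive equal weighting

print(np.corrcoef(prior_gains, future_gains)[0, 1])  # ~0.67
print(np.corrcoef(composite, future_gains)[0, 1])    # ~0.56 -- worse
```

Under these assumed noise levels, the composite correlates less with future gains than prior gains do alone, which is the pattern described above for standardised tests.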

AITSL’s review found that ‘almost all of the elements in the MET scales can be found in the Standards and the Continuum’. This does not make a convincing case for the continuum. We don’t even know whether there are things in the continuum – such as the statements I highlighted above – that are not in the MET scales.

All Australian teachers should be concerned about this issue. As Britain emphatically moves away from judgments based upon lesson observations, the Australian government is indicating that it is going to use the AITSL standards to determine performance-related pay. If that is the case, we need a robust system built on quality empirical evidence and not something based on a menagerie of educational theories, many of which are known to be false.

Note: This is a lengthy post and so I have avoided an additional explanation of why I think many of the statements in the continuum are not only wrong, but possibly quite harmful. For a flavour of this evidence, you could take a look at Richard E Clark’s work from the 1980s that shows that students tend to enjoy the instructional activities that are least suited to them and yet student choice of activities is encouraged in the continuum.


What should @Birmo do?

In January of this year, Australia’s education minister, Simon Birmingham, wrote a breathless press release about committing $6 million to a new app that would encourage students to participate in science and maths.

At the time, TIMSS 2015 had just finished testing the science and maths skills of students across a range of nations including Australia and Kazakhstan. We now know that while the performance of Australian students stagnated and has remained largely unchanged since 1995, Kazakh students have surged and overtaken them.

There are a number of exceptionally daft ways that we could respond. An app is just not going to cut it. In fact, any program that tries to fix things through ‘engagement’ is a red herring. Apps, talks from scientists, funky demos and theatre performances have the potential to create brief situational interest, but this won’t necessarily translate into a long-term interest in these subjects. The main way to develop long-term interest is to teach kids in such a way that they learn lots, start to feel confident and see how everything fits together.

And much as I am in favour of targeting funding at the students who most need it, do we really suppose that Kazakhstan spends more money on education than we do? Clearly, we could do a much better job with the levels of funding that we already have.

Birmingham yesterday did suggest some good ideas. Requiring higher academic standards from teachers entering the profession is a good step in this time of oversupply. It is also a good plan to develop more specialist teachers in primary school.

However, recruiting smarter people with science and maths backgrounds will achieve little if we then train them in ineffective teaching practices.

Right now, for instance, project-based learning and ‘makerspaces’ are all the fashion. These ideas are based upon constructivist theories that have been repeatedly debunked since at least the 1980s and that few serious cognitive scientists now subscribe to in their entirety (see this book for a flavour of the current state of this argument).

If you peruse information about teacher science and maths education courses (here’s a typical example) or review the kind of research that education schools conduct then you will see the dominance of variants of inquiry learning. Again, these are constructivist approaches lacking in a strong evidence base.

The most effective way to teach maths and science – as well as anything else – is deeply unfashionable explicit instruction. This result has been validated many times but, given that it’s the wrong result, it tends to be ignored. For instance, you won’t find it featuring in the recent AARE conference program.

This is where Birmingham could have a real impact. I am not sure exactly what leeway he has with current streams of funding but he could certainly link any future funding for teacher education or professional development to the use of evidence-based approaches such as explicit instruction. He can also ensure that the role of explicit instruction is on the table when Australia’s education ministers meet this month.


A first look at TIMSS 2015

TIMSS is a series of international tests in maths and science that first took place in 1995 and that has been repeated every four years since then. The 2015 data has just been published and I have been trying to quickly digest the Australian version of the report. This post therefore has a bit of an Aussie slant – I will comment on other countries but I lack data on statistical significance.

It’s worth noting that TIMSS is a more abstract kind of assessment than the better-known PISA. PISA sets questions in context, for instance by using mathematics to solve a practical problem. This means that there is quite a heavy reading load for PISA test items. In comparison, TIMSS has a more traditional feel, asking some context-free textbook questions such as 42.65 + 5.728 = ?

TIMSS tests maths and science in Grades 4 and 8. The headline for Australia is that its overall performance is pretty stagnant:

  • Grade 4 Maths – mean of 517 – Significant improvement on 1995 but no significant change since 2007
  • Grade 8 Maths – mean of 505 – About the same as in 1995
  • Grade 4 Science – mean of 524 – About the same as in 1995
  • Grade 8 Science – mean of 512 – About the same as in 1995

So we haven’t gained much traction in these areas in the past 20 years. Why not? This is the kind of question that education research should be addressing.

It seems reasonable to look at the performance of a single country over time like this and try to draw a few inferences but I am more sceptical about comparing the performance of different countries. For instance, Shanghai is often cited for its PISA results but this is a city and not a state. In Australian terms, it would be fairer to compare Shanghai with Canberra. Similarly, it seems unfair to compare countries with smaller and more homogeneous populations with places like the United States. However, I still find the following results to be quite stunning:

  • Singapore, Korea, Japan, Hong Kong and Chinese Taipei are the only East Asian countries represented. They take out the top five places in both maths assessments and five out of the top six places in the two science assessments
  • In the Grade 4 maths test, the highest performing country outside East Asia is Northern Ireland (which did not take part in the Grade 8 assessments).
  • In the Grade 8 maths test, the highest performing country outside East Asia is Russia.
  • Not only did the countries listed above significantly outperform Australia in these areas but countries such as the United States, England, and Kazakhstan all significantly outperformed Australia in all areas.
  • 30% of Australian students – nearly a third – hit the ‘low’ or ‘below low’ benchmark in Grade 4 maths compared to 21% in the U.S., 20% in England, 14% in Northern Ireland and just 2% in Hong Kong (Hong Kong has none in the ‘below low’ category).
  • Interestingly, Finland was outperformed by the United States and England in Grade 4 maths, although I don’t know if this was significant. However, Finland did better than these countries in Grade 4 science (like Northern Ireland, Finland only entered students at Grade 4).

Perhaps we might pause before sending more delegations of worthies to Finland to marvel at phenomenon-based learning. Instead, Australians might be better to head to Kazakhstan.

I await next week’s PISA results with interest.


All that marking you do? Waste of time.

One of the worst myths we have in education is not learning styles, or the idea that we only use 10% of our brains; it is the myth that feedback is the same thing as marking.

John Hattie has done much to popularise the idea that feedback is highly effective but this conclusion highlights one of the problems with Hattie’s kind of meta-analysis – there’s a whole bag of quite different things sitting under that label.

Hattie himself acknowledges that of all forms of feedback, feedback to the teacher is one of the most powerful kinds. Yet we continually think of feedback as something that teachers supply to students, in writing. And Dylan Wiliam points out that, while the effects of feedback are large, a worrying proportion of them are negative. It seems that telling a student that she has done something right or wrong can have unpredictable consequences.

Imagine a classic physics question. Students are presented with a diagram of a book on a desk complete with an arrow to show the weight of the book and an arrow to show the push of the desk back up on the book. The question is: what is the Newton’s third law pair of the weight of the book?


I’ll give you two options for answering this question:

1. The students write the answer in an exercise book, perhaps at home. You then collect in the books and mark them.

2. The students write their answers on a mini whiteboard and hold them up during the lesson.

From experience, a lot of students will get this question wrong, even after correct instruction. The right answer is ‘the gravitational pull of the book on the Earth’ but this feels weird. The students’ eyes are drawn to the other arrow and they choose the push of the desk on the book.

So if you follow option 1, you’ll get a load of exercise books full of the same error, which you will need to explain and correct, in writing. These explanations will have to be brief if you’re ever going to get to bed. Moreover, if this question was set as homework, then some students who had help with their homework will have written the right answer and so won’t get your written feedback, even though they probably need it.

Teaching has been reduced to the teacher corresponding individually and in writing with different members of the class.

But if you choose option 2 then you, the teacher, gain instant feedback. Students are present in front of you so you can ask them why they gave the answers that they gave. You can then tailor a more extensive explanation to address the issues that the students raise, and you can monitor and adjust for the emotional impact at the same time. All of this is feedback but none of it is marking.

English teachers are probably thinking that this is all very well but it won’t work in English. It’s not as straightforward, no. But the same principles apply: correct what you can with the students in front of you. It helps if you can break things down rather than always relying on assessing whole pieces of writing. The traditional approach, where a teacher circles and highlights parts of a written response before writing a paragraph at the end, is likely to be ineffective because there is too much for the student to take in.

Although there is plenty of evidence for feedback, there is a general lack of evidence for simply marking. This is why the English schools inspectorate have now issued new guidance to inspectors to stop asking for ever more detailed marking.

So feedback is potentially very powerful. But if you’re spending loads of your time marking then you might want to have a think about what you’re trying to achieve and if there is a better way of achieving it.


A principled objection #AARE2016


Last year, we met the ‘phallic teacher’. This year it’s the ‘phallic lecturer’. Australian education’s annual festival of daftness – The Australian Association for Research in Education (AARE) conference – has come to the Melbourne Cricket Ground. Neoliberal imaginaries and French philosophers are all the rage.

I’d quite like to go to one of these gigs but it comes at a time of year when we’ve rolled over our classes to the 2017 timetable. I have two new Year 12 groups to teach today and that concentrates the mind.

Instead, I will be following via the Twitter hashtag #AARE2016. When I mentioned this on Twitter, alongside the fact that I would be highlighting the funniest tweets, I provoked something of a backlash. One associate professor commented that, “Academic freedom is important, but what you are doing here is anti-intellectual. You are trolling an acad prof assoc.” Linda Graham offered some career advice on the subject of respecting the expertise of academics.


What has caused this loss of a sense of humour? Perhaps there are some things too precious to poke fun at. Perhaps it is with immense solemnity that we should contemplate presentations on, “Queer(y)ing ‘agency’ using a Butlerian framework of thinking: What might alteration ‘look like’ through this prism of thought?”

But this is not just about highlighting silly conference papers. There is a serious point here – a point that I have a right to make.

I am not opposed to blue-skies research (as long as we are allowed to poke fun at it, should we wish). The world is enriched by the pursuit of philosophy or art. But AARE is not a pure mathematics conference. It is about education, and education is one of the largest social enterprises we have. Governments pump vast quantities of taxpayer money into it – money taken from the pay packets of nurses and bus drivers. Yet, in the anglophone world at least, we don’t seem to be seeing much improvement. Why not? What is all this research achieving?

I think I know why we are in this position. If you take a look at the AARE 2016 program and strip it of all the posturing about, “Bourdieu’s theory of social practice and Vygotsky’s cultural historic activity theory,” then you will find papers about practical approaches to teaching. The trouble is that the methods pursued seem to fly in the face of what we already know about effective practice. For instance, ‘direct’ or ‘explicit’ instruction has a strong track record dating back to the process-product research of the 1960s. You might think that researchers would be trying to improve and refine these methods but there is no reference to either term in the entire program.

The use of phonics is mentioned in the title of just one presentation, despite being the teaching method with probably the strongest evidence base in the whole of education (see here, here and here) and a topic of considerable importance given the current proposal for a phonics check. And this single mention is in a presentation on teachers’ beliefs about ‘commercial’ and ‘pre-packaged’ phonics programs. For those of you who aren’t up with the lingo, commercial = bad.

So, what practices are being promoted at the conference? Well, there’s lots of inquiry-based learning and makerspaces (the latter apparently being a tool to ‘engage’ women in STEM subjects). This is despite such approaches being based upon the kind of constructivism that even serious constructivists have moved away from (see the discussion here). We have papers that classically ‘beg the question’, such as, “How does inquiry-based pedagogy motivate students to learn mathematics?” What if it doesn’t? What if it’s useless? What if it creates situational interest – ‘motivation’ is too broad a term to use in this context – but leads to poor learning outcomes? Let’s just hold on and examine a few assumptions here.

This is why the AARE enterprise is so fruitless. Utility should not be the only aim of education research but it should at least feature. Somewhere. Instead, we have lots of derivative research that sits entirely within a jargon-laden, self-congratulatory, self-referential bubble.

Do you want to get ahead in education research? The first rule is to learn how to eduwaffle. The second rule is to respect your elders and betters (whilst voicing platitudes about critical thinking).