Thinking critically about Jair Bolsonaro

What is your opinion of Jair Bolsonaro? If you know who Bolsonaro is then you will already have constructed a schema involving him and the things you know about him. When you read his name, that schema will be activated and I predict that you will find an opinion within easy reach.

If you don’t know of Bolsonaro, then let me mention a few things about him. He is running for president in Brazil and has just progressed through the first round. He is keen to tackle corruption and he was stabbed during the course of the election campaign, putting him in hospital for several weeks.

What do you think about him now that I have activated a few more schemas?

Continuing the drip-feed of knowledge, Bolsonaro is ex-military and has suggested appointing military generals as ministers. He has made comments widely criticised as homophobic and misogynistic, and he is a supporter of family values.

Again, I will have triggered a few more schemas and your reaction to these will depend on your own political views.

I am trying to demonstrate something that progressive educators and postmodernists should furiously agree with: we interpret new information by relating it to what we already know. This is the essence of constructivism. I predict that at no point in reading about Bolsonaro did you employ a critical thinking heuristic because that’s not how your brain works. If I had asked you to consider Bolsonaro from multiple perspectives, you would not have been able to do so unless you knew what those perspectives were. If you did know what those perspectives were, I predict they would largely have been activated alongside the rest of your Bolsonaro schema when I first asked your opinion. If you were in possession of incorrect information about Bolsonaro, this would have been activated too.

To suggest that we can develop all-purpose critical thinking skills is not only a rejection of constructivism but a form of magical thinking. By trying to avoid the laborious process of knowledge-building, we assert an approach at odds with how the human mind works.

If we want our students to be able to think critically about figures such as Bolsonaro, there is no real alternative to a sequenced approach to building accurate and balanced schema about politics and history; a huge project that schools are uniquely suited to tackling. That’s pretty much why we invented them in the first place.

Do timed tests cause maths anxiety?

We all know that students will sometimes become anxious when faced with a school test. Part of our role as teachers is to mitigate this anxiety. We should avoid talking about tests in a way that heightens their perceived importance and instead stress the role they play in learning. Personally, I often frame tests as ‘just checking in to see how well I’ve taught you’.

While I accept that tests will sometimes be a source of pressure, that is different to the more specific claim made by Professor Jo Boaler and others that timed maths tests cause maths anxiety.

Maths anxiety

First, we need an understanding of what maths anxiety is. According to the Centre for Neuroscience in Education at Cambridge University, maths anxiety can be defined as:

“…a feeling of tension and anxiety that interferes with the manipulation of numbers and the solving of mathematical problems in … ordinary life and academic situations. The severity of Mathematics Anxiety can range from a feeling of mild tension all the way to experiencing a strong fear of maths. The prevalence of extreme mathematics anxiety is estimated at between 2-6% at secondary school level in the UK, and other cases, whilst less severe, can still have a significant effect on the people who suffer with it.” [footnotes and references removed]

Notice that maths anxiety is not a one-off event; it is a medium- to long-term condition. To prove experimentally that timed tests cause maths anxiety, we would need to run a randomised controlled trial where one group of students is subjected to timed tests and another group is not, with a follow-up at a later stage to measure the prevalence of maths anxiety in the two groups. Although not impossible to do, it seems unlikely that anyone would run such a trial.

An alternative may be to look for a correlation between timed tests and maths anxiety out there in the real world. If we found such a correlation, we would then have to rule out the possibility that having maths anxiety somehow causes students to be subjected to more timed tests or that some other factor may cause both. This would be a debatable question but ultimately it could be answered with a sufficient weight of evidence.

What is the evidence?

Boaler has written an article that is linked via her YouCubed website. Despite purporting to demonstrate that timed tests cause maths anxiety, the closest Boaler gets to direct evidence is to quote a study by Randall Engle as evidence that:

“…researchers now know that students experience stress on timed tests that they do not experience even when working on the same math questions in untimed conditions.”

This does not provide evidence that timed tests cause maths anxiety, it provides evidence that test conditions can be stressful. Moreover, I have read Engle’s paper and I cannot see any mention of the study described by Boaler.

The rest of the research cited by Boaler relates to the fact that stress can impair performance on maths tasks. Again, I am prepared to accept this but it does not prove the central claim.

Victoria Simms went on a similar hunt for evidence of a link between timed tests and maths anxiety when reviewing Boaler’s book Mathematical Mindsets:

“[Boaler] discusses a purported causal connection between drill practice and long-term mathematical anxiety, a claim for which she provides no evidence, beyond a reference to “Boaler (2014c)” (p38). After due investigation it appears that this reference is an online article which repeats the same claim, this time referencing “Boaler (2014)”, an article which does not appear in the reference list, or on Boaler’s website.”

I am wondering whether “Boaler (2014)” is meant to be the same article that I looked at and that uses the Engle reference. Perhaps Jo Boaler would like to clear this up?

What other evidence is there that relates to maths anxiety?

The Cambridge Centre for Neuroscience in Education observes that although low mathematical achievement and maths anxiety are correlated, the direction of cause-and-effect is unclear. Does maths anxiety cause low achievement or does low achievement cause maths anxiety? Are they perhaps reciprocal, with low achievement causing maths anxiety which causes future low achievement and so on?

In this case, it is at least plausible to argue for precisely the opposite case to that made by Boaler. The purpose of timed tests, particularly for maths facts such as number bonds and times-tables, is often to ensure that students have these facts available automatically and don’t have to work them out. Why is this important? If you simply know that 7 × 8 = 56 then you don’t have to use your limited working memory resources to work this out and you can therefore deploy them on some other component of a maths problem. Coupled with the kinds of explicit teaching methods that research has shown to be effective, such approaches may actually be a far better way of tackling low achievement and therefore maths anxiety.

Who should decide what is best for students with special educational needs?

There was a recent attempt at school-shaming on Twitter*. School-shaming occurs either as the result of a newspaper article, perhaps about a headteacher clamping down on school uniform, or, as in this case, when someone feverishly scrolls through a school’s website looking for things they disagree with. School-shamers then take to Twitter to denounce the school and seek support from a Twitter mob. The impact of these campaigns on the schools involved can be profoundly negative. There is also an imbalance of power – stories often emerge from students or parents, and the school cannot give its version of events without breaching confidentiality.

However, something out there has changed. It was heartening to see a number of teachers speak out against this latest attempt at school-shaming, but there was a further twist. The school-shamers had attempted to argue that the fact that the school in question had high academic and behavioural standards meant that it was ‘selective’ because parents of children with special educational needs wouldn’t want to send their children there. As I have noted before, this is an odd stretching of the definition of ‘selection’. By this standard, schools with poor behaviour or low academic standards are also ‘selective’ because many parents won’t want to send their children there either.

In this case, it was an overreach. These self-appointed experts in special educational needs were making pronouncements that did not resonate with everyone. One experienced teacher commented that, “I mostly work with SEN & vulnerable children & know the vast majority of their parents would be falling over themselves to at least get in to see & talk to such a school.”

Accommodate or address?

Which brings me back to a point that I have raised before and that I think is fundamental to any discussion of special educational needs. It is relatively easy to observe that a student struggles with their reading or struggles to concentrate in class, but the key question is what do we do about it? We can accommodate the difficulty by working around it. For instance, if a child struggles to read then we could give them a special pen that converts print into audio. Alternatively, we can address the need with a reading intervention. The same is true for other difficulties. For instance, we can either accommodate the behaviours of a child with Oppositional Defiant Disorder or we can attempt to address them.

For an individual child, it may be appropriate to sometimes address their needs and sometimes accommodate them. For instance, a child could be part of an intensive writing intervention but be given the option to not write in history class. However, I think the discussion, certainly the one driven by self-appointed experts in special educational needs, tends to assume that accommodation is the only option. That’s why they look at a school’s behaviour policy, suggest that it does not accommodate challenging behaviours or a lack of attention in class, and conclude that the school is therefore not catering to special educational needs.

Given that we are talking about abilities and behaviours with profound life-changing consequences, such as the ability to read and write, or the ability to behave in a socially acceptable way, my personal preference is to address special educational needs as much as possible and only accommodate them as a last resort or as a form of respite from intensive intervention.

Parental choice

As with any of my other opinions, I might be right and I might be wrong. Perhaps the best way to deal with special educational needs really is to accommodate them. Who should make this decision? In the absence of definitive evidence, I would suggest that parents are those who are best placed to make this call. If a parent thinks that a school with low academic and behavioural standards is the best option for their child then perhaps they are in the best position to judge. Alternatively, if a parent feels that a school with high academic and behavioural standards is preferable then perhaps they should also have this choice available. What strikes me as unreasonable is any attempt to try and ban schools with high academic and behavioural standards and so remove this option for parents.

I am pretty sure that there are plenty of parents in England, Australia and across the world who would like the choice of a free place at a school with high expectations, but who feel they do not have this choice. Don’t take it away from those who do.


*For obvious reasons, I do not intend to post links to the tweets in question, but I can supply them privately, on request.

Your role in changing education research


Education research can lack relevance. It often takes a postmodern and slightly silly stance, closely related to what the recent ‘Sokal Squared’ hoaxers refer to as ‘grievance studies’. Not only does such research lack relevance to teachers, there is growing evidence that the whole identity politics project leaves the public at large pretty cold. As an aside, I fear that an adoption of identity politics by mainstream left-of-centre politics could shut them out of power for many years to come. But that is not what this post is about.

There have been a number of attempts to refocus education research on topics and methods that are of more practical significance. Probably the largest of these projects so far are the Education Endowment Foundation in the U.K., its Australian franchise, Evidence for Learning, and similar endeavours in the U.S. such as i3. These projects tend to conduct their own large-scale randomised controlled trials and/or summarise the results of other educational trials using meta-meta-analysis.

There are two main problems. Firstly, the large-scale randomised controlled trials that these organisations conduct are often ‘under-powered’. Because the unit of randomisation is the whole school, you need a lot of schools involved in order to obtain a clear result, and many trials only just reach the minimum number required. Such an environment makes it hard to run trials with two experimental conditions plus a control – the best way of working out which approaches are the most effective.
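To get a feel for why school-level trials need so many schools, here is a rough, illustrative power calculation. It combines the standard two-sample formula with a ‘design effect’ for clustering. The effect size, intra-class correlation and pupils-per-school figures are made-up numbers chosen only to show the scale of the problem, not data from any actual EEF trial.

```python
from math import ceil
from statistics import NormalDist

def pupils_per_arm(d, alpha=0.05, power=0.8):
    """Pupils needed per arm for a simple two-sample comparison (no clustering)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

def schools_per_arm(d, icc, pupils_per_school, alpha=0.05, power=0.8):
    """Inflate the simple answer by the design effect for school-level randomisation."""
    design_effect = 1 + (pupils_per_school - 1) * icc
    pupils = pupils_per_arm(d, alpha, power) * design_effect
    return ceil(pupils / pupils_per_school)

# Illustrative numbers only: a small effect (d = 0.2), an intra-class
# correlation of 0.2 and 25 tested pupils per school.
print(schools_per_arm(0.2, icc=0.2, pupils_per_school=25))  # 92 schools per arm
```

On these assumed numbers, each arm needs on the order of ninety schools – which is why adding a second experimental condition on top of a control so quickly becomes impractical.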

Meta-meta-analysis is no substitute, adding only an illusory rigour to the comparison of teaching methods. It may be possible to do this kind of analysis well in the future but we haven’t managed it so far.

Aware of these difficulties, we may be tempted to look to cognitive science as a more solid foundation for making inferences about teaching methods, because its findings tend to be more robust. The problem – and it’s not really a problem – is that much of the relevant research tends to be conducted with undergraduate psychology students. But what if you could do some cognitive science research yourself, in your own school and in your own classroom?

You can.

I’ve been conducting research for four years now as part of a PhD course. If you don’t have the prerequisites to start a PhD then you can consider an experimental Masters. In Australia you can even convert your experimental Masters into a PhD over time if all is going well.

What does this look like? I recommend small randomised controlled trials that are randomised at the student level. In other words, you randomly allocate students to one of two or three groups. The key to obtaining worthwhile results is to change only one tiny thing between the groups. This is the opposite of trialling a whole package of measures in the style of the Education Endowment Foundation. For instance, in my research I change the order in which two events happen for my two groups of students and attempt to measure the effects on learning. This kind of research helps build the science of learning, piece by verifiable piece.
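As a sketch of what student-level allocation might look like in practice – the function name, seeding approach and group labels are my own invention, not a prescription – you could do something like this:

```python
import random

def allocate(students, conditions=("A", "B"), seed=2018):
    """Shuffle the class once, then deal students to conditions in turn,
    so group sizes differ by at most one."""
    rng = random.Random(seed)  # a fixed seed makes the allocation reproducible
    shuffled = list(students)
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, student in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(student)
    return groups

class_list = [f"Student {n}" for n in range(1, 27)]
groups = allocate(class_list)
print(len(groups["A"]), len(groups["B"]))  # 13 13
```

Recording the seed alongside the results means the allocation can be audited later, which is the kind of detail an ethics committee or supervisor is likely to ask about.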

I recommended small randomised controlled trials be part of the mix when Evidence for Learning was established but, as with everything else about improving the teaching profession, we will probably have to build this ourselves. If you are interested, you need to find a supervisor with expertise in experimental work (if you are such a supervisor then feel free to mention this in the comments) and you need to be aware that everything you do will be subject to rigorous ethics approval processes – you can’t just start running experiments. That’s actually a good thing. As a bonus, you also gain access to academic research papers.

So please consider getting involved. What’s stopping you?

Should we scrap standardised testing?

From 1997 to 2010, I taught in the UK. During this period, my 16-year-old students completed GCSE exams and my 18-year-old students sat A-Levels. In fact, due to the modular nature of these exams at the time, students sat them continuously through a period spanning the ages 15 to 18. My 14-year-old students also completed SAT exams in English, maths and science until these were abolished in 2008, much to my personal frustration. I arrived in Australia at the same time as NAPLAN literacy and numeracy testing was being rolled out. My UK experience meant that the idea of such tests was as familiar to me as it was apparently strange to some of my Australian colleagues.

My teaching career has therefore been one of 20 years of standardised testing. In this time, I have worked in schools that gained high scores and I have worked in schools that gained low scores. I have also worked in schools that have improved their scores and I have run departments that have been subjected to the various tests. This is my view.

Badly designed

I find it odd that people make standardised testing into a pantomime villain. This has led to the kind of reflexive reactions we saw on the recent edition of Q&A.

Tony Jones: Can I have a ‘hard-working teachers’ everybody!

Crowd: [Cheers and whoops]

Tony Jones: Can I have a ‘standardised testing’ people!

Crowd: [Boos and hisses]

(I paraphrase)

Standardised testing is a neutral concept. Some standardised tests are well-designed and some are rubbish. The fact that they are standardised is not a bug but a feature. It means that there is some external standard to compare a school’s results with. For instance, as a head of maths, imagine I find that my students perform at about the state average on number but their scores on statistics are lower than the state average. I can start asking questions about why this is the case: maybe our number programme is strong or our statistics programme is weak? Perhaps we have deliberately under-emphasised statistics in order to get the number work right. If so, is that what we want? I cannot ask these questions if all I have to go on are internally created assessments.

However, in order to give me useful information of this kind, the tests need to be well-designed. A Grade 4 or above reading test that uses a randomly selected topic is likely to be as much a test of general knowledge as it is of reading. And privileged children have an advantage in gaining general knowledge due to dinner table conversations, trips to museums and all the rest of it. That’s why as well as arguing for a knowledge-rich curriculum, I have called for the NAPLAN reading and writing contexts to be set in the previous year’s Australian Curriculum content.

Poor responses to tests

Standardised tests are not, by themselves, a guide as to how to improve. A lot of policymakers seem to have fallen for the idea that teachers and schools know how to teach more effectively, are choosing not to (why?) and that standardised testing would therefore provide the impetus for them to choose these more effective practices. That is simply not the case.

Education is unfortunately awash with bad ideas. Setting aside, for now, those educationalists who are genuinely hostile to any agenda of improving academic performance, those of us who wish to be more effective run a gauntlet. One common idea is that children should be taught reading comprehension strategies and that these will enable children to access any text. This idea contains a grain of truth. Limited training in reading comprehension strategies does improve performance on reading comprehension tests, but such training offers diminishing returns. Extended practice of these strategies provides no additional improvement because reading performance is ultimately limited by general knowledge.

A school that takes time away from science, history and the arts in order to expand a literacy programme that focuses on drilling these strategies is therefore making the wrong call.

Similarly, children should be made familiar with the format of a standardised test before taking it and there is nothing wrong with saying things like, ‘notice that this is a two-mark question so you have to make two distinct points,’ but endlessly drilling and rehearsing exam questions is not going to be as effective as teaching the relevant subject content in a logical and coherent sequence.

And we saw after the introduction of the phonics check in England that some teachers were drilling children in nonsense words. Not only is this a misunderstanding of how the test works, it is highly unlikely to improve performance.

So we should not assume that the responses of teachers and schools to any standardised test will be to reach for more effective practices. The point is that the test will help inform us whether they have been more effective.

Reward and punish

Policymakers are also capable of responding counter-productively to standardised testing. My pay has never depended on standardised test scores and the idea of giving less money to schools that do badly on these tests seems perverse. If anything, these schools need more resources. But ideas like these seem to be out there and often become conflated with arguments about the inherent value of the tests, such as in the G.E.R.M. conspiracy theory.

The G.E.R.M. conspiracy theory and the politics of testing

One narrative that I don’t think we should pay too much attention to is the Global Education Reform Movement conspiracy theory, G.E.R.M. If it sounds like something that the baddies in a comic book might call themselves then you are probably thinking along the right lines. According to this theory, standardised testing is just one part of a sinister global agenda to standardise everything about education in the interests of private companies or something. It sounds like an argument from the political left but…

Who introduced NAPLAN into Australia? Julia Gillard. Not only was she a Labor education minister at the time, she was a member of the ‘left faction’ of the Labor party. In contrast, Pasi Sahlberg, arch-enemy of G.E.R.M., has recently been collaborating with Adrian Piccoli, touring New South Wales and working with Piccoli and the new Gonski Institute. Piccoli is a former Liberal education minister (‘Liberal’ means the opposite in Australia to what it means in America and the Liberals are our mainstream right-wing party, roughly equivalent to the Republicans in the U.S. and the Conservatives in the U.K.). There is nothing wrong with such a collaboration, but it does confound simplistic left-right characterisations of the issue.

The accountability question

Gillard also introduced the MySchool website. This is probably the most contentious component of the NAPLAN programme because it allows access to parents and other members of the public who are then able to compare the results of different schools (although not in a simple league table as is often suggested).

I think this is a question of democracy. If information is available about the punctuality of a publicly-owned railway company then, as a taxpayer, I think I have a right to know. If information is available about the death rate at my local publicly-funded hospital’s accident and emergency department and how this compares with national figures then, as a taxpayer, I think I have a right to know. This is not just my opinion, it is so central to our current understanding of democratic accountability that Australia, the U.K. and the U.S. have all instituted freedom of information laws to give various levels of access to information about public services. If we do not have this information, how do we make informed decisions at the ballot box?

Collecting standardised test information and refusing to share this with stakeholders therefore strikes me as a little authoritarian. However, there are legitimate concerns about exactly what is reported and we could probably make improvements. Do we need to report at the individual school level? What kinds of measures make the most sense?

Focusing on growth

Throughout my experience with standardised testing, I have always focused on growth. I have tended to view this in two ways.

Firstly, I have the crude aim that next year’s results be an improvement on last year’s and I ask what this might involve. Sometimes, cohorts of students will vary over time and sometimes, as was the case with GCSEs in England in the 2000s, grade inflation may make improvement easier to achieve than it should be. However, focusing on improvement has always served me better than focusing on arbitrary targets.

Even when I worked in a school that had the lowest standardised test scores in the local area, I did not pay too much attention to what other schools were scoring. You can learn from other schools if you have a relationship with them, but you won’t learn much by studying their numbers.

Secondly, sophisticated approaches to analysing standardised test results enable a look at the aggregated progress of individual students. NAPLAN and MySchool are able to do this, giving an even clearer picture of how a school may be going in a particular subject area. For instance, this plot of a randomly selected school (not my own) shows how reading has improved from Year 3 to Year 5 and compares this with similar SES schools and schools with similar starting points:

This is not a ranking or a league-table. As a teacher, I think it is useful to have this information as part of the mix, provided we do not place too much emphasis on one single measure.

Let’s not subscribe to simplistic conspiracy theories. Let’s not throw out helpful data as part of some ideological crusade. No, standardised testing alone will not fix education, but it does provide information that I have found useful over the years. Don’t throw that away.

What did we learn from the #QandA teaching special?

Last night’s edition of the Australian current affairs programme, Q&A, was billed as a ‘teaching special’. I think it demonstrated a number of key points about Australian education, at the same time as raising a few questions. It has caused me to reflect on the following:

1. It’s good to have a practising teacher on a panel

Eddie Woo is a maths teacher and media personality who rose to fame through his YouTube channel. Prior to last night’s debate, I wasn’t really sure what Eddie’s position was on the major issues. Disappointingly, he made claims about schools being ‘industrial’ and suggested that curriculum be decoupled from student age in a similar way to the recommendations of the recent Gonski 2.0 review. Woo used the example of the sequence of grades that people who learn a musical instrument pass through and how these are not associated with any particular age group. I think this is a flawed argument for a number of reasons. Firstly, it does not take account of the need to build general knowledge of the world and the fact that the order in which this is done can be pretty flexible. Secondly, it ignores the way that subjects are interlinked, such as through the literacy demand of senior maths and science subjects. Instead, such a stance tends to view curriculum as a discrete set of decontextualised skills; an approach that is particularly unsuited to areas such as reading and writing. Finally, it ignores the social aspects of being the 12-year-old in a class full of seven- and eight-year-olds; an inevitable result of decoupling curriculum from age.

Nevertheless, Woo brought an element of practical sense to the discussion, tempering the more hyperbolic statements made by some in the audience and on the panel. This shows that, even when we are wrong, the practical experience of teachers is essential to any discussion of education.

Yet I wonder if the ABC would have had a teacher on the show if they didn’t have someone with Woo’s profile to call upon.

2. The media has a blind-spot on Finland, as demonstrated by Tony Jones

Tony Jones, the moderator, deferentially and uncritically kept referring to Finnish education in his interactions with Pasi Sahlberg. It doesn’t appear that Jones is aware that Finland’s PISA results – the results that drew attention to the country in the early 2000s – have significantly declined in recent years.

This lack of background research was apparent when Alyssa Meli, a Year 12 student, asked the panel about the pressure of Year 12 exams. Jones clearly wanted to focus specifically on Year 12 because he shut down Jennifer Buckingham when she wanted to discuss NAPLAN. Yet he threw to Sahlberg for the anti-testing counterargument against Australia’s exam system, as if he was completely unaware of the high-stakes and time-consuming matriculation exam that Finnish students sit at the same age.

Later in the programme, Vivian Zhu opened up the possibility of a little critical thinking about Finland with an excellent question:

“Given that Finland has a largely homogenous population in terms of race and religion and less inequality than Australia, do you think that modelling the Australian education system after the Finnish one is really viable in terms of culture and economics?”

Jones threw this to Sahlberg and then quickly changed the question to one that focused on a more positive angle about Finnish teachers having masters degrees. Nobody else had a chance to comment, despite Jennifer Buckingham having written articles on the issue.

I wonder whether the temptation of a simple narrative is too strong for media generalists who have to move swiftly from one subject to another and it makes me wonder about the media treatment of subjects that I know a lot less about.

3. We have conflated the concept of professionalism with that of a lack of accountability

When it comes to teachers, people seem to link professionalism with a lack of accountability. NAPLAN testing is wrong, it is claimed, because it takes away teacher autonomy. This seems an odd argument. Engineers don’t get to autonomously design bridges and surgeons don’t get to autonomously perform operations. Both occupations are professions and both occupations are governed by a strong set of standards and are accountable for measurable results.

I do sympathise with the antipathy towards needless bureaucracy, much of which is driven by flawed ideas and managerialism, but this is a separate issue.

4. How can indigenous STEM education be improved?

I was struck by Cindy Berwick’s comments about indigenous maths education. Berwick talked about running Science, Technology, Engineering and Maths (STEM) programmes for indigenous students in which they learnt maths through culturally relevant activities such as studying the aerodynamics of boomerangs. She also suggested that a lack of cultural awareness meant that NAPLAN assessments effectively discriminate against indigenous youngsters.

I am sure Berwick’s camps are excellent, but if we are going to address an issue such as this then we will need to do so on a daily basis. Textbooks and assessments that are used every day will need to have cultural relevance. Readers of this blog will also be aware that I see motivation and achievement as reciprocal, with a greater part of future motivation resulting from a previous sense of achievement. Indigenous youngsters will need to be taught maths well in order to gain that sense of achievement and this needs to be part of the focus.

The sink school


I once worked in a ‘sink’ school. Let me explain what that means.

We did not achieve great results for children at the time. This resulted in the school being under-subscribed. Parents would try to get their children into other local schools rather than come to ours. The parents who were best at avoiding us were the savvy ones and the wealthy ones who could afford property nearer to other schools. So we drew mainly from nearby social housing schemes.

The fact that we were under-subscribed meant that we would tend to take students who had been excluded from other schools. If we then ended up excluding them, there were few places left for them to go. The school was certainly diverse but it was not the model of comprehensive education that had been promised to British taxpayers in the 1960s because the demographics were heavily skewed to families with low household incomes.

What would you do about such a situation?

One option might be a coercive one. You could prevent schools from excluding altogether and then my sink school would not have had to take these students from other schools. But do you really want that? Even those who think current rates of exclusion are too high would generally accept that there are some circumstances that warrant it. Should a teacher have to go back into a classroom to teach a student who has physically assaulted him or her?

Setting that question aside, once you’ve banned exclusions, you will still have the problem of parents moving houses to escape from schools they perceive to be bad. And banning exclusions might make that situation worse. As a parent, I would do anything to avoid sending my child to a school where violence was a part of daily life and I would feel pretty justified in doing so. If a child had been violent and not been excluded as a result of this then I might conclude that it was a school where violence was accepted.

So perhaps we need to bus children around. Perhaps we need to use a lottery to determine which schools children go to and then force them to go to the one that has been selected for them. It is possible. But many parents might then choose to go private. You might even see a boom in private sector schooling, perhaps of the low-cost, no-frills kind. The sink school would still not be comprehensive.

So you would probably need to ban private schools.

As a policy, this seems an unlikely one to take to voters, but for the sake of argument, let’s imagine you do and you win an election. To summarise, your policy would compel children to go to specific schools that the state has decided they must go to and allow no opt-out from this. Is that what you want?

But there is another way to fix a sink school.

You can start to tackle poor behaviour, not just with sanctions – although they are a necessary backstop – but also with increased resources. That’s what happened at my school. We instituted a whole-school detention system and we used the money invested by the Labour government of the day to staff onsite alternative provision. We employed ‘behaviour improvement workers’ who could work with individual students. Some students had a pass to leave a lesson if they felt they were losing control*. It wasn’t a perfect system by any means. There were still behaviour incidents and we still had to occasionally exclude students, but the behaviour improved a great deal. And exam results went up. And the number of students increased.

It may seem righteous to harass a named school on social media, pontificating that you think the school’s stated approach to behaviour management may put off some parents and arguing that therefore the school is ‘selective’. But if that’s your criterion for selection then my sink school was highly selective. The school you are trying to shame at least has a chance of attracting a genuine cross-section of society because the majority of parents don’t want their children to go to chaotic and violent schools.


*Note that this practical, real-world solution is a mix of the kinds of things behaviour ideologues favour and the kind of things they are totally against. This is not a coincidence.