Project-based school improvement

To improve a school, it is necessary to focus on one or two things and then relentlessly go after them for about four or five years. That seems to work. However, it doesn’t sit well with the way that most schools operate.

One reason that we are at the mercy of quacks and their quick-fix fads is that these fads serve a pressing need within schools. This is not the need for school improvement but rather for personal advancement. The ideal career path for someone who wishes to be a school leader is to cycle through a number of schools, leading a project at each.

A project is ideal for discussion at interview because it is the kind of object that people can see all the way around. As an interviewee, all you need to do is gently orient people towards the needs that the project was intended to meet, describe what it involved and then claim success. Nobody can prove you wrong and you can position yourself as someone who gets things done: an implementer who sees stuff through.

In my experience, schools are overburdened with such projects. Some are new and shiny. Others are already in the recycling bin. A few sit on deckchairs in the twilight like glimpsed ghosts, goading teachers to ignore them.

And each project comes with a sunk cost: the money spent on training, the ego paid out in evangelising. So they don’t go away easily. They are like tar in the curtains.

Project-based school improvement is a vicious system that funnels money and resources away from good, simple ideas and towards the pockets of a few waffle-mongers.

There has to be a better way than this. Teachers must be able to gain credit as leaders for saying that they joined a school, inherited a plan, worked that plan daily and left before it was even close to being finished. Because that’s what real school improvement looks like and that is exactly the kind of apprenticeship you need to serve in order to become good at it.

Prove me wrong

Galaxies are big and far away. Two consequences of this are that you cannot rotate them and you cannot walk around them to look at them from a different angle. As part of my degree I did a literature review on a certain class of galaxies known as ‘BL Lacertae Objects’. These are highly luminous and scientists think this may be because they are spewing out a jet of radiation that aligns with our line of sight from Earth. This would mean that there are other galaxies out there that have these jets but that don’t appear to us as BL Lacertae Objects.

In my literature review, I looked into the possibility that BL Lacertae Objects and Fanaroff-Riley Type 1 galaxies are the same kind of object viewed from different angles, with the latter being the parent population. Scientists try to address this question by counting the number of each, modelling the jets and seeing whether the two are consistent.
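
The counting logic rests on simple geometry: if jets are two-sided and randomly oriented, the fraction of sources whose jet axis lies within a critical angle θ of our line of sight is 1 − cos θ. A minimal sketch in Python, where the 10° critical angle is purely illustrative rather than a value from the literature:

```python
import math

def beamed_fraction(theta_c_deg: float) -> float:
    """Fraction of randomly oriented, two-sided jets pointing to
    within theta_c of our line of sight: 1 - cos(theta_c)."""
    return 1 - math.cos(math.radians(theta_c_deg))

theta_c = 10.0  # hypothetical critical angle, for illustration only
f = beamed_fraction(theta_c)
print(f"Aligned fraction: {f:.3%}")        # ~1.5%
print(f"Parents per BL Lac: {1 / f:.0f}")  # ~66 misaligned sources each
```

If the observed ratio of Fanaroff-Riley Type 1 galaxies to BL Lacertae Objects roughly matches a ratio computed this way, the parent-population hypothesis survives the test.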

This is science but it is not an experiment. In my recent post on the use of science in education, I wrote about the kinds of experiments I am conducting as part of my PhD. But the scientific method is a disciplined form of inquiry and experiments are just one form that such an inquiry may take.

Pinning down exactly what the scientific method involves can be hard. Some would want to include the process of peer review. I would wish to stress that, in formulating a scientific hypothesis, scientists draw on prior knowledge and look for mechanisms that are plausible, consistent with that prior knowledge and no more complicated than they need to be. However, I would not include that in my definition.

Instead, I think that the core of the scientific method involves the testing of falsifiable hypotheses.

This idea of falsifiability is closely associated with the philosopher Karl Popper and it is the key way of differentiating between science and non-science. It is particularly useful in providing warning signs while developing a new scientific theory. For instance, Cognitive Load Theory (CLT) has veered into the unfalsifiable with the concept of ‘germane’ load. I would argue that this issue has now been addressed but critics of CLT can justifiably point to it as a serious concern.

The field of education has a habit of generating unfalsifiable ideas. If I were to encourage teachers to ask one question of themselves and of others it would be, “How would we know if this idea was wrong?” If you can answer this question then you have a starting point for evaluating the evidence. If you can’t then we have something that is ‘not even wrong’. At least disproved hypotheses serve the useful purpose of delineating truth and falsehood. Unfalsifiable assertions serve very little purpose at all and waste too much of our time.


Better than science

It is often argued that science is no good for analysing educational practices. The use of science is dismissed as ‘positivism’ or perhaps ‘scientism’. The claim is that human relationships are really complicated and so cannot be subjected to the same analytical techniques as atoms and molecules. We cannot possibly know how any given individual will react to a particular approach and so the determinism of science is profoundly flawed. Some have even argued that such complexities mean that there is no such thing as a teaching method.

Although I accept the limitations of science – we cannot use it to decide what is moral – I am sceptical about the idea that science has little to offer education. It is similar to claims that people used to make about medicine. The whole point of using a statistical approach is to tease out underlying mechanisms. Statistics take account of the fact that students are not identical. True, I can never claim that if I use a specific technique with a particular student then I will obtain a certain result. However, I can make claims about the likely effect across a large number of students and I can make generalised claims about the relative effectiveness of technique A compared to technique B.
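
To make the last point concrete, here is a toy simulation; the score distributions, group sizes and the five-point effect are invented for the sake of the sketch (scipy is used only for a standard t-test):

```python
import random
from statistics import mean
from scipy import stats  # standard two-sample t-test

random.seed(1)

# Individual students vary widely (sd = 15), but 'technique B'
# shifts the group average up by five points.
technique_a = [random.gauss(60, 15) for _ in range(500)]
technique_b = [random.gauss(65, 15) for _ in range(500)]

# Many individual comparisons still favour technique A...
print(sum(a > b for a, b in zip(technique_a, technique_b)), "of 500 pairs favour A")

# ...yet the difference between the group means shows up clearly.
t_stat, p_value = stats.ttest_ind(technique_a, technique_b)
print(f"mean A = {mean(technique_a):.1f}, mean B = {mean(technique_b):.1f}, p = {p_value:.2g}")
```

Roughly two in five individual pairings buck the trend, yet the average difference between the techniques is unmistakable at this sample size. That is exactly what statistics is for.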

Yet I would accept that problems arise. There is the replication crisis in the social sciences more generally, where different groups of researchers cannot reproduce the findings of an original study. This may relate to ‘p-hacking’ – the tendency to analyse and reanalyse a data set until something turns up that appears to be statistically significant. Some things will appear to be significant just by chance, so perform enough analyses and you can manufacture ‘significant’ results out of nothing. We could point to these issues and argue for better quality research or we could use them to bolster a claim against science.
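
A toy simulation shows how cheaply such results can be produced; all of the numbers below are invented for the sketch, and the only assumption is a conventional p < 0.05 threshold:

```python
import random
from scipy import stats

random.seed(0)

# Both 'groups' are drawn from the same distribution: any 'significant'
# difference is a false positive produced by repeated analysis.
false_positives = 0
for _ in range(100):
    group_1 = [random.gauss(0, 1) for _ in range(30)]
    group_2 = [random.gauss(0, 1) for _ in range(30)]
    _, p = stats.ttest_ind(group_1, group_2)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} 'significant' results from 100 analyses of pure noise")
# Expect roughly five: the 5% false-positive rate, multiplied up.
```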

So let us conduct a thought-experiment and assume the anti-science case. Let’s accept the argument that science is simply the wrong tool for examining education and think about what this would imply. I think there are two logically consistent positions that someone could take if this is what they believe.

We cannot make any causal claims about different educational approaches

We might suggest that, essentially, nothing can be known. We may be happy to toss out the scientific evidence for explicit forms of instruction but we would have to do the same with the evidence for collaborative learning. We would not even be able to make claims that there is ‘no best way’ to teach because that would imply knowledge of the relative effectiveness of different educational approaches.

This would be a hard path to follow because people make causal claims about teaching all of the time, including those who eschew science. It is almost impossible to talk about education without doing so. A mentor might suggest that a student teacher should ask more ‘higher order questions’. Why? On what basis?

Adopting this stance would require us to accept that anything goes, apart perhaps from methods barred on ethical grounds. Even then, the ethics are not straightforward. Some techniques, such as physical punishment, cause obvious harm. But we could also argue that approaches that lead to less learning, or that waste time and resources, are unethical, and we would have no way of judging this.

We have something better than science

The other logical stance is to assert that there is some process superior to science that we can use to assess causal claims in education.

There are two main candidates that appear in the literature and are often intertwined. The first we might paraphrase as, “A great philosopher once wrote…” This is the practice of taking the writings of (usually French) philosophers and borrowing from them. We read phrases such as, “Using Bourdieu’s concept of habitus…” and so on.

I am not sure why I or anyone else should accept an argument from authority. How do we know that these guys are right? What should we do when different researchers’ exegesis of their works leads to different conclusions? It all seems a bit scriptural to me.

The second approach is the use of qualitative studies. Such studies have descriptive value in fleshing out what different practices may look like in the classroom, but they are weaker than science at teasing out causes because they are subjective and therefore prone to the myriad biases that plague human thought. For instance, if we are favourably disposed to something then we are likely to accentuate the positives and deemphasise the negatives. In contrast to science, which has the potential to determine whether effects transfer to different contexts, a subjective description of a particular classroom is simply a subjective description of a particular classroom.

Such descriptions represent a sophisticated version of the personal testimonial. Many teachers are utterly convinced of the effectiveness of a method through their own experience of employing it. Yet testimonials are the hallmark of quack science and for good reason. They are not systematic and are affected by various biases such as the sunk cost fallacy – you don’t want to think that something in which you’ve invested 20 years of your career is a load of old rubbish – and regression to the mean.

Sticking with science

For my part, exploring these alternatives makes me want to stick with science. I don’t mind being called names for doing so because it seems like the best bet. If someone can develop a process that is better than science for establishing causal relationships in the social sciences then I am happy to change my position. Simply highlighting science’s flaws is not enough. I want to know that you have something better.

It also strikes me as a dodgy argument to highlight the flaws in science and suggest that nothing can really be known, only to then turn around and claim that adoption of a particular pedagogy will lead to greater motivation or deeper learning.

How do you know that?


This TEDx talk about maths teaching is pretty bad

What is it with fake Einstein quotes? I recently read an amusing blog where the author set out a quintessentially progressive argument for abandoning traditional exams, setting kids project work and focusing on generic skills such as problem-solving over knowledge acquisition. At the same time, he argued that the debate between traditionalism and progressivism is a pointless false dichotomy because all good teachers adopt a range of ‘teaching strategies’. How did he open his piece? With a fake Einstein quote about fish climbing trees.

Similarly, in the TEDx talk that I am about to discuss, we are treated to a fake Einstein quote about play being the highest form of research. The talk is by Dan Finkel and the fake quote is not actually the worst thing about it. It is called ‘Five Principles of Extraordinary Maths Teaching’ – a title worthy of the Extraordinary Learning Foundation™. Watch it for yourself.

To me, it is eerily reminiscent of Dan Meyer’s TED talk and perhaps Meyer is due a credit. The difference is that Meyer’s talk has more substance whereas Finkel focuses on the deep and meaningful presentation. There is also a six-year gap between the two talks, suggesting that these ideas are an enduring theme in commentary about U.S. maths education. This is worrying because neither speaker feels the need to cite any supporting evidence for the teaching approach that they both broadly suggest and that can be summarised by Finkel’s five principles:

  1. Start from questions
  2. Students need time to struggle
  3. You are not the answer key
  4. Say yes to your students’ ideas
  5. Play!

We have known since the 1980s that asking students to struggle at solving problems with little teacher guidance is a terrible way of teaching maths and yet this is exactly what Finkel proposes. He even suggests ways to avoid answering students’ questions and commends his approach as one that works for teachers who don’t know the answers themselves.

An early experiment conducted by John Sweller helps explain the surprising ineffectiveness of problem solving as a way of learning maths. Students were asked to get from a starting number to a goal number in a series of steps where they could either multiply by three or add 29. Although many of the students could complete the task, they failed to spot the pattern: for each solution, it was necessary simply to alternate the two steps. Why? The process of problem solving was so demanding on working memory that there was little left over to spot patterns and learn anything new.
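
A small search sketch shows the structure of such a task. The start and goal numbers below are my own, chosen so that, as in Sweller’s problems, the only solution alternates the two moves:

```python
from collections import deque

def solve(start: int, goal: int, max_moves: int = 6):
    """Breadth-first search over the two allowed moves, returning the
    shortest sequence of moves that reaches the goal."""
    queue = deque([(start, [])])
    while queue:
        value, moves = queue.popleft()
        if value == goal:
            return moves
        if len(moves) < max_moves and value < goal:  # both moves only increase value
            queue.append((value * 3, moves + ["x3"]))
            queue.append((value + 29, moves + ["+29"]))
    return None

# 15 -> 45 -> 74 -> 222 -> 251: multiply, add, multiply, add.
print(solve(15, 251))  # ['x3', '+29', 'x3', '+29']
```

A learner absorbed in this kind of means-ends search can solve problem after problem without ever stepping back to notice the alternation, which is precisely Sweller’s point.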

Finkel’s example problem is even worse. In it, the numbers 1-60 are colour coded. The students are supposed to figure out that the coding relates to the prime factors of each number. Yet, in order to do this, students would already need to know the factors of a good proportion of these numbers. If they know these factors then they might figure out the coding but what exactly have they learnt?
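
To be concrete about what the coding encodes, here is a sketch; the palette is my invention and the details of Finkel’s actual scheme may differ:

```python
def prime_factors(n: int) -> list[int]:
    """Prime factorisation by trial division, e.g. 12 -> [2, 2, 3]."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

# Hypothetical palette: one colour per small prime.
colours = {2: "orange", 3: "green", 5: "blue", 7: "purple"}

for n in range(2, 61):
    coding = [colours.get(p, "grey") for p in prime_factors(n)]
    if n <= 10:  # print the first few as a sample
        print(n, coding)
```

Reading the colours back into factors requires exactly the knowledge the task is supposed to develop, which is the circularity described above.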

This illustrates why problem-based learning is so inequitable. It selects against those with low prior knowledge and in favour of those with high prior knowledge. Students who already know their factors may gain some practice and perhaps consolidate knowledge of a few more factors from this exercise, although it will not be an optimal way to do this. Those who don’t know their factors will be confused by the problem and will be faced with a teacher who insists that he or she is ‘not the answer key’.

They most certainly will not be developing a generic skill of ‘problem solving’ because such a skill does not exist.

I am sure supporters of Finkel’s approach would suggest that I have missed the point here. It is not the actual maths that matters, they might claim, but the process of thinking mathematically. And Finkel’s methods are surely far more motivating than the traditional approach where children in linen gowns write in chalk on slates and have to vomit up rote, disconnected facts for fear of the strap.

I am not convinced by this. What is ‘thinking mathematically’ and how is it different from being plain good at maths? We know how to achieve the latter and it involves a lot of hard work. Even if we accept that thinking mathematically is more than the sum of its parts, students still need to know all of those parts – as in the factors example – and the evidence is clear that explicit forms of instruction are the best way to achieve this.

And the argument about motivation is simply the wrong way around. It places motivation before achievement rather than achievement before motivation. The evidence suggests that self-efficacy is key to motivation. Self-efficacy is the belief that you are able to tackle a given task. We build self-efficacy in maths by making students better at maths. There is no evidence that I am aware of that extended periods of struggle add to motivation. In fact, it is common sense that sustained struggle is demotivating. Instead, we need to give students the feeling of success. This will sustain them when, later in their training, they tackle difficult and complex problems.

I understand that this is not how some people would like the world to be. It does not fit the dominant learning-by-doing ideology. But sometimes we need to sacrifice our outdated beliefs if we want to make progress rather than make a record of them in TED talks.


Exclusion is neither bad nor good

Ten or so years ago, I was assistant headteacher at a high school in London. As part of my role, I line-managed two heads of year and this meant that I took on some of the most difficult discipline issues in these year groups.

I had worked with one student for some time. He had a challenging home background and was disruptive. I had taken him to see the headteacher more than once. I had liaised with our school’s behaviour improvement workers about a suitable program. I had investigated when he had tracked down and threatened a peer in the corridor. Then, one day, as the students were lining up outside the hall for an assembly, he took the needle from a pair of compasses and stabbed it into the legs of three students. This happened directly in front of me and I saw him do it. I had to physically restrain him.

The student had been temporarily excluded before and, with a heavy heart, I suggested to our headteacher that he be permanently excluded. I then went home and stayed up all night preparing the paperwork; one flaw in it and the exclusion would be overturned on appeal. Once a headteacher has made the grave decision to exclude, the worst possible outcome is for that decision to be overturned.

I tell this story to make what, to me, is an obvious point. I did not recommend exclusion because I thought it would be a good thing for the student. It was a decision taken gravely because we knew this student would be likely to fare worse without the support of our community and its resources. The exclusion was in the interests of the other students: the threatened, the stabbed.

I have already mentioned the resources that we had available such as the behaviour improvement workers. These were a team of three or four led by a psychologist. They had their own area of the school where they could withdraw students from classes in order to work with them, typically on issues such as anger management. A limited number of students had a pass that allowed them to leave lessons and come to this area. The behaviour improvement workers could also support students in lessons.

We also had an area of the school set up for ‘inclusions’. Students would arrive and leave at a different time to the rest of the school and work under the supervision of a member of staff. I can already imagine people snorting, ‘but that’s not inclusion!’ I know that. ‘Inclusions’ were given this name because they were used as an alternative to temporary exclusion. For many students, it was a far worse punishment.

These measures were expensive. We were part of a project that attracted additional funds under the then Labour government. Initially, as part of this project, we were effectively banned from excluding any student. This ban was later relaxed but there remained more hurdles to exclusion than in other schools.

I am not a fan of looking at headline exclusion rates and making inferences about whether they are good or bad. A school with poor systems may allow far too many students to spin out of control to the point that they are excluded. However, a new principal turning around a failing school or a school trying to deal with a gang issue may see a similar spike in exclusions. It’s the wrong level of analysis.

Of course, there are hardened ideologues who would assert that there should be no exclusions at all. They might point to the negative effects of exclusion on the excluded. These arguments fail to place exclusion in the context of the interests of the whole school community.

There is no experimental evidence that I am aware of about the effects of different exclusion practices on schools. I don’t think there could be. So all we can do is look at epidemiological studies.

I was therefore interested to find this analysis of suspension practices in New York City via a Robert Pondiscio article.

There have been two recent reforms to suspensions in New York. The first, under mayor Bloomberg, stopped teachers issuing suspensions for first-time, low-level offences. Quite right too. Overall school climate – as assessed by a survey that New York City regularly issues to teachers and students – seems to have remained stable under this reform.

The second reform, under mayor de Blasio, led to a similar reduction in suspensions. This reform toughened up the suspension process by requiring principals to seek permission from district administrators in order to suspend a student. The introduction of this reform correlates with a significant decline in school climate across the district.

We can draw nothing definitive from this data. There might have been another factor that led to a decline in school climate. But I think that this evidence should at least make us pause before we introduce policies aimed at eliminating exclusions.

Exclusion is not a good thing to be applauded as a sign of toughness. It is not a bad thing to be dismissed as a signal that teachers and schools don’t care. It is, instead, a necessary measure to take in order to protect a school community when all else fails.


What can we learn from Ontario?

Since the advent of PISA, Canada – and the Canadian province of Ontario in particular – has been held up as something of an exemplary education system. This is reasonable enough because Ontario has consistently performed above the average for OECD countries. On the PISA standardised scale, which is intended to represent equivalent levels of performance over time, the OECD average hovers at or just below 500. The following chart maps Ontario’s performance since the first PISA assessments in 2000:

It would be easy to conclude that states that are not performing as well as Ontario should try to copy what Ontario is doing. There have certainly been efforts to study the Ontario system and disseminate the knowledge gained through this process. However, I think we need to be careful in making inferences in this way.

Firstly, mean PISA score differences between countries and states do not simply depend upon the quality of the education system. Demographics and levels of wealth will play a large part. So will cultural effects such as the value placed upon certain subjects and the amount of out-of-school tuition that takes place. These vary widely between different states.

Perhaps of more interest than a direct comparison between countries and states is the trend within any one country or state. Although demographics and cultures may change over time, they are likely to be far more stable within a single region than they are between regions. If a state is improving or declining then that might tell us something.

If we look at the Ontario data then it would be hard to conclude that it is improving. Performance seems to have peaked around 2006 and, since 2003, there has been a significant decline in maths performance. There has been a debate in Ontario about its maths curriculum and some have linked its embrace of constructivist teaching approaches to this decline. You cannot prove a cause with a correlation but in this case it strikes me as highly suggestive.

It is also worth noting the long lag between policy changes and an effect on PISA scores. PISA items are highly reading-intensive, yet reading is taught in the early years of school whereas PISA assessments take place at age 15: a student tested in 2006 would have started learning to read in the late 1990s. So if we are keen to look at which policies might be most associated with Ontario’s peak year of 2006 then we would certainly need to include an examination of policies enacted in the late 1990s. In contrast, current initiatives and trends would tell us little.


ResearchED Melbourne 2017 sparks witch hunt

I have been loosely involved in researchED Australia over the last couple of years. For the first event in Sydney in 2015, I suggested a couple of speakers. I was the one who recommended that Kevin Donnelly be invited to take part in a panel discussion. At that time, he and Ken Wiltshire had just completed a review of the Australian Curriculum commissioned by the federal government, so he seemed like a perfect fit for such a conference: what research evidence had informed their review? Others didn’t see it that way. Donnelly is a noted social conservative and so, rather than come along and challenge his views, a number of people stated that they would boycott the event.

Last year the storm was more muted and it mainly just involved people expressing surprise and annoyance that they had not been invited to speak. This year has seen the temperature rise again with a group of Australian academics fanning a full-scale Twitter witch hunt.

In the promotional material, someone spotted that one of the things researchED looks at is ‘the lies you may have been told during training’. ‘Lie’ is an emotive word and ‘myth’ would probably have been a better choice in this context. However, it is clear that this statement does not claim that all trainee teachers are told untruths and it does not suggest who might be telling them. In my experience, trainees are just as likely to be told dodgy things by other trainee teachers and by their placement schools as they are at university.

Nevertheless, this clause was seized upon and interpreted as a statement that university lecturers lie to preservice teachers:

The Australian College of Educators (ACE) was copied in to this tweet – they are partnering with researchED for 2017 and so the intention was presumably to exert some pressure.

This pattern continued with the Australian Council of Deans of Education being tagged in a subsequent tweet:

One academic decided to send the offending clause to her own Dean and tweet about this:

The outrage continued with various people mounting their high horses and demanding an apology while a number of rather bemused bystanders asked what the fuss was about.

Eventually, events took a surreal turn. I wrote a blog post that was totally unrelated to researchED about a new Brookings report on preschool education. This was then called ‘toxic’ and a couple of researchED presenters and a think-tank were copied in with the following statement:

Again, I can only assume that the intention of this was to apply pressure.

So what is this all about? Why all the fuss? Why does researchED present such a threat to this group? After all, it is only a conference and it often hosts discussions that thrash out different opinions. I don’t really know what is at the root of this animosity and could only speculate.

More surprising, perhaps, is that one of the antagonists spoke at last year’s event. Since then, she has expressed concern that neither Tom nor I attended her talk. This can be a difficult issue because researchED events have a number of talks scheduled simultaneously and so it is not possible to see all of them.

Anyway, I suppose this fuss must mean something. ResearchED is an edgy event – an event that some people clearly would prefer you not to attend. So why not come along and find out why? One day of the annual ACE conference is given over to rED – the 3rd July. There is also an event at Brighton Grammar on the 1st July headlined by John Hattie. Read about it here. It would be great to see you there. I’ll be the one with the broomstick and the black cat.