Shut up, bloggers.

Over at the blog of the British Educational Research Association, Dr Pam Jarvis has been blogging on the topic of “In pursuit of a secure base? Education commentary in times of socio-political uncertainty”. I agree that it doesn’t sound promising, but bear with me.

Jarvis’s thesis is that:

“…when nations experience socio-political uncertainty, the population becomes collectively anxious and the state responds by ‘behav[ing] like the parent of an avoidant child… and tries with increasing state power to quell expressions of discontent’.” [Reference omitted]

She then identifies Trump and Brexit as responses to the 2008 global financial crisis, alongside those in education who ‘seek monolithic control’ and who embrace a ‘quest for certainty’. That doesn’t sound good! Boo! Hiss!

Jarvis then lists three isolated quotes from the journalist Toby Young, and from education bloggers David Didau and Old Andrew. She suggests that these commentators do not understand education like what she does because they ‘lack… discursive engagement with theoretical and empirical evidence.’ Finally, and rather ominously, she refers to the British Educational Research Association’s ethical guidelines on how to interact with peers, the implication being that the three she has mentioned have fallen short in some way.

This is obvious gate-keeping. It draws from a bag of rhetorical tricks that we see increasingly used by those with traditional power in education when they engage with the burgeoning social media debate. There is no obvious connection between a disparate group of people Jarvis happens to disagree with and Trump or Brexit, and so she manufactures one. To utilise a neologism, it is an attempt to smear-out the ‘toxicity’ of one group so as to corrupt our view of another.

The second tactic is what we might describe as the ‘but you haven’t read Milligan (1971)’ fallacy. It is easy and lazy to accuse your opponents of not having read things. It is easy because nobody has read everything and nobody is likely to spend their time reading lots of hogwash they disagree with. If a lack of reading leads people into error then it is a far more devastating blow to patiently explain exactly what that error is. But that is harder than just saying they haven’t read enough.

The Didau piece from which the quote is drawn is actually an extended and nuanced discussion of a live issue. The quote that Jarvis has selected (or cherry-picked) really does not do it justice. Didau is challenging the concept of dyslexia and whether it is a real condition. I know a lot of reading researchers and I would guess the consensus position is that dyslexia is a helpful label because it directs resources to children with reading difficulties. Nonetheless, it is notoriously difficult to define, interacts strongly with teaching practices and comes with the attendant issue of labelling and of inappropriate responses to that labelling. In essence, the post involves Didau exploring and responding to an issue of great uncertainty, just as he did in his seminal book, What if everything you knew about education was wrong? It is therefore eccentric to associate Didau, as Jarvis does, with a ‘quest for certainty’.

The ‘certainty’ part is clearly a straw man. Nobody involved in education trades in certainties and yet it has become common to attack critics of the orthodoxy for holding such a position.

I would find it annoying to be quoted in such a piece, but at least we are starting to see organisations like the British Educational Research Association mounting (fallacious) arguments and showing their true colours. It is best to have this all out in the open so we can see clearly the different positions and engage critically with them. In this case, the position seems to be ‘shut up, bloggers.’ The response should be ‘this is only the beginning.’

Why I’m asking Santa for a watch

I haven’t worn a watch in about three years. Recently, I had cause to dig through some of the clutter in the study and look for the one I used to wear. I hoped to find it and replace the battery, but it was not there. It is now a former watch, lost, destroyed or dwelling somewhere in the twilight of my world.

Recently, Gladys Berejiklian, the premier of New South Wales, announced that mobile phones would be banned in government primary schools from the start of next year. There are two clear reasons for the ban. The first is cognitive – mobile phones are a distraction from whatever task teachers wish students to focus their attention on. For those familiar with the language of Cognitive Load Theory, mobile phones are a source of extraneous, i.e. unnecessary and counterproductive, cognitive load. The second reason is affective – mobile phones can be a source of stress. Real-world bullying can be mitigated by avoiding the bullies, but a child is never out of reach of virtual bullies if they have a phone in hand.

Michael Carr-Gregg, a respected psychologist, ran the review in New South Wales and lent his weight to the new policy. It’s hard to argue with.

Similar bans and restrictions are occurring across the world as education bureaucracies grow aware of the issue and try to turn back the clock – or watch – to a time before iPhones in schools. France banned mobile phones and smart watches from schools back in September and I keep seeing similar bans in individual schools being mentioned on Twitter.

We have reached this watershed because of the results of a community-wide, collective and ongoing analysis of the positive and negative effects. Five years ago, it was plausible to believe that these devices would herald a new era in learning. Children would be able to use them to interact with lesson materials and personalise the learning experience. But it never really happened. Smartphones have some limited value for setting multiple-choice quizzes or for taking photos of improvised board work, but these are hardly revolutionary uses and the same ends can be accomplished by other means.

On the other hand, the negative effects of phones on a generation who no longer talk to each other at lunchtime are clear to anyone who visits a school without restrictions in place. And those are just the more obvious negative effects.

And so my own school has introduced restrictions. And it would hardly be acceptable, in such circumstances, for me to check my phone during lessons in order to tell the time, as has become my habit. And as all teachers know, you cannot rely on the clock in the classroom. It is deceitful. It is not your friend.

So that’s why I’m asking Santa for a watch this year.

Walking quietly away from Gonski 2.0

The Australian government is publicly committed to implementing the recommendations of the recent Gonski 2.0 review of Australian education, so it cannot simply rip it up. However, education minister Dan Tehan has signalled a key shift away from one of the Gonski priorities.

A key Gonski recommendation was, “Give more prominence to the acquisition of the general capabilities e.g. critical and creative thinking, personal and social capability” – so-called ‘soft skills’. In a new speech reported in the Sydney Morning Herald, Tehan essentially contradicts this position. While insisting that soft skills still have a ‘role’, he highlights the comments of Australia’s chief scientist, Alan Finkel, on the importance of disciplinary knowledge.

This is an encouraging sign. The pursuit of generic skills is misguided, as I have pointed out many times on this blog. The key misconception is to see them as general. With the exception perhaps of social skills, they are highly specific to a particular discipline and once you recognise this, you realise that subject disciplines already teach these ‘skills’ as students move towards the expert end of the novice-to-expert continuum. And that is a crucial point. The reason they are at the end of this continuum is because they build upon all the more basic disciplinary knowledge that comes before. That is why there are no magical shortcuts to critical thinking and problem solving.

Two linked issues still worry me about Tehan’s speech. Rhetoric about an ‘overcrowded’ curriculum is accurate if what we imagine to be surplus to requirements is the redundant rehearsal of generic skills. However, we have seen this argument used in the past to strip out important content such as science and history in order to endlessly drill children in things like reading comprehension strategies. Indeed, such asset-stripping could be seen by some as exactly the kind of return to ‘basic skills of literacy and numeracy’ that Tehan is calling for.

One big plus is that I understand that Tehan will also be calling for a review of teaching, examining the attractiveness of the profession and looking to reduce out-of-hours working. Done well, this could have a positive impact on teachers that would, in turn, impact on students’ experiences.

Teacher knowledge matters


In my experience, most people think they know the cause of the seasons, but a lot of them are wrong. Some think the Earth is closer to the Sun in summer, but this cannot be true because otherwise we could not have summer at one point on the Earth at the same time as winter at another. Some think that particular parts of the Earth are closer to the Sun in summer, making them warmer, but this again is incorrect: the Earth’s distance from the Sun is so vast that the small differences in distance created by the Earth’s alignment are negligible.

The actual cause of the seasons involves abstract thinking. In the winter, the tilt of the Earth means that the Sun sits lower in the sky, and so its energy strikes the ground at a shallower angle and is shared over a much larger area of the Earth’s surface than in the summer.
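
As a rough illustration (the angles here are my own, chosen only to show the scale of the effect), the energy landing on each patch of ground falls with the cosine of the Sun’s angle from directly overhead:

```latex
% Flux per unit of ground area for sunlight arriving at zenith angle \theta,
% where S is the flux on a surface that faces the Sun head-on:
\[ I = S\cos\theta \]
% High summer Sun, \theta = 20^\circ:  I \approx 0.94\,S
% Low winter Sun,  \theta = 70^\circ:  I \approx 0.34\,S
```

The same sunlight is spread over nearly three times the ground area in winter, which is why the tilt, and not the distance, does the work.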

Why does this matter? Because highly intelligent adults often have misconceptions such as these ones about the seasons and the only way to get past them is by gaining specific knowledge.

Similarly, the same adults could have misconceptions or lack knowledge about reading instruction or mathematics instruction, despite being otherwise highly intelligent. It therefore seems clear that secondary teachers need a good grasp of the subject they teach and primary teachers, who are often generalists, need a good grounding in all foundational areas.

Yet, when you look at the research, it is possible to conclude that teacher knowledge does not matter. Typically, such research uses additional qualifications as a proxy and compares the effectiveness of teachers who hold, say, a master’s degree with that of teachers who do not.

One such study is instructive. Conducted in North Carolina, it makes use of that state’s extensive data collection and does indeed show that teachers with master’s degrees are no more effective and, in some cases, are even less effective than those without.

But wait a second, this argument was about specific knowledge and we may have fallen into the trap of assuming the existence of general skills and abilities that don’t actually exist. Why should writing essays about neoliberalism or Freire stop you from holding a misconception about the seasons?

And if we look at the North Carolina study in more detail, we can see that the researchers examined a number of factors other than masters degrees. North Carolina also requires teachers to sit licensure tests that assess their knowledge of the curriculum. Higher scores on these tests do correlate with more effective teaching.

So yes, teacher knowledge matters, but only if it is relevant to what they need to teach.

Another flawed Reading Recovery study to add to the pack

Few educational interventions have generated as many studies as Reading Recovery.

Reading Recovery involves giving struggling readers one-to-one tuition. I have no doubt that it has an effect, as any one-to-one intervention would have an effect. Some people claim that no other intervention has as much evidence of effectiveness at scale. That may be true, but that is likely to be because no other intervention has been tested so much at scale.

When I suggest that it has an effect, I need to be careful about what I mean. Most studies compare Reading Recovery to doing nothing and the students in the ‘do nothing’ control group have usually had a diet of so-called ‘balanced literacy’ – the whole-language wolf hiding in Grandma’s bed of phonics. I have written before that we can’t be sure why Reading Recovery works. It could be the proprietary techniques taught to Reading Recovery teachers or it could just be the effect of one-to-one tuition. Indeed, other forms of one-to-one tuition seem to be more effective than Reading Recovery.

It would therefore be really useful if an organisation such as the Education Endowment Foundation used its extensive taxpayer-funded resources to run a three-armed randomised controlled trial testing Reading Recovery against a one-to-one systematic synthetic phonics programme and against a control. My prediction would be that both interventions would outperform the control but that the phonics intervention would outperform Reading Recovery. Unfortunately, the Education Endowment Foundation seem more interested in kooky Philosophy for Children and in trying to prove the evils of ability grouping.

In the meantime, we can expect to see more studies like a new one conducted on behalf of the KPMG Foundation. On the surface, the results of Reading Recovery seem extraordinary. For instance:

“49% of the Reading Recovery group achieved the nationally expected level of qualification for educational progression (5 or more GCSEs at the former A* to C grades, including English and Maths, equivalent to grades 8 to 4 in the current system), compared to a national average of 54% for all pupils in the same year. Only 23% of the comparison group reached this level.”

Although extraordinary, it is plausible that an early reading intervention could have such a profound effect. After all, academic learning relies heavily on reading. It is a foundational skill. Unfortunately, the study does not provide evidence to justify such a conclusion because of the way it was designed. It was not a randomised controlled trial and it was not even a good example of a quasi-experimental study.

Researchers identified 148 struggling readers in schools that did not offer Reading Recovery and 145 struggling readers in schools that did offer it. Teachers then selected just under two thirds of the students (91) in the Reading Recovery schools to receive the intervention. They then compared the results of these 91 students with those of the 148 in the comparison schools. Can you see the problem?

You either need to compare the results of all 145 of the initially identified students with the 148 in the control, or you need to select roughly two thirds of the control using the same criteria with which you selected the Reading Recovery students and compare this cohort with the 91 students who had the intervention.
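
To make the problem concrete, here is a minimal simulation sketch (toy numbers of my own, not the study’s data). It assumes, purely for the sake of argument, that the intervention does nothing at all; if teacher selection is related to reading ability in any way, comparing the selected subgroup with the whole comparison group manufactures an ‘effect’ out of thin air:

```python
import random

random.seed(42)

# Toy model: 145 struggling readers in intervention schools and 148 in
# comparison schools, all drawn from the same underlying distribution of
# reading ability. The intervention is assumed to do NOTHING.
intervention_pool = [random.gauss(0, 1) for _ in range(145)]
comparison_group = [random.gauss(0, 1) for _ in range(148)]

# Teachers select 91 students. If that selection correlates with ability
# at all (here, hypothetically, the 91 strongest readers are chosen),
# the selected subgroup is no longer comparable to the full control.
selected = sorted(intervention_pool, reverse=True)[:91]

def mean(scores):
    return sum(scores) / len(scores)

# The study's comparison: the selected 91 versus all 148 controls.
print(f"Spurious effect: {mean(selected) - mean(comparison_group):+.2f}")

# A fair comparison: all 145 identified students versus all 148 controls.
print(f"True effect:     {mean(intervention_pool) - mean(comparison_group):+.2f}")
```

Run it and the first number comes out well above zero even though, by construction, the intervention did nothing; selection in the opposite direction would bias the result the other way. Either way, the comparison is broken.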

The authors claim that the 91 students who were selected were those who were the most in need of the intervention:

“It was not possible to offer Reading Recovery to all the children in Reading Recovery schools. Of the 145 children in Reading Recovery schools, 91 received Reading Recovery (though not all were successfully discontinued), 54 did not. The selection of children to receive Reading Recovery is made by the teacher and teacher leader, informed by children’s performance on the assessments and on age (the lowest achieving children are prioritised, and older children often taken first).”

If this is true then including the additional 54 students would likely only improve the GCSE results for this cohort. On the other hand, if the addition of these extra 54 students washed out any gains, we might conclude that Reading Recovery has no net effect. After all, there is nothing wrong with an intervention that only works for a proportion of the students, but we need to know whether this is the case or whether it is actually a zero-sum game. And there is reason to believe that it might be: some suggest, for instance, that contrary to the claims made about selection in this report, it is the more able students who often end up in a Reading Recovery intervention.

None of this speculation would be relevant if we could see data for all 145 students rather than 91, but we cannot.

A fundamental principle of science is that we compare like with like. There is a hint in the data that the 91 Reading Recovery students were also more affluent than the 148 in the comparison group: only 43% qualified for free school meals, compared with 62% in the comparison group. The researchers tried to control for this by running separate statistical tests on the free-school-meal and non-free-school-meal populations of each sample, i.e. comparing free-school-meal Reading Recovery students with free-school-meal control students. I’m not sure that quite solves the problem of the mismatched groups, because wealth is a continuous variable whereas eligibility for free school meals is binary (i.e. the free-school-meal students in the comparison group could still be less affluent, on average, than the free-school-meal students in the Reading Recovery group). There were also slightly more boys in the control group (65% versus 60%). Clearly, socioeconomic status and gender impact on reading outcomes, and so they may have been factors here. These problems would have been avoided by randomisation.
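
That worry can be sketched with another toy simulation (all numbers invented): even when we compare only within the free-school-meal category, the continuous wealth variable underneath the binary flag can still differ between the groups:

```python
import random

random.seed(1)

# Invented toy model: wealth is continuous, while free-school-meal (FSM)
# eligibility is just a binary cut of it (eligible below some threshold).
def mean_wealth_of_fsm_students(group_mean_wealth, n=10000, threshold=0.0):
    wealth = [random.gauss(group_mean_wealth, 1) for _ in range(n)]
    fsm_students = [w for w in wealth if w < threshold]
    return sum(fsm_students) / len(fsm_students)

# A slightly more affluent sample (like the Reading Recovery group, 43% FSM)
# versus a less affluent one (like the comparison group, 62% FSM):
print(f"FSM students in the more affluent group: {mean_wealth_of_fsm_students(+0.3):.2f}")
print(f"FSM students in the poorer group:        {mean_wealth_of_fsm_students(-0.3):.2f}")

# Both subgroups carry the same 'FSM' label, yet their average underlying
# wealth differs: matching on the binary flag has not fully controlled for
# the continuous variable.
```

The exact numbers are meaningless; the point is that a binary stratification can leave residual differences that only randomisation, or a continuous measure of disadvantage, would be expected to remove.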

There is also some interesting data on special educational needs that is presented as part of the results of the study. For instance, at age 14, “There were significantly fewer Reading Recovery pupils with a SEN status (35%) than comparison group pupils (52%).” We are invited to believe that this was an effect of the Reading Recovery intervention, but it may actually represent an underlying difference between the populations in the intervention and control groups. Many special educational needs take time to be identified and these may well have been present and unidentified or latent at the time of the initial allocation of students to groups.

In short, this new study demonstrates nothing much, even if we are inclined to believe that Reading Recovery has some effect.

The reason it is necessary to critique studies of this kind is that there are so many of them. As they pile up, commentators make statements to the effect that no other reading intervention has generated such a wealth of positive evidence, and the individual studies get buried behind Hattie- or Education Endowment Foundation-style ‘effect sizes’ that teachers and school leaders take as evidence of effectiveness.

But it is not evidence. It is a house of cards.

If at first you don’t succeed, punt another £850k on it

All I knew about the next session of the interview was that it was called “a discussion with the headteacher on a topic of his choosing”. It was my first or second interview for the job of deputy headteacher (vice principal) in London and so I was not sure whether this was normal.

The headteacher was sat at a desk and I was invited to sit opposite him. A number of school governors observed. The headteacher asked me what I would do if a head of subject requested to use ‘ability grouping’ i.e. place students into different maths or science or history etc. classes based upon their prior level of attainment. I gave a pragmatic answer. At that time, I was not well versed in education research, but I did know that the research on ability grouping was inconclusive: Whether you group by ability or not, it doesn’t make much difference. I suggested that I would probe and question to understand the head of subject’s position and ensure that it had been thought through, but I would ultimately be guided by their preference as the subject expert.

This was the wrong answer. The headteacher replied with something of an ideological rant on the evils of ability grouping. I held my pragmatic line.

At the end of the session, I sought out the chair of governors, withdrew from the interview and went back to school to teach my classes.

The evidence available then is much as it is now. Ability grouping versus mixed ability teaching really doesn’t seem to make much difference. If anything, there may be small gains for the most able students and small losses for the least able. If so, never has such an inconsequential position been held with such furious passion by so many.

However, this evidence is not exactly gold standard and is the subject of debate. Most of the evidence has been correlational or quasi-experimental, with very little coming from randomised controlled trials. That makes it hard to separate out the factors. For instance, in the high-profile Kulik and Kulik (1982) meta-analysis, only 13 out of the 51 studies used random assignment. In the wild, it is also possible that the effect of ability grouping could be complicated by practices that might lead, for example, to lower ability groups being assigned a disproportionate number of new and inexperienced teachers. That is why I welcomed the initiation, in 2014, of a large-scale project to evaluate the effect of ability grouping using randomised controlled trials. ‘Best Practice in Grouping Students’ was headed by Professor Becky Francis of the UCL Institute of Education, funded by the UK’s Education Endowment Foundation, and aimed to test the effectiveness of the best possible versions of ability grouping and mixed ability teaching.

My enthusiasm dimmed when I realised that mixed ability teaching and ability grouping were not going to be compared directly with each other. Instead, the ‘best’ versions of each were to be compared with a control.

My enthusiasm was finally extinguished when, prior to the release of the data on effectiveness, the research team published one of those ubiquitous French-philosopher-inspired papers labelling ability grouping as ‘symbolic violence’. How could any results now be regarded as impartial?

When the results finally did arrive, they consisted of two nulls: There was no effect of the ‘best’ version of ability grouping compared to the control and no effect of the ‘best’ version of mixed ability grouping compared to the control. So we are back to square one, provided you accept the data. The total cost of the project was £1,184,349.

In a truly extraordinary move, the Education Endowment Foundation have committed £850,000 to having another go. Becky Francis is still involved, although she is not the project lead, and this time it looks like they will eschew randomised controlled trials in favour of a quasi-experimental study, making this an odd project for the Education Endowment Foundation given its commitment to randomised controlled trials.

This is a reversal of the normal situation. Usually, a quasi-experimental design of this kind might be conducted as a pilot study prior to going to a randomised controlled trial. It would be small-scale and presumably cost far less than £850,000. To progress from a randomised controlled trial to a quasi-experimental study seems eccentric. Mind you, the weaker trial design is perhaps more likely to lead to a result one way or the other.

I am wondering whether, this time, the result will favour mixed ability teaching. What are the chances?