Three tips for developing digital learners

We all want to develop digital learners, right? Over the last twenty years, the explosion in online content and the ability to quickly find the answers to key questions has been nothing short of revolutionary. As teachers, we seek to prepare our students to make the most of these advances. But what is the best way to do this?

One answer is to ditch elements of the traditional curriculum. If you can look up facts quickly, why commit them to memory? Instead, we should ensure that students gain plenty of experience of using digital resources so that they develop the skills of digital literacy. Some have even argued that we should change the way we assess students and perhaps allow them to access the internet or to collaborate in exams. Yet these ideas may be misconceived.

1. Grow those crystals

A new piece of research sheds some light on the issue. In a pair of studies, a group of researchers from Germany examined how successful different people were at finding information on the internet – a measure of digital literacy. For instance, participants were given a list of symptoms and asked to identify the disease. However, the researchers also measured a number of other characteristics of the participants, so they were able to see how well these other attributes correlated with digital literacy. This kind of research cannot definitively identify whether one thing causes another, but the findings are consistent with a large body of work on reading comprehension.

Two of the attributes that they measured were fluid and crystallised intelligence. Participants who had higher levels of these kinds of intelligence scored better on digital literacy. Unfortunately, fluid intelligence is essentially raw processing power and it turns out that there’s not much that we can do to improve this, despite the claims made by advocates of brain training.

On the other hand, there is loads that we can do to increase crystallised intelligence and teachers are well-placed to do this. Crystallised intelligence is simply knowledge of the world. In the study, the researchers used a test of crystallised intelligence that asked participants general knowledge questions such as, “What is the definition of gross national product (GNP)?” There were 28 science questions and 36 from across the humanities and social studies.

Given that this is a correlation, we need a plausible theory for why crystallised intelligence might help with digital literacy; otherwise, both might be caused by something else, such as fluid intelligence. Again, we can return to research on reading comprehension. Greater world knowledge enables greater reading comprehension because it allows us to build mental models and understand vocabulary. In other words, crystallised intelligence enables us to make sense of what we find when we search the web and helps us relate it to what we already know.
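If you want to see what ‘controlling for fluid intelligence’ looks like in practice, here is a minimal sketch of a partial correlation check. It is not the German team’s analysis; the data file and column names are hypothetical placeholders.

```python
# A minimal sketch of checking whether crystallised intelligence still predicts
# digital literacy once fluid intelligence is controlled for. Hypothetical
# file and column names; this is not the original study's analysis.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("digital_literacy_study.csv")  # hypothetical data set

# Regress both variables on fluid intelligence, then correlate the residuals.
fluid = sm.add_constant(df["fluid_intelligence"])
resid_literacy = sm.OLS(df["digital_literacy"], fluid).fit().resid
resid_crystallised = sm.OLS(df["crystallised_intelligence"], fluid).fit().resid

partial_r = resid_literacy.corr(resid_crystallised)
print(f"Partial correlation, controlling for fluid intelligence: {partial_r:.2f}")
```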

2. Become a domain expert

The researchers also found that students who did best on the test of digital literacy were the ones who knew most about health sciences, the knowledge area from which the digital literacy test questions were taken. Again, this makes sense in terms of students being able to understand and interpret results of searches.

This has a clear implication for teachers. If we want to make use of internet research within a school context then we should ensure that students have plenty of relevant knowledge before they begin. For instance, a science research task on renewable energy resources should be preceded by teaching the students a lot of content knowledge on renewable resources. It might also be worth checking that students have reached mastery of this content before moving on to the research task, perhaps through a quick quiz. Students are then likely to find the research richer and more rewarding.

3. No need to panic about exposing kids to tech

The researchers also made another interesting finding. The amount of everyday computer use by the participants did not relate to performance on the digital literacy test.

We might expect that in order to do well on the internet search task, participants would need to practice this skill and so those who had a greater level of computer usage would perform better. There is a clear directive in many states, districts and schools that requires the embedding of technology into lessons in order to adequately prepare students for the future. Yet this study seems to suggest that knowledge is far more important and that the additional skills required to successfully use computers are relatively trivial in comparison, at least for tasks such as internet-based research.

There is no need to strip our classrooms of technology but we can probably all relax a little more about it and use it only when and if we think it supports our learning goals. By building knowledge, even if we are using just pen and paper, we can rest assured that we are growing our students’ digital literacy.

There is an excellent American Educator article that explores this issue and that is just as relevant today as it was when it was published in 2000.


How to improve our national tests

When Robert Randall rose to speak at the recent Evidence for Learning Evidence Exchange, he made a plea: the best preparation for NAPLAN – Australia’s programme of national literacy and numeracy tests – is to teach the national curriculum.

Randall is CEO of ACARA, the body responsible for both the Australian Curriculum and the NAPLAN tests. He wanted to know how test data can be made more useful to schools. But in order to achieve this aim with reading and writing, Randall needs to tweak the tests themselves and join up the two bits of his job a little more. I’ll explain why.

There is good reason at present to abandon teaching the curriculum in favour of test preparation, particularly if you are coming from a low base. There are a number of hacks that you can teach children to improve their NAPLAN writing responses such as how to structure an argument or narrative, how to use complex sentences and so on. And you can practice these by responding to past NAPLAN prompts.

Similarly, you can teach children reading comprehension ‘skills’ and, indeed, this is what ACARA think they are testing. However, I am convinced by Dan Willingham when he characterises these as ‘tricks’. They work but the pay-off is pretty immediate and further practice is redundant.

Instead, when you present children with a random selection of texts you are mainly measuring their general knowledge. This makes these tests vicious and unfair to students from low socio-economic backgrounds who tend to have less general knowledge than their wealthier peers.

The problem is that the myth that it’s all about reading comprehension skills has really taken hold. Children have lessons where they select random books from bins that are supposedly at their reading level and then use these books to redundantly practice these skills. This misses an opportunity to teach them some of the knowledge that they lack. A good alternative might see everyone reading and discussing the same book as a whole class.

Or perhaps children could also be read to. We know that children’s oral comprehension exceeds their reading comprehension for much of primary school so by restricting knowledge growth only to what they can read we are missing a trick.

But if a teacher did this and systematically set about growing general knowledge in this way then there would be little reward in NAPLAN. The children might know lots about The Romans but this won’t help them with a randomly selected text about horses.

The solution is obvious: set reading and writing NAPLAN prompts within contexts covered by the national curriculum for the previous year. Teachers then have agency. The tests will be fair and actually relate quite well to the teaching.

Unfortunately, the Australian Curriculum is pretty thin gruel full of wiffle-waffle. Recent revisions have seen it adopt the unscientific ‘expanding horizons’ model of social studies which had been largely debunked by 1980. And anyone who thinks I am responsible for setting-up a false dichotomy between knowledge and skills should look at the science curriculum. The whole of biology, chemistry, physics and geology is reduced to just one of three strands, the others being ‘science as a human endeavour’ and ‘science inquiry skills’ where students have to learn nebulous things like ‘questioning and predicting’.

There is some limited scope if you ignore the fluff and focus on the actual content. For instance, the Year 2 science curriculum states that children, “explore the use of resources from Earth and are introduced to the idea of the flow of matter when considering how water is used.” Still, it’s pretty hard going to find much that is specific. The Humanities and Social Studies curriculum is particularly bad. What should be a rich introduction to the world is impoverished and vague.

The sad thing about this is that the humanities curriculum was deliberately dumbed-down in this way following a review because principals complained about the amount of content crowding out time for literacy and numeracy.

What do they want that extra literacy time for? Redundant reading comprehension skill-drills and practice at writing narratives and persuasive texts: joyless, soulless test preparation.


The story of an inspirational student

John bowled into the science lab a little late, did a couple of laps and then leant against one of the benches with a wide smile on his face.

I didn’t teach him but I had heard about him. He was from a single-parent family with a Nigerian background and he’d almost been excluded from school earlier that year. It was the Easter holiday and I was running a revision class for the Year 9 Science SATs – this was a test that has since been abandoned in England.

These revision classes were important to me. We were a school that was working hard to turn itself around. We were carrying two permanent vacancies in our science department that were covered by an ever-changing roll-call of substitute teachers. The revision class was a chance to tie this together for some of our kids and, as head of science, I knew that this work was excellent preparation for the later GCSEs.

Despite his demeanour, I took John’s attendance at this optional revision class to be a good sign. I went in hard with my no-nonsense act – it’s all an act – and John collected himself, sat down and began to participate.

A few months later, I observed a Year 10 science class of which John was now a member. The lesson was a little unruly and the teacher had no control over which students responded to verbal questions. John responded to most of them. They were all factual recall and John was mostly correct.

The class was group three out of five, with group one being the highest performing science students and group five being the lowest. I spoke to the key stage coordinator who had put these groups together. The classes were meant to change regularly based upon test results but something odd was going on with John. He had missed a couple of tests and refused to complete another one. These were all factored in as zeros rather than blanks and this had depressed his test score. I made an awkward but obvious call and insisted that the grouping was redone. As a result, John moved up to group two.

A few months after that, John appeared on my own class list. That year I was teaching a top group in Year 10 (I feel the need to justify myself here and point out that I had a range of classes, including a lower group in Year 9). I taught John physics and some chemistry whilst my partner teacher taught biology and the rest of the chemistry to the same class. John seemed a little different to me now. He was quieter; more serious.

I took the same group through to Year 11 and John eventually gained A* grades in GCSE science. He opted to take physics at A-Level and so my relationship with him continued. But something quite unusual had started to happen.

John was organising the other students. He arranged study sessions where they would work through homework or revision together. He would come to class with lists of questions which prompted me to include a questions session at the start of each lesson – something I still do now. He was doing pretty well and we discussed his university options. I had studied at Cambridge and he showed an interest, so we sent him along to the open day.

He came back with a story. John had attended the open day with other students from the physics class, a group that was ethnically very diverse. There were students with family origins in Somalia and Arab countries. They all went into a shop together to look at souvenirs and were followed around by the shopkeeper. I was appalled at this example of blatant racism but John thought it was funny. “They’ll have to get used to us,” he said.

And he meant it. John did well in his A-Levels and took up a place at Cambridge to study engineering. I’ve since lost track of him – I’m not great at keeping in touch – but I hope he did well and is enjoying life.

I learnt a lot from that guy.

Note: I have changed John’s name


Mirror, mirror, on the web


Dr Jonathan Sharples’s opening gambit was pretty odd and pretty interesting. For some reason he wanted us to decide between real and fake names for shades of paint. Imagine if this was a key curriculum objective (bear with me). How would you teach children to pull this off successfully? Perhaps you might ask them to visualise the shade – does it make sense? What colour would it be? Would it appeal to someone likely to be purchasing paint? We might describe the deployment of such questions as a ‘meta-cognitive strategy’.

The first three paint names popped up on the screen: Elephant’s Breath, Churlish Green and Norwegian Blue. Unlikely as the other two sounded, I immediately knew that the fake colour was “Norwegian Blue” because that is the breed of parrot in Monty Python’s dead parrot sketch. Which, when you think about it, is a pretty nifty demonstration of the fact that direct knowledge trumps the use of meta-cognitive strategies every time.

Sharples was speaking at the Evidence for Learning (E4L) Evidence Exchange. He heads up the Education Endowment Foundation (EEF) in England and the recently founded E4L has a licence to use the EEF knowledge base in Australia. E4L also intends to run the kind of randomised controlled trials (RCTs) that the EEF runs in England.

Part of the E4L (and EEF) strategy is to produce a toolkit offering a meta-analysis of different education interventions alongside an effect size stated in months of additional progress. This has the potential to enable educators to think critically about different practices but I wonder whether it is going to do that. At the Evidence Exchange, it became apparent that when people look into this toolkit, they see themselves. Like it’s some kind of mirror.

There were a lot of people who are clearly doing great work and achieving some amazing things but it hardly seems due to the toolkit. Instead, they have looked to it for verification. We heard, for instance, about the use of feedback, although this initiative pre-dated the toolkit. There was a fascinating talk by a Victorian primary school principal on mastery learning. However, he made it clear that it was a subject he had been interested in since 1970 and the work of Benjamin Bloom. So you look into the toolkit, see something you recognise, feel validated and off you go.

Unless you are from the Grattan Institute.

Grattan researcher Jordana Hunter gave a presentation on ‘targeted teaching’, or what is normally known as ‘differentiation’. The Grattan folks had apparently read some literature showing that this approach was effective (I disagree and you might want to read my analysis). They then asked around, found two schools that were implementing differentiation effectively and wrote about how these schools did this. This is an odd methodology because there is no comparison group of less effective schools. It means that we cannot make causal inferences about what these schools did that made them successful.

And whatever the literature was that they had read that convinced them of the need for such an approach, it was obviously very different to the literature read by the people who constructed the E4L toolkit. The toolkit analyses an approach known as ‘individualised instruction’. It’s hard to pin down exactly what this is and exactly how far the instruction is personalised because it is based upon meta-analyses of differing interventions, but the E4L verdict is clear:

“Individualising instruction does not tend to be particularly beneficial for learners. One possible explanation for this is that the role of the teacher becomes too managerial in terms of organising and monitoring learning tasks and activities, without leaving time for interacting with learners or providing formative feedback to refocus effort. The average impact on learning tends overall to be low, and is even negative in some studies, appearing to delay progress by one or two months.

Empirical research about individualised instruction as a teaching and learning intervention in Australian schools remains limited, and the few Australian-based studies on individualised instruction also tend to focus on either ‘gifted’ or ‘struggling’ students.

The available Australian research suggests that it is not the most effective or practical intervention and shows that teachers face practical difficulties employing this intervention, such as curriculum restrictions and significant increases in their workload.”

Maybe this is something very different to the kind of differentiation being promoted by Grattan. But given that the Evidence Exchange was about E4L, where exactly is the support for the Grattan method in the toolkit? No matter. Nobody seemed to notice this odd discrepancy. After all, differentiation is in the Australian Professional Standards for Teachers so it must be effective.

When Sharples looks into the mirror he sees meta-cognition. There is an extraordinarily vast array of things that the toolkit groups together as ‘meta-cognition and self-regulation’, spanning affective strategies aimed at student motivation and resilience all the way to prosaic methods for planning. The measured effects are not restricted to cognitive ones. When you take all this into account, it is hard to interpret the extra eight months of progress this whole mass of different things is suggested to produce.

On his own graph of effects versus costs, Sharples has reduced this to simply ‘meta-cognitive’ and he is clearly a fan.

Meta-cognitive strategies are particularly suited to the kinds of interventions that organisations like the EEF run. Dan Willingham has referred to meta-cognitive reading strategies as a ‘bag of tricks’ and with good reason. They are not skills in the sense that a sequence of deliberate practice will make you improve at them. They are useful hacks that, once known, produce a one-off hike in performance. If you take a student who can’t structure a piece of persuasive writing and teach them the ‘firstly… secondly… thirdly…’ hack then you will see an immediate and significant jump on a standardised persuasive writing test. But how significantly have you improved their writing skill? The slower, more incremental, curriculum-centred process of building vocabulary is far harder to capture in this way.

I was surprised to see Sharples focus on the Philosophy for Children study. This has been much criticised and with good reason (see here and here). Briefly:

  • The principal researcher on the study has a philosophical issue with conducting tests of statistical significance and so didn’t do one. I would have thought that EEF studies were public goods and so individual researchers should not be able to impose their tastes on them in this way.
  • The outcome measure that they said they were going to use before the trial did not show any effect and so is not the one they used in their report. Instead, once they had the data they decided to do a different analysis looking at progress since KS1 results. This is the well-known problem of researcher degrees of freedom – analyse a study enough ways and eventually you will find something that looks like an effect.
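That second problem is easy to demonstrate. The sketch below is not a reanalysis of the Philosophy for Children data; it is a hypothetical simulation (invented sample sizes and outcome counts) of a null trial in which the analyst is free to pick among ten outcome measures after seeing the data.

```python
# A hypothetical simulation of researcher degrees of freedom: no true effect,
# but ten candidate outcome measures to choose from after the data are in.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm, n_outcomes, n_trials, alpha = 200, 10, 2000, 0.05

hits = 0
for _ in range(n_trials):
    # Both arms drawn from the same distribution, so any 'effect' is noise.
    # Outcomes are simulated as independent here; real outcomes would be
    # correlated, which reduces but does not remove the inflation.
    control = rng.normal(size=(n_per_arm, n_outcomes))
    treatment = rng.normal(size=(n_per_arm, n_outcomes))
    p_values = [stats.ttest_ind(treatment[:, k], control[:, k]).pvalue
                for k in range(n_outcomes)]
    if min(p_values) < alpha:
        hits += 1

# With ten independent looks, roughly 40% of null trials yield at least one
# nominally 'significant' outcome at the 5% level.
print(f"Trials with at least one 'significant' result: {hits / n_trials:.0%}")
```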

I asked Sharples about the lack of statistical significance and he suggested that they have rerun the numbers and the results stand up. I look forward to reading this paper.

Sharples also displayed a list of four trials that he said could broadly be categorised as meta-cognitive (although I think he said that there were six – I might be wrong). He claimed that all of these trials showed a positive result, the implication being that, whenever they had tested meta-cognition, it had worked. But isn’t the “Let’s Think Secondary Science” programme a meta-cognitive intervention? And didn’t that fail?

Mirror, mirror…


Come to Sydney to do some education research

An excellent opportunity has opened up to do PhD research at the University of New South Wales (UNSW) in Sydney. This is where I am currently completing my own PhD.

Unusually, there is a generous, tax-exempt stipend of $40,000 (Australian dollars) available as well as a $10,000 support package and international students will have the usual fees covered by UNSW. There is currently no fee for Australian students who pursue a PhD.

The area of study looks fascinating – it involves investigating explicit instruction alongside academic motivation.

The full advert is below:

 

Scientia PhD Scholarship in Educational Psychology
(‘Exploring the Link between Explicit Instruction and Student Motivation and Engagement’)
School of Education
Faculty of Arts and Social Sciences
University of New South Wales, Sydney, Australia
 
Expression of Interest and CV due November 11, 2016
 
To commence an application, visit (Fundamental and Enabling Sciences: Educational Psychology link)
 
A prestigious full-time Scientia PhD scholarship is available for a suitably qualified candidate to undertake research studies under Professor Andrew Martin leading to a PhD in education (educational psychology). It is an extremely generous Aus$40,000 per year (tax exempt) stipend, with up to $10,000 support package, and may be held for up to four years, subject to satisfactory progress. For international students, the international student fee will be covered by UNSW. PhD students in the School of Education can also apply for conference funding from the School.
 
The proposed PhD is a quantitative study of school students and investigates the links between two seminal areas of contemporary educational psychology: (a) academic motivation and engagement and (b) explicit instruction. This PhD will explore an integration of these two areas, with particular focus on key aspects of motivation and engagement (e.g., self-efficacy, self-regulation, anxiety etc.) and key dimensions of explicit instruction (e.g., structure, scaffolding, worked examples, deliberate practice etc.).
 
Applicants must have (1) a qualification in education, counselling, and/or psychology with a major formal research component or (2) a qualification in education, counselling, and/or psychology with research experience (either as a student or as an employee) in the areas of educational/developmental/counselling psychology. The successful candidate will also be required to demonstrate competence in quantitative research methods and statistics. 
 
Enquiries may be directed to Andrew Martin, Professor of Educational Psychology, School of Education, on 9385 1952 (international: +61 2 9385 1952), or email: andrew.martin@unsw.edu.au. Applicants MUST follow the procedures at https://www.arts.unsw.edu.au/research/research-culture/scientia-fellowships-scholarships/2017-scientia-scholarships/ : thus, (a) an Expression of Interest Form available at the above link and (b) a CV to Professor Martin. The EOI and CV are due to Professor Martin by November 11, 2016. The successful applicant is expected to commence in Semester 1 or 2, 2017.

Positioning mathematics education researchers to influence storylines

I’ve been reading a fascinating paper by a group of maths education researchers with the title, “Positioning Mathematics Education Researchers to Influence Storylines.” It is interesting for what the researchers do and don’t say. It was published in March in the house journal of the U.S. National Council of Teachers of Mathematics (NCTM) and represents the work of the NCTM’s research committee.

The committee seem to be concerned that a group of Canadian parents and mathematics professors have influenced the debate in the Canadian media about maths teaching methods and they propose the use of ‘positioning theory’ by maths education researchers as a way to fight back and avoid this situation arising in the U.S. This involves infiltrating bodies that make decisions on maths education and manipulating ‘storylines’ about maths in the media.

They identify three storylines that they disapprove of. They are:

1. There Are Two Dichotomous Ways of Teaching Mathematics

2. Mathematics Education Research Is Not Trustworthy

3. The Main Goal of Mathematics Education Is to Produce a STEM Workforce

The first two of these ‘storylines’ are essentially true.

There are broadly two distinct ways of teaching mathematics. There is an explicit approach where all concepts and procedures are fully explained and there is the constructivist alternative where students are asked to figure certain things out for themselves, with varying amounts of guidance. Maybe they are not dichotomous: it may be true that at any one time you can only be teaching maths in one of these ways, but that doesn’t mean you can’t mix these approaches over the course of a unit of study. But I don’t think anyone is claiming that you can’t use a variety of methods. The argument in Canada, as I understand it, is that a shift in the balance towards relatively more constructivist maths has coincided with a decline in the performance of Canadian maths students.

Oddly, the article praises Jo Boaler’s attempts to reach out to the wider public through her blog and other activities and yet Boaler herself has done much to promote the idea that there are two dichotomous ways to teach maths. In her famous UK study, she compared one school, ‘Phoenix Park’, that followed an approach described as ‘project-based’ and ‘open-ended’ to another school, ‘Amber Hill’, that followed a more ‘traditional’ one. This methodology was broadly repeated in the U.S. where the ‘reform-oriented’ school, ‘Railside’, was compared with two other schools which each offered a mix of maths courses.

And this particular approach to research might help explain some of the scepticism that exists about its trustworthiness. We are talking about very small samples of schools here: two in the U.K. and three in the U.S. We might have simply chanced upon particularly effective examples of constructivist teaching and particularly ineffective examples of more explicit teaching. In fact, I reckon I could produce the opposite finding – a win for explicit teaching – by carefully researching teaching methods and then selecting schools on the basis of their reported results prior to running such a study. It is therefore quite reasonable to caution against drawing too many conclusions.
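To make that point concrete, here is a hypothetical sketch (invented numbers, no real school data) of what post-hoc selection can do when, by construction, there is no true difference between the methods at all.

```python
# A hypothetical illustration of post-hoc school selection: both 'methods' are
# identical by construction, yet cherry-picking schools by their past results
# manufactures a large apparent advantage. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n_schools = 500

school_scores = rng.normal(loc=500, scale=30, size=n_schools)  # noisy school means
method = rng.choice(["explicit", "constructivist"], size=n_schools)

# Select the three best-scoring schools that happen to use my favoured method,
# and compare them with three arbitrary schools using the other method.
favoured = np.sort(school_scores[method == "explicit"])[-3:]
comparison = school_scores[method == "constructivist"][:3]

print(f"cherry-picked 'explicit' schools: mean {favoured.mean():.0f}")
print(f"arbitrary 'constructivist' schools: mean {comparison.mean():.0f}")
# The gap is pure selection effect; there is no underlying difference.
```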

When you look at the broad mass of research conducted by maths education researchers, the picture becomes even less clear. Many studies don’t even examine the effectiveness of a teaching method. Instead, they report an initiative where teachers are trained in a constructivist maths teaching method and then report whether this training led to sustained changes in teaching practice (e.g. here, here, here, here, here and here). The teaching method is simply assumed to be superior, even though this is not tested as part of the study.

And this is not an argument between maths researchers who possess research evidence and others who just possess a feeling in their bones. There is plenty of evidence to support the argument of those Canadian parents and maths professors. The effectiveness of explicit instruction has been well documented, from the process-product studies of the 1960s and 1970s – including Project Follow Through – to experimental studies and cognitive science research. And this is all supported by a strong theoretical framework.

Even my recent investigation of PISA data showed that student-oriented forms of instruction seem to be less effective. I don’t think this is particularly conclusive on its own but it is interesting to note that PISA has not highlighted this relationship, nor have the maths education researchers who have written columns about this data, preferring instead to focus on less significant correlations that are more supportive of their theories.

Turning to the third storyline, I have to share the disapproval of the committee. Teaching maths is not about producing a STEM workforce; this is simply a happy byproduct. We teach a curriculum in order to pass on culture and cultural artifacts to the next generation. We cannot predict what the future will bring for individuals or which knowledge they will eventually find useful or fulfilling and so we choose the content on the basis of that which has endured: powerful ideas that have explanatory value, that have proved useful in the past and that are therefore likely to be useful in the future. We don’t exclude anyone on the basis of not being academic enough or because we assume that they are destined for a particular type of career. The fruits of civilisation are our common heritage and should be available to all.

Yet I have some empathy for STEM enterprises that watch the fall in mathematics standards, worry about it and start to become more vocal in response.

What don’t the committee say? They never address the fact of the declining maths scores that convinced the Canadian parents and maths professors to start arguing the case for more explicit teaching in the first place. They never state the kind of mathematics education that they would like to see more of in schools. We can perhaps infer this from the output of the researchers who they highlight. Instead, the committee are focused on applying the sociological notion of ‘positioning theory’.

My advice to these researchers is very simple. Trust the public with the facts of the debate. Come out vigorously, explaining your positions on maths teaching, the evidence for these positions and what you think needs to be done. These will be challenged but that is to be welcomed as part of a democratic debate. The demos can then make up their own minds about which education policies to vote for at the ballot box.

We don’t need all this sneakiness.


PISA evidence for project-based learning in maths

The more I analyse the PISA data set, the more I am surprised by what PISA have chosen to highlight in their recent report. The PISA measure of memorisation – a topic stressed by PISA and by Boaler and Zoido in a follow-up article – hardly correlates with PISA maths results at all. Teacher-directed instruction and student-oriented instruction appear to be more significant.

One of the questions asked by PISA in the category of “student-orientation” is about how often a student’s maths teacher, “Assigns projects that require at least one week to complete.”

This seems to correlate negatively with the PISA 2012 mean maths score.

[Chart: PISA 2012 mean maths scores plotted against the frequency with which teachers assign projects taking at least a week]
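For anyone wanting to reproduce this kind of crude between-country analysis, here is a minimal sketch of the calculation; the file and column names are placeholders I have made up, not the actual PISA variable names.

```python
# A minimal sketch of the between-jurisdiction correlation described above.
# Assumes a hand-assembled CSV (hypothetical name and columns) with one row
# per jurisdiction: the PISA 2012 mean maths score and the average frequency
# of week-long project assignment reported by students.
import pandas as pd
from scipy import stats

df = pd.read_csv("pisa2012_by_jurisdiction.csv")  # hypothetical file

r, p = stats.pearsonr(df["weekly_project_frequency"], df["mean_maths_score"])
print(f"Pearson r = {r:.2f} (p = {p:.3f}) across {len(df)} jurisdictions")

# A rank-based check is sensible because the frequency measure is ordinal.
rho, p_rank = stats.spearmanr(df["weekly_project_frequency"], df["mean_maths_score"])
print(f"Spearman rho = {rho:.2f} (p = {p_rank:.3f})")
```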

Should we therefore avoid project-based learning in mathematics?

Obviously, there are a number of limitations to this analysis. Can we really compare jurisdictions like this? And as Ben Wilbrink pointed out on Twitter, students are not simply influenced by what their current teachers are doing but also a great deal by what their previous teachers did. Nevertheless, I am following the kind of analysis that PISA have themselves chosen to highlight.

We could perhaps address the first of these points – comparability across systems – by looking at the students in each jurisdiction and comparing the way that their maths performance relates to these various different measures. This is what Caro, Lenkeit and Kyriakides did in a paper published earlier this year (thanks again to @cbokhove for the link).

They did not separate the analysis into individual questions but looked at the construct of student-orientation as a whole, using PISA’s index. For every education system, they found a negative relationship between maths scores and the student-orientation measure: “…model estimates produce a decisively negative association with student-oriented instruction across education systems.” So this seems to broadly agree with my between-country analysis.

Caro et al. also looked at a number of other measures in this way. For instance, they examined measures of classroom climate (note that they analysed 62 systems in total, omitting Albania and Liechtenstein for methodological reasons):

“For the instructional context variables we found a positive association with disciplinary climate in 61 education systems and a positive association with classroom management in 47 systems. Teacher student relations were positively related to mathematics performance in 16 systems and negatively in 22 systems.” [condition codes omitted]

This is interesting. I would expect classroom management and discipline to correlate positively with performance but I would have thought that teacher-student relations would also correlate positively. It might be worth examining the questions that PISA asked in order to come up with this measure.

They also looked at the teacher-directed and cognitive-activation measures that I investigated across countries. I found a weak, negative correlation between both of these and maths performance (here and here). Caro et al. decided to try to fit curves to these relationships rather than straight lines, on the plausible assumption that there are diminishing returns to be had from further increasing the use of a strategy once there is already a lot of it going on. Across education systems, they found*:

“…mathematics performance tends to improve for higher levels of cognitive activation but at a decreasing rate or even with negative associations for very high frequencies of cognitively activation activities. In other words, for students who report the most frequent use of cognitively activating activities from their teachers, the initially positive association of this strategy stagnates or becomes even negatively associated with performance…

…Results are similar for teacher-directed instruction… where the association with mathematics performance is positive at fewer frequencies, decreases when teacher-directed instruction is employed more often and ultimately becomes negative at higher frequencies. Again, in the great majority of systems there is a positive side of teacher-directed instruction that is underestimated if guided solely by linear associations.”

Examining these graphs by eye is not massively convincing. They appear to me to show little indication of any relationship at all. I suspect that the student-orientation construct is capturing something real about what is happening in lessons; something that impacts upon performance. I am not sure that these others do.
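A quick way to see what fitting ‘curves rather than straight lines’ involves is sketched below; the student-level file and column names are hypothetical, and this is only an eyeball check, not the multilevel modelling Caro et al. actually used.

```python
# A minimal sketch of checking for the diminishing-returns pattern described
# in the quotation: compare a straight line with a quadratic fit. Hypothetical
# file and column names; not the paper's multilevel model.
import numpy as np
import pandas as pd

df = pd.read_csv("pisa2012_students.csv")  # hypothetical student-level file
x = df["teacher_directed_index"].to_numpy()
y = df["maths_score"].to_numpy()

b1, b0 = np.polyfit(x, y, 1)          # linear: y = b0 + b1*x
c2, c1, c0 = np.polyfit(x, y, 2)      # quadratic: y = c0 + c1*x + c2*x^2

print(f"linear slope: {b1:.2f}")
print(f"quadratic terms: linear {c1:.2f}, squared {c2:.2f}")
if c2 < 0:
    # A positive linear term with a negative squared term is the
    # 'positive at low frequencies, negative at high frequencies' shape.
    print(f"turning point at an index value of {-c1 / (2 * c2):.2f}")
```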

*have a look at how these findings are reported in the paper’s highlights section