The Ferris Wheel

“Right,” said Mr Poynt, briskly calling the class to attention, “now that we have those government tests out of the way I thought it would be good to dial down the stress levels with some authentic project work.”

This was good news to Jimmy. He hated how the government had made him spend the last five weeks endlessly practising tests. How many more times did he need to find out that he couldn’t do the last ten questions on the paper? Project work sounded good.

“Your task,” explained the teacher, “is to design a Ferris wheel for the town and market it to tourists. This is an opportunity to exercise your creativity muscles. Does everyone know what a Ferris Wheel is?” Mr Poynt pressed the space bar on his laptop and a picture of The London Eye appeared on the whiteboard.

“It has to be 20 metres high. You need to work out how much steel to use, how fast it will turn and so on. There’s a sheet with some questions like that on it which you will need to answer, along with some guidance on how to do this,” he explained. “You will need to produce a poster on A3 paper and give a presentation to the class.”

Mr Poynt asked Vinay to distribute the sheets around the class.

“I’m not going to tell you exactly how to do it. Think of me as a resource.”

Jimmy chose to work with his usual group: Vinay, Oscar and Byron. Earlier in the year, Mr Poynt had tried to enforce mixed groups of girls and boys but it didn’t last very long.

Anyway, the usual group was highly efficient. Jimmy would do the artwork – he was talented at drawing cartoons. He immediately started sketching the Ferris wheel. Vinay would do the research and Byron would do the writing and present the poster. Oscar was the maths genius so he could work out the numbers that Mr Poynt wanted.

“Marvellous!” exclaimed Mr Poynt as he surveyed the purposeful activity that now filled the room. The students are so engaged! And with doing maths! If only the government could see that maths wasn’t just about the rote memorisation of mechanical procedures to reproduce on standardised tests.

Then he sat at his desk, took a long slurp of coffee and opened up his emails.


Making mathematics real: is it such a good idea?

One of the assumptions held by many educators is that maths should be taught, where possible, through real-world examples and applications. Some trace this idea back to John Dewey, and it certainly follows the kind of naturalistic logic that asks why learning to do arithmetic can’t be like learning to walk or speak (there is actually a good reason why it can’t): if only we could motivate students to learn mathematics by showing them its utility in reaching some other goal, such as baking a cake or making a go-cart, then all the pain of learning would go away.

This is an instrumental view of mathematics: maths is a tool for achieving another purpose rather than something of value in its own right. As with many ideas that originate in the progressive movement of the early twentieth century, this view has been subsumed in a verbose way into the later theory of constructivism where, for instance, “Challenging, open-ended investigations in realistic, meaningful contexts need to be offered which allow learners to explore and generate many possibilities, both affirming and contradictory.”

The problem with the instrumental view of maths is that it gradually evaporates everything in the maths curriculum, including the kinds of investigatory activities favoured by constructivists. You can function just fine in society, living a rich and fulfilling life, without much maths at all. Need to do some sums? Grab a calculator. And there’s certainly no need to learn anything as ethereal as algebra. When was the last time you had to solve for x outside of a maths classroom?

If you try to bring the real world into the classroom then you probably won’t succeed very well – the best you can hope for is a simulation of real life and, at worst, you will be torturing reality to try to fit the maths.

Whilst the motivational posters of Twitter still valorise ‘real-world’ and ‘authentic’ without explaining what these mean, the more thoughtful constructivists have moved on, tying themselves in knots as they try to hold on to the principle of authenticity whilst avoiding its absurdity.

In his book “Future Wise”, David Perkins seeks to define a criterion that he names “lifeworthy”. This isn’t about learning only those concepts that have a direct application in real life, but it also sort-of is. What emerges is essentially an idiosyncratic list of what Perkins thinks is important. He is a fan, for instance, of the French Revolution. When it comes to mathematics, it’s out with quadratic equations – not lifeworthy enough – and in with statistics.

The late Grant Wiggins made a similar turn when trying to define ‘authenticity’, specifically in the form of assessment tasks. For Wiggins, authentic tasks must be, “representative challenges within a given discipline. They are designed to emphasize realistic (but fair) complexity; they stress depth more than breadth. In doing so, they must necessarily involve somewhat ambiguous, ill structured tasks or problems.” Yet this doesn’t mean they have to be ‘real-world’ or ‘hands-on’. They CAN be. But they don’t have to be.

Wiggins goes on to outline an example of the kind of open-ended mathematics task favoured by constructivists. This is authentic, he asserts, because it involves doing real maths.

Huh.

When you look at Perkins’ proposals for maths tasks, despite the definitional hi-jinks, they involve asking students to plan, “for their town’s future water needs or model its traffic flow.” Which sounds real-world, mundane and dull.

The concept of real-world maths is so ingrained – particularly through its adoption by constructivists – that the massive multi-national testing program run by the OECD and known as the Programme for International Student Assessment (PISA) adopted the principle for its maths test. PISA maths is based upon Realistic Mathematics Education (RME) – a maths philosophy from the Netherlands that takes an instrumental view of mathematics. In RME, students first work in real-world contexts, using their intuition to solve problems before developing more formal approaches.

It is therefore ironic that the OECD has found evidence in its own data that teaching pure, abstract mathematics is linked to higher performance on its own tests of supposedly real-world problems than is teaching that focuses on real-world contexts and applications.

The OECD nails the instrumental view:

“It is hard to find two scholars holding the same view about how mathematics should be taught, but there is general agreement among practitioners about the final goal: mathematics should be taught ‘as to be useful’.”

I disagree with that. The study goes on to look at how maths is taught in different countries. It is essentially a study of correlations and so you could wave it away for that reason but the authors have tried to control for a number of factors. Crucially, they find the following:

[Chart from the OECD report: PISA maths performance by students’ exposure to pure versus applied mathematics]

As the report suggests, this finding is consistent with cognitive science and the fact that learning is often tied to the contexts through which it is learnt. Indeed, one of the most powerful aspects of mathematics is that it is abstract and therefore can be generalised across diverse contexts.

It is odd but not entirely surprising to see how these results have been spun. Andreas Schleicher, education boss at the OECD, decided to somewhat miss the point. To him, it was not the contexts that were the problem but the way that they must have been used. He assumed that teachers of applied maths must be teaching students tips and tricks and asking them to mechanically learn simple mathematical procedures, because Schleicher knows, a priori, that this would be bad.

Diane Briars of the National Council of Teachers of Mathematics in the U.S. (NCTM) also took the opportunity to criticise the idea of teaching children to memorise rules as well as having a bit of a rant about ‘flip-and-multiply’ – a method for dividing fractions.

It’s almost as if they had been asked to comment on something else.


How Reading Recovery probably works

I have written before about trials of Reading Recovery, particularly the recent I3 study from the U.S. Since then, I have become aware of two papers that I think are key to understanding the way that Reading Recovery works.

To say that it ‘works’ is actually quite controversial. Objectively, it does. Placing students in a Reading Recovery intervention seems to improve their reading more than if you don’t do anything. The question remains as to why this is the case. For instance, is it due to the specialist training that Reading Recovery teachers receive?

It is important to note that Reading Recovery is a one-to-one intervention of up to 60 half-hour sessions. This is hugely resource intensive. It also represents Benjamin Bloom’s ideal of a maximal form of teaching. He compared various approaches – specifically conventional teaching, mastery learning and one-to-one tutoring – and found an effect size of d=2.0 for one-to-one tutoring relative to conventional teaching. So the one-to-one form of Reading Recovery likely contributes some proportion of its effect.
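
To make that figure concrete, here is a minimal sketch of how an effect size like Cohen’s d is calculated – the difference between group means divided by a pooled standard deviation. The scores and the cohens_d helper below are invented purely for illustration; they are not taken from Bloom’s work or from any Reading Recovery study.

```python
# A minimal, illustrative calculation of Cohen's d. All numbers are made up.
import statistics

def cohens_d(treatment, control):
    """Difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

tutored = [78, 82, 85, 88, 90, 84]       # hypothetical scores after tutoring
conventional = [70, 74, 76, 80, 78, 72]  # hypothetical scores after class teaching
print(round(cohens_d(tutored, conventional), 2))  # 2.36 for these invented scores
```

On that scale, Bloom’s d=2.0 means the average tutored student scored roughly two standard deviations above the average conventionally taught student – a very large effect, which is why the one-to-one format alone may account for a good chunk of what any tutoring intervention achieves.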

We could possibly gauge this by comparing Reading Recovery directly with another one-to-one reading intervention of the same duration and randomising students between the two treatments. Surprisingly, there seem to be few such direct comparisons. So perhaps we should compare effect sizes from Reading Recovery versus a control with effect sizes from rival one-to-one programs versus a control. This is more fraught, because conditions will necessarily vary, but it might be indicative.

This is where the second paper comes in. In 2011, Robert Slavin and colleagues reviewed a number of studies on reading interventions. They were quite picky about the studies that they included. When it came to Reading Recovery, they avoided outcome measures that were intrinsic to the method itself in favour of more objective measures:

“First, most Reading Recovery studies use as posttests measures from Clay’s (1985) Diagnostic Observation Survey. Given particular emphasis is a measure called Text Reading Level, in which children are asked to read aloud from leveled readers, while testers (usually other Reading Recovery teachers) record accuracy using a running record. Unfortunately, this and other Diagnostic Observation Survey measures are closely aligned to skills taught in Reading Recovery and are considered inherent to the treatment; empirically, effect sizes on these measures are typically much greater than those on treatment-independent measures.” [my emphasis]

At this point I will remind you of my first principle of educational psychology: students tend to learn the things you teach them and don’t tend to learn the things you don’t teach them.

Slavin et al. also ruled out studies based only upon those students who had successfully completed Reading Recovery. Such studies prove little. I am sure that many teachers would prefer to be judged only on the results of those students who have been successful.

Once they had whittled down the research in this way, Slavin et al. were able to note that:

“The outcomes for Reading Recovery were positive, but less so than might have been expected…  

Across all studies of one-to-one tutoring by teachers, there were 20 qualifying studies (including 5 randomized and 3 randomized quasi-experiments). The overall weighted mean effect size was +0.39. Eight of these, with a weighted mean effect size of +0.23, evaluated Reading Recovery. Twelve studies evaluated a variety of other one-to-one approaches, and found a weighted mean effect size of +0.56… 

Across all categories of programs, almost all successful programs have a strong emphasis on phonics. As noted earlier, one-to-one tutoring programs in which teachers were the tutors had a much more positive weighted mean effect size if they had a strong phonetic emphasis (mean ES = +0.62 in 10 studies). One-to-one tutoring programs with less of an emphasis on phonics, specifically Reading Recovery and TEACH, had a weighted mean effect size of +0.23. Within-study comparisons support the same conclusion. Averaging across five within-study comparisons, the mean difference was +0.18 favoring approaches with a phonics emphasis.”
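
For anyone unsure what a ‘weighted mean effect size’ is, the sketch below shows the basic idea: each study’s effect size is averaged using a weight that reflects the study’s size, so larger studies count for more. The studies, the numbers and the choice of sample-size weighting are all assumptions for illustration – they are not Slavin et al.’s actual data, nor necessarily their exact weighting scheme.

```python
# Illustrative only: invented studies, weighted by sample size (an assumed scheme).

def weighted_mean_effect_size(studies):
    """studies: list of (effect_size, weight) pairs; returns the weighted average."""
    total_weight = sum(weight for _, weight in studies)
    return sum(es * weight for es, weight in studies) / total_weight

# Hypothetical one-to-one tutoring evaluations as (effect size, number of students).
hypothetical_studies = [(0.10, 100), (0.30, 60), (0.25, 140)]
print(round(weighted_mean_effect_size(hypothetical_studies), 2))  # 0.21 for these made-up numbers
```

The point is simply that the headline figure is an average across studies of varying size, which is why the decision about which studies qualify for inclusion matters so much.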

I think it is important that policymakers are aware of these findings.


What can we learn from the success of problem-based learning?

I have tended to steer clear of the controversy surrounding the effectiveness of problem-based learning in medical education. This is because I tend to apply the perspective of cognitive load theory which suggests that learning is constrained by the limits of working memory. We can overcome these limits by having sufficient knowledge in long-term memory to draw upon. This therefore predicts that breaking learning down into small, manageable chunks and explicitly teaching these chunks will be best for novices who have little to draw upon from long-term memory but that solving more complicated or realistic problems will be best for experts who already have a lot of domain knowledge.

My difficulty is in placing medical students on the novice-expert continuum. On the one hand, when they begin training they still know very little about medicine. On the other hand, they will have been highly successful school students who will possess a large amount of relevant biological, chemical and related knowledge. Add to this the fact that they are also likely to have a high level of general intelligence and that this is effectively a measure of working memory capacity. So medical students are likely to be constrained less by their working memories than most of us.

I was therefore fascinated to read a review of the evidence for problem-based learning in medicine written by Jerry Colliver in 2000. The discussion echoed many of those that I’ve been involved in when discussing education research, but it also provided a few new insights. Not least of these is that Benjamin Bloom suggested that the optimal form of teaching is one-to-one tutoring and characterised the goal of education research as finding methods of instruction that approach the same level of effectiveness. I didn’t know this. Apparently, Bloom found an effect size of d=2.0 for one-to-one tutoring, placing a ceiling on what we may expect from education research, although this must be considered alongside our current understanding of the validity of effect sizes.

Colliver reviews a number of key studies that show problem-based learning to be more successful than traditional methods of medical education. But he is a critic of these studies. He notes, for instance, that many lack randomisation – students often self-selected into problem-based learning or the control condition and there were often systematic differences between the groups. Colliver references a paper that explores these differences.

Many of the outcome measures used in the studies showed little difference between conditions, whereas some clearly favoured problem-based learning. Colliver discusses a measure of the students’ ability to relate to patients – a measure on which problem-based learning students did well. Then he compares the two curricula. In one study:

“the PBL track ‘emphasized frequent contact with real or simulated patients for the dual purpose of practicing interpersonal, physical diagnosis and clinical skills,’ whereas the traditional track ‘limited patient contact to supervised encounters with a small number of hospitalized patients as part of bedside tutoring groups in the first and second years.’”

So here is evidence, if it were needed, that students tend to learn the things you teach them and don’t tend to learn the things you don’t teach them. I am thinking of making this my Principle Number One of Educational Psychology (posters and memes, please).

This has echoes in the kind of research that we see into inquiry learning in high school science classes, where students are tested on things such as their ability to formulate hypotheses – a key focus of many inquiry-learning programs but one that is not emphasised so much in traditional science classes, where teachers instead tend to focus on teaching actual science.

Overall, Colliver is skeptical of both the experimental designs that are used to demonstrate the effectiveness of problem-based learning and the rather modest effect sizes that these studies generate. In his discussion, Colliver also critiques the educational principles and learning mechanisms on which the problem-based learning approach is based. He suggests that part of the theory is that learning something in context helps you recall and use that information in the same context later. However, the simulations used in problem-based learning are not real contexts, so we haven’t actually achieved the learning-in-context aim.

Again, I think there are parallels to the learning theories that we tend to adopt in schools.

One of the strongest points in favour of the Colliver paper lies in this rebuttal, the authors of which don’t seem able to address many of the issues that Colliver raises.


News: Learning Impact Fund grants

You might be aware that I recently attended a launch by Social Ventures Australia of their new Evidence for Learning project. This is an attempt to conduct the kind of research in Australia that is undertaken by the Education Endowment Foundation (EEF) in the U.K. I have mixed feelings about large-scale randomised controlled trials (RCTs) in general and the EEF studies in particular because there is a tendency to test whole packages of intervention and compare them with business-as-usual. This leads to potentially confounded studies – if there is an effect then you can’t really know why.

However, I still think this is worth pursuing. I would rather have the EEF in the world than its absence because we can then attempt to apply pressure for good study designs. That’s why I cautiously welcome Evidence for Learning.

In this spirit, I’m going to pass on some news that I received from John Bush of the Evidence for Learning “Learning Impact Fund”. The fund has released $1 million for two initiatives:

“One round is open to any education program designed to improve the academic achievement of children in Australia, and the second is for programs focused on building resilience skills in students in Victorian schools.”

It is my understanding that the former group will be part of RCTs but the resilience studies will be a more preliminary form of evaluation. Follow this link if you are interested.

I am not convinced about the idea of a resilience intervention but I am all in favour of conducting research that might prove me wrong. I will be particularly interested in how the participants propose to measure it…


Placebos in education research

A friend drew my attention to a blog post, which led to me tracking down the study that the post is written about. I think this study is of importance to those of us with an interest in education research.

Trials of new drugs tend to be double-blinded. Patients are given either the test drug or a placebo – a simple sugar pill that looks the same. The patients do not know which of these they have been given and neither do their doctors (hence the ‘double’ blinding). Part of the reason for such secrecy is to reduce expectation effects – patients will all have an equal expectation of getting better.

This is why it can be hard to evaluate complementary therapies such as acupuncture – you tend to know if someone has stuck a needle in you. If we compare acupuncture with, say, massage then people might believe that the acupuncture will help them more than the massage and this may affect the results.

In education, we often refer to an expectation effect known as the ‘Hawthorne Effect’ where the knowledge of being the subject of study somehow affects the results – perhaps teachers are more enthusiastic or perhaps students work a little harder. Again, this is an expectation effect and it is one of the reasons why we should be skeptical about research. If teachers opt in to teaching the new, shiny program and students are aware that they are in this program – or even self-select into it as has been the case with some university-level trials of problem-based learning – then should we really be surprised that there is an effect when compared to business-as-usual?

In fact, a new, shiny program would involve not only a Hawthorne effect – the knowledge that you are the subject of study – but also a placebo effect – a new method or piece of equipment or construct of some kind that may cause you to expect a favourable outcome.

Should we worry about these effects? After all, expecting to do better won’t teach us how to read and it seems unlikely that it could have an effect on something as fundamental as our underlying intelligence.

This is where the new experiment comes in. The researchers were investigating the possibility of a placebo effect in ‘brain-training’ activities. They recruited participants to the study using two different posters. The first was neutral and asked students to “participate in a study” in order to gain course credits, whereas the other poster – the placebo condition – mentioned “cognitive enhancement” and the potential to “increase fluid intelligence”.

The intervention was then the same – a brain-training game. So it wasn’t the intervention that was manipulated but the recruits’ expectations of it. It’s a bit like giving acupuncture to two groups of patients but promoting the potential benefits with only one of the groups.

There is a possibility of a systematic difference between the two groups given that assignment was by self-selection rather than being random. Even so, performance on the brain-training task was similar, and yet students who self-selected into the placebo group saw the equivalent of a 5-10 point increase on a standard IQ test.
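
As a rough sense of scale, here is a back-of-the-envelope conversion of that reported gain into effect-size units, assuming the conventional IQ scale where one standard deviation is 15 points (my assumption, not a figure taken from the study):

```python
# Convert a reported IQ-point gain into standard-deviation (effect size) units.
# Assumes the conventional IQ scale: one standard deviation = 15 points.
IQ_SD = 15
for gain in (5, 10):
    print(f"{gain} IQ points is roughly d = {gain / IQ_SD:.2f}")
# prints: 5 IQ points is roughly d = 0.33
#         10 IQ points is roughly d = 0.67
```

In other words, merely expecting to improve was associated with a gain in the same range as the weighted mean effect sizes quoted for real tutoring programs in the Reading Recovery discussion above.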

This feeds the ongoing debate about whether brain-training achieves much at all and, if it does, whether this transfers out of the narrow range of skills that the training addresses. But it should also give us pause to reflect on educational research. Has anyone ever run a trial of an educational placebo?


The solution to education’s groupthink

There is a blog site called Mind/Shift. It is based in Northern California and is part of KQED, a public service broadcaster. Mind/Shift represents much of what is wrong in education. It contains posts by breathless writers about new and innovative teaching practices. The evidence to support these claims is usually notable only by its omission.

For instance, a new piece by Katrina Schwartz, a staff journalist, reviews a talk given by Will Richardson, a former English teacher who now works as a consultant. The article states:

“Schools need to have a clear vision, rooted in today’s context and a set of practices that reflect those two things. When he consults with schools, Richardson said he most commonly sees a lack of vision based in how students learn. In his many talks he shares a list of things educators know intuitively about how kids learn best alongside a list of things schools do because it’s easier for adults. He says if educators want to shift education to the modern context, they need to prioritize things that help students learn best.”

This is actually a statement that should be supported by evidence. Whatever the limitations of educational research, strategies that ‘help students learn best’ are certainly something that can be experimentally tested. And yet the appeal is not to research but to ‘what educators know intuitively’.

A graphic of Richardson’s is displayed in the article. I won’t reproduce it because I don’t have permission but it is a classic of the form (I call these ‘tabular dichotomies’). We are presented with two lists that sit side by side. The first list has the title, “Common sense”, and includes statements such as, “real world application”, “fun”, “relevance to their lives”, “social”, “autonomy and agency”. Presumably, this list is meant to represent common sense about how students learn best. The second list is simply labelled, “???????”, and includes things like, “sitting in rows”, “one subject area focus” and “no choice/agency”.

These question marks are telling and are part of the reason why I assign little personal blame to the teachers, consultants and journalists whose symbiotic relationship generates this kind of writing. It’s a form of groupthink. They inhabit a bubble. If we try to see it from their perspective then the revolution that we need in education really is common sense. Why else would everyone that they interact with agree? And there really is no reason for sitting students in rows. It’s just a hangover, a tradition kept going by recalcitrants who are simply slaves to a system and who haven’t thought about education very much.

We see a similar attitude in an EdSurge piece called, “How to Manage the 4 Types of Teachers You Meet in Professional Development.” I’ve included the full title because it took me a while to come to terms with it. The piece is about persuading teachers to adopt edtech products. There is little doubt that if the author met me she would conclude that I am the type that she labels a, “Lagger”. Apparently, the solution to laggers is:

“First and always – talk about the why. Talk to them about how everything we do is to prepare our students to be productive citizens in their society and how we do them a disservice by not providing them with access to tools that will assist our children in doing just that. Take things sloooooooowly. These teachers benefit from sessions that are either one-on-one or small group. Encourage baby steps and be realistic in expectations.”

Again, the assumption is that skeptics are simply ignorant. With this mindset, it’s just not possible to contemplate that these skeptics might have a point. The benefits of what is being proposed are so self-evident that it must be some personal quality or lack of confidence that is preventing them from moving forward.

In reality, I don’t need some edtech advocate talking really sloooooooowly to me in a one-on-one session, I need to have the product explained and I need to be persuaded that it’s worth bothering with.

I do have the solution to educational groupthink of this kind but I don’t think you’re going to like it. There are no shortcuts in much the same way that there are no magic shortcuts to ‘critical thinking’ in education. It will be a long process.

And this process involves quiet, determined challenge. If you are able, write a comment on a post that seems to be at odds with the evidence. Link to a research paper that challenges what has been said. That’s what I’ve tried to do in my comment on the Mind/Shift piece and that’s what I tried to do recently in a comment on a piece in The Conversation about maths teaching. Keep it polite but remember that the most civil criticism is still criticism and so people are inclined to object to it. I recommend drafting comments in Word and saving them. If you have a blog site and your comment isn’t posted – this sometimes happens – then you have the start of a blog post of your own.

Nothing will change overnight and you are unlikely to persuade the author of the post. But it’s a start. If one journalist pauses to check out the evidence before clicking ‘publish’ on his next piece then this is a small victory. And any reader who is new to the game will only have to look below the line and see that not everybody thinks the same way. In fact, there are quite reasonable people out there who are suggesting that there is evidence to the contrary.

Of course, if you can find the time then please write a blog of your own. Blogging is a great democratising tool that allows you to be part of the solution. And if you are an optimist like me then this means being part of the future. It’s just going to take a bit of work getting there.