The truth about teaching methods

Posted: May 30, 2015
There are those who would disagree that there is even such a thing as a specifiable teaching method. And there are those who would deny us the language with which to compare methods. But setting such defensive obfuscation aside, how can we decide which teaching approach is the best one to use in a given situation?
For John Hattie, the answer has been to compare effect sizes from different kinds of intervention. An effect size is a standardised measure of an intervention's impact that allows comparison across studies: essentially, the difference between the mean scores of two comparison groups divided by the spread of the data (the standard deviation). The problem is that Hattie perhaps applies this idea too broadly. Can we really compare an effect size from a before-versus-after study with one from a control-versus-intervention study? If we narrow the student population, e.g. by studying a high-ability group, then we narrow the spread of results and so inflate the effect size. And if we use standardised tests designed under psychometric principles, we will typically see less difference between groups and therefore a smaller effect size – this is a consequence of how standardised tests are constructed.
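The arithmetic is simple enough to sketch. Here is a minimal illustration in Python, using Cohen's d with a pooled standard deviation (one common formulation of effect size) and entirely made-up scores, to show how restricting the range of the population inflates d even when the raw gap between the means is unchanged:

```python
import statistics

def cohens_d(group_a, group_b):
    """Effect size: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
    pooled_var = ((n_a - 1) * statistics.variance(group_a) +
                  (n_b - 1) * statistics.variance(group_b)) / (n_a + n_b - 2)
    return mean_diff / pooled_var ** 0.5

# Made-up test scores for a broad student population.
intervention = [64, 71, 76, 83, 90, 96]   # mean 80
control      = [55, 63, 70, 76, 84, 90]   # mean 73

# The same 7-point gap in means, but within a narrow high-ability group.
narrow_intervention = [78, 80, 82, 84, 86, 88]   # mean 83
narrow_control      = [71, 73, 75, 77, 79, 81]   # mean 76

print(round(cohens_d(intervention, control), 2))                # 0.56
print(round(cohens_d(narrow_intervention, narrow_control), 2))  # 1.87
```

The second figure looks far more impressive, yet nothing about the teaching has changed – only the spread of the students being measured.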
In Hattie’s 2009 book, all types of test are treated equally. This means that trials where a new approach has been implemented by enthusiastic teachers and then compared to a do-nothing control group are lumped in with much more rigorous trials. For instance, the worked-example effect has been tested in randomised controlled trials. Is its effect size of 0.57 really comparable to similar effect sizes from poorly controlled trials? I don’t think so. Perhaps the answer is to agree to only look at randomised controlled trials?
Randomised controlled trials
There has certainly been a push for more randomised controlled trials (RCTs) in education. Ben Goldacre and others have made a strong case in the UK, and the Education Endowment Foundation (EEF) is leading the charge, funding a whole range of different studies. However, RCTs are not without their problems.
Firstly, in medicine, RCTs are ‘blinded’. This is where, typically, one group is given a treatment whilst another is given a placebo. The patients and researchers do not know who is in each group. The purpose is to eliminate the placebo effect where simply knowing that you are being treated can lead to favourable outcomes. It is often quite impossible to blind an educational trial; most students will know if they are receiving something new and funky instead of business-as-usual. We therefore have to factor in the possibility of a placebo effect in whatever we find.
But it is also possible to design an RCT poorly by varying a whole bunch of factors at once. I recently wrote about such an RCT evaluating a scale-up of Reading Recovery. In this case, the differences between the control and intervention groups were multiple, making it impossible to tell whether the specific Reading Recovery practices caused the effect.
In my post on the research, I asked if other studies had been conducted on Reading Recovery that were better controlled. One person linked me to this paper where Reading Recovery was compared (amongst other conditions) to a strange version of direct instruction where the students hardly did any reading. If you have access, it is worth reading the full paper, particularly for its description of the Reading Recovery teaching method:
In this example, Dana is reading Nick’s Glasses (Cachemaille, 1982), an 8-page illustrated book about a boy who cannot find his glasses because he is wearing them. The text on page 6 says, “‘Have you looked behind the TV?’ said Peter.”
Dana read, “Have you looked under the….” She hesitated, glanced at the picture (which did not provide the needed information), and searched the line of print. Then she started over, ” ‘Have you looked behind the TV?’ said Peter.”
At the end of the page, her teacher quickly said, “I like the way you were checking carefully on that page. Show me the tricky part.” Dana pointed to the word behind, saying, “It had a b.” “Yes,” said the teacher, “Under would have made sense. He could have looked under the TV, but that word couldn’t be under. I also like the way you checked the picture, but that didn’t help enough, did it? You were really smart to use the first letter; read it again fast and be sure that it makes sense.”
Dana read the page again fluently, saying, “That’s right.” In this example, the teacher was pointing out to Dana how she effectively used several different sources of information simultaneously to monitor her own reading.
This seems like a poor method. Encouraging students to guess words from the pictures is problematic because it won't help them to read books that don't have lots of pictures in them. Phonics should be a first resort, not a half-hearted last resort to be deployed when guessing fails. In this instance, phonics was employed only in relation to the first letter of the word rather than to decode the whole word.
This makes me even more skeptical that the recent positive result from an RCT was due to the specific Reading Recovery methods.
Process-product research

An overlooked body of research evidence is the wealth of process-product research spanning the 1950s through to the early 1980s. This research investigated correlations between specific teaching methods and gains in student knowledge and understanding. You can see why it has largely been replaced by experiments and quasi-experiments, where factors can be systematically varied. However, I think it is still important and highly suggestive of which approaches are the more effective.
Barak Rosenshine looked into this research and derived principles of ‘direct instruction’. I also like Greg Yates’ discussion in his “How Obvious” paper. If you have the time, it is worth reading Thomas Good and Jere Brophy’s summary of the research, whilst mindful of Good’s warning not to see the findings as a checklist or observation tool.
So, it can be hard to evaluate approaches in education. The common methods have obvious flaws. However, I am not a postmodernist. I believe that people are more similar than different in the way that they learn and that we should ultimately be able to find some good general principles on which to base our decisions.
In the medium to long term, we should find ways to encourage knowledge building through properly controlled, randomised trials. The formation of the EEF in the UK is a good sign. As teachers, we may wish to involve ourselves in such research when we undertake Masters and PhD study. University education departments should focus more on this sort of research and less on ideological opinion papers or woefully flawed trials. I am hopeful.
However, we should not overlook what we already know. Despite the flaws, as Rosenshine notes, the results of investigating educational practices using quite diverse approaches all seem to converge on similar findings: the importance of teacher clarity and explicit instruction, the value of academic time-on-task, the role of practice and testing. There is enough to be going on with.