Anecdotal evidence? Funny hats to the rescue.

Posted: May 1, 2016
I know a little about education and this means that I’m often left perplexed by education journalism. This leaves me uneasy – how much should I trust journalism about subjects that I know far less about? Hmmm…
The latest example that’s left me scratching my head is a piece by the ABC here in Australia. From what I can gather, various research institutes and state governments have joined forces and are getting students to don funny hats with electrodes in them. Why?
“Debate over which education techniques work best are argued using mostly anecdotal evidence, with limited statistics to draw upon beyond test results.
Now researchers from around Australia are going to extraordinary lengths to measure classroom success, right down to observing students’ brain activity and heart rates.”
The story then goes on to describe how this approach is used in a Brisbane school. Children are being studied as they engage in cooperative learning in a science class.
The odd thing is, if we are trying to work out which education techniques work best then we need to… er… compare some techniques. You can’t just use one technique like cooperative learning and say ‘Preliminary scientific results suggest it is working, and the students agree’. Whether students are strapped to heart-rate monitors or not is rather beside the point. How do we know this is a better technique than anything else?
The actual reasons given for cooperative learning seem to be the ones that are always given for cooperative learning: the students like it, the teacher reckons it's a good idea, and so on. Nobody reading this piece would be aware that there has been a massive amount of previous research into cooperative learning and the specific conditions needed to make it successful.
Indeed, the entire history of education research seems to have been dismissed as ‘mostly anecdotal evidence’.
I wonder if this is a wider problem than just education journalism. The comments about anecdotal evidence seem to have come from one of the researchers and this reminded me of something that Thomas Good complained about a couple of years ago.
Good, along with a colleague, Jere Brophy, produced a thorough review of the process-product research of the 1950s-1970s. The same research has been summarised more accessibly by Barak Rosenshine. It consists of a large number of mainly correlational studies showing that what is variously described as 'direct instruction' or 'explicit instruction' is associated with higher student gains. The studies cover a range of subjects, ages and assessments. No, we cannot absolutely determine causes from correlations, but the research is systematic and highly suggestive. It's certainly not 'anecdotal'.
Good’s complaint was that this research seems to have been forgotten to the extent that researchers today are often posing questions that have already been answered.
I think he has a point. Explicit instruction has passed a number of tests beyond process-product research, ranging from lab-based experiments to classroom experiments to large-scale studies. I suspect that the problem for researchers is that it's the wrong answer.
Perhaps the hats and heart-rate monitors will do a better job, especially if the only techniques that are tested are the right ones.