Anecdotal evidence? Funny hats to the rescue.

I know a little about education and this means that I’m often left perplexed by education journalism. This leaves me uneasy – how much should I trust journalism about subjects that I know far less about? Hmmm…

The latest example that’s left me scratching my head is a piece by the ABC here in Australia. From what I can gather, various research institutes and state governments have joined forces and are getting students to don funny hats with electrodes in them. Why?

“Debate over which education techniques work best are argued using mostly anecdotal evidence, with limited statistics to draw upon beyond test results.

Now researchers from around Australia are going to extraordinary lengths to measure classroom success, right down to observing students’ brain activity and heart rates.”

The story then goes on to describe how this approach is used in a Brisbane school. Children are being studied as they engage in cooperative learning in a science class. 

The odd thing is, if we are trying to work out which education techniques work best then we need to… er… compare some techniques. You can’t just use one technique like cooperative learning and say ‘Preliminary scientific results suggest it is working, and the students agree’. Whether students are strapped to heart-rate monitors or not is rather beside the point. How do we know this is a better technique than anything else? 

The actual reasons given for cooperative learning seem to be those that are always given for cooperative learning – the students like it, the teacher reckons it’s a good idea etc. Nobody reading this piece would be aware that there has been a massive amount of previous research into cooperative learning and the specific conditions needed to make it successful.

Indeed, the entire history of education research seems to have been dismissed as ‘mostly anecdotal evidence’. 

I wonder if this is a wider problem than just education journalism. The comments about anecdotal evidence seem to have come from one of the researchers and this reminded me of something that Thomas Good complained about a couple of years ago.

Good produced a thorough review of the process-product research of the 1950s-1970s along with a colleague, Jere Brophy.  The same research has been summarised more accessibly by Barak Rosenshine. It consists of a large number of mainly correlational studies that show that what is variously described as ‘direct instruction’ or ‘explicit instruction’ is associated with higher student gains. The studies cover a range of subjects, ages and assessments. No, we cannot absolutely determine causes from correlations but the research is systematic and highly suggestive. It’s certainly not ‘anecdotal’.

Good’s complaint was that this research seems to have been forgotten to the extent that researchers today are often posing questions that have already been answered.

I think he has a point. Explicit instruction has passed a number of tests beyond process-product research, tests that range from lab-based experiments to classroom experiments to large-scale studies. I suspect that the problem for researchers is that it’s the wrong answer. 

Perhaps the hats and heart-rate monitors will do a better job, especially if the only techniques that are tested are the right ones.


7 thoughts on “Anecdotal evidence? Funny hats to the rescue.”

  1. Tempe says:

    Yep, I was irritated by that story. How odd to just look at one method and call it successful, largely, as far as I could tell, because the kids liked it. When I was at school I liked group work too because it meant bludge time.

    I often listen to Radio National and am yet to hear a story on the success of direct instruction. Instead there seems to be a steady stream of stories on “controversial” teaching techniques. I wrote to Life Matters asking that they might like to explore some other ideas. I even put your name forward, Greg – hope you don’t mind – as a person to interview. Alas, I heard nothing back.

  2. I often wonder if some of these educational studies need to have their sample size adjusted. Suppose you try a new educational approach in three teachers’ classrooms of 20, 23 and 19 students respectively. What is your sample size? As far as I can see, it is always treated as 62. However, what is being tested is a METHOD being used by a PERSON, and there are three such PERSONs. Is it not also sensible to regard the sample size as three? In many cases the teacher is at least as important a variable as the teaching method. If researchers can handpick the teachers for the test and control groups, they may be able to steer the outcome in any way whatsoever. Even if the teachers are randomly assigned, having only two teachers in each group could seriously sway the results. Remember that when calculating the sample variance one divides the sum of squared deviations from the mean by n − 1, which in this case is 1. It is worse still if there is only one teacher and that teacher is taken as the sample: the sum of squared deviations is divided by zero, so the variance estimate is undefined.
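    [Editor’s note: a minimal sketch of the unit-of-analysis point above. The classroom sizes (20, 23, 19) are from the comment; the class-mean scores are hypothetical, purely for illustration.]

    ```python
    import statistics

    # Hypothetical mean test scores, one per teacher's class.
    class_means = [72.0, 75.0, 69.0]

    # Student-level view: n = 20 + 23 + 19 = 62 observations.
    student_n = 20 + 23 + 19

    # Teacher-level view: the method is delivered by a person, so the
    # arguably independent units are the 3 teachers, not the 62 students.
    teacher_n = len(class_means)

    # Sample variance divides the sum of squared deviations by n - 1;
    # with three teachers that divisor is 2.
    teacher_variance = statistics.variance(class_means)

    print(student_n, teacher_n, teacher_variance)  # 62 3 9.0

    # With a single teacher the divisor is zero and the estimate is
    # undefined; the statistics module refuses to compute it.
    try:
        statistics.variance([72.0])
    except statistics.StatisticsError:
        print("variance undefined with one teacher")
    ```

    Running this shows how drastically the effective sample shrinks once the teacher, rather than the student, is treated as the unit of analysis.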

  3. I ranted about this a while back:

    Hey Researchers …

    EVERY school that tries your new idea is now part of the research; all data should be kept. It never is … in fact education is the only field where all of the research is case-control, the selection bias is ignored, the publication bias is widespread, and the results don’t ever seem to matter … all while the subjects of the research suffer through another set of changes and failures, in the administrators’ vague hope that “Someday, we’ll get it right.” I’m here to tell you that “Someday” hasn’t arrived yet.
