It can be hard to cut through education research because there is such a great volume of it, most of which is not very helpful. So it’s useful to have some heuristics to fall back on. I have a few that I can recommend.
Firstly, I would ignore anything excessively jargon-laden or that mentions French philosophers in its abstract. Such papers are unlikely to offer much to a practising teacher. I’ve picked my way through a number of them now and, if there is a point to them, it tends to be quite trivial.
If the paper involves an experiment then take a look at the methods section. Surprisingly often, this will have a great big hole in it. For instance, the control and experimental groups might differ in important ways before the intervention is even applied.
However, I’m starting to think that there is something even more important to look for. Any intervention should have a plausible mechanism. The writers of the paper need to be able to give a good account of how their intervention works. This is important for evaluating the results of any statistical tests because it relates to the ‘baseline’ probability: something that is never measured by the experiment itself.
Imagine, for instance, we randomised students into two groups and got a wizard to cast a spell on one of the groups before we gave both groups a test. Our null hypothesis would be that casting spells makes no difference to the test results. Imagine we then analyse the data and there is a difference. We do a statistical test and find that there is only a 1 in 20 chance of obtaining this difference if the spell had no effect. How would you interpret this result? I’d put it down to chance because I can see no plausible mechanism by which spells can affect test results.
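To put rough numbers on that intuition, here is a minimal sketch (in Python) that treats the baseline probability as a Bayesian prior. The 1-in-20 false positive rate comes from the example above; the 1-in-1000 prior that spells work at all and the 80% chance of detecting a genuine effect are purely illustrative assumptions, not figures from any study.

```python
# Minimal sketch: how the 'baseline' (prior) probability changes the reading of
# a 1-in-20 result. The 1-in-1000 prior and 80% detection rate are illustrative
# assumptions, not figures from any study.

def posterior_effect_is_real(prior=0.001, power=0.8, false_positive_rate=0.05):
    """Probability the intervention really works, given a 'significant' result."""
    true_positives = prior * power
    false_positives = (1 - prior) * false_positive_rate
    return true_positives / (true_positives + false_positives)

print(posterior_effect_is_real())            # ~0.016: almost certainly chance
print(posterior_effect_is_real(prior=0.5))   # ~0.94: a plausible mechanism changes everything
```

On those assumptions, even a ‘significant’ result leaves less than a 2% chance that the spell works, which is why chance remains the better explanation.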
One way you can roughly evaluate a proposed mechanism is to ask: How far removed is the intervention from the desired result? Is it proximal (close) or distal (far away)? It is much easier to understand the mechanism of a proximal intervention than a distal one. A distal intervention is likely to rely on a chain of influences, none of which correlate 100%, so by the time you pass through a few of the links in the chain, any effect may have washed out.
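As a rough illustration of that washing-out, here is a small simulation sketch (not from any paper) in which each link in the chain correlates imperfectly with the next. The per-link correlation of 0.5 and the three links are assumptions chosen only to make the point.

```python
# Toy simulation: an effect attenuating along a causal chain in which each link
# correlates only 0.5 with the next (an illustrative figure, not a measured one).
import numpy as np

rng = np.random.default_rng(0)
link_strength = 0.5                       # assumed correlation per link
intervention = rng.normal(size=100_000)   # the thing we manipulate

outcome = intervention
for _ in range(3):                        # three links between cause and effect
    noise = rng.normal(size=outcome.size)
    outcome = link_strength * outcome + np.sqrt(1 - link_strength**2) * noise

# Adjacent links correlate at ~0.5, but after three links the end-to-end
# correlation has fallen to roughly 0.5 ** 3 = 0.125.
print(np.corrcoef(intervention, outcome)[0, 1])
```

A proximal intervention sits one link away from the outcome; a distal one sits several, so even a genuine effect can be hard to detect by the end of the chain.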
This proximal versus distal distinction is made by Castles and McArthur in a fascinating new Nature paper about reading interventions. I met one of the authors, Anne Castles, at the recent Language, Literacy and Learning conference in Perth.
Castles and McArthur suggest that proximal reading interventions such as phonics and vocabulary training have a much better evidence base than distal interventions such as fish oil, coloured lenses or, heaven forbid, chiropractic.
I wish that the Education Endowment Foundation in England would pay more attention to mechanisms. If they did, they might pause before throwing even more money at Philosophy for Children, a distal intervention that is intended to improve English and maths.
I’ve heard exactly the same argument used (effectively) to distinguish between ‘evidence based medicine’ and ‘science based medicine’.
Whilst it’s true that RCTs are something of a gold standard, with scarce resources, ethical constraints, error margins and so on, and a lot of very big questions to answer, we can never hope for ‘full coverage’, whatever that would look like!
The opportunity cost is too great to waste money and time on the credulous.
Castles & McArthur link brings me to Willingham book (When Can You Trust the Experts?: How to Tell Good Science from Bad in Education). Were you trying to multitask when you posted this? 😉 Also, hypothesis is spelled wrong.
Thanks. I’ll fix
In terms of heuristics, the Lindy effect is probably a really useful rough rule of thumb to apply to educational research. https://en.wikipedia.org/wiki/Lindy_effect Essentially, if the research supports principles that have been around for a while, then those principles are probably going to be around for about as long again. Conversely, if the research claims something brand new then it is more likely to perish.
Here’s a question for you though, Greg, about another possible way of looking at and sifting through research: does it mention potential side effects or not? I find this a very interesting approach to educational research. http://zhaolearning.com/wp-content/uploads/2017/02/SideEffectsPublished.pdf Although I suspect you will disagree strongly with the discussion, described within it, of Peterson’s work on direct instruction in terms of side effects.
I believe there should be the equivalent of the Hippocratic Oath in education in general but don’t get fooled into thinking that the second article by Yong Zhao is impartial.
Zhao makes his money travelling the world prancing around stages at education and other conferences spouting his untested creativity and personalisation crap.
Impartial or not isn’t really the point though, is it? Greg’s main point is about useful heuristics for teachers to sift through edu research. He offers the proximal vs distal heuristic, which is very useful. Zhao’s heuristic, which is whether or not the research mentions possible side effects (even if you disapprove of his sales techniques and stage manner), is also a potentially useful one for looking at edu research.
I think Greg may have talked about the side effects article before. I found it interesting at first before the lazy reasoning started driving me mad.
Some useful heuristics. And I agree that proximal interventions have a much better evidence base than distal ones. However, it doesn’t follow that the distal interventions aren’t valid in individual cases. Whether the intervention is helpful will depend on the root cause of the reading difficulty. If there’s been little research, or the research is poor, you can’t conclude much from it either way.
Speaking of jargon, how is distal a better word than distant? If the authors had simply said distant or close interventions relative to measured effects, would anyone have had to explain the words used? They could also qualify this with ‘physically distant’, or refer to a large or small interval between intervention and measured effect, to be more precise about whether or not they mean physical distance.
Just because doctors like to sound latiny doesn’t mean it is a good thing.
I like the idea of a heuristic for sifting through educational research and think that your arguments re: proximal vs distal and scrutinizing the methods sections are very useful.
However, I can’t help but think that a major problem with teachers engaging with evidence-based practice is that in an article about making sense of and ignoring ‘jargon-laden’ literature, there is quite a lot of jargon within. ‘Heuristics’, ‘proximal’, ‘distal’, ‘null hypothesis’ and ‘baseline probability’ are terms that would strike fear into the hearts of many teachers and, rather than support them in picking apart and making use of research, might actually remove them further from it.
I understand that these are the correct terms and that research and academic literature requires accuracy of expression, so what is the best way of getting teachers to interact with research? At the moment, people rely on oversimplified, biased approaches in EEF and elsewhere because there are very few natural middlemen. Indeed, as I write this, I feel the need to ask you to excuse the gendered language: I wanted to say intermediary, but realized that’s jargonistic too!
Should there be formal training given to teachers to develop critical skills or should we aim for simpler research? My vote’s with the former.
What you want is good research in simpler language. Distal and proximal are borrowed from anatomy where they simply mean far or near in distance. But here they are generalized to mean far or near in time or chain of cause and effect and maybe distance. It would be easier to follow and more precise to say far apart in time or long term, or a long chain of causes and effects.
Baseline probability is simply the expectation of how likely the outcome is: is it very surprising, completely expected or somewhere in between? Given that, by definition, this is not a measured probability, using terms everyone can relate to makes more sense than using ones that need explaining.
Here heuristic is just a practical method.
Pity this is pay per view
http://www.chronicle.com/article/Why-Academics-Writing-Stinks/148989
Thanks Stan, I think you’ve nailed it on the head. The problem isn’t the method and the results, but the impenetrability of the results. The ideas are sound, but although I understand the terms above, many don’t. How can we ensure research reaches those it’s intended for? We need to engage as many teachers as possible in research and academic literature. Does EBT put teachers off by definition?
I take your point but most of the words you mention are explained in the post.
I agree that the devil is generally in the design, although it’s sometimes in the analysis. Sometimes both (e.g. Boaler). My favourite design error study is Kamii and Dominick’s multi-year study of a basket case of a school, comparing “algorithms” teaching to “no algorithms” teaching and purporting to show that it’s harmful to teach students to perform arithmetic algorithms instead of letting them make up their own methods.
The design has numerous problems, too many to list here, but among them:
– TELLING teachers, parents (and presumably students) at the beginning of the study that teaching algorithms will do students harm. (And I wonder how they got parents to agree after that to put their kids in the “algorithms” group, how they got them to sign parental waivers, how they convinced the teachers to use that method, and what they wrote on their legally required clearance forms for experimenting on human subjects.)
– Not having fixed treatment groups. Each year students are randomly shuffled into either of the two treatment groups. Of course, since algorithms instruction is sequential and involves a multi-year progression, this effectively breaks up the coherence of their instruction and makes it unlikely that a student receives anything resembling ordinary algorithms instruction.
– The “no-algorithms” group concentrated entirely on “mental math” — developing tricks enabling small-number multi-digit procedures that can be carried out in one’s head. Algorithms classes involved using a pencil and paper to carry out the usual vertical, digit-aligned methods, recording intermediate steps on paper.
Then … the post-test for each year consisted of a completely oral examination on multidigit arithmetic, with paper and pencil forbidden. So the algorithms group had to operate in an environment unlike their instruction while the no-algorithms group essentially duplicated their instruction.
The results, of course, were atrocious … for both groups. But there was sufficient discernible effect in the numbers (as analysed by the researchers, at least) to suggest the algorithms group did more poorly. I would say that was a foregone conclusion, and the design was constructed to attempt to force that outcome. Even so it was evidently very weak, and the researchers ended up throwing out some “inconvenient” data.
Of course when the study came out, with their prescription against allowing the teaching of algorithms in early grades, the NCTM eagerly published it, and the educational community accepted the results uncritically. The study is cited by the makers of the WNCP curriculum — adopted in 8 Canadian provinces, which excludes all mention of vertical algorithms, long division etc (and a number of other important algorithms and canonical forms, like “Least Common Denominator” — basically the elements many believe cannot be “discovered” by students independently without direct guidance).
So I would add … watch for a design that seems clearly inclined toward a foregone conclusion — i.e., the one the authors “arrive at” in the end. Also note the prior positions of the authors for signs that this may indeed have been the intended destination. In Kamii’s case she had been advocating for years the elimination of teaching algorithms, but without empirical evidence to support her proposal. The transparent purpose of the study was apparently to provide that evidence.