Picture this scenario: It is near the start of the school year and a group of Year 10 maths teachers gives their cohort a quiz to complete. Once the results are in, it becomes clear that a large number of students cannot solve linear equations.
A discussion ensues. It turns out that nobody has actually taught linear equations to the Year 10s. Considerable time is then spent generating hypotheses and subtly blaming last year’s Year 9 teachers.
This is all a waste of time.
If you want to generate useful inferences from assessment data then it needs to be on something the students have actually been taught. That way, you can attempt to evaluate the effect of that teaching, the one factor we ultimately control.
The only useful conclusion that can be drawn from data on something that we haven’t taught the students is ‘we need to (re)teach that’ or ‘we don’t need to (re)teach that’. Which is a very brief discussion.
This does not mean that we should avoid asking questions based on previous content. There is a lot of solid research supporting the practice of repeatedly returning to and quizzing previously taught material. What we shouldn’t do is overanalyse this data, because kids forget stuff and that’s quite normal.
Focusing on the content you have taught seems like a simple idea in the context of maths, but it becomes murkier when we consider something like writing.
As I’ve argued before, we often teach writing backwards by asking students to do it and only afterwards telling them what we wanted them to do. There is no point spending hours pondering why students use connectives incorrectly if the answer is that you haven’t taught them how to use connectives correctly, or that you haven’t taught them this within the last six months.
If we want to learn lessons about how to improve our teaching – and I’m sure we all do – then the trick is to analyse students’ capacity to do the things we’ve taught them and not other stuff.