Following my recent piece on Dan Meyer’s TED Talk, an interesting discussion developed in the comments. I want to touch on a couple of these ideas in the coming weeks but the first is the notion of teaching children how to manage presented information. In his TED talk, Meyer makes it clear that he thinks that this is critical:

“Here’s an example from a physics textbook. It applies equally to math. Notice, first of all here, that you have exactly three pieces of information there, each of which will figure into a formula somewhere, eventually, which the student will then compute. I believe in real life. And ask yourself, what problem have you solved, ever, that was worth solving where you knew all of the given information in advance; where you didn’t have a surplus of information and you had to filter it out, or you didn’t have sufficient information and had to go find some. I’m sure we all agree that no problem worth solving is like that.”

“…notice that all the information written on there is stuff you’ll need. None of it’s a distractor, so we lose that. Students need to decide, “All right, well, does the height matter? Does the side of it matter? Does the color of the valve matter? What matters here?” Such an underrepresented question in math curriculum. So now we have a water tank. How long will it take you to fill it up? And that’s it.”

In my post, I suggested that Meyer’s ideas are at odds with cognitive load theory. He doesn’t explicitly state this in the talk but I think that the implication is that *all* problems should be of this nature, including for novice learners and that we should perhaps learn primarily through the use of such problems. This seems consistent with the inquiry approach promoted by Jo Boaler in her recent book as well as the U.S. ‘reform’ mathematics agenda.

Yet in the comments Meyer suggests that he doesn’t mind *when* open-ended problems are addressed as long as they are addressed at some point, leaving the door open to a sequence of explicit instruction prior to this. This is a welcome development – I have no problem with the use of open-ended tasks when students are sufficiently expert. However, I am not sure that this is the message that the followers of Meyer are receiving and I suspect these comments will have surprised a few people.

My argument was that distracting or missing information represents extraneous cognitive load. In other words, it is extra stuff to process in addition to processing the problem itself. This would overload novice learners who do not have sufficiently developed schemas in their long-term memories and who will therefore have to do all of these things simultaneously in working memory. Interestingly, Meyer disagreed with this and claimed that managing presented information actually represented *intrinsic* load:

“The case for these tasks from the standpoint of cognitive load is, first, that temporarily subtracting information increases *intrinsic* load, not *extraneous* load. Nowhere in traditional textbooks do students receive a media res context and consider what questions could be asked and what information would and wouldn’t be relevant for those questions. These tasks *do* increase load. But that load is intrinsic. That’s a normative claim, of course, not an empirical one. I’m saying, “Math should look like this.” Reasonable people can disagree.”

I sort-of agree. If the *aim* is to teach students how to manage presented information then we might call this ‘intrinsic’ load. Does the problem-solving component therefore become extraneous? I am confused by this thought. Alternatively, we could adopt the line taken by Kalyuga and Singh in their recent paper and claim that cognitive load theory only really applies when the aim is the construction of domain-specific schema. In this case, the more domain-general skill of managing presented information would not be covered by the theory and it would therefore be inappropriate to discuss it in terms of intrinsic or extraneous load.

The Kalyuga and Singh piece remains largely neutral on whether these other kinds of objectives can be achieved but let’s examine the case for this particular skill.

I can’t picture teaching students how to exclude distracting information or how to find missing information. Indeed, Meyer doesn’t suggest an explicit approach for doing this, assuming that they will pick it up through discovery learning. But what would this look like?

For distracting information, we might suggest that students make a list of what is relevant to solving the problem and what is irrelevant. For missing information we could suggest that they do the same. Students could then match this list to what is available and notice redundancy and/or gaps. Presumably, this is the process that we wish them to pick up through induction. But hang on a minute: how can they do this if they don’t already have relevant problem-solving schema? How can you decide what is or is not relevant unless you know how to solve the problem?
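The matching process described above amounts to two set differences. A toy sketch, purely illustrative (the quantity names are invented, borrowing the valve from Meyer’s water-tank example):

```python
# Hypothetical distance-rate-time problem: what the solver's schema says
# is needed versus what the problem statement actually supplies.
needed = {"walking speed", "riding speed", "total time"}
given = {"walking speed", "riding speed", "total time", "valve colour"}

surplus = given - needed   # distractors to exclude
missing = needed - given   # information to go and find

print(sorted(surplus))  # ['valve colour']
print(sorted(missing))  # []
```

The catch is in the first line: the `needed` set comes from already knowing how to solve the problem, which is exactly the circularity the paragraph above points out.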

It reminds me of Dan Willingham’s discussion of critical thinking:

“… if you remind a student to “look at an issue from multiple perspectives” often enough, he will learn that he ought to do so, but if he doesn’t know much about an issue, he can’t think about it from multiple perspectives.”

At the very least, this confirms the logic of first teaching novices the correct problem-solving methods, and we know that the most effective way to do this is through explicit instruction such as by using worked examples. Perhaps Meyer agrees with this position.

There may then be a role for exposing relative experts to problems with confusing information and I think this could be warranted. If students only ever experience problems where they are given exactly the information that they need then it might not occur to them that problems might be formulated differently. Meyer might have a valid point here. However, I also think that it is likely that this is a biologically primary skill. We have probably evolved to use the strategies of matching, exclusion and search in the way that we have evolved to use means-end analysis. In this case, it is not something that needs excessive levels of instruction beyond a bit of prompting.

Since I had too many comments on your previous post, I am putting this here.

I just came across this quote, quoted by Richard Feynman in his analysis of the lack of success of his physics course in 1961-3:

But then, “The power of instruction is seldom of much efficacy except in those happy dispositions where it is almost superfluous. ” (Gibbon)

Richard P. Feynman, 1963

Of course, the really jaded teacher would probably apply this to any other teaching method!

I agree with your view that the proper starting point for learning how to solve problems is problems that include the data needed to solve them. One has to start somewhere.

An example of this is the plight of a visitor to a new city trying to find his way around. In getting from Point A to Point B, the visitor may be given directions that consist of taking main roads; the route is simple enough that he is not overburdened by complex instructions. In fact, well-meaning advice on shortcuts and alternative back roads may cause confusion and is often resisted by the visitor, who, when unsure of himself, insists on the “tried and true” method.

The visitor views these main routes as magic corridors that get him from Point A to B easily. He may have no idea how they connect with other streets, what direction they’re going, or other attributes. With time, after using these magic corridors, the visitor begins to see the big picture and notices how various streets intersect with the road he has been taking. He may now even be aware of how the roads curve and change direction, when at first he thought of them as more or less straight. The increased comfort and familiarity the visitor now has brings with it an increased receptivity to learning about – and trying – alternative routes and shortcuts. In some instances he may even have gained enough confidence to discover some paths on his own.

Also, as students gain proficiency in solving problems, the data needed to solve the problem can be made more difficult, or not so obvious. For example: “Tom goes on a hike. After walking for a while at 5 miles per hour, he discovers he has forgotten his lunch. A passing truck takes him home at 20 miles per hour. He arrives home 1 hour after he started on the hike. How far had he walked?”

In this problem, the distance is not given, nor is it necessary. What is necessary, though, is to see that the distance he has hiked is the same distance to return home, thus dictating what values must be set equal to each other. Students find this difficult at first because they are distracted by the distance not being stated. Once they realize the distance hiked equals distance to home, they can represent this equality in two different ways by the different rates and times. Thus, variants on the initial worked example function to stretch the student into applying prior knowledge in new situations.
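Spelling out the equality described above (distance out equals distance back, and the two times sum to one hour):

```latex
% Let t be the time in hours spent walking at 5 mph,
% so the ride home takes (1 - t) hours at 20 mph.
% Equal distances out and back:
5t = 20(1 - t)
\quad\Rightarrow\quad 25t = 20
\quad\Rightarrow\quad t = 0.8 \text{ h}
% Distance walked:
d = 5 \times 0.8 = 4 \text{ miles}
```

As a check: riding 4 miles home at 20 mph takes 0.2 hours, which together with the 0.8 hours of walking gives the stated 1 hour.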

Why wouldn’t the skill of discerning what information is needed, when you have more than you need, be treated simply as another rung on the ladder? Just as problems might go from easy positive numbers, to positive and negative, and on to less pleasant numbers, they could go on to more information than you need.

The same DI approach would work – worked examples to ensure students are aware of the issue, distributed practice to ensure students remember this type of problem.

If you look at how questions with extraneous information are used in math competitions, they seem to be used to add an extra challenge and test the students’ confidence in their solutions. They seem particularly useful where you want to test long-term memory rather than just whether a student can recall the algorithm or formula learned in the last week.

As an aside, if everywhere you replaced the phrase “understanding of a problem” with “confidence in your solution”, would there be any way to create a test of one that didn’t test the other, and would it remove the ambiguity surrounding what understanding means?

(Here a test of confidence would include checking whether the solution is incorrect and the confidence misplaced.)

Interesting stuff. This exposes some questions about experiential learning with ‘biologically secondary’ content that I haven’t quite wrapped my head around yet.