Category: digital instruction

If you and I have had a conversation about math education in the last month, it’s likely I’ve taken you by the collar, stared straight at you, and said, “Can I tell you about the math lesson that has me most excited right now?”

There was probably some spittle involved.

Evan Weinberg posted “(Students) Thinking Like Computer Scientists” a month ago and the lesson idea has haunted me ever since. It realizes the promise of digital, networked math curricula as well as anything else I can point to. If math textbooks have a digital future, you’re looking at a piece of it in Evan’s post.

Evan’s idea basically demanded a full-scale Internetization, so I spent the next month conspiring with Evan and Dave Major to put the lesson online where anybody could use it.

It’s so easy to start. While most modeling lessons begin by throwing information and formulas and dense blocks of text at students, Evan’s task begins with the concise, enticing, intuitive question “Is this blue?” That’s the power of a digital math curriculum. The abstraction can just wait a minute. We’ll eventually arrive at all those equations and tables and data but we don’t have to start with them.

Students embed their own data in the problem. By judging ten colors at the start of the task, students are supplying the data they’ll try to model later. That’s fun.

It’s a bridge from math to computer science. Students get a chance to write algorithms in a language understood by both mathematicians and computer scientists. It’s analogous to the Netflix Prize for grown-up computer scientists.

It’s scaffolded. I won’t say we got the scaffolds exactly right, but we asked students to try two tasks in between voting on “blueness” and constructing a rule.

They guess, based on RGB values, if a color will be blue. This was instructive for me. It was obvious to me that a big number for blue and little numbers for red and green would result in a blue color. I learned some other, more subtle combinations on this particular scaffold.

This is the modeling cycle. Modeling is often a cycle. You take the world, turn it into math, then you check the math against the world. In that validation step, if the world disagrees with your model, you cycle back and formulate a new model.

My three-act tasks rarely invoke the cycle, in contrast to Evan’s task. You model once, you see the answer, and then you discuss sources of error. But Evan’s activity requires the full cycle. You submit your first rule and it matches only 40% of the test data, so you cycle back, peer harder at the data, make a sharper observation, and then try a new model.
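To make that cycle concrete, here is a minimal sketch of one trip around it in JavaScript. The rule functions and the sample judgments are hypothetical illustrations, not Evan’s actual code or the contest’s real data; I’m only assuming the data looks like a list of RGB values paired with the class’s blue-or-not verdicts.

```javascript
// First model: call a color blue if its blue channel dominates.
function firstRule(color) {
  return color.b > color.r && color.b > color.g;
}

// Validation step: check a model against the class's judgments.
function accuracy(rule, data) {
  var hits = data.filter(function (c) { return rule(c) === c.isBlue; });
  return hits.length / data.length;
}

// Hypothetical student judgments: {r, g, b} plus the class's verdict.
var judgments = [
  { r: 30,  g: 40,  b: 220, isBlue: true  },
  { r: 200, g: 60,  b: 50,  isBlue: false },
  { r: 90,  g: 200, b: 210, isBlue: false }, // teal: blue dominates, but students said "not blue"
  { r: 10,  g: 10,  b: 120, isBlue: true  },
];

console.log(accuracy(firstRule, judgments)); // the teal color breaks the first rule

// Cycle back: a sharper observation — blue has to dominate by a margin.
function secondRule(color) {
  return color.b > color.r + 40 && color.b > color.g + 40;
}

console.log(accuracy(secondRule, judgments));
```

The teal color is the whole point: the first model matches it, the class’s data disagrees, and that disagreement is what sends a student back around the cycle.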

The contest is running for another five days. The top-ranked student, Rebecca Christainsen, has a rule that correctly predicts the blueness of 2,309 out of 2,594 colors for an overall accuracy of 89%. That’s awesome but not untouchable. Get on it. Get your students on it.

I was walking with my wife along the River Corrib in Galway last weekend when we got into an argument that lasted the rest of the walk. I’ll present our two arguments and some illustrative video. Then I’d like you or your students to help sort us out.

Argument A: It would be much harder to swim to the other side of the river in the fast-moving water than in still water.

Argument B: It would be just as easy to swim to the other side of the river in the fast-moving water as in still water.

I hope this gets as out of hand for you and your students as it did for me and my wife.

This excellent question exhibits a quality not often found in math curricula: it hits the specificity sweet spot. It’s specific enough for a student to answer, but not so specific that every kid will agree on the answer. Students making different assumptions will arrive at different responses, and that disagreement creates a real mathematical argument.

It’s great, first of all, that Khan Academy has all their student exercise code on GitHub for everybody to see. I don’t know any other adaptive system that does that. I figured there had to be a better way to reward them for that transparency than the criticism and judgment I’m about to post here, so I made them a badge also.

Their code illustrates the different ways good math teachers and good programmers try to figure out what students know.

In each of the dozen files I’ve reviewed, Khan Academy first generates some random numbers that meet certain criteria. In the proportions assessment, they call for three random unique integers between 5 and 12. No decimals. No negatives. No zeroes.

var numbers = randRangeUnique( 5, 12, 3 );

Then they use those numbers to generate exercises. With proportions, they insert an “x” randomly into that list of numbers. The final order of that list determines the proportional relationship that students will have to solve.

numbers.splice( randRange( 0, 3 ), 0, "x" );
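Those two lines don’t run on their own, so here’s a self-contained sketch of what they do together. The `randRange` and `randRangeUnique` implementations below are my stand-ins, assumed from the names and the blog’s description, not Khan Academy’s actual utility code.

```javascript
// Stand-in: random integer in [min, max], inclusive (assumption, not KA's code).
function randRange(min, max) {
  return min + Math.floor(Math.random() * (max - min + 1));
}

// Stand-in: n unique random integers in [min, max] (assumption, not KA's code).
function randRangeUnique(min, max, n) {
  var pool = [];
  for (var i = min; i <= max; i++) pool.push(i);
  var picks = [];
  while (picks.length < n) {
    picks.push(pool.splice(randRange(0, pool.length - 1), 1)[0]);
  }
  return picks;
}

// The two lines from the exercise: three unique integers between 5 and 12,
// with "x" spliced in at a random position.
var numbers = randRangeUnique(5, 12, 3);
numbers.splice(randRange(0, 3), 0, "x");

// The final order of the list determines the proportion the student solves,
// e.g. numbers = [7, "x", 5, 11] poses 7/x = 5/11.
console.log(numbers[0] + "/" + numbers[1] + " = " + numbers[2] + "/" + numbers[3]);
```

Run it a few times and you’ll see the point of the critique below: every exercise is the same proportion skeleton with fresh random numbers poured in.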

But good teachers are more than random number generators. They create exercise sets that increase in difficulty, that ask students to demonstrate mastery in different contexts, all because proportions are conceptually difficult but procedurally simple. It’s extremely easy for students to get by on an instrumental understanding of proportions alone. (e.g. “All you hafta do is multiply the two numbers that are across from each other and divide by the number across from the x.” Boom. It’s badge time.) It’s especially easy when the only thing that changes about the problem is the random numbers.

But forget good teachers for a minute. Let’s look at the bar set by various standard-setting organizations. Here is what you have to do to demonstrate mastery of proportions on a) Khan Academy, b) the California Standards Test, c) the Smarter Balanced Assessment.

Khan Academy

You’ll do a handful of problems just like this, with different random numbers in different places.

California Standards Test

Smarter Balanced Assessment

The difficulty and value of the assessments clearly increase from Khan Academy to the CST and then Smarter Balanced. (I’m hesitant to guess how well a student’s score on the Smarter Balanced Assessment will correlate to all her practice on Khan Academy.)

Here we find a difference between good math teachers and good programmers. The good math teacher can create good math assessments. The good programmer can make things scale globally across the Internet. The two of them seem like a match made in math heaven. Just get them in a room together, right? But the very technology that lets Khan Academy assess hundreds of concepts at global scale — random number generators, string splices, and algorithmically generated hints — has downgraded, perhaps unavoidably, what it means to know math.

2012 Dec 13. Peter Collingridge points out in the comments that Khan Academy has a proportions assessment comparable to the California Standards Test. If they have anything similar to the Smarter Balanced Assessment, please let me know.

I’ll try not to be ideological here about photos and videos in modeling tasks. If you have another way to achieve the same cathartic reaction we find 38 seconds into the video, drop me a note in the comments. I’ll take it.

Last year I asked you guys to submit your own graphs and stories which I edited together by hand.

Today, in a joint collaboration with the BuzzMath team, we’re releasing 24 of those videos for immediate download and use in your classrooms, all tagged by math content and context. (e.g. “a step function about ponies”)

I’m never gonna do what I did a year ago ever again. Editing all those videos by hand took months of my time and probably a year off my life. But I would like to know what holes you see in this library and what we can do to plug them.

Do we need more videos with periodic functions? Do we need more videos featuring bacon? Suggest them in the comments. If it’s a good idea and you can film the video, I’ll make your graphing story on a case-by-case basis. This thing will grow larger and awesomer.

BTW. Be sure to drop a tweet @BuzzMath thanking them for their killer work here.