My wife is out of town this weekend, so I decided to stay up late Saturday night and scratch a figurative itch that’s been in the back of my mind for a few years — XKCD’s 3-velociraptor problem. (I spent early Saturday night at an Ira Glass show at Wolf Trap, so apparently I don’t turn into a complete hermit when the mate is away.)

I’m not a mathematician (my brother got those genes), so I reached for my trusty brute-force, computer-simulation experience to solve the problem. At first glance, you’d think that you want to run toward the limping velociraptor at the top. And you’d be right, but to a surprisingly small extent. The optimal angle is about 32 degrees, only two degrees different from the angle you’d run to bisect the two nearest vicious predators if all velociraptors were healthy. Of course, there’s a symmetrical solution on the left side (about 148 degrees) as well, as demonstrated by the gif below. You live for all of 3.06 seconds and get about 18 meters away.

The reason the answer differs so little in the top-VR-is-injured variation is that over three seconds, the two healthy beasts only reach 12 m/s, just 2 m/s faster than the injured animal. I was disappointed when I realized that Randall probably didn’t solve the problem before writing the comic; if he had, I would have expected a more interesting conclusion.
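The brute-force simulation can be sketched in a few lines. This is a minimal version under my assumptions: three raptors at the corners of a 20 m equilateral triangle with the human at the centroid, pure-pursuit raptors accelerating at 4 m/s² (inferred from the 12 m/s figure above) up to a 25 m/s healthy top speed, a constant 6 m/s human, and a guessed capture radius, so the precise optimum may differ slightly from my original run.

```python
import math

def survival_time(angle_deg, injured_top_max=10.0, dt=1e-3):
    """Seconds a human fleeing at a fixed angle survives pure-pursuit raptors."""
    side = 20.0                       # assumed triangle side length (m)
    r = side / math.sqrt(3)           # centroid-to-vertex distance
    human = [0.0, 0.0]                # human starts at the centroid
    # Top raptor (at 90 degrees) is the injured one; the other two are healthy.
    raptors = [{"pos": [r * math.cos(a), r * math.sin(a)], "vmax": vmax}
               for a, vmax in [(math.pi / 2, injured_top_max),
                               (7 * math.pi / 6, 25.0),
                               (11 * math.pi / 6, 25.0)]]
    accel, human_speed = 4.0, 6.0     # raptors hit 12 m/s at t = 3 s (per the post)
    theta = math.radians(angle_deg)
    t = 0.0
    while t < 60.0:                   # raptors are faster, so capture is certain
        human[0] += human_speed * math.cos(theta) * dt
        human[1] += human_speed * math.sin(theta) * dt
        for rap in raptors:
            s = min(accel * t, rap["vmax"])   # accelerate up to top speed
            dx = human[0] - rap["pos"][0]
            dy = human[1] - rap["pos"][1]
            d = math.hypot(dx, dy)
            if d < 0.3:               # capture radius (assumed)
                return t
            rap["pos"][0] += s * dx / d * dt
            rap["pos"][1] += s * dy / d * dt
        t += dt
    return t
```

Sweeping whole-degree headings with `max(range(360), key=survival_time)` picks out the best angle, and passing `injured_top_max=5.0` lets you explore the slowed-raptor variant discussed below.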

Thus, in the vein of XKCD’s “What If” posts, let’s change some of the parameters. First, obviously humans can run in different directions whenever we want; that’s the benefit of being the world’s (current, supposed) smartest species. However, because the aforementioned injury is barely holding the top raptor back, and because you’re still much slower than it (6 m/s vs. 10 m/s), the solution in this variant doesn’t change.

But what if you could outpace the lame duck (so to speak)? Setting the maximum speed of the top theropod at 5 m/s changes things dramatically. You should run at the top predator at only a slight angle (~10° from true north), brush right past it, and head for the hills. While the lower raptors will catch you (in 3.95 seconds, to be exact), in my gif our terrified human runs right off the 30 m × 30 m screen to arbitrarily-defined safety!

Jackie and I just finished participating in the MIT Mystery Hunt, and had a blast as always. The title was “20,000 Puzzles under the Sea” (plus or minus 20,000 puzzles), complete with a steampunk theme. Puzzles will likely be available to all in a couple days.

One of the puzzles was Ariel’s Scavenger Hunt, where we had to find items for Ariel from The Little Mermaid. Bonus points if the items worked under water and were steampunk-themed. So, for the category of “wooing the prince,” and given Ariel’s penchant for underwater singing, we teamed up with fellow PuzzFeeder Kim to deliver the following song to Ariel.

Under the Steam
The gadgets are always shiny
On somebody else’s sub
You dream about going down there
But whether to – there’s the rub
Just look at the world around me
Get rid of your rusty junk
Our tech is the classic style
So join me and turn steam punk!

Under the steam
Under the steam
Never grow weary
Down where it’s gear-y
Cuz that’s the theme.

In this Victorian paradise
Love, nuts and bolts — they fit so nice
We’ll be romancing
Kissing and dancing
Turn up the steam.

Under the steam
Under the steam
Love is the query
Down where it’s gear-y
Now it’s a meme!

It’s time for the MIT Mystery Hunt, a tradition I participated in as an undergrad and have started to revisit recently. My new wife and I head up to the hunt tomorrow, but if you can’t participate, here are the puzzles from our wedding mystery hunt last September. The answers to the puzzles directed folks around the Tidal Basin. Have fun with these!

The Map, which will help clue you in to what the possible answers might be

Puzzle 1: Fill my heart (errata: 28 Down should have no text associated with it, rather than reading “no clue”)

I’ve been tapped as the new executive director of the Analyst Institute. I’ve worked with AI’s fantastic team before, and I could not be more excited to continue and expand the culture of experimentation that has developed on the Democratic/progressive side.

I spent a good chunk of Saturday and Sunday ignoring the NCAAs, devoting my time instead to something only slightly less frivolous: crafting an algorithm to beat the awesomely addicting game 2048. Spoilers below.

The Monkey Cage blog was kind enough to run my guest post about the impact of 2012 GOTV efforts [1]. I relied on observational data from Catalist to conduct my analysis (many thanks there) because randomized experiments (which are deservedly the “gold standard”) are unavailable for such a broad investigation. Even with Catalist’s “incredible data,” which are at the individual level, the observational analysis is extremely tricky. This wonky, methodological blog post explains why. (Thanks to Mark Mellman, my former boss and mentor, and Josh Rosmarin for prompting this train of thought, and to Kevin Collins for helping refine it.)

When performing battleground state analyses like mine, competitiveness effects are a large confounder. Not only are battleground state voters more likely to cast a ballot because their vote matters more in these competitive states, but this effect should naturally be larger among partisans. To re-state: partisans in battleground states are the voters most affected by non-campaign competitiveness effects, and these are the voters whom campaigns most target. That’s a tricky knot to disentangle.

To alleviate this problem, I control for each voter’s individual-level a priori turnout score, i.e., the probability, assigned by political practitioners at the beginning of the campaign season, that she would cast a ballot. This control is so important because competitiveness affected the battleground states in 2008, 2004, and so on, so partisans in those states are naturally more likely to vote in presidential years. Crucially, this increased probability is reflected in their turnout scores. By controlling for this score, I (attempt to) isolate 2012 campaign effects.
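To make the logic of “controlling for the score” concrete, here’s a toy sketch. Everything in it is synthetic and invented (this is not Catalist’s data or my actual model): battleground residents are given higher baseline turnout scores, plus an assumed 5-point campaign lift. A naive comparison of turnout rates overstates the campaign effect because of the baseline difference, while comparing within narrow score strata and averaging the gaps recovers roughly the planted lift.

```python
import random

random.seed(0)

# Synthetic electorate -- purely illustrative, not real data.
voters = []
for _ in range(20000):
    battleground = random.random() < 0.3
    # Battleground voters start out likelier to vote (the confounder).
    score = random.random() ** (0.7 if battleground else 1.0)
    lift = 0.05 if battleground else 0.0   # planted "campaign effect"
    voted = random.random() < min(1.0, score + lift)
    voters.append((score, battleground, voted))

def naive_effect(voters):
    """Raw battleground-vs-not turnout gap; confounded by the scores."""
    bg = [v for _, g, v in voters if g]
    non = [v for _, g, v in voters if not g]
    return sum(bg) / len(bg) - sum(non) / len(non)

def stratified_effect(voters, bins=10):
    """Average the battleground-vs-not turnout gap within narrow score strata."""
    gaps = []
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        stratum = [(g, v) for score, g, v in voters if lo <= score < hi]
        bg_votes = [v for g, v in stratum if g]
        other_votes = [v for g, v in stratum if not g]
        if bg_votes and other_votes:
            gaps.append(sum(bg_votes) / len(bg_votes)
                        - sum(other_votes) / len(other_votes))
    return sum(gaps) / len(gaps)

print(round(naive_effect(voters), 3))       # inflated by the confounder
print(round(stratified_effect(voters), 3))  # close to the planted 0.05 lift
```

The real analysis is more involved than binning on one score, but this is the intuition: once voters with similar a priori probabilities are compared with each other, the leftover battleground gap is (hopefully) the campaign effect.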

I’ll be honest that there are a multitude of small problems that make this 2008-based turnout score an imperfect control. State competitiveness could have changed from 2008 to 2012 (though it barely did). Voters could have become more (or less) partisan in the intervening four years, thus altering their individual competitiveness effects. Some voters moved from a non-battleground state to a battleground state, and their turnout score might not reflect the effect of this move.

However, those cavils pale in comparison to this issue: if controlling for 2008 turnout (via a turnout score) masks 2012 competitiveness effects, shouldn’t this control also mask 2012 campaign effects? After all, Obama and McCain ran full-fledged campaigns in 2008; those effects should be baked into the 2012 turnout score just as I hope the competitiveness effects are. And if I excuse this issue by claiming that (a) 2008 turnout is diluted within the holistic turnout score, (b) some people’s scores will have shifted between elections (thus entering or exiting campaigns’ target universes), and (c) some people have moved between states, then don’t those same excuses mean I similarly failed to fully account for the competitiveness effect?

The good news is that the data do not support this final worry. If the competitiveness effect incidentally dominated my analysis, then the partisan effect would be observed most strongly at the extremes of the scale (as the sporadic voters who are the biggest partisans would be the ones who care most about living in a battleground state). However, this pattern is not observed. Thus, I feel fairly confident that the turnout score control eliminates competitiveness concerns.

However, my hunch is that by controlling for 2008 turnout (via the turnout score), I do in fact mask some of the 2012 campaign effects, thus biasing my estimates downward. As a cautious person, I’ll take that risk rather than potentially inflating the numbers upward and seeing an effect where there is none. Others may make a different decision.

To reinforce the idea that it’s difficult to tease out campaign effects from competitiveness effects, imagine if Catalist had provided me with a list of voters who had moved from non-battleground states to battleground states between 2008 and 2012. It’s tempting to think that examining those voters’ 2012 actions sheds insight into battleground-state campaign effects. Unfortunately, this analysis is not fruitful because these movers’ 2012 turnout patterns reflect both campaign effects and their votes mattering more (or appearing to matter more) in 2012 than they did in 2008 because of competitiveness effects.

All of the above demonstrates why conducting a randomized controlled experiment, in which none of these confounding elements is an issue, is the key to estimating causality. It’s why I’m so glad that a culture of experimentation has taken hold on the Democratic/progressive side, and many props to Malchow, Podhorzer, and everyone at AG/AI for making it happen.

This is a semi-live document where I’ll answer various questions people have about my recent Monkey Cage blog post.

I used Catalist’s turnout score and Obama support score as the two microtargeting scores that I reference in the post. Neither was affected by 2012 campaign activity, which reduces endogeneity problems. Special thanks to Catalist for providing the data and helping me understand its nuances.

Chris Kennedy rightly points out that other organizations besides the official Obama and Romney campaigns were engaged in targeted GOTV. The correct interpretation is to read “Obama campaign” and “Romney campaign” as stand-ins for “Democratic efforts” and “Republican efforts.” I apologize for the error.

Figure 1 was distorted by Monkey Cage. Here’s the real one:

Sporadic voters are defined as those with ex-ante turnout probabilities below 85% (which is about the median probability, and much higher than the mean value of 71%). Campaign effect estimates are much lower among voters with the highest turnout probabilities (>95%); including borderline GOTV voters (with vote-propensity scores of 85%–95%) would increase the estimated net Obama effect, but it’s not clear that these voters were targeted by the campaigns, so they are left out. Republicans are defined as having a pre-campaign, Catalist-estimated likelihood of supporting Obama below 40%; Democrats, above 60%. One standard error is shown as error bars in the figures. The nationwide turnout of 66% is calculated from Catalist’s voter file and includes inactive voters (using official designations) in the denominator.
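For readers who want the cutoffs in one place, here they are as a small helper function. The boundary handling at exactly 85% and 95% is my choice, and the label for the 40%–60% support band is mine; the post doesn’t specify either.

```python
def classify_voter(turnout_prob, obama_support_prob):
    """Encode the cutoffs above; both inputs are pre-campaign probabilities in [0, 1]."""
    if turnout_prob < 0.85:
        propensity = "sporadic"         # the group the main estimates use
    elif turnout_prob <= 0.95:
        propensity = "borderline GOTV"  # excluded: campaign targeting unclear
    else:
        propensity = "high-propensity"  # excluded: campaign effects much smaller
    if obama_support_prob < 0.40:
        party = "Republican"
    elif obama_support_prob > 0.60:
        party = "Democrat"
    else:
        party = "unclassified"          # the 40-60% band isn't labeled in the post
    return propensity, party

print(classify_voter(0.70, 0.30))  # ('sporadic', 'Republican')
```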

The turnout and partisanship scores were developed by Catalist and used by them in spring 2012. No 2012 campaign activities affected these microtargeting scores, which helps avoid endogeneity issues.

Same-day registration (aka, Election Day registration, or EDR) allows people to register (or fix their existing registration) at the polls on Election Day. The beauty of this system is that if a citizen who is generally uninterested in politics is persuaded to vote on Election Day (perhaps because of the media attention, peer pressure, or the prevalence of “I Voted” stickers) then that person doesn’t have to worry about the fact that they didn’t do any pre-planning with respect to their registration. Their vote will count. Thus, EDR boosts turnout by 3 to 7 percentage points.

Perhaps the commission didn’t endorse EDR because more people showing up on Election Day who need help with registration slows the process down. But if local election officials use resource calculators, we can solve that problem while boosting turnout and improving our democracy.