Researchers essentially turned computers into couch potatoes by feeding them hundreds of hours of footage from popular TV shows like "The Office," "Scrubs" and "Desperate Housewives," NPR reported Tuesday. Each clip ends with one of four actions: a hug, a kiss, a high five or a handshake. The computer's challenge? Predict which one is about to happen.

With the help of a learning algorithm, the artificially intelligent test subjects predicted the correct action 43 percent of the time, compared with 71 percent for human test subjects. Researchers hope to see that figure rise as the computers consume more and more video examples and learn to pick up patterns from them.

The long-term goal is to train AI to recognize things like danger, injury or crime as they're happening, or even before they happen. Breakthroughs like that are still likely a long way off, but given the project's success, MIT's researchers are optimistic that they can move us closer. At the very least, here's hoping that computers pick up on the nuanced complexities of Michael Scott's "that's what she said" jokes.