Chasing the Elusive Arrow of Time with Computer Algorithms

Editor

June 21, 2014 // 09:00 AM EST


Image: Enrique Ramos/Shutterstock

What direction is time heading in at this very moment? Are you sure? Of course you are. Life is a constant barrage of causes and effects: things happening before other things that could not have happened without them. No one goes through life thinking about the directionality of time, because it's beyond evidence, buried in the brain's deepest intuitions: you were born before you die, and it could hardly be the other way around. Time moves forward.

As with most of the mind's deepest intuitions, this directionality is not so simple. It's possible to imagine a wide variety of schemes involving information and information hiding that make time's arrow less clear. In fact, physics at its smallest, deepest roots has no real interest in forward and backward; it could go either way. Time, or the direction of time, arises as physics gets bigger and more complicated. Zoom way in, all the way in actually, and what you'll find is enviable oblivion: cause-effects, effect-causes, effect-effects, cause-causes. Something like that.

But even at the scale of complex organisms, it's possible to play tricks on time perception just by limiting information, hiding some causes and some effects. Given a cleverly selected (or maybe not so cleverly selected) snippet of video, just some scene and a bit of motion, the situation changes. If you were a computer program, without the knowledge base and assumptions of forward motion that we humans all share, and were asked to decide whether a video clip was running forward or backward based only on the evidence in the clip, you might find the task challenging. And if you were a computer scientist trying to teach software to make the distinction, you might find it challenging too.

A research team based at MIT unveiled a new study on Friday, to be presented at this month's IEEE Conference on Computer Vision and Pattern Recognition, that describes three new algorithms, each approaching the time-detection problem differently, with success rates of up to 90 percent. As with most algorithms tasked with parsing behavior and motion in the chaotic world of macroscopic causes and effects, the problem is one of decomposing the world into smaller semantic units. As humans, we don't need to do this because we have big brains with lots of useful acquired knowledge, but computers are left to figure the world out by breaking it down into subdermal logic.

"If you see that a clock in a movie is going backward, that requires a high-level understanding of how clocks normally move," said co-author William Freeman in an MIT press release. "But we were interested in whether we could tell the direction of time from low-level cues, just watching the way the world behaves. It's kind of like learning what the structure of the visual world is ...

"To study shape perception, you might invert a photograph to make everything that's black white, and white black, and then check what you can still see and what you can't," Freeman continued. "Here we're doing a similar thing, by reversing time, then seeing what it takes to detect that change. We're trying to understand the nature of the temporal signal."

The most successful of the three algorithms essentially writes a dictionary of motion, made up of some 4,000 "words." Each word is itself a four-by-four grid, with each cell describing attributes of motion: direction and degree. The dictionary was built by taking individual frames of real video and subdividing them into hundreds of thousands of tiny squares, each further subdivided into the crucial four. Each of these tiniest squares, in relation to its three neighbors within a given word, gives information about the direction and distance (degree) that pixels are moving. From this massive set, it was then possible to generalize to the dictionary.
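As a rough illustration of that kind of pipeline, and not the paper's actual implementation, here's a Python sketch that chops a dense optical-flow field into small patch descriptors and clusters them into a vocabulary of motion "words" with a toy k-means. The patch size, vocabulary size, and clustering method are all assumptions for illustration.

```python
import numpy as np

def patch_descriptors(flow, cell=4):
    """Split a dense flow field (H x W x 2: per-pixel dx, dy) into
    non-overlapping cell x cell patches, flattening each patch into a
    descriptor of local motion direction and magnitude."""
    h, w, _ = flow.shape
    descs = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            patch = flow[y:y + cell, x:x + cell]   # cell x cell x 2
            descs.append(patch.reshape(-1))        # 2 * cell * cell values
    return np.array(descs)

def build_dictionary(descs, k=8, iters=20, seed=0):
    """Toy k-means: cluster the patch descriptors into k motion 'words'.
    Returns the k cluster centers and each descriptor's word id."""
    rng = np.random.default_rng(seed)
    centers = descs[rng.choice(len(descs), k, replace=False)]
    labels = np.zeros(len(descs), dtype=int)
    for _ in range(iters):
        # assign every descriptor to its nearest center
        dists = ((descs[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # move each center to the mean of its assigned descriptors
        for j in range(k):
            if (labels == j).any():
                centers[j] = descs[labels == j].mean(0)
    return centers, labels
```

A real system would learn tens of thousands of descriptors from many videos before clustering; this just shows the shape of the computation.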

Finally, each of the 4,000 words in the dictionary was classified according to its respective likelihood of forward or backward temporal movement. It's a rather brute-force approach, but it works. One nice feature is that the algorithm, as it's making a determination from a given video clip, highlights the particular cells being used in the judgment. The researchers note that this might give some clue about the system human brains use to make time determinations, by identifying the most crucial elements.
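A minimal sketch of that last step, assuming each clip has already been reduced to a sequence of dictionary word ids: estimate each word's probability from forward-playing and backward-playing training clips, then sum the per-word log-likelihood ratios, naive-Bayes style. The smoothing and scoring details here are illustrative, not the paper's.

```python
import numpy as np

def word_log_probs(word_ids, vocab_size, alpha=1.0):
    """Laplace-smoothed log P(word), estimated from the word ids observed
    in training clips of one temporal direction."""
    counts = np.bincount(word_ids, minlength=vocab_size).astype(float)
    return np.log((counts + alpha) / (counts.sum() + alpha * vocab_size))

def classify_direction(clip_words, log_fwd, log_bwd):
    """Sum the log-likelihood ratio of every word seen in a clip.
    A positive total favors forward-playing time."""
    score = float(sum(log_fwd[w] - log_bwd[w] for w in clip_words))
    return ("forward" if score > 0 else "backward"), score
```

Because the score is just a sum over words, the same numbers show which words (and hence which image regions) drove the decision, mirroring the highlighting feature described above.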

The second algorithm is simpler. It's based on the fact that in forward-moving time, motion tends to spread outward rather than inward. Points for simplicity, but it still only achieved 70 percent accuracy.
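That heuristic is easy to sketch: compute the divergence of a dense optical-flow field and read off its sign, with positive (outward-spreading) motion taken as evidence of forward time. This toy version assumes the flow field is already given; it is a stand-in for whatever measure the researchers actually used.

```python
import numpy as np

def flow_divergence_score(flow):
    """Mean divergence of a dense optical-flow field (H x W x 2, the
    per-pixel (dx, dy) motion). Positive means motion is, on average,
    spreading outward from its sources."""
    du_dx = np.gradient(flow[..., 0], axis=1)   # horizontal change of dx
    dv_dy = np.gradient(flow[..., 1], axis=0)   # vertical change of dy
    return float((du_dx + dv_dy).mean())

def guess_direction(flow):
    """Outward-spreading motion is read as evidence of forward time."""
    return "forward" if flow_divergence_score(flow) > 0 else "backward"
```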

The final algorithm attempted to define causation itself, in terms of statistics. Causation, just as a concept in experimental science, is tricky. "There's a research area on causality," Freeman said. "And that's actually really quite important, medically even, because in epidemiology, you can't afford to run the experiment twice, to have people experience this problem and see if they get it and have people do that and see if they don't. But you see things that happen together and you want to figure out: 'Did one cause the other?' There's this whole area of study within statistics on, 'How can you figure out when something did cause something else?' And that relates in an indirect way to this study as well."

The causation algorithm attempts to find "noise" surrounding visual events, or what might be considered a data error in a visual signal. The example given is of a ball rolling down a hill, hitting a bump, and going airborne. Played backward, the video shows a ball jumping into the air for no apparent reason, with the bump (the error) only appearing as the ball "lands." Played forward, you see the bump, or error, first and can make the connection. This algorithm gets at the root of the overall time problem in a way the others don't, but you can see why it's limited. (Specifically, it only works in cases of linear motion, which is rare in cases of "human agency.")
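One common trick from the statistical-causality literature Freeman alludes to, sketched here as a stand-in rather than the paper's method, is the additive-noise idea: regress each variable on the other, and prefer the causal direction whose residuals look independent of the input, since genuine noise should carry no trace of its cause.

```python
import numpy as np

def fit_residuals(cause, effect):
    """Least-squares line effect ~ a * cause + b; return the residuals."""
    a, b = np.polyfit(cause, effect, 1)
    return effect - (a * cause + b)

def dependence(u, r):
    """Crude dependence score: |correlation| between the magnitudes of the
    centered input and the residuals. Near zero when the residuals are
    genuinely independent noise."""
    return abs(np.corrcoef(np.abs(u - u.mean()), np.abs(r))[0, 1])

def infer_direction(x, y):
    """Prefer the direction whose regression leaves residuals that look
    independent of the input -- an additive-noise heuristic."""
    fwd = dependence(x, fit_residuals(x, y))
    bwd = dependence(y, fit_residuals(y, x))
    return "x->y" if fwd < bwd else "y->x"
```

The asymmetry only shows up when the noise is non-Gaussian, which is exactly the kind of statistical fingerprint a reversed video would scramble.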

The researchers make a practical nod, suggesting that these algorithms could be used to make video game and movie graphics more realistic. But maybe it's best to just think conceptually, as time's directionality in the real world remains mostly a conceptual problem.