"Given an artistic image, we transfer its particular style of painting to the entire video," explains the researchers' recently published paper, which aims to build on earlier related work on still images.

One problem with video is that processing each frame independently leads to flickering and false discontinuities, the researchers found. To preserve a smoother transition between frames, they added what they call a temporal constraint that takes optical flow into account.
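The paper itself does not include code in this form, but the idea of a flow-based temporal constraint can be sketched roughly as follows: warp the previous stylized frame toward the current one using the optical-flow field, then penalize differences where the flow is reliable. The function names and the nearest-neighbor warping here are illustrative simplifications, not the researchers' implementation.

```python
import numpy as np

def warp(frame, flow):
    """Warp a grayscale frame using a dense optical-flow field.

    frame: (H, W) array; flow: (H, W, 2) per-pixel (dx, dy) displacements.
    Nearest-neighbor sampling keeps the sketch short; real systems
    interpolate bilinearly.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

def temporal_loss(curr_stylized, prev_stylized, flow, valid_mask):
    """Mean squared difference between the current stylized frame and the
    flow-warped previous one, counted only where the flow is reliable
    (valid_mask is 1 at trusted pixels, 0 at occlusions/flow errors)."""
    warped = warp(prev_stylized, flow)
    diff = (curr_stylized - warped) ** 2
    return float((diff * valid_mask).sum() / max(valid_mask.sum(), 1))
```

Adding a term like this to the stylization objective discourages the optimizer from changing brushstrokes between frames except where the scene itself moved, which is what suppresses the flickering.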

They also added a better way to reconstruct objects or scenery that are hidden in one part of the video but re-exposed in another -- a situation that otherwise breaks continuity, since the newly revealed regions have no stylized history to draw on.

"To solve this, we make use of long-term motion estimates," they said. "This allows us to enforce consistency of the synthesized frames before and after the occlusion."

Meanwhile, a multi-pass algorithm that processes the video in alternating directions, using both forward and backward optical flow, helps eliminate artifacts at the image boundaries.
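The alternating-direction idea can be illustrated with a small scheduling sketch. Here `process` stands in for one per-frame stylization step that may blend in the previously processed neighbor; this is a hypothetical skeleton of the pass structure, not the researchers' actual algorithm.

```python
def multipass(frames, num_passes, process):
    """Run `process` over the frame sequence several times, alternating
    direction each pass (forward, then backward, then forward, ...) so
    that information propagates from both temporal neighbors and
    boundary artifacts get smoothed from both sides.

    process(frame, prev_result) returns the updated frame; prev_result
    is the neighbor just processed in the current pass direction
    (None at the start of each pass).
    """
    results = list(frames)
    for p in range(num_passes):
        if p % 2 == 0:
            order = range(len(results))                 # forward pass
        else:
            order = range(len(results) - 1, -1, -1)     # backward pass
        prev = None
        for i in order:
            results[i] = process(results[i], prev)
            prev = results[i]
    return results
```

Because the first and last frames each lead one pass direction, every frame eventually receives information from both of its temporal neighbors, which is what removes the one-sided artifacts a single forward pass leaves behind.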

The researchers tested their code on Ubuntu 14.04. It's now available on GitHub.

Katherine Noyes has been an ardent geek ever since she first conquered Pyramid of Doom on an ancient TRS-80. Today she covers enterprise software in all its forms, with an emphasis on cloud computing, big data, analytics and artificial intelligence.