By synchronizing movement and sound, it’s possible to make motion captured on video more dance-like. | Stocksy/Bonninstudio

Computer scientists have developed a software algorithm that makes it possible to create and manipulate dance in video.

The new tool can change the song people are dancing to, fix bad timing and even turn boring political speeches into high-energy music videos using a process the researchers call “dancification.”

Abe Davis, the postdoctoral scholar leading the project, said the software does for video what live disc jockeys do with sound. “We sample and remix motion instead of music,” Davis said. “We’re taking ideas that have had a tremendous impact on modern music and adapting them to video.”

Professor Maneesh Agrawala, Davis’s adviser in the Stanford Computer Graphics Lab, said the tool gives creative people a way to dancify everything from memes to music videos, from feature productions to advertisements.

“Video has been easy to capture but difficult to manipulate,” Agrawala said. “This software changes that.”

Davis presented the theory and practice of what the researchers formally call “Visual Rhythm and Beat” in August at SIGGRAPH 2018, the premier gathering of computer graphics researchers. He has also created a demo video showing how the software can synchronize motion with sound, and even subtly alter that motion to make ordinary movement more dance-like.

Underlying the algorithm and the demo is the concept that movement has a rhythm just like music. By altering that visual rhythm to match the beat of music, the software can make arbitrary video – for example, of a turtle, cat or congressional hearing – look like the subjects are dancing. “Often, what separates dance from other types of motion isn’t the moves we perform, but the rhythm of those moves,” Davis said. “By manipulating that rhythm, we can turn regular motion into dance.”
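The article doesn’t describe the researchers’ actual algorithm, but the core idea — shifting motion in time so that visual beats land on musical beats — can be sketched as a simple time warp. The following is a minimal illustration, not the authors’ implementation: it assumes visual beats and audio beats have already been detected, and the hypothetical `warp_time` function maps each video timestamp onto the music’s timeline by piecewise-linear interpolation between corresponding beats.

```python
from bisect import bisect_right

def warp_time(t, visual_beats, audio_beats):
    """Map video timestamp t so each visual beat lands exactly on its
    corresponding audio beat, interpolating linearly between beats."""
    assert len(visual_beats) == len(audio_beats) >= 2
    # Find which beat-to-beat segment of the warp contains t.
    i = bisect_right(visual_beats, t) - 1
    i = max(0, min(i, len(visual_beats) - 2))
    v0, v1 = visual_beats[i], visual_beats[i + 1]
    a0, a1 = audio_beats[i], audio_beats[i + 1]
    # Linear interpolation within the segment (extrapolates at the ends).
    return a0 + (t - v0) * (a1 - a0) / (v1 - v0)

# Unevenly timed visual beats, retimed onto a steady 0.5-second musical beat.
visual = [0.0, 0.7, 1.1, 2.0]
audio  = [0.0, 0.5, 1.0, 1.5]
# A frame halfway between the first two visual beats maps halfway
# between the first two audio beats.
print(warp_time(0.35, visual, audio))  # 0.25
```

Remapping every frame timestamp this way speeds up or slows down each stretch of motion just enough that the movement’s rhythm matches the music — the effect the researchers describe as making ordinary motion look like dance.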

Agrawala said the new tool is part of the evolution of video editing. “It’s super exciting to put out a new tool that people will use to do things we can’t envision because they weren’t previously possible,” he said. “What we are doing is enlarging the creative space.”

Davis said the software may be difficult for non-programmers to use right now, but he is working to release code soon that people can try for themselves. To learn more, visit the Visual Beat website. The researchers have sought a patent on the technology.

Maneesh Agrawala is the Forest Baskett Professor of Computer Science, director of the Brown Institute for Media Innovation and professor, by courtesy, of electrical engineering.