Leslie Kanani Michiko Ikemoto, Okan Arikan and David Forsyth

We describe a method for responsive, high-quality synthesis of human motion. Our method can quickly provide a motion synthesizer with a one-second-long, high-quality transition from any frame in a motion collection to any other frame in the collection. We construct these transitions using 2-, 3- and 4-way blends. During pre-processing, we search all possible blends between representative samples of motion obtained using clustering. The blends are evaluated automatically with a novel motion evaluation procedure, which we demonstrate is significantly more accurate than current alternatives. The best blending recipe for each pair of representatives is then cached. At run-time, we build a transition between motions by matching a future window of the source motion to a representative, matching the past of the target motion to a representative, and then applying the blend recipe recovered from the cache to the source and target motions and whatever stub motions are required. This method yields good-looking transitions between distinct motions with very low online cost.
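
To make the two-stage pipeline concrete, here is a minimal runnable sketch in Python. Everything in it is an illustrative assumption rather than the authors' implementation: motion clips are reduced to toy feature vectors, the blend "recipes" are simple 2-way weight pairs (the paper searches 2-, 3- and 4-way blends), and evaluate_blend is a stand-in for the paper's motion evaluation procedure; all names (kmeans, build_cache, transition) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def kmeans(points, k, iters=20):
    # Toy k-means; the centers stand in for the cluster representatives.
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def evaluate_blend(src, tgt, weights):
    # Placeholder for the automatic motion evaluation procedure: scores a
    # candidate blend, lower = better. Here, just a smoothness proxy.
    blend = weights[0] * src + weights[1] * tgt
    return float(np.linalg.norm(np.diff(blend)))

def build_cache(reps):
    # Pre-processing: for each ordered pair of representatives, search the
    # candidate recipes and cache the best-scoring one.
    candidates = [(w, 1.0 - w) for w in (0.25, 0.5, 0.75)]
    return {(i, j): min(candidates, key=lambda w: evaluate_blend(src, tgt, w))
            for i, src in enumerate(reps) for j, tgt in enumerate(reps)}

def transition(cache, reps, source_clip, target_clip):
    # Run-time: match the source's future window and the target's past
    # window to their nearest representatives (toy nearest-neighbor here),
    # then apply the cached recipe to the actual clips.
    i = int(np.argmin(np.linalg.norm(reps - source_clip, axis=1)))
    j = int(np.argmin(np.linalg.norm(reps - target_clip, axis=1)))
    w0, w1 = cache[(i, j)]
    return w0 * source_clip + w1 * target_clip

# Usage on random stand-in data.
motions = rng.normal(size=(40, 16))   # 40 clips as 16-d feature vectors
reps = kmeans(motions, k=4)           # representatives via clustering
cache = build_cache(reps)             # pre-processing, done once
blended = transition(cache, reps, motions[0], motions[1])

The point of the cache is that all blend search and evaluation happens offline; at run-time a transition costs only two nearest-representative lookups and one application of a stored recipe.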

BibTeX citation:

@techreport{Ikemoto:EECS-2006-14,
    Author = {Ikemoto, Leslie Kanani Michiko and Arikan, Okan and Forsyth, David},
    Title = {Quick Motion Transitions with Cached Multi-way Blends},
    Institution = {EECS Department, University of California, Berkeley},
    Year = {2006},
    Month = {Feb},
    URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-14.html},
    Number = {UCB/EECS-2006-14},
    Abstract = {We describe a method for responsive, high-quality synthesis of human motion. Our method can quickly provide a motion synthesizer with a one-second-long, high-quality transition from any frame in a motion collection to any other frame in the collection.
    We construct these transitions using 2-, 3- and 4-way blends. During pre-processing, we search all possible blends between representative samples of motion obtained using clustering. The blends are evaluated automatically with a novel motion evaluation procedure, which we demonstrate is significantly more accurate than current alternatives. The best blending recipe for each pair of representatives is then cached.
    At run-time, we build a transition between motions by matching a future window of the source motion to a representative, matching the past of the target motion to a representative, and then applying the blend recipe recovered from the cache to the source and target motions and whatever stub motions are required. This method yields good-looking transitions between distinct motions with very low online cost.}
}