Convolutional Humanoid Animation via Deformation
In this paper we present a new deep learning-driven approach to image-based synthesis of animations involving humanoid characters. Unlike previous deep approaches to image-based animation, our method makes no assumptions about the type of motion to be animated, nor does it require dense temporal input to produce motion. Instead, we generate new animations by interpolating between user-chosen keyframes arranged sparsely in time. Using a novel configuration-manifold learning approach, we interpolate suitable motions between these keyframes. In contrast to previous methods, ours requires less data (animations can be generated from a single YouTube video) and is broadly applicable to a wide range of motions, including facial motion, whole-body motion, and even scenes with multiple characters. These improvements significantly reduce the difficulty of producing image-based animations of humanoid characters, allowing even broader audiences to express their creativity.
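To make the keyframe-interpolation idea concrete, the sketch below shows one way such a pipeline could look: encode keyframe images onto a learned latent manifold with a convolutional autoencoder, interpolate linearly between the latent codes, and decode the in-between frames. The architecture, image size, latent dimension, and function names are illustrative assumptions, not the model actually proposed in the paper.

```python
# Hypothetical sketch: latent-space interpolation between two keyframes.
# The autoencoder stands in for the paper's learned configuration manifold;
# all sizes and names here are assumptions for illustration only.
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: 64x64 RGB keyframe -> point on the learned manifold.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        # Decoder: latent point -> reconstructed image.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def interpolate_keyframes(model, key_a, key_b, n_frames=8):
    """Generate in-between frames by linear interpolation in latent space."""
    with torch.no_grad():
        z_a = model.encoder(key_a)
        z_b = model.encoder(key_b)
        frames = []
        for t in torch.linspace(0.0, 1.0, n_frames):
            z_t = (1 - t) * z_a + t * z_b  # straight-line path between codes
            frames.append(model.decoder(z_t))
    return torch.cat(frames, dim=0)  # (n_frames, 3, 64, 64)


if __name__ == "__main__":
    model = ConvAutoencoder()
    key_a = torch.rand(1, 3, 64, 64)  # stand-ins for user-chosen keyframes
    key_b = torch.rand(1, 3, 64, 64)
    clip = interpolate_keyframes(model, key_a, key_b, n_frames=8)
    print(clip.shape)  # torch.Size([8, 3, 64, 64])
```

Linear interpolation in the latent space is only the simplest choice; a learned manifold could instead be traversed along geodesics or with a motion prior, which is closer to what a configuration-manifold method would aim for.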