Everybody Dance Now

Graphics research from UC Berkeley is the best implementation of pose-based motion synthesis to date, taking the dance moves from one video and recreating them in another:

"This paper presents a simple method for 'do as I do' motion transfer: given a source video of a person dancing, we can transfer that performance to a novel (amateur) target after only a few minutes of the target subject performing standard moves. We pose this problem as a per-frame image-to-image translation with spatio-temporal smoothing. Using pose detections as an intermediate representation between source and target, we learn a mapping from pose images to a target subject's appearance. We adapt this setup for temporally coherent video generation including realistic face synthesis."

At the moment there is no official project website or code available, but the research paper can be found here -- source link
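Since no official code has been released, here is a minimal sketch (not the authors' implementation) of how the pipeline the abstract describes could be wired up in PyTorch: a rendered pose acts as the intermediate representation, a pix2pix-style generator maps it to the target subject's appearance, and feeding the previous output frame back in as an extra input is one simple way to approximate the paper's temporal smoothing. `PoseToImageGenerator`, `render_pose`, and `transfer_motion` are all hypothetical stand-ins.

```python
# Rough sketch of the "do as I do" motion-transfer pipeline described in the
# abstract. Every module here is a toy placeholder, not the paper's model.

import torch
import torch.nn as nn


class PoseToImageGenerator(nn.Module):
    """Hypothetical pix2pix-style generator: pose rendering -> target appearance.

    It consumes the current pose image concatenated with the previously
    generated frame, one simple way to encourage temporal coherence.
    """

    def __init__(self, channels: int = 3):
        super().__init__()
        # Toy convolutional stack; the real model would be a full
        # encoder-decoder GAN trained adversarially on the target subject.
        self.net = nn.Sequential(
            nn.Conv2d(channels * 2, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, pose_img: torch.Tensor, prev_frame: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([pose_img, prev_frame], dim=1))


def render_pose(frame: torch.Tensor) -> torch.Tensor:
    """Placeholder for a pose detector plus stick-figure rasterizer;
    here it just returns a blank canvas of the same shape."""
    return torch.zeros_like(frame)


def transfer_motion(source_frames, generator):
    """Per-frame image-to-image translation, feeding each output back in as
    the 'previous frame' for the next step (the temporal-smoothing idea)."""
    outputs = []
    prev = torch.zeros_like(source_frames[0])
    for frame in source_frames:
        pose = render_pose(frame)     # source motion -> pose skeleton
        prev = generator(pose, prev)  # pose skeleton -> target appearance
        outputs.append(prev)
    return outputs


if __name__ == "__main__":
    gen = PoseToImageGenerator()
    video = [torch.rand(1, 3, 64, 64) for _ in range(4)]  # dummy 4-frame clip
    result = transfer_motion(video, gen)
    print(len(result), result[0].shape)  # 4 torch.Size([1, 3, 64, 64])
```

Using pose as the intermediate representation is what makes the transfer work across different bodies: the source performer's appearance is discarded entirely, and only the skeleton drives the target-specific generator.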
#machine learning #image translation #image synthesis #motion synthesis #visual puppetry #dancing #pose detection