Neural Re-Simulation for Generating Bounces in Single Images
Authors: Carlo Innamorati¹, Bryan Russell², Danny Kaufman², Niloy J. Mitra¹,²
Affiliations: ¹University College London, ²Adobe Research
Publication date: Oct 2019
Abstract
We introduce a method to generate videos of dynamic virtual objects plausibly interacting via collisions with a still image’s environment. Given a starting trajectory, physically simulated using geometry estimated from a single, static input image, we learn to ‘correct’ this trajectory to a visually plausible one via a neural network. The network can thus be seen as correcting traditional simulation output, generated with incomplete and imprecise world information, into context-specific, visually plausible re-simulated output, a process we call neural re-simulation. We train our system on a set of 50k synthetic scenes in which a virtual moving object (a ball) has been physically simulated. We demonstrate our approach on both our synthetic dataset and a collection of real-life images depicting everyday scenes, obtaining consistent improvements over baseline alternatives.
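To make the pipeline concrete, below is a minimal sketch of the neural re-simulation idea as the abstract describes it: a physics engine produces an initial trajectory from estimated scene geometry, and a learned network maps it to a corrected, plausible trajectory. The GRU-based residual architecture, tensor shapes, and all names here are illustrative assumptions, not the paper’s actual model.

```python
# Sketch only: a network that "corrects" a physically simulated trajectory.
# Architecture and shapes are assumptions for illustration.

import torch
import torch.nn as nn


class TrajectoryCorrector(nn.Module):
    """Maps an initial simulated ball trajectory to a corrected one."""

    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        # Encode the sequence of 3D positions from the physics simulator.
        self.encoder = nn.GRU(input_size=3, hidden_size=hidden_dim,
                              batch_first=True)
        # Predict a per-timestep residual offset to the input positions.
        self.head = nn.Linear(hidden_dim, 3)

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, timesteps, 3) positions of the simulated ball.
        features, _ = self.encoder(traj)
        # Residual correction keeps the output close to the physical prior.
        return traj + self.head(features)


if __name__ == "__main__":
    model = TrajectoryCorrector()
    sim_traj = torch.randn(4, 60, 3)   # 4 simulated 60-step trajectories
    gt_traj = torch.randn(4, 60, 3)    # placeholder ground-truth targets
    loss = nn.functional.mse_loss(model(sim_traj), gt_traj)
    loss.backward()                    # trainable end to end
    print(loss.item())
```

The residual formulation is one natural design choice for this setting: the simulator already provides a physically grounded estimate, so the network only needs to learn the context-specific deviations caused by incomplete or imprecise world information.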