LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image
Teams: University of Illinois at Urbana-Champaign, Zillow Group
Authors: Chuhang Zou, Alex Colburn, Qi Shan, Derek Hoiem
Publication date: March 2018 (CVPR 2018)
Abstract
We propose an algorithm to predict room layout from a single image that generalizes across panoramas and perspective images, and across cuboid layouts and more general layouts (e.g., L-shaped rooms). Our method operates directly on the panoramic image, rather than decomposing it into perspective images as recent works do. Our network architecture is similar to that of RoomNet, but we show improvements from aligning the image based on vanishing points, predicting multiple layout elements (corners, boundaries, size, and translation), and fitting a constrained Manhattan layout to the resulting predictions. Our method compares well in speed and accuracy with existing work on panoramas, achieves among the best accuracy for perspective images, and handles both cuboid-shaped and more general Manhattan layouts.
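For concreteness, below is a minimal sketch of two of the steps the abstract names: yaw-aligning the panorama with a detected vanishing point, and reading corner locations off a predicted corner heatmap. All names here (align_panorama, extract_corners, vp_col) are hypothetical stand-ins, not the paper's implementation; the actual pipeline also predicts boundary maps and fits a constrained Manhattan layout to the network outputs.

```python
# Minimal sketch (not the authors' code) of vanishing-point alignment and
# corner extraction; function and parameter names are illustrative only.
import numpy as np

def align_panorama(pano: np.ndarray, vp_col: int) -> np.ndarray:
    """Yaw-rotate an equirectangular panorama so the estimated horizontal
    vanishing point sits at a canonical column. Rotation about the vertical
    axis is exactly a circular shift of image columns."""
    return np.roll(pano, shift=-vp_col, axis=1)

def extract_corners(heatmap: np.ndarray, num_corners: int = 8) -> np.ndarray:
    """Take the strongest peaks of a corner probability map as (row, col)
    pairs. A full system would apply non-maximum suppression and enforce
    Manhattan constraints (ceiling/floor corners sharing wall columns)."""
    flat = np.argsort(heatmap, axis=None)[::-1][:num_corners]
    return np.stack(np.unravel_index(flat, heatmap.shape), axis=1)

# Toy usage: a random "panorama" and a random "heatmap" stand in for a real
# image and for the network's predicted corner map.
pano = np.random.rand(256, 512, 3)
aligned = align_panorama(pano, vp_col=100)
corners = extract_corners(np.random.rand(256, 512))
print(aligned.shape, corners.shape)  # (256, 512, 3) (8, 2)
```

The column-shift trick works because in an equirectangular projection the horizontal axis is longitude, so a rotation of the camera about the vertical axis moves every pixel by the same number of columns.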