SceneFormer: Indoor Scene Generation with Transformers

Note: We do not have the ability to review this paper.

PubDate: January 2022

Teams: Technical University of Munich

Authors: Xinpeng Wang; Chandan Yeshwanth; Matthias Nießner

PDF: SceneFormer: Indoor Scene Generation with Transformers

Abstract

We address the task of indoor scene generation by generating a sequence of objects, along with their locations and orientations conditioned on a room layout. Large-scale indoor scene datasets allow us to extract patterns from user-designed indoor scenes, and generate new scenes based on these patterns. Existing methods rely on the 2D or 3D appearance of these scenes in addition to object positions, and make assumptions about the possible relations between objects. In contrast, we do not use any appearance information, and implicitly learn object relations using the self-attention mechanism of transformers. We show that our model design leads to faster scene generation with similar or improved levels of realism compared to previous methods. Our method is also flexible, as it can be conditioned not only on the room layout but also on text descriptions of the room, using only the cross-attention mechanism of transformers. Our user study shows that our generated scenes are preferred to the state-of-the-art FastSynth scenes 53.9% and 56.7% of the time for bedroom and living room scenes, respectively. At the same time, we generate a scene in 1.48 seconds on average, 20% faster than FastSynth.
