Geometry-Aware Satellite-to-Ground Image Synthesis for Urban Areas

PubDate: June 2020

Teams: The Ohio State University; ETH Zurich; Microsoft

Writers: Xiaohu Lu, Zuoyue Li, Zhaopeng Cui, Martin Ralf Oswald, Marc Pollefeys, Rongjun Qin

PDF: Geometry-Aware Satellite-to-Ground Image Synthesis for Urban Areas

Abstract

We present a novel method for generating panoramic street-view images which are geometrically consistent with a given satellite image. Different from existing approaches that completely rely on a deep learning architecture to generalize cross-view image distributions, our approach explicitly loops in the geometric configuration of the ground objects based on the satellite views, such that the produced ground view synthesis preserves the geometric shape and the semantics of the scene. In particular, we propose a neural network with a geo-transformation layer that turns predicted ground-height values from the satellite view to a ground view while retaining the physical satellite-to-ground relation. Our results show that the synthesized image retains well-articulated and authentic geometric shapes, as well as texture richness of the street-view in various scenarios. Both qualitative and quantitative results demonstrate that our method compares favorably to other state-of-the-art approaches that lack geometric consistency.
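The geo-transformation described above amounts to projecting per-pixel heights estimated in the satellite view into a panoramic ground view via the known satellite-to-ground geometry. The sketch below is a minimal, hypothetical illustration of that idea (not the authors' layer): it ray-casts an equirectangular panorama against a height map aligned with the satellite grid, assuming an illustrative `satellite_to_panorama` function, a ground camera at a chosen pixel location, and a fixed ground sampling distance.

```python
# Minimal sketch of a satellite-to-ground projection, assuming an
# equirectangular panorama and a height map on the satellite image grid.
# Names and parameters are illustrative, not the paper's implementation.
import numpy as np

def satellite_to_panorama(height_map, cam_xy, cam_h=1.6, gsd=0.5,
                          pano_hw=(256, 512), max_range=100.0, step=0.5):
    """Ray-cast an equirectangular panorama against a satellite height map.

    height_map : (H, W) heights in meters on the satellite grid.
    cam_xy     : (col, row) camera position in satellite pixel coordinates.
    cam_h      : camera height above the local ground, in meters.
    gsd        : ground sampling distance of the satellite image (m/pixel).
    Returns a (h, w) depth map and a (h, w, 2) map of the satellite pixel
    coordinates hit by each panorama ray (NaN where nothing is hit).
    """
    H, W = height_map.shape
    h, w = pano_hw
    # Panorama angles: azimuth spans 360 degrees, elevation spans +-90 degrees.
    az = np.linspace(-np.pi, np.pi, w, endpoint=False)
    el = np.linspace(np.pi / 2, -np.pi / 2, h)
    az, el = np.meshgrid(az, el)                      # (h, w)
    # Unit ray directions in a local east-north-up frame.
    dx = np.cos(el) * np.sin(az)                      # east  (satellite columns)
    dy = np.cos(el) * np.cos(az)                      # north (satellite rows point south)
    dz = np.sin(el)                                   # up

    depth = np.full((h, w), np.nan)
    hit_uv = np.full((h, w, 2), np.nan)
    for t in np.arange(step, max_range, step):
        # Points along every ray at range t (meters), converted to pixels.
        px = cam_xy[0] + t * dx / gsd
        py = cam_xy[1] - t * dy / gsd
        pz = cam_h + t * dz
        inside = (px >= 0) & (px < W - 1) & (py >= 0) & (py < H - 1)
        u = np.clip(px, 0, W - 1).astype(int)
        v = np.clip(py, 0, H - 1).astype(int)
        # A ray "hits" the scene the first time it dips below the height map.
        hit = inside & np.isnan(depth) & (pz <= height_map[v, u])
        depth[hit] = t
        hit_uv[hit, 0] = px[hit]
        hit_uv[hit, 1] = py[hit]
    return depth, hit_uv
```

Under these assumptions, the returned hit coordinates can be used to gather colors or semantic labels from the satellite view into panorama space, which is the kind of geometrically consistent correspondence the proposed layer is meant to preserve.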
