Matching 2D Image Patches and 3D Point Cloud Volumes by Learning Local Cross-domain Feature Descriptors
PubDate: May 2021
Teams: Xiamen University
Writers: Weiquan Liu; Baiqi Lai; Cheng Wang; Xuesheng Bian; Chenglu Wen; Ming Cheng; Yu Zang; Yan Xia; Jonathan Li
Establishing correspondences between 2D images and 3D point clouds is one way to link 2D and 3D space, e.g. for AR virtual-real registration. In this paper, we propose a network, 2D3D-GAN-Net, that learns local invariant cross-domain feature descriptors for 2D image patches and 3D point cloud volumes. The learned descriptors are then used to match 2D images and 3D point clouds. A Generative Adversarial Network (GAN) is embedded in 2D3D-GAN-Net to distinguish the domain from which a learned descriptor originates, which encourages the network to extract domain-invariant local feature descriptors. Experiments show that the cross-domain descriptors learned by 2D3D-GAN-Net are robust and support cross-dimensional retrieval on a dataset of 2D image patches and 3D point cloud volumes. In addition, the learned 3D descriptors are used for point cloud registration, further demonstrating the robustness of the learned local cross-domain descriptors.
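The cross-dimensional retrieval described above can be sketched as nearest-neighbor matching in the shared descriptor space: once 2D patches and 3D volumes are embedded as vectors of the same dimensionality, a patch is matched to the volume whose descriptor is closest. The sketch below is a minimal, hypothetical illustration only; the descriptor values, dimensionality, and the use of plain L2 distance are assumptions for the example, not details taken from the paper.

```python
import math

def l2_distance(a, b):
    """Euclidean distance between two descriptors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_desc, gallery_descs):
    """Return the index of the gallery descriptor nearest to the query."""
    return min(range(len(gallery_descs)),
               key=lambda i: l2_distance(query_desc, gallery_descs[i]))

# Toy shared-space descriptors (invented values, not from the paper):
# one descriptor per 2D image patch and per 3D point cloud volume.
patch_descriptors  = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]]
volume_descriptors = [[0.0, 0.2, 0.8], [1.0, 0.0, 0.1], [0.2, 0.9, 0.0]]

# 2D -> 3D retrieval: each patch is matched to its nearest volume descriptor.
matches = [retrieve(p, volume_descriptors) for p in patch_descriptors]
print(matches)  # [1, 2, 0]
```

If the learned descriptors are truly domain-invariant, corresponding patch/volume pairs land near each other in this space, so the same nearest-neighbor search works in either direction (2D-to-3D or 3D-to-2D).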