
PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection

Note: We do not have the ability to review papers.

PubDate: May 2022

Teams: Tsinghua University; Xi’an Jiaotong University; DIDI

Writers: Linfeng Zhang, Runpei Dong, Hung-Shuo Tai, Kaisheng Ma

PDF: PointDistiller: Structured Knowledge Distillation Towards Efficient and Compact 3D Detection

Abstract

The remarkable breakthroughs in point cloud representation learning have boosted its usage in real-world applications such as self-driving cars and virtual reality. However, these applications usually have an urgent requirement for not only accurate but also efficient 3D object detection. Recently, knowledge distillation has been proposed as an effective model compression technique, which transfers the knowledge from an over-parameterized teacher to a lightweight student and achieves consistent effectiveness in 2D vision. However, due to the sparsity and irregularity of point clouds, directly applying previous image-based knowledge distillation methods to point cloud detectors usually leads to unsatisfactory performance. To fill the gap, this paper proposes PointDistiller, a structured knowledge distillation framework for point clouds-based 3D detection. Concretely, PointDistiller includes local distillation, which extracts and distills the local geometric structure of point clouds with dynamic graph convolution, and a reweighted learning strategy, which highlights student learning on the crucial points or voxels to improve knowledge distillation efficiency. Extensive experiments on both voxel-based and raw points-based detectors have demonstrated the effectiveness of our method over seven previous knowledge distillation methods. For instance, our 4× compressed PointPillars student achieves 2.8 and 3.4 mAP improvements on BEV and 3D object detection, outperforming its teacher by 0.9 and 1.8 mAP, respectively. Codes have been released at this https URL.
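The two ideas in the abstract can be illustrated with a minimal, hedged sketch (not the authors' implementation): local distillation is approximated here as matching DGCNN-style edge features (neighbor feature minus center feature) over a kNN graph, and the reweighted learning strategy is approximated by per-point importance weights on the distillation loss. All function names and the `weights` input are hypothetical simplifications.

```python
# Hedged sketch of PointDistiller's two components in simplified form:
# 1) local distillation: for each point, gather k nearest neighbors and
#    match edge features (neighbor - center), which encode local geometry;
# 2) reweighted learning: scale each point's distillation loss by an
#    importance weight (here a hypothetical precomputed score).
import math

def knn(points, k):
    """Indices of the k nearest neighbors of every point (brute force)."""
    nbrs = []
    for i, p in enumerate(points):
        order = sorted(range(len(points)),
                       key=lambda j: math.dist(p, points[j]))
        nbrs.append([j for j in order if j != i][:k])
    return nbrs

def edge_features(feats, points, k):
    """Edge feature = neighbor feature minus center feature (DGCNN-style)."""
    nbrs = knn(points, k)
    return [[[fn - fc for fn, fc in zip(feats[j], feats[i])]
             for j in nbrs[i]] for i in range(len(points))]

def reweighted_distill_loss(student_feats, teacher_feats, points, weights, k=2):
    """Per-point-weighted MSE between student and teacher edge features."""
    es = edge_features(student_feats, points, k)
    et = edge_features(teacher_feats, points, k)
    loss = 0.0
    for i, w in enumerate(weights):
        for fs, ft in zip(es[i], et[i]):
            loss += w * sum((a - b) ** 2 for a, b in zip(fs, ft))
    return loss / len(points)
```

In this reading, a student whose local geometric structure matches the teacher's incurs zero loss, while crucial points (large weights) dominate the gradient, which is the efficiency argument the abstract makes for reweighting.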
