
CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory

Note: We don't have the ability to review this paper.

PubDate: Oct 2022

Teams: New York University; FAIR Labs

Writers: Nur Muhammad (Mahi) Shafiullah, Chris Paxton, Lerrel Pinto, Soumith Chintala, Arthur Szlam

PDF: CLIP-Fields: Weakly Supervised Semantic Fields for Robotic Memory

Abstract

We propose CLIP-Fields, an implicit scene model that can be trained with no direct human supervision. This model learns a mapping from spatial locations to semantic embedding vectors. The mapping can then be used for a variety of tasks, such as segmentation, instance identification, semantic search over space, and view localization. Most importantly, the mapping can be trained with supervision coming only from web-image and web-text trained models such as CLIP, Detic, and Sentence-BERT. When compared to baselines like Mask-RCNN, our method outperforms them on few-shot instance identification and semantic segmentation on the HM3D dataset with only a fraction of the examples. Finally, we show that using CLIP-Fields as a scene memory, robots can perform semantic navigation in real-world environments. Our code and demonstrations are available here: https://mahis.life/clip-fields/.
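To make the idea of a spatial-location-to-semantic-embedding mapping concrete, the sketch below shows one way such an implicit field and its text-query lookup could look in PyTorch. This is a minimal illustration under assumptions, not the paper's implementation: the `SemanticField` MLP, its dimensions, and the random vector standing in for a CLIP/Sentence-BERT text embedding are all placeholders.

```python
# Minimal sketch (not the authors' code) of an implicit semantic field:
# an MLP maps 3D coordinates to embedding vectors, and scene points are
# ranked by cosine similarity against a text-query embedding.
import torch
import torch.nn as nn


class SemanticField(nn.Module):
    """Maps (x, y, z) points to fixed-size semantic embedding vectors."""

    def __init__(self, embed_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) -> (N, embed_dim), L2-normalized like CLIP embeddings
        return nn.functional.normalize(self.net(xyz), dim=-1)


def semantic_search(field: SemanticField, points: torch.Tensor,
                    text_embedding: torch.Tensor, top_k: int = 5):
    """Rank scene points by cosine similarity to a text-query embedding."""
    with torch.no_grad():
        point_embeddings = field(points)                       # (N, D)
        query = nn.functional.normalize(text_embedding, dim=-1)
        scores = point_embeddings @ query                      # (N,)
        best = scores.topk(top_k)
    return points[best.indices], best.values


if __name__ == "__main__":
    field = SemanticField()
    scene_points = torch.rand(10_000, 3)   # stand-in for mapped 3D points
    query_vec = torch.randn(512)           # stand-in for a CLIP text embedding
    locations, similarity = semantic_search(field, scene_points, query_vec)
    print(locations.shape, similarity.shape)
```

In the paper's setting, the supervision for such a field comes from web-trained models (CLIP, Detic, Sentence-BERT) applied to the robot's RGB-D observations rather than from human labels; the sketch above only illustrates the query-time interface.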
