
Weakly Supervised Semantic Segmentation by Pixel-to-Prototype Contrast

Note: We don't have the ability to review papers.

PubDate: Sep 2022

Teams: Beihang University

Writers: Ye Du; Zehua Fu; Qingjie Liu; Yunhong Wang

PDF: Weakly Supervised Semantic Segmentation by Pixel-to-Prototype Contrast

Abstract

Though image-level weakly supervised semantic segmentation (WSSS) has achieved great progress with Class Activation Maps (CAMs) as the cornerstone, the large supervision gap between classification and segmentation still hampers the model from generating more complete and precise pseudo masks for segmentation. In this study, we propose weakly-supervised pixel-to-prototype contrast that can provide pixel-level supervisory signals to narrow the gap. Guided by two intuitive priors, our method is executed across different views and within each single view of an image, aiming to impose cross-view feature semantic consistency regularization and facilitate intra(inter)-class compactness(dispersion) of the feature space. Our method can be seamlessly incorporated into existing WSSS models without any changes to the base networks and does not incur any extra inference burden. Extensive experiments manifest that our method consistently improves two strong baselines by large margins, demonstrating its effectiveness. Specifically, built on top of SEAM, we improve the initial seed mIoU on PASCAL VOC 2012 from 55.4% to 61.5%. Moreover, armed with our method, we increase the segmentation mIoU of EPS from 70.8% to 73.6%, achieving a new state-of-the-art.
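To make the core idea more concrete, below is a minimal, hypothetical sketch of a pixel-to-prototype contrastive loss in PyTorch. It assumes CAM scores are available as soft pixel-level weights, builds class prototypes as CAM-weighted averages of pixel embeddings, and pulls each pixel toward its pseudo-labeled prototype with an InfoNCE-style objective. All names, shapes, and the loss form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pixel_to_prototype_contrast(feats, cam, tau=0.1):
    """Hypothetical pixel-to-prototype contrastive loss.

    feats: (B, D, H, W) pixel embeddings from the backbone
    cam:   (B, C, H, W) normalized class activation scores
    tau:   temperature for the contrastive logits
    """
    B, D, H, W = feats.shape
    C = cam.shape[1]

    feats = F.normalize(feats, dim=1)                      # unit-length pixel features
    flat_f = feats.permute(0, 2, 3, 1).reshape(-1, D)      # (N, D), N = B*H*W
    flat_cam = cam.permute(0, 2, 3, 1).reshape(-1, C)      # (N, C)

    # Class prototypes: CAM-weighted average of pixel features over the batch.
    weights = flat_cam / (flat_cam.sum(dim=0, keepdim=True) + 1e-6)  # (N, C)
    prototypes = F.normalize(weights.t() @ flat_f, dim=1)            # (C, D)

    # Pseudo labels: the class with the highest CAM score at each pixel.
    pseudo_labels = flat_cam.argmax(dim=1)                           # (N,)

    # InfoNCE-style objective: pull each pixel toward its class prototype,
    # push it away from the prototypes of the other classes.
    logits = flat_f @ prototypes.t() / tau                           # (N, C)
    return F.cross_entropy(logits, pseudo_labels)
```

The cross-view consistency described in the abstract would additionally relate pixels and prototypes computed from different augmented views of the same image; that part is omitted from this sketch for brevity.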
