
Masked-attention Mask Transformer for Universal Image Segmentation

Note: We are not able to provide a review of this paper.

PubDate: Dec 2021

Teams: Facebook AI Research (FAIR); University of Illinois at Urbana-Champaign

Writers: Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, Rohit Girdhar

PDF: Masked-attention Mask Transformer for Universal Image Segmentation

Abstract

Image segmentation is about grouping pixels with different semantics, e.g., category or instance membership, where each choice of semantics defines a task. While only the semantics of each task differ, current research focuses on designing specialized architectures for each task. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance or semantic). Its key components include masked attention, which extracts localized features by constraining cross-attention within predicted mask regions. In addition to reducing the research effort by at least three times, it outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, Mask2Former sets a new state-of-the-art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO) and semantic segmentation (57.7 mIoU on ADE20K). GitHub link: https://bowenc0221.github.io/mask2former/.
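The key idea in the abstract, masked attention, restricts each query's cross-attention to the foreground region of the mask predicted at the previous decoder layer, rather than attending over the full image. A minimal numpy sketch of that mechanism is below; the function name, shapes, and the boolean `region_mask` argument are illustrative assumptions, not the paper's exact implementation (which operates on multi-scale pixel features inside a transformer decoder).

```python
import numpy as np

def masked_attention(Q, K, V, region_mask):
    """Sketch of masked cross-attention (assumed interface, not the
    official Mask2Former code).

    Q:           (num_queries, d)  query embeddings
    K, V:        (num_pixels, d)   pixel-level keys/values
    region_mask: (num_queries, num_pixels) bool; True = pixel lies
                 inside the mask predicted for that query, so the
                 query is allowed to attend to it.
    """
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)
    # Add -inf (approximated by a large negative number) outside the
    # predicted mask region, so softmax assigns those pixels ~0 weight.
    logits = np.where(region_mask, logits, -1e9)
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: two queries, five pixels, disjoint predicted regions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(5, 4))
V = rng.normal(size=(5, 4))
mask = np.array([[True,  True,  False, False, False],
                 [False, False, True,  True,  True ]])
out = masked_attention(Q, K, V, mask)
```

Because the logits outside each query's region are effectively -inf, changing the values at those pixels leaves that query's output untouched, which is what "constraining cross-attention within predicted mask regions" means operationally.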
