
A Fixation-Based 360° Benchmark Dataset for Salient Object Detection

Note: We don't have the ability to review papers

PubDate: May 2020

Teams: IETR INSA Rennes

Writers: Yi Zhang, Lu Zhang, Wassim Hamidouche, Olivier Deforges

PDF: A Fixation-based 360° Benchmark Dataset for Salient Object Detection

Abstract

Fixation prediction (FP) in panoramic content has been widely investigated along with the booming trend of virtual reality (VR) applications. However, another issue within the field of visual saliency, salient object detection (SOD), has seldom been explored in 360° (or omnidirectional) images due to the lack of datasets representative of real scenes with pixel-level annotations. To this end, we collect 107 equirectangular panoramas with challenging scenes and multiple object classes. Based on the consistency between FP and explicit saliency judgements, we further manually annotate 1,165 salient objects over the collected images with precise masks under the guidance of real human eye fixation maps. Six state-of-the-art SOD models are then benchmarked on the proposed fixation-based 360° image dataset (F-360iSOD) by applying a multiple cubic projection-based fine-tuning method. Experimental results show a limitation of the current methods when used for SOD in panoramic images, which indicates that the proposed dataset is challenging. Key issues for 360° SOD are also discussed. The proposed dataset is available at this https URL.
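The fine-tuning strategy mentioned in the abstract relies on projecting each equirectangular panorama onto cube faces so that off-the-shelf 2D SOD models can be applied to less-distorted views. As an illustration only (not the authors' released code), the minimal NumPy sketch below samples one cube face from an equirectangular image; the function name, face conventions, and nearest-neighbour sampling are assumptions for clarity.

```python
import numpy as np

def equirect_to_cube_face(equi, face_size, face="front"):
    """Sample one cube face from an equirectangular panorama (illustrative sketch).

    equi: H x W x C numpy array (equirectangular image).
    face: one of "front", "right", "back", "left", "up", "down".
    Returns a face_size x face_size x C array (nearest-neighbour sampling).
    """
    H, W = equi.shape[:2]
    # Normalised face coordinates in [-1, 1]
    a = (np.arange(face_size) + 0.5) / face_size * 2.0 - 1.0
    u, v = np.meshgrid(a, a)  # u: left -> right, v: top -> bottom

    # 3D ray direction for each pixel of the requested face
    if face == "front":
        x, y, z = u, -v, np.ones_like(u)
    elif face == "right":
        x, y, z = np.ones_like(u), -v, -u
    elif face == "back":
        x, y, z = -u, -v, -np.ones_like(u)
    elif face == "left":
        x, y, z = -np.ones_like(u), -v, u
    elif face == "up":
        x, y, z = u, np.ones_like(u), v
    elif face == "down":
        x, y, z = u, -np.ones_like(u), -v
    else:
        raise ValueError(face)

    # Convert ray directions to spherical coordinates (longitude, latitude)
    lon = np.arctan2(x, z)                      # [-pi, pi]
    lat = np.arctan2(y, np.sqrt(x**2 + z**2))   # [-pi/2, pi/2]

    # Map spherical coordinates to equirectangular pixel indices
    col = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    row = ((0.5 - lat / np.pi) * H).astype(int).clip(0, H - 1)
    return equi[row, col]
```

In such a pipeline, each panorama would typically be converted into its six cube faces, the 2D SOD model fine-tuned or run on those faces, and the per-face predictions mapped back onto the sphere for evaluation against the equirectangular ground-truth masks.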
