Accurate and Robust Video Saliency Detection via Self-Paced Diffusion

Note: We don't have the ability to review papers.

PubDate: September 2019

Teams: Beihang University; Stony Brook University

Writers: Yunxiao Li; Shuai Li; Chenglizhao Chen; Aimin Hao; Hong Qin

PDF: Accurate and Robust Video Saliency Detection via Self-Paced Diffusion

Abstract

Conventional video saliency detection methods frequently follow the common bottom-up thread and estimate video saliency in a short-term fashion. As a result, such methods cannot avoid the obstinate accumulation of errors when the collected low-level clues are persistently ill-detected. Moreover, we notice that some video frames that are not near the current video frame along the time axis may still benefit saliency detection in the current frame. We therefore propose to solve the aforementioned problems with our newly designed key frame strategy (KFS), whose core rationale is to utilize both the spatial-temporal coherency of the salient foregrounds and the objectness prior (i.e., how likely it is for an object proposal to contain an object of any class) to reveal valuable long-term information. This newly revealed long-term information then guides our subsequent "self-paced" saliency diffusion, which enables each key frame to determine its own diffusion range and diffusion strength to correct the ill-detected video frames. At the algorithmic level, we first divide a video sequence into short-term frame batches and obtain object proposals in a frame-wise manner. Then, for each object proposal, we utilize a pre-trained deep saliency model to obtain high-dimensional features that represent the spatial contrast. Since contrast computation over multiple neighboring video frames (i.e., in a non-local manner) is relatively insensitive to appearance variation, object proposals with high-quality low-level saliency estimates frequently exhibit strong similarity over the temporal scale. The long-term consistency (e.g., appearance models/movement patterns) of the salient foregrounds can thus be explicitly revealed via similarity analysis. We further boost detection accuracy via long-term-information-guided saliency diffusion in a self-paced manner. We have conducted extensive experiments comparing our method with 16 state-of-the-art methods over the 4 largest publicly available benchmarks, and all results demonstrate the superiority of our method in terms of both accuracy and robustness.
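The abstract walks through a concrete pipeline: batch the video, score frame-wise object proposals with a pre-trained deep saliency model, select key frames from objectness plus temporal feature similarity, and let each key frame diffuse its saliency at its own pace. The Python sketch below only illustrates that control flow under loud assumptions: the helpers `select_key_frames` and `self_paced_diffusion`, the thresholds, and the similarity-weighted blending rule are hypothetical stand-ins, not the authors' released implementation, and feature extraction / proposal generation are abstracted into precomputed dictionaries.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_key_frames(batches, objness, feats, obj_thresh=0.7, sim_thresh=0.8):
    """Pick key frames: high objectness AND features that stay consistent
    with the other frames in the same short-term batch (temporal coherency).
    Thresholds here are illustrative, not from the paper."""
    keys = []
    for batch in batches:
        for t in batch:
            neighbors = [u for u in batch if u != t]
            temporal_sim = np.mean([cosine(feats[t], feats[u]) for u in neighbors])
            if objness[t] > obj_thresh and temporal_sim > sim_thresh:
                keys.append(t)
    return keys

def self_paced_diffusion(saliency, feats, keys, base_range=5, min_sim=0.5):
    """Each key frame sets its own diffusion range and strength from its
    feature similarity to other frames, then blends its saliency into them."""
    corrected = dict(saliency)
    for k in keys:
        for t in saliency:
            if t == k:
                continue
            sim = cosine(feats[k], feats[t])
            # Self-paced rule (assumed): more similar frames get a wider
            # reach and a stronger correction from this key frame.
            if sim > min_sim and abs(t - k) <= int(base_range * sim):
                corrected[t] = (1.0 - sim) * corrected[t] + sim * saliency[k]
    return corrected

# Toy usage with synthetic, correlated per-frame features and coarse saliency maps.
rng = np.random.default_rng(0)
T = 12
base = rng.normal(size=64)
feats = {t: base + 0.1 * rng.normal(size=64) for t in range(T)}
saliency = {t: rng.random((8, 8)) for t in range(T)}
objness = {t: rng.random() for t in range(T)}
batches = [list(range(0, 6)), list(range(6, 12))]

keys = select_key_frames(batches, objness, feats)
refined = self_paced_diffusion(saliency, feats, keys)
```

In the paper's terms, `select_key_frames` plays the role of the KFS and the similarity-gated blend stands in for the "self-paced" correction; in the actual method, the diffusion ranges and strengths would come from the learned models rather than these fixed rules.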
