
The Object at Hand: Automated Editing for Mixed Reality Video Guidance from Hand-Object Interactions

Note: We do not have the ability to review this paper.

PubDate: November 2021

Teams: University of Bristol

Authors: Yao Lu; Walterio W. Mayol-Cuevas

PDF:

Abstract

In this paper, we address the problem of automatically extracting the steps that compose real-life hand activities. This is a key competence for processing, monitoring and providing video guidance in Mixed Reality systems. We use egocentric vision to observe hand-object interactions in real-world tasks and automatically decompose a video into its constituent steps. Our approach combines hand-object interaction (HOI) detection, object similarity measurement and a finite state machine (FSM) representation to automatically edit videos into steps. We use a combination of Convolutional Neural Networks (CNNs) and the FSM to discover and edit cuts and to merge segments while observing real hand activities. We evaluate our algorithm quantitatively and qualitatively on two datasets: GTEA [19] and a new dataset we introduce for Chinese tea making. Results show our method segments hand-object interaction videos into key step segments with high precision.
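The abstract only sketches the pipeline at a high level. As a rough illustration of the FSM idea, the Python sketch below turns a stream of per-frame HOI detections into step segments and merges adjacent segments whose object appearance features look alike. Everything here is an assumption for illustration: the two-state machine, the segment_steps and cosine helpers, the similarity threshold and minimum segment length are not taken from the paper, whose actual CNN detectors and FSM are more involved.

# Illustrative sketch only: FSM-based step segmentation from per-frame
# hand-object interaction (HOI) detections. Names, thresholds, and the
# cosine-similarity merge rule are assumptions, not the paper's method.
from dataclasses import dataclass
from typing import List, Optional
import numpy as np

@dataclass
class Segment:
    start: int             # first frame of the step
    end: int               # last frame of the step
    embedding: np.ndarray  # mean object appearance feature over the step

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def segment_steps(per_frame: List[Optional[np.ndarray]],
                  sim_thresh: float = 0.8,
                  min_len: int = 5) -> List[Segment]:
    """Two-state FSM: IDLE (no HOI) <-> ACTIVE (hand on an object).
    A cut is emitted when an interaction ends; consecutive segments with
    similar object embeddings are merged into a single step."""
    segments: List[Segment] = []
    state, start, feats = "IDLE", 0, []
    for t, feat in enumerate(per_frame):
        if state == "IDLE" and feat is not None:
            # Interaction begins: open a new candidate segment.
            state, start, feats = "ACTIVE", t, [feat]
        elif state == "ACTIVE":
            if feat is not None:
                feats.append(feat)
            else:
                # Interaction ends: close the segment if long enough.
                state = "IDLE"
                if len(feats) >= min_len:
                    segments.append(Segment(start, t - 1, np.mean(feats, axis=0)))
    if state == "ACTIVE" and len(feats) >= min_len:
        segments.append(Segment(start, len(per_frame) - 1, np.mean(feats, axis=0)))

    # Merge adjacent segments that appear to involve the same object.
    merged: List[Segment] = []
    for seg in segments:
        if merged and cosine(merged[-1].embedding, seg.embedding) >= sim_thresh:
            prev = merged.pop()
            emb = (prev.embedding + seg.embedding) / 2.0
            merged.append(Segment(prev.start, seg.end, emb))
        else:
            merged.append(seg)
    return merged

As a usage example, feeding segment_steps a frame list where one object's feature vector appears for frames 0-30, None for frames 31-40, and a different object's feature for frames 41-80 would yield two Segment entries; if both runs carried near-identical features, the merge pass would collapse them into one step.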
