Reconstruction of Dexterous 3D Motion Data From a Flexible Magnetic Sensor With Deep Learning and Structure-Aware Filtering
PubDate: October 2020
Teams: Tohoku University; University of Hong Kong
Writers: Jiawei Huang; Ryo Sugawara; Kinfung Chu; Taku Komura; Yoshifumi Kitamura
We propose IM3D+, a novel approach to reconstructing 3D motion data from a flexible magnetic flux sensor array using deep learning and a structure-aware temporal bilateral filter. Computing the 3D configuration of markers (inductor-capacitor (LC) coils) from flux sensor data is difficult because existing numerical approaches suffer from system noise, dead angles, the need for initialization, and limitations on the sensor array’s layout. We address these issues with deep neural networks that learn the regression from simulated flux values to the LC coils’ 3D configuration; the learned model can then be applied to actual LC coils at any location and orientation within the capture volume. To cope with system noise and the dead-angle limitation caused by the characteristics of the hardware and sensing principle, we propose a structure-aware temporal bilateral filter for reconstructing motion sequences. Our method can track a wide range of movements, including fingers manipulating objects, beetles moving inside a vivarium with leaves and soil, and the flow of opaque fluid. Since the lightweight wireless markers need no power supply, our method can robustly track movements over very long periods, making it suitable for observations that are difficult with existing motion-tracking systems. Furthermore, the flexibility of the flux sensor layout allows users to reconfigure it for their own applications, making our approach suitable for a variety of virtual reality applications.
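To illustrate the temporal-bilateral idea behind the paper's filter, here is a minimal sketch of a plain temporal bilateral filter applied to a single marker trajectory. It is an assumption-laden simplification: the function name, window radius, and the `sigma_t`/`sigma_x` parameters are hypothetical, and the paper's actual filter additionally exploits the structure between markers, which this sketch omits.

```python
import numpy as np

def temporal_bilateral_filter(traj, radius=3, sigma_t=2.0, sigma_x=0.05):
    """Smooth a (T, 3) trajectory of 3D marker positions over time.

    Each output frame is a weighted average of nearby frames, where the
    weight combines temporal proximity (Gaussian over the frame offset)
    with positional similarity (Gaussian over Euclidean distance). Frames
    whose positions differ sharply from the current one get small weights,
    so fast intentional motions are preserved while jitter is smoothed.
    Hypothetical parameters, not taken from the paper.
    """
    traj = np.asarray(traj, dtype=float)
    T = len(traj)
    out = np.empty_like(traj)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        window = traj[lo:hi]                       # temporal neighborhood
        dt = np.arange(lo, hi) - t                 # frame offsets
        w_t = np.exp(-0.5 * (dt / sigma_t) ** 2)   # temporal weights
        dx = np.linalg.norm(window - traj[t], axis=1)
        w_x = np.exp(-0.5 * (dx / sigma_x) ** 2)   # positional weights
        w = w_t * w_x
        out[t] = (w[:, None] * window).sum(axis=0) / w.sum()
    return out
```

On a noisy trajectory this reduces jitter without the over-smoothing of a purely temporal Gaussian, because the positional term refuses to average across genuine, large motions.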