VisMerge: Light Adaptive Vision Augmentation via Spectral and Temporal Fusion of Non-visible Light
PubDate: November 2017
Team: Osaka University
Authors: Jason Orlosky; Peter Kim; Kiyoshi Kiyokawa; Tomohiro Mashita; Photchara Ratsamee; Yuki Uranishi; Haruo Takemura
Abstract
Low-light situations pose a significant challenge to individuals working in a variety of fields such as firefighting, rescue, maintenance, and medicine. Tools like flashlights and infrared (IR) cameras have been used to augment vision in the past, but they must often be operated manually, provide a field of view that is decoupled from the operator’s own view, and use color schemes that can occlude content from the original scene. To help address these issues, we present VisMerge, a framework that combines a thermal imaging head-mounted display (HMD) with algorithms that temporally and spectrally merge video streams of different light bands into the same field of view. For temporal synchronization, we first develop a variant of the time warping algorithm used in virtual reality (VR), redesigned to merge video see-through (VST) cameras with different latencies. Next, using computer vision and image compositing, we develop five new algorithms for merging non-uniform video streams from a standard RGB camera and a small form-factor IR camera. We then implement six existing fusion methods and conduct a series of comparative experiments: a system-level analysis of the augmented reality (AR) time warping algorithm, a pilot experiment testing perceptual consistency across all eleven merging algorithms, and an in-depth experiment testing the performance of the top-rated algorithms in a VR (simulated AR) search task. Results showed that temporal registration error due to inter-camera latency was reduced by an average of 87.04%, that the wavelet and inverse stipple algorithms were rated highest perceptually, that noise modulation performed best in the search task, and that freedom of user movement increased significantly with the visualizations engaged.
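For intuition, the two sketches below illustrate generic versions of the techniques the abstract names; they are not the paper's implementations, and the function names and parameter choices are ours. The first shows the standard rotation-only time warp from VR rendering: a frame captured under an older head orientation R_capture is reprojected to the current display orientation R_display through the induced homography H = K · R_delta · K^-1, where K is the camera intrinsic matrix. VisMerge's variant additionally reconciles two VST cameras with different latencies, which this sketch does not attempt.

```python
import numpy as np
import cv2

def rotational_timewarp(frame, K, R_capture, R_display):
    """Reproject a late camera frame from the head orientation at capture
    time to the current display orientation (rotation-only time warp).
    Sketch only: ignores head translation and per-camera latency handling.
    """
    R_delta = R_display @ R_capture.T      # relative head rotation since capture
    H = K @ R_delta @ np.linalg.inv(K)     # homography induced by pure rotation
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```

The second is a common baseline for wavelet-domain fusion of pre-registered, same-size grayscale RGB and IR frames using PyWavelets: approximation coefficients are averaged and detail coefficients keep whichever source has the larger magnitude, so IR structure survives where the visible image is dark. This merge rule is a textbook choice and may differ from the paper's wavelet algorithm.

```python
import numpy as np
import pywt

def fuse_wavelet(gray_rgb, ir, wavelet="db2", level=2):
    """Fuse two pre-aligned grayscale frames in the wavelet domain.
    Approximations are averaged; details keep the larger-magnitude source.
    Illustrative baseline, not the exact VisMerge algorithm.
    """
    ca = pywt.wavedec2(gray_rgb.astype(np.float32), wavelet, level=level)
    cb = pywt.wavedec2(ir.astype(np.float32), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                 # coarse approximation: average
    for da, db in zip(ca[1:], cb[1:]):              # per-level (H, V, D) details
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    out = pywt.waverec2(fused, wavelet)
    return np.clip(out, 0, 255).astype(np.uint8)
```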