Three Tiered Visual-Inertial Tracking and Mapping for Augmented Reality in Urban Settings
PubDate: July 2020
Teams: UMASS Lowell
Writers: Thomas Calloway; Dalila B. Megherbi
Reliably localizing and tracking optical see-through augmented reality (AR) displays moving relative to content in the physical world is one of the primary technical challenges facing widespread augmented reality adoption today. While significant progress has been made in recent years, most augmented reality applications allowing for shared, co-registered experiences require either specialized markers or local maps of relatively small rooms or spaces. In this work we propose a three-tiered approach to visual-inertial simultaneous localization and mapping (SLAM) in urban environments. We estimate head pose locally with a highly flexible inertial navigation system, make novel use of edge computing for local mapping and drift correction, and then propose the use of cloud computing for dense 3D modelling and map sharing on a global scale. We evaluate the proposed approach using an open-source optical see-through headset in the streets of downtown Boston.
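The three tiers described above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: all class names (`InertialTracker`, `EdgeMapper`, `CloudModel`) and the simplified dead-reckoning and correction logic are assumptions introduced here to show how local inertial propagation, edge-side drift correction, and cloud map sharing could fit together.

```python
import numpy as np


class InertialTracker:
    """Tier 1: local head-pose propagation from IMU samples (dead reckoning)."""

    def __init__(self):
        self.position = np.zeros(3)
        self.velocity = np.zeros(3)

    def propagate(self, accel, dt):
        # Constant-acceleration integration over one IMU interval; a real
        # inertial navigation system also tracks orientation, sensor biases,
        # and gravity compensation.
        self.position += self.velocity * dt + 0.5 * accel * dt**2
        self.velocity += accel * dt
        return self.position.copy()


class EdgeMapper:
    """Tier 2: an edge server holding a local map returns drift corrections."""

    def __init__(self, reference_pose):
        # Stand-in for a locally mapped landmark the edge can register against.
        self.reference_pose = np.asarray(reference_pose, dtype=float)

    def correct(self, estimated_pose):
        # Offset that re-registers the drifting local estimate to the map.
        return self.reference_pose - estimated_pose


class CloudModel:
    """Tier 3: the cloud aggregates corrected keyframes into a shared model."""

    def __init__(self):
        self.keyframes = []

    def upload(self, pose):
        self.keyframes.append(np.asarray(pose, dtype=float))


# Simulate 1 s of noisy 100 Hz IMU data, then apply one edge correction.
tracker = InertialTracker()
edge = EdgeMapper(reference_pose=[1.0, 0.0, 0.0])
cloud = CloudModel()

rng = np.random.default_rng(0)
dt = 0.01
for _ in range(100):
    accel = np.array([2.0, 0.0, 0.0]) + rng.normal(0.0, 0.05, 3)  # noisy IMU
    pose = tracker.propagate(accel, dt)  # drifts away from the true path

pose += edge.correct(pose)  # edge-side drift correction snaps pose to the map
cloud.upload(pose)          # share the corrected keyframe globally
```

The layering mirrors the latency budget in the abstract: pose propagation runs on-device every frame, the heavier map registration runs nearby on the edge, and only compact corrected keyframes travel to the cloud.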