Integrated view-input AR interaction for virtual object manipulation using tablets and smartphones
PubDate: November 2015
Teams: The University of Tokyo
Writers: Tomohiro Tanikawa;Hidenori Uzuka;Takuji Narumi;Michitaka Hirose
PDF: Integrated view-input AR interaction for virtual object manipulation using tablets and smartphones
Abstract
Lately, mobile augmented reality (AR) has become very popular and is widely used for commercial and product promotion activities. However, in almost all mobile AR applications, the user only views annotated information or a preset motion of a virtual object in the AR environment, and cannot interact with virtual objects the way they would interact with real objects in the real environment. In this paper, in an attempt to realize more intuitive and realistic object manipulation in the mobile AR environment, we propose an integrated view-input AR interaction method that unifies device manipulation and virtual object manipulation. The method enables the user to hold a 3D virtual object by touching the displayed object on the 2D touch screen of a mobile device, and to move and rotate the object by moving and rotating the device itself while viewing the held object through the device's 2D screen. Based on this concept, we implemented three integrated methods, namely the Rod, Center, and Touch methods, and conducted a user study to establish baseline performance metrics for the proposed approach on an AR object manipulation task. The Rod method achieved the highest success rate (91%). Participants' feedback indicated that this is because the Rod method felt the most natural and evoked a fixed mental model that is conceivable in the real environment. These results suggest that visualizing the manipulation point on the screen and restricting interaction with virtual objects from the user's viewpoint, based on a conceivable mental model, can help the user achieve precise manipulation.
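The core idea of integrated view-input manipulation, holding an object by touch and then moving it rigidly with the device, can be sketched as a pose-attachment step. The following is a minimal illustration (not the paper's implementation; all function names here are assumptions): on touch, record the object's pose relative to the device, then re-anchor the object to the device pose each frame so translations and rotations of the device carry the object with it.

```python
import numpy as np

# Hypothetical sketch of view-input attachment using 4x4 homogeneous
# pose matrices (world-from-device, world-from-object). The names
# grab/update are illustrative, not from the paper.

def grab(device_pose: np.ndarray, object_pose: np.ndarray) -> np.ndarray:
    """On touch-down, record the object's pose in the device's frame."""
    return np.linalg.inv(device_pose) @ object_pose

def update(device_pose: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """Each frame, re-anchor the held object to the moving device."""
    return device_pose @ offset

def translation(x: float, y: float, z: float) -> np.ndarray:
    """Helper: a pure-translation pose matrix."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Example: object starts 0.5 m in front of the device; the user then
# moves the device 10 cm to the right, and the object follows rigidly.
device0 = translation(0.0, 0.0, 0.0)
obj0 = translation(0.0, 0.0, -0.5)
offset = grab(device0, obj0)
device1 = translation(0.1, 0.0, 0.0)
obj1 = update(device1, offset)
print(obj1[:3, 3])  # object translated with the device
```

The paper's Rod, Center, and Touch methods would differ mainly in where the attachment point sits relative to the device (and how rotations pivot about it), but the rigid-attachment update above is the common underlying step.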