Microsoft – Nweon Paper (https://paper.nweon.com)
Nweon (映维网): an industry information and data platform for virtual reality (VR) and augmented reality (AR)
Last updated: Thu, 30 Jun 2022 07:49:21 +0000

Quantifying the Effects of Working in VR for One Week https://paper.nweon.com/12389 Thu, 16 Jun 2022 02:16:25 +0000

PubDate: Jun 2022

Teams: Coburg University of Applied Sciences;Microsoft Research;University of Cambridge;University of Primorska

Writers: Verena Biener, Snehanjali Kalamkar, Negar Nouri, Eyal Ofek, Michel Pahud, John J. Dudley, Jinghui Hu, Per Ola Kristensson, Maheshya Weerasinghe, Klen Čopič Pucihar, Matjaž Kljun, Stephan Streuber, Jens Grubert

PDF: Quantifying the Effects of Working in VR for One Week

Abstract

Virtual Reality (VR) provides new possibilities for modern knowledge work. However, the potential advantages of virtual work environments can only be realized if it is feasible to work in them for an extended period of time. Until now, there have been only limited studies of the long-term effects of working in VR. This paper addresses the need to understand such long-term effects. Specifically, we report on a comparative study (n=16) in which participants worked in VR for an entire week – for five days, eight hours each day – as well as in a baseline physical desktop environment. This study aims to quantify the effects of exchanging a desktop-based work environment for a VR-based environment. Hence, during this study, we did not present participants with the best possible VR system but rather a setup delivering an experience comparable to working in the physical desktop environment. The study reveals that, as expected, VR results in significantly worse ratings across most measures. Among other results, we found concerning levels of simulator sickness, below-average usability ratings, and two participants dropped out on the first day of using VR due to migraine, nausea and anxiety. Nevertheless, there is some indication that participants gradually overcame negative first impressions and initial discomfort. Overall, this study helps lay the groundwork for subsequent research by clearly highlighting current shortcomings and identifying opportunities for improving the experience of working in VR.

BinauralGrad: A Two-Stage Conditional Diffusion Probabilistic Model for Binaural Audio Synthesis https://paper.nweon.com/12345 Mon, 06 Jun 2022 07:52:26 +0000

PubDate: May 2022

Teams: University of Science and Technology of China;Microsoft Research Asia;Imperial College London;Microsoft Azure Speech;University of Surrey;South China University of Technology

Writers: Yichong Leng, Zehua Chen, Junliang Guo, Haohe Liu, Jiawei Chen, Xu Tan, Danilo Mandic, Lei He, Xiang-Yang Li, Tao Qin, Sheng Zhao, Tie-Yan Liu

PDF: BinauralGrad: A Two-Stage Conditional Diffusion Probabilistic Model for Binaural Audio Synthesis

Abstract

Binaural audio plays a significant role in constructing immersive augmented and virtual realities. As it is expensive to record binaural audio from the real world, synthesizing it from mono audio has attracted increasing attention. This synthesis process involves not only the basic physical warping of the mono audio, but also room reverberations and head/ear-related filtering, which are difficult to simulate accurately with traditional digital signal processing. In this paper, we formulate the synthesis process from a different perspective by decomposing the binaural audio into a common part that is shared by the left and right channels and a specific part that differs in each channel. Accordingly, we propose BinauralGrad, a novel two-stage framework equipped with diffusion models to synthesize them respectively. Specifically, in the first stage, the common information of the binaural audio is generated with a single-channel diffusion model conditioned on the mono audio, based on which the binaural audio is generated by a two-channel diffusion model in the second stage. Combining this novel perspective of two-stage synthesis with advanced generative models (i.e., diffusion models), the proposed BinauralGrad is able to generate accurate and high-fidelity binaural audio samples. Experimental results show that on a benchmark dataset, BinauralGrad outperforms the existing baselines by a large margin in terms of both objective and subjective evaluation metrics (Wave L2: 0.128 vs. 0.157, MOS: 3.80 vs. 3.61). The generated audio samples are available online.
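As a concrete illustration of the common/specific decomposition described above, the sketch below (Python/NumPy) treats the common part as the per-sample mean of the left and right channels and the specific parts as per-channel residuals. This is an assumption made for illustration, not the paper's exact formulation, and the diffusion models themselves are omitted.

```python
import numpy as np

def decompose_binaural(left: np.ndarray, right: np.ndarray):
    """Split a binaural signal into a part common to both channels and
    per-channel residuals. Treating the common part as the per-sample mean
    is an assumption made for illustration, not the paper's definition."""
    common = 0.5 * (left + right)            # candidate "common" signal
    specific = np.stack([left - common,      # per-channel "specific" residuals
                         right - common])
    return common, specific

def reconstruct_binaural(common: np.ndarray, specific: np.ndarray):
    """Invert the decomposition: each channel = common + its residual."""
    return common + specific[0], common + specific[1]

# Sanity check: the decomposition is exactly invertible.
rng = np.random.default_rng(0)
left = rng.standard_normal(16_000)
right = rng.standard_normal(16_000)
common, specific = decompose_binaural(left, right)
left2, right2 = reconstruct_binaural(common, specific)
assert np.allclose(left, left2) and np.allclose(right, right2)
```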

DreamStream: Immersive and Interactive Spectating in VR https://paper.nweon.com/12294 Wed, 25 May 2022 07:13:25 +0000

PubDate:

Teams: Microsoft Research

Writers: Balasaravanan Thoravi Kumaravel;Andrew D Wilson

PDF: DreamStream: Immersive and Interactive Spectating in VR

Abstract

Today, spectating and streaming virtual reality (VR) activities typically involves spectators viewing a 2D stream of the VR user’s view. Streaming 2D video of the gameplay is popular and well supported by platforms such as Twitch. However, generic streaming of full 3D representations is less explored. Thus, while the VR player’s experience may be fully immersive, spectators are limited to 2D videos. This asymmetry lessens the overall experience for spectators, who may themselves be eager to spectate in VR. DreamStream puts viewers in the virtual environment of the VR application, allowing them to look “over the shoulder” of the VR player. Spectators can view streamed VR content immersively in 3D, independently explore the VR scene beyond what the VR player sees, and ultimately cohabit the virtual environment alongside the VR player. For the VR player, DreamStream provides spatial awareness of all their spectators. DreamStream retrofits and works with existing VR applications. We discuss the design and implementation of DreamStream and carry out three informal qualitative evaluations. These evaluations shed light on the strengths and weaknesses of using DreamStream for interactive spectating. Our participants found that DreamStream’s VR viewer interface offered increased immersion and made it easier to communicate and interact with the VR player.

Rotation-constrained optical see-through headset calibration with bare-hand alignment https://paper.nweon.com/12062 Thu, 21 Apr 2022 06:13:21 +0000

PubDate:

Teams: Imperial College London

Writers: Xue Hu; Ferdinando Rodriguez y Baena; Fabrizio Cutolo

PDF: Rotation-constrained optical see-through headset calibration with bare-hand alignment

Abstract

The inaccessibility of user-perceived reality remains an open issue in pursuing the accurate calibration of optical see-through (OST) head-mounted displays (HMDs). Manual user alignment is usually required to collect a set of virtual-to-real correspondences, so that a default or an offline display calibration can be updated to account for the user’s eye position(s). Current alignment-based calibration procedures usually require point-wise alignments between rendered image point(s) and associated physical landmark(s) of a target calibration tool. As each alignment can only provide one or a few correspondences, repeated alignments are required to ensure calibration quality. This work presents an accurate and tool-less online OST calibration method to update an offline-calibrated eye-display model. The user’s bare hand is markerlessly tracked by a commercial RGBD camera anchored to the OST headset to generate a user-specific cursor for correspondence collection. The required alignment is object-wise and can provide thousands of unordered corresponding points in tracked space. The collected correspondences are registered by the proposed rotation-constrained iterative closest point (rcICP) method to optimise the viewpoint-related calibration parameters. We implemented the method for the Microsoft HoloLens 1. The resiliency of the proposed procedure to noisy data was evaluated through simulated tests and real experiments performed with an eye-replacement camera. According to the simulation tests, the rcICP registration is robust against possible user-induced rotational misalignment. With a single alignment, our method achieves 8.81 arcmin (1.37 mm) positional error and 1.76° rotational error in camera-based tests at arm-reach distance, and 10.79 arcmin (7.71 pixels) reprojection error in user tests.
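For context, the core of each ICP iteration is a least-squares rigid alignment of paired point sets. The sketch below shows that step in Python/NumPy (the Kabsch/Procrustes solution); the rotation constraint of the paper's rcICP, and the correspondence search itself, are not implemented here.

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping paired Nx3 points src
    onto dst (the Kabsch/Procrustes step inside each ICP iteration). The
    rotation constraint of the paper's rcICP is NOT implemented here."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                # proper rotation, det(R) = +1
    t = dst_c - R @ src_c
    return R, t

# Self-check on noise-free synthetic correspondences.
rng = np.random.default_rng(1)
src = rng.standard_normal((200, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.10, -0.20, 0.05])
R_est, t_est = rigid_align(src, dst)
assert np.allclose(R_est, R_true, atol=1e-6)
```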

Measuring the Perceived Three-Dimensional Location of Virtual Objects in Optical See-Through Augmented Reality https://paper.nweon.com/12060 Thu, 21 Apr 2022 05:55:28 +0000

PubDate: November 2021

Teams: Mississippi State University;University of Nebraska–Lincoln

Writers: Farzana Alam Khan; Veera Venkata Ram Murali Krishna Rao Muvva; Dennis Wu; Mohammed Safayet Arefin; Nate Phillips; J. Edward Swan

PDF: Measuring the Perceived Three-Dimensional Location of Virtual Objects in Optical See-Through Augmented Reality

Abstract

For optical see-through augmented reality (AR), a new method for measuring the perceived three-dimensional location of virtual objects is presented, where participants verbally report a virtual object’s location relative to both a vertical and a horizontal grid. The method is tested with a small (1.95 × 1.95 × 1.95 cm) virtual object at distances of 50 to 80 cm, viewed through a Microsoft HoloLens 1st-generation AR display. Two experiments examine two different virtual object designs, whether turning in a circle between reported object locations disrupts HoloLens tracking, and whether accuracy errors, including a rightward bias and underestimated depth, might be due to systematic errors that are restricted to a particular display. Turning in a circle did not disrupt HoloLens tracking, and testing with a second display did not suggest systematic errors restricted to a particular display. Instead, the experiments are consistent with the hypothesis that, when looking downwards at a horizontal plane, HoloLens 1st-generation displays exhibit a systematic rightward perceptual bias. Precision analysis suggests that the method could measure the perceived location of a virtual object to an accuracy of less than 1 mm.

Walking Through Walls: The Effect of Collision-Based Feedback on Affordance Judgments in Augmented Reality https://paper.nweon.com/12028 Thu, 21 Apr 2022 04:34:43 +0000

PubDate: November 2021

Teams: University of Utah;Vanderbilt University

Writers: Holly C. Gagnon; Dun Na; Keith Heiner; Jeanine Stefanucci; Sarah Creem-Regehr; Bobby Bodenheimer

PDF: Walking Through Walls: The Effect of Collision-Based Feedback on Affordance Judgments in Augmented Reality

Abstract

Feedback about actions in augmented reality (AR) is limited and can be ambiguous due to the nature of interacting with virtual objects. AR devices also have a restricted field of view (FOV), limiting the amount of visual information available to perform an action or to provide feedback during or after an action. We used the Microsoft HoloLens 1 to investigate whether perceptual-motor, collision-based outcome feedback calibrates judgments of whether one can pass through an aperture in AR. Additionally, we manipulated the amount of information available within the FOV by having participants view the aperture at two different distances. Feedback calibrated passing-through judgments at both distances but resulted in an overestimation of the just-passable aperture width. Moreover, the far viewing condition produced more overestimation of the just-passable aperture width than the near viewing condition.

Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and Robotics Together https://paper.nweon.com/11843 Tue, 08 Mar 2022 23:34:25 +0000

PubDate: Feb 2022

Teams: Microsoft Mixed Reality and AI Lab;ETH Zurich

Writers: Jeffrey Delmerico, Roi Poranne, Federica Bogo, Helen Oleynikova, Eric Vollenweider, Stelian Coros, Juan Nieto, Marc Pollefeys

PDF: Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and Robotics Together

Abstract

Spatial computing – the ability of devices to be aware of their surroundings and to represent this digitally – offers novel capabilities in human-robot interaction. In particular, the combination of spatial computing and egocentric sensing on mixed reality devices enables them to capture and understand human actions and translate these to actions with spatial meaning, which offers exciting new possibilities for collaboration between humans and robots. This paper presents several human-robot systems that utilize these capabilities to enable novel robot use cases: mission planning for inspection, gesture-based control, and immersive teleoperation. These works demonstrate the power of mixed reality as a tool for human-robot interaction, and the potential of spatial computing and mixed reality to drive the future of human-robot interaction.

Accuracy Evaluation of Touch Tasks in Commodity Virtual and Augmented Reality Head-Mounted Displays https://paper.nweon.com/11320 Thu, 14 Oct 2021 04:26:14 +0000

PubDate: Sep 2021

Teams: Coburg University of Applied Sciences and Arts;University of Primorska;Microsoft Research;University of Cambridge

Writers: Daniel Schneider, Verena Biener, Alexander Otte, Travis Gesslein, Philipp Gagel, Cuauhtli Campos, Klen Čopič Pucihar, Matjaž Kljun, Eyal Ofek, Michel Pahud, Per Ola Kristensson, Jens Grubert

PDF: Accuracy Evaluation of Touch Tasks in Commodity Virtual and Augmented Reality Head-Mounted Displays

Abstract

An increasing number of consumer-oriented head-mounted displays (HMDs) for augmented and virtual reality (AR/VR) are capable of finger and hand tracking. We report on the accuracy of off-the-shelf VR and AR HMDs when used for touch-based tasks such as pointing or drawing. Specifically, we report the finger-tracking accuracy of the VR head-mounted displays Oculus Quest and Vive Pro, of the Leap Motion controller when attached to a VR HMD, and of the AR head-mounted displays Microsoft HoloLens 2 and Magic Leap One. We present the results of two experiments in which we compare the accuracy for absolute and relative pointing tasks using both human participants and a robot. The results suggest that the HTC Vive has lower spatial accuracy than the Oculus Quest and Leap Motion, and that the Microsoft HoloLens 2 provides higher spatial accuracy than the Magic Leap One. These findings can serve as decision support for researchers and practitioners in choosing which systems to use in the future.
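As a minimal illustration of the kind of metric such an evaluation relies on, the sketch below computes the mean absolute 3D pointing error between commanded target positions and tracked fingertip positions. The numbers are made-up examples, not data from the study.

```python
import numpy as np

# Made-up example positions (metres) in a common tracking frame.
targets = np.array([[0.00,  0.00, 0.40],
                    [0.05,  0.02, 0.42],
                    [-0.03, 0.01, 0.38]])
tracked = np.array([[0.004, -0.002, 0.405],
                    [0.055,  0.018, 0.426],
                    [-0.027, 0.013, 0.383]])

# Per-trial Euclidean error and its mean, reported in millimetres.
errors = np.linalg.norm(tracked - targets, axis=1)
print(f"mean absolute error: {errors.mean() * 1000:.1f} mm "
      f"(per-trial: {np.round(errors * 1000, 1)} mm)")
```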

Rotation-constrained optical see-through headset calibration with bare-hand alignment https://paper.nweon.com/11288 Tue, 12 Oct 2021 03:01:19 +0000

PubDate: Aug 2021

Teams: Imperial College London;University of Pisa

Writers: Xue Hu, Ferdinando Rodriguez y Baena, Fabrizio Cutolo

PDF: Rotation-constrained optical see-through headset calibration with bare-hand alignment

Abstract

The inaccessibility of user-perceived reality remains an open issue in pursuing the accurate calibration of optical see-through (OST) head-mounted displays (HMDs). Manual user alignment is usually required to collect a set of virtual-to-real correspondences, so that a default or an offline display calibration can be updated to account for the user’s eye position(s). Current alignment-based calibration procedures usually require point-wise alignments between rendered image point(s) and associated physical landmark(s) of a target calibration tool. As each alignment can only provide one or a few correspondences, repeated alignments are required to ensure calibration quality.
This work presents an accurate and tool-less online OST calibration method to update an offline-calibrated eye-display model. The user’s bare hand is markerlessly tracked by a commercial RGBD camera anchored to the OST headset to generate a user-specific cursor for correspondence collection. The required alignment is object-wise and can provide thousands of unordered corresponding points in tracked space. The collected correspondences are registered by the proposed rotation-constrained iterative closest point (rcICP) method to optimise the viewpoint-related calibration parameters. We implemented the method for the Microsoft HoloLens 1. The resiliency of the proposed procedure to noisy data was evaluated through simulated tests and real experiments performed with an eye-replacement camera. According to the simulation tests, the rcICP registration is robust against possible user-induced rotational misalignment. With a single alignment, our method achieves 8.81 arcmin (1.37 mm) positional error and 1.76 degree rotational error in camera-based tests at arm-reach distance, and 10.79 arcmin (7.71 pixels) reprojection error in user tests.

Downsizing: The Effect of Mixed-Reality Person Representations on Stress and Presence in Telecommunication https://paper.nweon.com/11246 Thu, 23 Sep 2021 02:43:22 +0000

PubDate: January 2019

Teams: NICT and Osaka University

Writers: Michal Joachimczak; Juan Liu; Hiroshi Ando

PDF: Downsizing: The Effect of Mixed-Reality Person Representations on Stress and Presence in Telecommunication

Abstract

We study how mixed-reality (MR) telepresence can enhance long-distance human interaction and how altering three-dimensional (3D) representations of a remote person can be used to modulate stress and anxiety during social interactions. To do so, we developed an MR telepresence system employing commodity depth sensors and the Microsoft HoloLens. A textured, polygonal 3D model of a person was reconstructed in real time and transmitted over the network for rendering at a remote location using the HoloLens. In this pilot study, we used a mock job interview paradigm to induce stress in human subjects interacting with an interviewer presented as an MR hologram. Participants were exposed to three different types of real-time reconstructed virtual holograms of the interviewer: a natural-sized 3D reconstruction (NR), a miniature 3D reconstruction (SR), and a 2D-display representation (LCD). Participants reported their subjective experience through questionnaires while their biophysical responses were recorded. We found that the size of the 3D representation of a remote interviewer had a significant effect on participants’ stress levels and their sense of presence. The NR condition induced more stress and presence than the SR condition and was significantly different from the LCD condition.

Real-time Object Detection with Deep Learning for Robot Vision on Mixed Reality Device https://paper.nweon.com/10872 Wed, 11 Aug 2021 07:28:38 +0000

PubDate: April 2021

Teams: The University of Electro-Communications

Writers: Jiazhen Guo; Peng Chen; Yinlai Jiang; Hiroshi Yokoi; Shunta Togo

PDF: Real-time Object Detection with Deep Learning for Robot Vision on Mixed Reality Device

Abstract

The sensing capabilities of mixed reality devices are valuable for robots; for example, the inertial measurement unit (IMU) and time-of-flight (TOF) depth sensor can support a robot in navigating its environment. This paper demonstrates a deep-learning-based (YOLO model) real-time object detection system implemented on a mixed reality device. The goal of the system is to create a real-time communication link between the HoloLens and an Ubuntu system to enable real-time object detection using the YOLO model. The experimental results show that the proposed method is fast enough to achieve real-time object detection on the HoloLens, enabling the Microsoft HoloLens to serve as a device for robot vision. To enhance human-robot interaction, we will apply it to a wearable robot arm system to automatically grasp objects in the future.
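To make the HoloLens-to-Ubuntu pipeline concrete, here is a minimal sketch of a PC-side receiver, assuming the headset streams length-prefixed JPEG frames over TCP and that a YOLO-style detector callable is supplied by the caller. The framing protocol, port and reply message are illustrative assumptions, not the paper's implementation.

```python
import socket
import struct
import numpy as np
import cv2

def serve_detections(detector, host="0.0.0.0", port=9999):
    """Receive length-prefixed JPEG frames over TCP, run a detector on each
    frame, and reply with the number of detections (illustrative protocol)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    try:
        while True:
            header = conn.recv(4, socket.MSG_WAITALL)   # 4-byte big-endian length
            if len(header) < 4:
                break
            (size,) = struct.unpack(">I", header)
            payload = conn.recv(size, socket.MSG_WAITALL)
            frame = cv2.imdecode(np.frombuffer(payload, np.uint8), cv2.IMREAD_COLOR)
            boxes = detector(frame)                     # e.g., YOLO inference on the PC
            conn.sendall(struct.pack(">I", len(boxes))) # reply with a detection count
    finally:
        conn.close()
        srv.close()
```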

A Taxonomy of Sounds in Virtual Reality https://paper.nweon.com/10563 Thu, 08 Jul 2021 04:37:23 +0000

PubDate: June 2021

Teams: Microsoft Research

Writers: Dhruv Jain, Sasa Junuzovic, Eyal Ofek, Mike Sinclair, John Porter, Chris Yoon, Swetha Machanavajhala, Meredith Ringel Morris

PDF: A Taxonomy of Sounds in Virtual Reality

Abstract

Virtual reality (VR) leverages human sight, hearing and touch senses to convey virtual experiences. For d/Deaf and hard of hearing (DHH) people, information conveyed through sound may not be accessible. To help with future design of accessible VR sound representations for DHH users, this paper contributes a consistent language and structure for representing sounds in VR. Using two studies, we report on the design and evaluation of a novel taxonomy for VR sounds. Study 1 included interviews with 10 VR sound designers to develop our taxonomy along two dimensions: sound source and intent. To evaluate this taxonomy, we conducted another study (Study 2) where eight HCI researchers used our taxonomy to document sounds in 33 VR apps. We found that our taxonomy was able to successfully categorize nearly all sounds (265/267) in these apps. We also uncovered additional insights for designing accessible visual and haptic-based sound substitutes for DHH users.

AR Feels “Softer” than VR: Haptic Perception of Stiffness in Augmented versus Virtual Reality https://paper.nweon.com/9755 Mon, 26 Apr 2021 05:43:22 +0000

PubDate: August 2017

Teams: Inria;INSA Rennes

Writers: Yoren Gaffary; Benoît Le Gouis; Maud Marchal; Ferran Argelaguet; Bruno Arnaldi; Anatole Lécuyer

PDF: AR Feels “Softer” than VR: Haptic Perception of Stiffness in Augmented versus Virtual Reality

Abstract

Does it feel the same when you touch an object in Augmented Reality (AR) or in Virtual Reality (VR)? In this paper we study and compare the haptic perception of the stiffness of a virtual object in two situations: (1) a purely virtual environment versus (2) a real and augmented environment. We designed an experimental setup based on a Microsoft HoloLens and a haptic force-feedback device, enabling users to press a virtual piston and compare its stiffness successively in either Augmented Reality (the virtual piston is surrounded by several real objects all located inside a cardboard box) or Virtual Reality (the same virtual piston is displayed in a fully virtual scene composed of the same other objects). We conducted a psychophysical experiment with 12 participants. Our results show a surprising bias in perception between the two conditions: the virtual piston is on average perceived as stiffer in the VR condition than in the AR condition. For instance, when the piston had the same stiffness in AR and VR, participants selected the VR piston as the stiffer one in 60% of cases. This suggests a psychological effect, as if objects in AR felt ”softer” than in pure VR. Taken together, our results open new perspectives on perception in AR versus VR and pave the way for future studies aiming to characterize potential perceptual biases.

Comparison of Multimodal Heading and Pointing Gestures for Co-Located Mixed Reality Human-Robot Interaction https://paper.nweon.com/8400 Fri, 04 Dec 2020 07:43:38 +0000

PubDate: January 2019

Teams: University of Hamburg

Writers: Dennis Krupke; Frank Steinicke; Paul Lubos; Yannick Jonetzko; Michael Görner; Jianwei Zhang

PDF: Comparison of Multimodal Heading and Pointing Gestures for Co-Located Mixed Reality Human-Robot Interaction

Abstract

Mixed reality (MR) opens up new vistas for human-robot interaction (HRI) scenarios in which a human operator can control and collaborate with co-located robots. For instance, when using a see-through head-mounted display (HMD) such as the Microsoft HoloLens, the operator can see the real robots, and additional virtual information can be superimposed over the real-world view to improve security, acceptability and predictability in HRI situations. In particular, previewing potential robot actions in situ before they are executed has enormous potential to reduce the risks of damaging the system or injuring the human operator. In this paper, we introduce the concept and implementation of such an MR human-robot collaboration system in which a human can intuitively and naturally control a co-located industrial robot arm for pick-and-place tasks. In addition, we compared two different multimodal HRI techniques to select the pick location on a target object using (i) head orientation (aka heading) or (ii) pointing, both in combination with speech. The results show that heading-based interaction techniques are more precise, require less time and are perceived as less physically, temporally and mentally demanding for MR-based pick-and-place scenarios. We confirmed these results in an additional usability study in a delivery-service task with a multi-robot system. The developed MR interface shows a preview of the current robot programming to the operator, e.g., pick selection or trajectory. The findings provide important implications for the design of future MR setups.

Interactive Multi-User 3D Visual Analytics in Augmented Reality https://paper.nweon.com/5286 Thu, 20 Aug 2020 07:18:10 +0000

PubDate: Feb 2020

Teams: BodyLogical;University of California San Diego

Writers: Wanze Xie, Yining Liang, Janet Johnson, Andrea Mower, Samuel Burns, Colleen Chelini, Paul D Alessandro, Nadir Weibel, Jürgen P. Schulze

PDF: Interactive Multi-User 3D Visual Analytics in Augmented Reality

Abstract

This publication reports on a research project in which we set out to explore the advantages and disadvantages that augmented reality (AR) technology has for visual data analytics. We developed a prototype of an AR data analytics application, which provides users with an interactive 3D interface, hand-gesture-based controls and multi-user support for a shared experience, enabling multiple people to collaboratively visualize, analyze and manipulate data with high-dimensional features in 3D space. Our software prototype, called DataCube, runs on the Microsoft HoloLens – one of the first true stand-alone AR headsets, through which users can see computer-generated images overlaid onto real-world objects in the user’s physical environment. Using hand gestures, users can select menu options, control the 3D data visualization with various filtering and visualization functions, and freely arrange the various menus and virtual displays in their environment. The shared multi-user experience allows all participating users to see and interact with the virtual environment; changes one user makes become visible to the other users instantly. As users engage together, they are not restricted from observing the physical world simultaneously, and they can therefore also see non-verbal cues such as gestures or facial reactions of other users in the physical environment. The main objective of this research project was to find out whether AR interfaces and collaborative analysis can provide an effective solution for data analysis tasks, and our experience with our prototype system confirms this.

Creating the Perfect Illusion: What will it take to Create Life-Like Virtual Reality Headsets? https://paper.nweon.com/2893 Tue, 23 Jun 2020 04:54:22 +0000

PubDate: February 2018

Teams: Microsoft Research,Stanford University

Writers: Eduardo Cuervo;Krishna Chintalapudi;Manikanta Kotaru

PDF: Creating the Perfect Illusion: What will it take to Create Life-Like Virtual Reality Headsets?

Abstract

As Virtual Reality (VR) Head Mounted Displays (HMDs) push the boundaries of technology, in this paper we try to answer the question, “What would it take to make the visual experience of a VR-HMD Life-Like, i.e., indistinguishable from physical reality?” Based on the limits of human perception, we first try to establish the specifications for a Life-Like HMD. We then examine crucial technological trends and speculate on the feasibility of Life-Like VR headsets in the near future. Our study indicates that while display technology will be capable of Life-Like VR, rendering computation is likely to be the key bottleneck. Life-Like VR solutions will likely involve frames rendered on a separate machine and then transmitted to the HMD. Can we transmit Life-Like VR frames wirelessly to the HMD and make the HMD cable-free? We find that current wireless and compression technology may not be sufficient to accommodate the bandwidth and latency requirements. We outline research directions towards achieving Life-Like VR.
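A back-of-envelope calculation along the lines the abstract suggests, using assumed round numbers rather than the paper's derived specification (about 60 pixels per degree of acuity, a 200°×135° field of view, 120 Hz refresh, 24 bits per pixel): the uncompressed video bandwidth of an eye-limited headset lands in the hundreds of gigabits per second.

```python
# Illustrative inputs only; the paper derives its own specification.
acuity_px_per_deg = 60          # foveal acuity, assumed
fov_h_deg, fov_v_deg = 200, 135 # per-eye field of view in degrees, assumed
refresh_hz = 120                # refresh rate, assumed
bits_per_pixel = 24
eyes = 2

pixels_per_eye = (fov_h_deg * acuity_px_per_deg) * (fov_v_deg * acuity_px_per_deg)
raw_bits_per_s = pixels_per_eye * eyes * refresh_hz * bits_per_pixel
print(f"{pixels_per_eye / 1e6:.1f} Mpx per eye, "
      f"{raw_bits_per_s / 1e9:.0f} Gbit/s uncompressed")  # ~97 Mpx, ~560 Gbit/s
```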

When Mixed Reality Meets Internet of Things: Toward the Realization of Ubiquitous Mixed Reality https://paper.nweon.com/2812 Mon, 22 Jun 2020 06:30:14 +0000

PubDate: May 2018

Teams: Michigan State University,Singapore Management University

Writers: Taiwoo Park;Mi Zhang;Youngki Lee

PDF: When Mixed Reality Meets Internet of Things: Toward the Realization of Ubiquitous Mixed Reality

Abstract

From panoramic paintings and stereoscopic photos in the early 19th century, there has been a century-long effort to realize mixed reality, interweaving real and virtual worlds that interact with each other. Over the past few years, we have witnessed the first wave of “affordable” mixed reality platforms, such as the Oculus Rift and Microsoft HoloLens, hitting the market. In particular, 2017 was a showcase year for mixed reality technologies: the Academy awarded its first Oscar to virtual reality storytelling, and AAA-caliber virtual reality games started to hit the market with impact. Furthermore, major mobile operating systems, including Android and iOS, began to support augmented reality at the platform level (e.g., Android ARCore, Apple ARKit). Looking down the road, a recent forecast by Orbis Research projects a mixed reality market of over $40 billion worldwide by 2020.

Expanding the sense of touch outside the body https://paper.nweon.com/2589 Thu, 18 Jun 2020 05:11:36 +0000

PubDate: August 2018

Teams: California Institute of Technology,Microsoft Research

Writers: Christopher C. Berger;Mar Gonzalez-Franco

PDF: Expanding the sense of touch outside the body

Abstract

Under normal circumstances, our sense of touch is limited to our body. Recent evidence suggests, however, that our perception of touch can also be expanded to objects we are holding when certain tactile illusions are elicited by delivering vibrotactile stimuli in a particular manner. Here, we examined whether an extra-corporeal illusory sense of touch could be elicited using vibrotactile stimuli delivered via two independent handheld controllers while in virtual reality. Our results suggest that under the right conditions, one’s sense of touch in space can be extended outside the body, and even into the empty space that surrounds us. Specifically, we show, in virtual reality, that one’s sense of touch can be extended to a virtual stick one is holding, and also into the empty space between one’s hands. These findings provide a means with which to expand the sense of touch beyond the hands in VR systems using two independent controllers, and also have important implications for our understanding of the human representation of touch.

Demonstration of TORC: A Virtual Reality Controller for In-Hand High-Dexterity Finger Interaction https://paper.nweon.com/1629 Wed, 27 May 2020 05:19:38 +0000

PubDate: October 2019

Teams: Korea Advanced Institute of Science and Technology, Microsoft Research

Writers: Jaeyeon Lee;Mike Sinclair;Mar Gonzalez-Franco;Eyal Ofek;Christian Holz

PDF: Demonstration of TORC: A Virtual Reality Controller for In-Hand High-Dexterity Finger Interaction

Abstract

Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. These controllers are grounded to the user’s hand and can only manipulate objects through arm and wrist motions, not using the dexterity of their fingers as they would in real life. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance. Users hold and squeeze TORC using their thumb and two fingers and interact with virtual objects by sliding their thumb on TORC’s trackpad. During the interaction, vibrotactile motors produce sensations to each finger that represent the haptic feel of squeezing, shearing or turning an object. We demonstrate the TORC interaction scenarios for a virtual object in hand.

Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight https://paper.nweon.com/1592 Tue, 26 May 2020 05:29:16 +0000

PubDate: October 2019

Teams: Microsoft Research & Hasso Plattner Institute, University of Potsdam

Writers: Sebastian Marwecki;Andrew D. Wilson;Eyal Ofek;Mar Gonzalez Franco;Christian Holz

PDF: Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight

Abstract

Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user’s field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user’s field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user’s preferences, (iii) to time the use of low fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and find that while gaze data indeed supports obfuscating changes inside the field of view, a change is rendered unnoticeably by using gaze in combination with common masking techniques.
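A minimal sketch of the gating idea, assuming a much simpler attention model than the paper's: a single angular threshold between the gaze ray and the object, ignoring the intention and spatial-memory models described above. The threshold value and vector math are illustrative assumptions.

```python
import numpy as np

def safe_to_change(gaze_dir, obj_pos, head_pos, min_angle_deg=30.0):
    """Return True when the object lies far enough from the current gaze
    direction that a covert scene change is unlikely to be noticed
    (illustrative threshold only; not the paper's attention model)."""
    to_obj = np.asarray(obj_pos, dtype=float) - np.asarray(head_pos, dtype=float)
    gaze = np.asarray(gaze_dir, dtype=float)
    cos_a = np.dot(gaze, to_obj) / (np.linalg.norm(gaze) * np.linalg.norm(to_obj))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return angle > min_angle_deg

# Example: an object 40 degrees off the gaze ray may be changed covertly.
print(safe_to_change([0, 0, 1], obj_pos=[1.0, 0, 1.2], head_pos=[0, 0, 0]))
```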

Analysis of Peripheral Vision and Vibrotactile Feedback During Proximal Search Tasks with Dynamic Virtual Entities in Augmented Reality https://paper.nweon.com/1550 Mon, 25 May 2020 07:22:38 +0000

PubDate: October 2019

Teams: Dixie State University,University of Central Florida

Writers: Kendra Richards, Nikhil Mahalanobis, Kangsoo Kim, Ryan Schubert, Myungho Lee, Salam Daher, Nahal Norouzi, Jason Hochreiter, Gerd Bruder, Greg Welch

PDF: Analysis of Peripheral Vision and Vibrotactile Feedback During Proximal Search Tasks with Dynamic Virtual Entities in Augmented Reality

Abstract

A primary goal of augmented reality (AR) is to seamlessly embed virtual content into a real environment. There are many factors that can affect the perceived physicality and co-presence of virtual entities, including the hardware capabilities, the fidelity of the virtual behaviors, and sensory feedback associated with the interactions. In this paper, we present a study investigating participants’ perceptions and behaviors during a time-limited search task in close proximity with virtual entities in AR. In particular, we analyze the effects of (i) visual conflicts in the periphery of an optical see-through head-mounted display, a Microsoft HoloLens, (ii) overall lighting in the physical environment, and (iii) multimodal feedback based on vibrotactile transducers mounted on a physical platform. Our results show significant benefits of vibrotactile feedback and reduced peripheral lighting for spatial and social presence, and engagement. We discuss implications of these effects for AR applications.

MoveVR: Enabling Multiform Force Feedback in Virtual Reality using Household Cleaning Robot https://paper.nweon.com/1342 Tue, 19 May 2020 05:27:51 +0000

PubDate: April 2020

Teams: Tsinghua University,Microsoft Corporation,Beijing University of Posts and Telecommunications,Chinese Academy of Sciences,University of Washington

Writers: Yuntao Wang, Zichao (Tyson) Chen, Hanchuan Li, Zhengyi Cao, Huiyi Luo, Tengxiang Zhang, Ke Ou, John Raiti, Chun Yu, Shwetak Patel, Yuanchun Shi

PDF: MoveVR: Enabling Multiform Force Feedback in Virtual Reality using Household Cleaning Robot

Abstract

Haptic feedback can significantly enhance the realism and immersiveness of virtual reality (VR) systems. In this paper, we propose MoveVR, a technique that enables realistic, multiform force feedback in VR leveraging commonplace cleaning robots. MoveVR can generate tension, resistance, impact and material rigidity force feedback with multiple levels of force intensity and directions. This is achieved by changing the robot’s moving speed, rotation, position as well as the carried proxies. We demonstrated the feasibility and effectiveness of MoveVR through interactive VR gaming. In our quantitative and qualitative evaluation studies, participants found that MoveVR provides more realistic and enjoyable user experience when compared to commercially available haptic solutions such as vibrotactile haptic systems.

Image mosaicing for tele-reality applications https://paper.nweon.com/1242 Mon, 18 May 2020 13:00:03 +0000

Title: Image mosaicing for tele-reality applications

Teams: Microsoft

Writers: Szeliski R.

Publication date: January 1994

Abstract

While a large number of virtual reality applications, such as fluid flow analysis and molecular modeling, deal with simulated data, many newer applications attempt to recreate true reality as convincingly as possible. Building detailed models for such applications, which we call tele-reality, is a major bottleneck holding back their deployment. In this paper, we present techniques for automatically deriving realistic 2-D scenes and 3-D texture-mapped models from video sequences, which can help overcome this bottleneck. The fundamental technique we use is image mosaicing, i.e., the automatic alignment of multiple images into larger aggregates which are then used to represent portions of a 3-D scene. We begin with the easiest problems, those of flat scene and panoramic scene mosaicing, and progress to more complicated scenes, culminating in full 3-D models. We also present a number of novel applications based on tele-reality technology.
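For a minimal modern flavour of the alignment step, the sketch below stitches two overlapping images with ORB features and a RANSAC-estimated homography using OpenCV. This is a contemporary stand-in for illustration, not the registration method of the 1994 paper, and the canvas handling is deliberately crude.

```python
import cv2
import numpy as np

def mosaic_pair(img_a, img_b):
    """Warp img_a into img_b's frame via a RANSAC homography and paste
    img_b on top (illustrative two-image mosaic, not the paper's method)."""
    orb = cv2.ORB_create(2000)
    ka, da = orb.detectAndCompute(img_a, None)
    kb, db = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    src = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # img_a -> img_b mapping
    h, w = img_b.shape[:2]
    canvas = cv2.warpPerspective(img_a, H, (w * 2, h))      # crude canvas: twice img_b's width
    canvas[:h, :w] = img_b                                  # reference image pasted on top
    return canvas
```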

Alice: Rapid prototyping system for virtual reality https://paper.nweon.com/1240 Mon, 18 May 2020 12:59:56 +0000

Title: Alice: Rapid prototyping system for virtual reality

Teams: Microsoft

Writers: Randy Pausch, Tommy Burnette, A.C. Capeheart, Matthew Conway, Dennis Cosgrove, Rob DeLine, Jim Durbin, Rich Gossweiler, Shuichi Koga, Jeff White

Publication date: May 1995

Abstract

We are developing Alice, a rapid prototyping system for virtual reality software. Alice programs are written in an object-oriented, interpreted language which allows programmers to immediately see the effects of changes. As an Alice program executes, the author can update the current state either by interactively evaluating program code fragments, or by manipulating GUI tools. Although the system is extremely flexible at runtime, we are able to maintain high interactive frame rates (typically, 20-50 fps) by transparently decoupling simulation and rendering. We have been using Alice internally at Virginia for over two years, and we are currently porting a “desktop” version of Alice to Windows 95. We will distribute desktop Alice freely to all universities via the World Wide Web; for more information, see http://www.cs.virginia.edu/alice/

InLoc: Indoor Visual Localization with Dense Matching and View Synthesis https://paper.nweon.com/1238 Mon, 18 May 2020 12:59:49 +0000

Title: InLoc: Indoor Visual Localization with Dense Matching and View Synthesis

Teams: Microsoft

Writers: Hajime Taira, Masatoshi Okutomi, Torsten Sattler, Mircea Cimpoi, Marc Pollefeys, Josef Sivic, Tomas Pajdla, Akihiko Torii

Publication date: April 2018

Abstract

We seek to predict the 6 degree-of-freedom (6DoF) pose of a query photograph with respect to a large indoor 3D map. The contributions of this work are three-fold. First, we develop a new large-scale visual localization method targeted for indoor environments. The method proceeds along three steps: (i) efficient retrieval of candidate poses that ensures scalability to large-scale environments, (ii) pose estimation using dense matching rather than local features to deal with textureless indoor scenes, and (iii) pose verification by virtual view synthesis to cope with significant changes in viewpoint, scene layout, and occluders. Second, we collect a new dataset with reference 6DoF poses for large-scale indoor localization. Query photographs are captured by mobile phones at a different time than the reference 3D map, thus presenting a realistic indoor localization scenario. Third, we demonstrate that our method significantly outperforms current state-of-the-art indoor localization approaches on this new challenging data.
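The three-step pipeline can be summarized structurally as below. The function and parameter names (retrieve, estimate_pose_dense, render_view, photometric_score) are illustrative placeholders the caller must supply, not the authors' code or API.

```python
def localize(query_img, db, retrieve, estimate_pose_dense, render_view,
             photometric_score, top_k=10):
    """Structural sketch of a retrieve / estimate / verify localization
    pipeline; all callables are placeholders supplied by the caller."""
    # (i) retrieval: shortlist candidate database images for scalability
    candidates = retrieve(query_img, db, top_k)
    # (ii) pose estimation from dense matches against each candidate
    hypotheses = [estimate_pose_dense(query_img, c) for c in candidates]
    # (iii) verification: synthesize a view at each hypothesized pose and
    # keep the pose whose rendering best matches the query photograph
    best_pose = max(
        hypotheses,
        key=lambda pose: photometric_score(query_img, render_view(db, pose)),
    )
    return best_pose
```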

Semantic Visual Localization https://paper.nweon.com/1236 Mon, 18 May 2020 12:59:43 +0000

Title: Semantic Visual Localization

Teams: Microsoft

Writers: Johannes Schönberger, Marc Pollefeys, Andreas Geiger, Torsten Sattler

Publication date: April 2018

Abstract

Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.

If (Virtual) Reality Feels Almost Right, It’s Exactly Wrong https://paper.nweon.com/1234 Mon, 18 May 2020 12:55:56 +0000

Title: If (Virtual) Reality Feels Almost Right, It’s Exactly Wrong

Teams: Microsoft

Writers: Mar Gonzalez Franco, Christopher C. Berger, Ken Hinckley

Publication date: April 2018

Abstract

We can all remember the crisply beveled edges of our cheery-yellow No. 2 pencil, the cool, smooth feel of a chalk-powdered blackboard, the gritty red bricks of the schoolhouse walls. Surely that all wasn’t just an illusion? No, of course not. But—as it turns out—it kind of is.

Enabling People with Visual Impairments to Navigate Virtual Reality with a Haptic and Auditory Cane Simulation https://paper.nweon.com/1232 Mon, 18 May 2020 12:55:49 +0000

Title: Enabling People with Visual Impairments to Navigate Virtual Reality with a Haptic and Auditory Cane Simulation

Teams: Microsoft

Writers: Yuhang Zhao, Cynthia Bennett, Hrvoje Benko, Ed Cutrell, Christian Holz, Meredith Ringel Morris, Mike Sinclair

Publication date: April 2018

Abstract

Traditional virtual reality (VR) mainly focuses on visual feedback, which is not accessible for people with visual impairments. We created Canetroller, a haptic cane controller that simulates white cane interactions, enabling people with visual impairments to navigate a virtual environment by transferring their cane skills into the virtual world. Canetroller provides three types of feedback: (1) physical resistance generated by a wearable programmable brake mechanism that physically impedes the controller when the virtual cane comes in contact with a virtual object; (2) vibrotactile feedback that simulates the vibrations when a cane hits an object or touches and drags across various surfaces; and (3) spatial 3D auditory feedback simulating the sound of real-world cane interactions. We designed indoor and outdoor VR scenes to evaluate the effectiveness of our controller. Our study showed that Canetroller was a promising tool that enabled visually impaired participants to navigate different virtual spaces. We discuss potential applications supported by Canetroller ranging from entertainment to mobility training.

Measuring System Visual Latency through Cognitive Latency on Video See-Through AR Devices https://paper.nweon.com/947 Tue, 12 May 2020 02:13:48 +0000

PubDate: March 2020

Teams: Microsoft Research Lab

Writers: Robert Gruen, Eyal Ofek, Anthony Steed, Ran Gal, Mike Sinclair, Mar Gonzalez-Franco

PDF: Measuring System Visual Latency through Cognitive Latency on Video See-Through AR Devices

Project: Measuring System Visual Latency through Cognitive Latency on Video See-Through AR Devices

Abstract

Measuring visual latency in VR and AR devices has become increasingly complicated, as many components influence one another in multiple loops and ultimately affect human cognitive and sensory perception. In this paper we present a new method based on the idea that human performance on a rapid motor task will remain constant, so any added delay corresponds to the system latency. We ask users to perform a task inside video see-through devices to compare latency. We also calculate the latency of the systems using hardware instrumentation measurements for benchmarking. Results show that measurement through human cognitive performance can be reliable and comparable to hardware measurement.
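A minimal sketch of the estimation idea, assuming the motor-task completion time is constant apart from an additive system delay: the device's visual latency is estimated as the difference between mean task times with and without the device in the loop. The sample values below are invented for illustration, not data from the paper.

```python
import numpy as np

# Made-up task completion times in milliseconds.
baseline_ms = np.array([412, 398, 405, 420, 401, 409])      # direct viewing
through_hmd_ms = np.array([498, 481, 505, 490, 476, 502])   # video see-through

# Added latency = shift of the mean task time; rough 95% interval from the
# standard error of the difference of means.
latency_est = through_hmd_ms.mean() - baseline_ms.mean()
se = np.sqrt(through_hmd_ms.var(ddof=1) / len(through_hmd_ms)
             + baseline_ms.var(ddof=1) / len(baseline_ms))
print(f"estimated added latency: {latency_est:.1f} ms (+/- {1.96 * se:.1f} ms)")
```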

CLAW: A Multifunctional Handheld Haptic Controller for Grasping, Touching, and Triggering in Virtual Reality https://paper.nweon.com/865 Mon, 11 May 2020 01:42:15 +0000

Title: CLAW: A Multifunctional Handheld Haptic Controller for Grasping, Touching, and Triggering in Virtual Reality

Teams: Microsoft Research, Stanford University

Writers: Inrak Choi, Eyal Ofek, Hrvoje Benko, Mike Sinclair, Christian Holz

Publication date: Apr 2018

Abstract

CLAW is a handheld virtual reality controller that augments the typical controller functionality with force feedback and actuated movement to the index finger. Our controller enables three distinct interactions (grasping virtual object, touching virtual surfaces, and triggering) and changes its corresponding haptic rendering by sensing the differences in the user’s grasp. A servo motor coupled with a force sensor renders controllable forces to the index finger during grasping and touching. Using position tracking, a voice coil actuator at the index fingertip generates vibrations for various textures synchronized with finger movement. CLAW also supports a haptic force feedback in the trigger mode when the user holds a gun. We describe the design considerations for CLAW and evaluate its performance through two user studies. The first study obtained qualitative user feedback on the naturalness, effectiveness, and comfort when using the device. The second study investigated the ease of the transition between grasping and touching when using our device.
