Microsoft – Nweon Paper https://paper.nweon.com
Nweon (映维网): an information and data platform for the virtual reality (VR) and augmented reality (AR) industries

Accuracy Evaluation of Touch Tasks in Commodity Virtual and Augmented Reality Head-Mounted Displays https://paper.nweon.com/11320

PubDate: Sep 2021

Teams: Coburg University of Applied Sciences and Arts;University of Primorska;Microsoft Research;University of Cambridge

Writers: Daniel Schneider, Verena Biener, Alexander Otte, Travis Gesslein, Philipp Gagel, Cuauhtli Campos, Klen Čopič Pucihar, Matjaž Kljun, Eyal Ofek, Michel Pahud, Per Ola Kristensson, Jens Grubert

PDF: Accuracy Evaluation of Touch Tasks in Commodity Virtual and Augmented Reality Head-Mounted Displays

Abstract

An increasing number of consumer-oriented head-mounted displays (HMDs) for augmented and virtual reality (AR/VR) are capable of finger and hand tracking. We report on the accuracy of off-the-shelf VR and AR HMDs when used for touch-based tasks such as pointing or drawing. Specifically, we report on the finger tracking accuracy of the VR head-mounted displays Oculus Quest and Vive Pro, of the Leap Motion controller when attached to a VR HMD, and of the AR head-mounted displays Microsoft HoloLens 2 and Magic Leap One. We present the results of two experiments in which we compare the accuracy for absolute and relative pointing tasks using both human participants and a robot. The results suggest that the HTC Vive has lower spatial accuracy than the Oculus Quest and Leap Motion, and that the Microsoft HoloLens 2 provides higher spatial accuracy than the Magic Leap One. These findings can serve as decision support for researchers and practitioners in choosing which systems to use in the future.
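
For readers who want to reproduce this kind of analysis, the sketch below shows one way positional accuracy of a tracked fingertip could be summarized against known target positions. This is a generic illustration, not the authors' analysis pipeline; the array shapes and the offset-correction step (relevant for relative pointing) are assumptions.

```python
import numpy as np

def pointing_errors(tracked, targets):
    """Euclidean error (same unit as the input, e.g. mm) between tracked
    fingertip positions and ground-truth target positions.

    tracked, targets: (N, 3) arrays of 3D points.
    Returns absolute errors and offset-corrected (relative) errors.
    """
    tracked = np.asarray(tracked, dtype=float)
    targets = np.asarray(targets, dtype=float)

    # Absolute accuracy: raw distance between touch and target.
    absolute = np.linalg.norm(tracked - targets, axis=1)

    # Relative accuracy: remove the mean offset first, which discounts a
    # constant registration bias between tracking and display space.
    offset = (tracked - targets).mean(axis=0)
    relative = np.linalg.norm(tracked - targets - offset, axis=1)

    return absolute, relative

# Example with synthetic data (millimetres).
rng = np.random.default_rng(0)
targets = rng.uniform(-100, 100, size=(50, 3))
tracked = targets + np.array([4.0, -2.0, 1.0]) + rng.normal(0, 3, size=(50, 3))
abs_err, rel_err = pointing_errors(tracked, targets)
print(f"absolute: {abs_err.mean():.1f} mm, relative: {rel_err.mean():.1f} mm")
```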

Rotation-constrained optical see-through headset calibration with bare-hand alignment https://paper.nweon.com/11288

PubDate: Aug 2021

Teams: Imperial College London;University of Pisa

Writers: Xue Hu, Ferdinando Rodriguez y Baena, Fabrizio Cutolo

PDF: Rotation-constrained optical see-through headset calibration with bare-hand alignment

Abstract

The inaccessibility of user-perceived reality remains an open issue in pursuing the accurate calibration of optical see-through (OST) head-mounted displays (HMDs). Manual user alignment is usually required to collect a set of virtual-to-real correspondences, so that a default or an offline display calibration can be updated to account for the user’s eye position(s). Current alignment-based calibration procedures usually require point-wise alignments between rendered image point(s) and associated physical landmark(s) of a target calibration tool. As each alignment can only provide one or a few correspondences, repeated alignments are required to ensure calibration quality.
This work presents an accurate and tool-less online OST calibration method to update an offline-calibrated eye-display model. The user’s bare hand is markerlessly tracked by a commercial RGBD camera anchored to the OST headset to generate a user-specific cursor for correspondence collection. The required alignment is object-wise, and can provide thousands of unordered corresponding points in tracked space. The collected correspondences are registered by a proposed rotation-constrained iterative closest point (rcICP) method to optimise the viewpoint-related calibration parameters. We implemented such a method for the Microsoft HoloLens 1. The resiliency of the proposed procedure to noisy data was evaluated through simulated tests and real experiments performed with an eye-replacement camera. According to the simulation test, the rcICP registration is robust against possible user-induced rotational misalignment. With a single alignment, our method achieves 8.81 arcmin (1.37 mm) positional error and 1.76 degree rotational error in camera-based tests at arm's-reach distance, and 10.79 arcmin (7.71 pixels) reprojection error in user tests.
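
The rcICP registration is described here only at a high level. The following is a minimal, hypothetical sketch of one ICP-style iteration in which the estimated rotation is constrained (here simply clamped to a maximum angle), to give an intuition for how a rotation constraint can be folded into point-cloud alignment; it is not the authors' implementation, and the 5-degree limit is an arbitrary assumption.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q (both (N, 3))."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def clamp_rotation(R, max_deg):
    """Limit the rotation angle of R to max_deg degrees (axis is preserved)."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    limit = np.radians(max_deg)
    if angle <= limit or angle < 1e-8:
        return R
    # Axis from the skew-symmetric part, then rebuild R with the clamped angle (Rodrigues).
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(limit) * K + (1 - np.cos(limit)) * (K @ K)

def constrained_icp_step(src, dst, max_rot_deg=5.0):
    """One ICP iteration: nearest-neighbour matching, then a rotation-limited fit."""
    # Brute-force nearest neighbours (fine for small point sets).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    R, t = kabsch(src, matched)
    R = clamp_rotation(R, max_rot_deg)
    t = matched.mean(0) - R @ src.mean(0)   # recompute t for the clamped rotation
    return src @ R.T + t, R, t
```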

Downsizing: The Effect of Mixed-Reality Person Representations on Stress and Presence in Telecommunication https://paper.nweon.com/11246

PubDate: January 2019

Teams: NICT and Osaka University

Writers: Michal Joachimczak; Juan Liu; Hiroshi Ando

PDF: Downsizing: The Effect of Mixed-Reality Person Representations on Stress and Presence in Telecommunication

Abstract

We study how mixed-reality (MR) telepresence can enhance long-distance human interaction and how altering three-dimensional (3D) representations of a remote person can be used to modulate stress and anxiety during social interactions. To do so, we developed an MR telepresence system employing commodity depth sensors and Microsoft’s HoloLens. A textured, polygonal 3D model of a person was reconstructed in real time and transmitted over the network for rendering at a remote location using the HoloLens. In this pilot study, we used a mock job interview paradigm to induce stress in human subjects interacting with an interviewer presented as an MR hologram. Participants were exposed to three different types of real-time reconstructed virtual holograms of the interviewer: a natural-sized 3D reconstruction (NR), a miniature 3D reconstruction (SR), and a 2D-display representation (LCD). Participants reported their subjective experience through questionnaires while their biophysical responses were recorded. We found that the size of the 3D representation of a remote interviewer had a significant effect on participants’ stress levels and their sense of presence. The NR condition induced more stress and a stronger sense of presence than the SR condition, and differed significantly from the LCD condition.

Real-time Object Detection with Deep Learning for Robot Vision on Mixed Reality Device https://paper.nweon.com/10872

PubDate: April 2021

Teams: The University of Electro-Communications

Writers: Jiazhen Guo; Peng Chen; Yinlai Jiang; Hiroshi Yokoi; Shunta Togo

PDF: Real-time Object Detection with Deep Learning for Robot Vision on Mixed Reality Device

Abstract

Mixed reality device sensing capabilities are valuable for robots; for example, the inertial measurement unit (IMU) and time-of-flight (TOF) depth sensor can support a robot in navigating its environment. This paper demonstrates a real-time object detection system based on a deep learning (YOLO) model and implemented on a mixed reality device. The goal of the system is to create a real-time communication link between the HoloLens and an Ubuntu system to enable real-time object detection using the YOLO model. The experimental results show that the proposed method is fast enough to achieve real-time object detection with the HoloLens, enabling the Microsoft HoloLens to serve as a robot vision device. To enhance human-robot interaction, we will in future work apply it to a wearable robot arm system to automatically grasp objects.
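
The abstract does not spell out the communication protocol, so the sketch below only illustrates the general pattern of streaming camera frames from a headset to a desktop machine for detection: a length-prefixed TCP stream received on the Ubuntu side, decoded with OpenCV, and passed to a detector. The port number, the framing scheme, and the run_yolo placeholder are assumptions; a real system would plug an actual YOLO model in there.

```python
import socket
import struct

import cv2
import numpy as np

def run_yolo(frame):
    # Placeholder: a real system would run a YOLO network here and
    # return a list of (class_id, confidence, bounding_box) tuples.
    return []

def serve(host="0.0.0.0", port=9999):
    """Receive length-prefixed JPEG frames (e.g. sent by a HoloLens client)
    and run object detection on each one."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                header = conn.recv(4)
                if len(header) < 4:
                    break
                (size,) = struct.unpack("!I", header)
                buf = b""
                while len(buf) < size:
                    chunk = conn.recv(size - len(buf))
                    if not chunk:
                        return
                    buf += chunk
                frame = cv2.imdecode(np.frombuffer(buf, np.uint8), cv2.IMREAD_COLOR)
                detections = run_yolo(frame)
                print(f"received {frame.shape[1]}x{frame.shape[0]} frame, "
                      f"{len(detections)} objects")

if __name__ == "__main__":
    serve()
```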

A Taxonomy of Sounds in Virtual Reality https://paper.nweon.com/10563

PubDate: June 2021

Teams: Microsoft Research

Writers: Dhruv Jain, Sasa Junuzovic, Eyal Ofek, Mike Sinclair, John Porter, Chris Yoon, Swetha Machanavajhala, Meredith Ringel Morris

PDF: A Taxonomy of Sounds in Virtual Reality

Abstract

Virtual reality (VR) leverages human sight, hearing and touch senses to convey virtual experiences. For d/Deaf and hard of hearing (DHH) people, information conveyed through sound may not be accessible. To help with future design of accessible VR sound representations for DHH users, this paper contributes a consistent language and structure for representing sounds in VR. Using two studies, we report on the design and evaluation of a novel taxonomy for VR sounds. Study 1 included interviews with 10 VR sound designers to develop our taxonomy along two dimensions: sound source and intent. To evaluate this taxonomy, we conducted another study (Study 2) where eight HCI researchers used our taxonomy to document sounds in 33 VR apps. We found that our taxonomy was able to successfully categorize nearly all sounds (265/267) in these apps. We also uncovered additional insights for designing accessible visual and haptic-based sound substitutes for DHH users.

AR Feels “Softer” than VR: Haptic Perception of Stiffness in Augmented versus Virtual Reality https://paper.nweon.com/9755

PubDate: August 2017

Teams: Inria;INSA Rennes

Writers: Yoren Gaffary; Benoît Le Gouis; Maud Marchal; Ferran Argelaguet; Bruno Arnaldi; Anatole Lécuyer

PDF: AR Feels “Softer” than VR: Haptic Perception of Stiffness in Augmented versus Virtual Reality

Abstract

Does it feel the same when you touch an object in Augmented Reality (AR) or in Virtual Reality (VR)? In this paper we study and compare the haptic perception of stiffness of a virtual object in two situations: (1) a purely virtual environment versus (2) a real and augmented environment. We have designed an experimental setup based on a Microsoft HoloLens and a haptic force-feedback device, enabling users to press a virtual piston and compare its stiffness successively in either Augmented Reality (the virtual piston is surrounded by several real objects all located inside a cardboard box) or in Virtual Reality (the same virtual piston is displayed in a fully virtual scene composed of the same other objects). We have conducted a psychophysical experiment with 12 participants. Our results show a surprising bias in perception between the two conditions. The virtual piston is on average perceived stiffer in the VR condition compared to the AR condition. For instance, when the piston had the same stiffness in AR and VR, participants would select the VR piston as the stiffer one in 60% of cases. This suggests a psychological effect, as if objects in AR feel “softer” than in pure VR. Taken together, our results open new perspectives on perception in AR versus VR, and pave the way to future studies aiming at characterizing potential perceptual biases.

Comparison of Multimodal Heading and Pointing Gestures for Co-Located Mixed Reality Human-Robot Interaction https://paper.nweon.com/8400

PubDate: January 2019

Teams: University of Hamburg

Writers: Dennis Krupke; Frank Steinicke; Paul Lubos; Yannick Jonetzko; Michael Görner; Jianwei Zhang

PDF: Comparison of Multimodal Heading and Pointing Gestures for Co-Located Mixed Reality Human-Robot Interaction

Abstract

Mixed reality (MR) opens up new vistas for human-robot interaction (HRI) scenarios in which a human operator can control and collaborate with co-located robots. For instance, when using a see-through head-mounted display (HMD) such as the Microsoft HoloLens, the operator can see the real robots and additional virtual information can be superimposed over the real-world view to improve security, acceptability and predictability in HRI situations. In particular, previewing potential robot actions in-situ before they are executed has enormous potential to reduce the risks of damaging the system or injuring the human operator. In this paper, we introduce the concept and implementation of such an MR human-robot collaboration system in which a human can intuitively and naturally control a co-located industrial robot arm for pick-and-place tasks. In addition, we compared two different, multimodal HRI techniques to select the pick location on a target object using (i) head orientation (aka heading) or (ii) pointing, both in combination with speech. The results show that heading-based interaction techniques are more precise, require less time and are perceived as less physically, temporally and mentally demanding for MR-based pick-and-place scenarios. We confirmed these results in an additional usability study in a delivery-service task with a multi-robot system. The developed MR interface shows a preview of the current robot programming to the operator, e.g., pick selection or trajectory. The findings provide important implications for the design of future MR setups.

Interactive Multi-User 3D Visual Analytics in Augmented Reality https://paper.nweon.com/5286

PubDate: Feb 2020

Teams: BodyLogical;University of California San Diego

Writers: Wanze Xie, Yining Liang, Janet Johnson, Andrea Mower, Samuel Burns, Colleen Chelini, Paul D Alessandro, Nadir Weibel, Jürgen P. Schulze

PDF: Interactive Multi-User 3D Visual Analytics in Augmented Reality

Abstract

This publication reports on a research project in which we set out to explore the advantages and disadvantages that augmented reality (AR) technology has for visual data analytics. We developed a prototype of an AR data analytics application, which provides users with an interactive 3D interface, hand gesture-based controls and multi-user support for a shared experience, enabling multiple people to collaboratively visualize, analyze and manipulate data with high dimensional features in 3D space. Our software prototype, called DataCube, runs on the Microsoft HoloLens, one of the first true stand-alone AR headsets, through which users can see computer-generated images overlaid onto real-world objects in the user’s physical environment. Using hand gestures, the users can select menu options, control the 3D data visualization with various filtering and visualization functions, and freely arrange the various menus and virtual displays in their environment. The shared multi-user experience allows all participating users to see and interact with the virtual environment; changes one user makes become visible to the other users instantly. As users engage together, they are not restricted from observing the physical world simultaneously, and therefore they can also see non-verbal cues such as gestures or facial reactions of other users in the physical environment. The main objective of this research project was to find out if AR interfaces and collaborative analysis can provide an effective solution for data analysis tasks, and our experience with our prototype system confirms this.

Creating the Perfect Illusion: What will it take to Create Life-Like Virtual Reality Headsets? https://paper.nweon.com/2893

PubDate: February 2018

Teams: Microsoft Research,Stanford University

Writers: Eduardo Cuervo;Krishna Chintalapudi;Manikanta Kotaru

PDF: Creating the Perfect Illusion: What will it take to Create Life-Like Virtual Reality Headsets?

Abstract

As Virtual Reality (VR) Head Mounted Displays (HMDs) push the boundaries of technology, in this paper we try to answer the question, “What would it take to make the visual experience of a VR-HMD Life-Like, i.e., indistinguishable from physical reality?” Based on the limits of human perception, we first try to establish the specifications for a Life-Like HMD. We then examine crucial technological trends and speculate on the feasibility of Life-Like VR headsets in the near future. Our study indicates that while display technology will be capable of Life-Like VR, rendering computation is likely to be the key bottleneck. Life-Like VR solutions will likely involve frames rendered on a separate machine and then transmitted to the HMD. Can we transmit Life-Like VR frames wirelessly to the HMD and make the HMD cable-free? We find that current wireless and compression technology may not be sufficient to accommodate the bandwidth and latency requirements. We outline research directions towards achieving Life-Like VR.
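
As a rough illustration of why rendering and transmission become the bottleneck, the back-of-the-envelope calculation below estimates the raw pixel bandwidth of a hypothetical near-eye display matched to human acuity. The perceptual figures used (60 pixels per degree, a 200 by 135 degree field of view per eye, 120 Hz, 24 bits per pixel) are illustrative assumptions, not the specifications derived in the paper.

```python
# Back-of-the-envelope raw bandwidth for a "Life-Like" display (illustrative numbers only).
pixels_per_degree = 60            # roughly the limit of normal visual acuity
fov_h_deg, fov_v_deg = 200, 135   # assumed per-eye field of view
refresh_hz = 120                  # assumed refresh rate
bits_per_pixel = 24
eyes = 2

pixels_per_eye = (fov_h_deg * pixels_per_degree) * (fov_v_deg * pixels_per_degree)
raw_bits_per_second = pixels_per_eye * eyes * refresh_hz * bits_per_pixel

print(f"pixels per eye: {pixels_per_eye / 1e6:.0f} MP")
print(f"uncompressed bandwidth: {raw_bits_per_second / 1e9:.0f} Gbit/s")
```

With these assumed figures the raw stream lands in the hundreds of gigabits per second, which is why compression, foveation, or remote rendering enter the discussion.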

When Mixed Reality Meets Internet of Things: Toward the Realization of Ubiquitous Mixed Reality https://paper.nweon.com/2812

PubDate: May 2018

Teams: Michigan State University,Singapore Management University

Writers: Taiwoo Park;Mi Zhang;Youngki Lee

PDF: When Mixed Reality Meets Internet of Things: Toward the Realization of Ubiquitous Mixed Reality

Abstract

Starting from panoramic paintings and stereoscopic photos in the early 19th century, there has been a century-long effort to realize mixed reality, interweaving real and virtual worlds that interact with each other. Over the past few years, we have witnessed the first wave of “affordable” mixed reality platforms, such as the Oculus Rift and Microsoft HoloLens, hitting the market. In particular, 2017 was the showcase year of mixed reality technologies: the Academy awarded its first Oscar to virtual reality storytelling, and AAA-caliber virtual reality games started to hit the market with impact. Furthermore, major mobile operating systems, including Android and iOS, began to support augmented reality at the platform level (e.g., Android ARCore, Apple ARKit). Looking down the road, a recent forecast by Orbis Research projects an over $40 billion mixed reality market worldwide by 2020.

Expanding the sense of touch outside the body https://paper.nweon.com/2589

PubDate: August 2018

Teams: California Institute of Technology,Microsoft Research

Writers: Christopher C. Berger;Mar Gonzalez-Franco

PDF: Expanding the sense of touch outside the body

Abstract

Under normal circumstances, our sense of touch is limited to our body. Recent evidence suggests, however, that our perception of touch can also be expanded to objects we are holding when certain tactile illusions are elicited by delivering vibrotactile stimuli in a particular manner. Here, we examined whether an extra-corporeal illusory sense of touch could be elicited using vibrotactile stimuli delivered via two independent handheld controllers while in virtual reality. Our results suggest that under the right conditions, one’s sense of touch in space can be extended outside the body, and even into the empty space that surrounds us. Specifically, we show, in virtual reality, that one’s sense of touch can be extended to a virtual stick one is holding, and also into the empty space between one’s hands. These findings provide a means with which to expand the sense of touch beyond the hands in VR systems using two independent controllers, and also have important implications for our understanding of the human representation of touch.

Demonstration of TORC: A Virtual Reality Controller for In-Hand High-Dexterity Finger Interaction https://paper.nweon.com/1629

PubDate: October 2019

Teams: Korea Advanced Institute of Science and Technology, Microsoft Research

Writers: Jaeyeon Lee;Mike Sinclair;Mar Gonzalez-Franco;Eyal Ofek;Christian Holz

PDF: Demonstration of TORC: A Virtual Reality Controller for In-Hand High-Dexterity Finger Interaction

Abstract

Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. These controllers are grounded to the user’s hand and can only manipulate objects through arm and wrist motions, not using the dexterity of their fingers as they would in real life. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance. Users hold and squeeze TORC using their thumb and two fingers and interact with virtual objects by sliding their thumb on TORC’s trackpad. During the interaction, vibrotactile motors produce sensations to each finger that represent the haptic feel of squeezing, shearing or turning an object. We demonstrate the TORC interaction scenarios for a virtual object in hand.

Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight https://paper.nweon.com/1592

PubDate: October 2019

Teams: Microsoft Research & Hasso Plattner Institute, University of Potsdam

Writers: Sebastian Marwecki;Andrew D. Wilson;Eyal Ofek;Mar Gonzalez Franco;Christian Holz

PDF: Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight

Abstract

Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user’s field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user’s field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view: (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user’s preferences, (iii) to time the use of low fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and found that gaze data indeed supports obfuscating changes inside the field of view, and that changes are rendered unnoticeable when gaze is used in combination with common masking techniques.
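
As a minimal illustration of the gating idea (not Mise-Unseen's attention model), the sketch below applies a pending scene change only when the target object lies sufficiently far from the user's current gaze direction; the 20-degree eccentricity threshold and the apply_change callback are arbitrary assumptions.

```python
import numpy as np

def angle_from_gaze(eye_pos, gaze_dir, object_pos):
    """Angular eccentricity (degrees) of an object relative to the gaze ray."""
    to_obj = np.asarray(object_pos, float) - np.asarray(eye_pos, float)
    to_obj /= np.linalg.norm(to_obj)
    gaze = np.asarray(gaze_dir, float) / np.linalg.norm(gaze_dir)
    return np.degrees(np.arccos(np.clip(np.dot(gaze, to_obj), -1.0, 1.0)))

def maybe_apply_change(eye_pos, gaze_dir, object_pos, apply_change, threshold_deg=20.0):
    """Apply a covert scene change only while the object is far from the fovea."""
    if angle_from_gaze(eye_pos, gaze_dir, object_pos) > threshold_deg:
        apply_change()
        return True
    return False

# Toy usage: the user looks along +Z while the object sits off to the side.
applied = maybe_apply_change([0, 0, 0], [0, 0, 1], [1.0, 0.0, 0.5],
                             apply_change=lambda: print("change injected"))
print(applied)
```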

Analysis of Peripheral Vision and Vibrotactile Feedback During Proximal Search Tasks with Dynamic Virtual Entities in Augmented Reality https://paper.nweon.com/1550

PubDate: October 2019

Teams: Dixie State University,University of Central Florida

Writers: Kendra Richards, Nikhil Mahalanobis, Kangsoo Kim, Ryan Schubert, Myungho Lee, Salam Daher, Nahal Norouzi, Jason Hochreiter, Gerd Bruder, Greg Welch

PDF: Analysis of Peripheral Vision and Vibrotactile Feedback During Proximal Search Tasks with Dynamic Virtual Entities in Augmented Reality

Abstract

A primary goal of augmented reality (AR) is to seamlessly embed virtual content into a real environment. There are many factors that can affect the perceived physicality and co-presence of virtual entities, including the hardware capabilities, the fidelity of the virtual behaviors, and sensory feedback associated with the interactions. In this paper, we present a study investigating participants’ perceptions and behaviors during a time-limited search task in close proximity with virtual entities in AR. In particular, we analyze the effects of (i) visual conflicts in the periphery of an optical see-through head-mounted display, a Microsoft HoloLens, (ii) overall lighting in the physical environment, and (iii) multimodal feedback based on vibrotactile transducers mounted on a physical platform. Our results show significant benefits of vibrotactile feedback and reduced peripheral lighting for spatial and social presence, and engagement. We discuss implications of these effects for AR applications.

MoveVR: Enabling Multiform Force Feedback in Virtual Reality using Household Cleaning Robot https://paper.nweon.com/1342

PubDate: April 2020

Teams: Tsinghua University,Microsoft Corporation,Beijing University of Posts and Telecommunications,Chinese Academy of Sciences,University of Washington

Writers: Yuntao Wang, Zichao (Tyson) Chen, Hanchuan Li, Zhengyi Cao, Huiyi Luo, Tengxiang Zhang, Ke Ou, John Raiti, Chun Yu, Shwetak Patel, Yuanchun Shi

PDF: MoveVR: Enabling Multiform Force Feedback in Virtual Reality using Household Cleaning Robot

Abstract

Haptic feedback can significantly enhance the realism and immersiveness of virtual reality (VR) systems. In this paper, we propose MoveVR, a technique that enables realistic, multiform force feedback in VR leveraging commonplace cleaning robots. MoveVR can generate tension, resistance, impact and material rigidity force feedback with multiple levels of force intensity and directions. This is achieved by changing the robot’s moving speed, rotation, position as well as the carried proxies. We demonstrated the feasibility and effectiveness of MoveVR through interactive VR gaming. In our quantitative and qualitative evaluation studies, participants found that MoveVR provides more realistic and enjoyable user experience when compared to commercially available haptic solutions such as vibrotactile haptic systems.

Image mosaicing for tele-reality applications https://paper.nweon.com/1242

Title: Image mosaicing for tele-reality applications

Teams: Microsoft

Writers: Szeliski R.

Publication date: January 1994

Abstract

While a large number of virtual reality applications, such as fluid flow analysis and molecular modeling, deal with simulated data, many newer applications attempt to recreate true reality as convincingly as possible. Building detailed models for such applications, which we call tele-reality, is a major bottleneck holding back their deployment. In this paper, we present techniques for automatically deriving realistic 2-D scenes and 3-D texture-mapped models from video sequences, which can help overcome this bottleneck. The fundamental technique we use is image mosaicing, i.e., the automatic alignment of multiple images into larger aggregates which are then used to represent portions of a 3-D scene. We begin with the easiest problems, those of flat scene and panoramic scene mosaicing, and progress to more complicated scenes, culminating in full 3-D models. We also present a number of novel applications based on tele-reality technology.
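
For context, a hedged modern sketch of the core mosaicing step follows: aligning one image to another with a feature-based homography and warping it into a shared frame using OpenCV. The 1994 paper predates these feature detectors and relies on direct alignment, so this is only an illustration of the general idea, not the paper's method; the match count and canvas size are arbitrary choices.

```python
import cv2
import numpy as np

def mosaic_pair(base, new):
    """Warp `new` into the frame of `base` using a feature-based homography."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(base, None)
    k2, d2 = orb.detectAndCompute(new, None)

    # Match descriptors of `new` (query) against `base` (train), keep the best.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = base.shape[:2]
    # Simple canvas: twice the base width; a full mosaicer would compute exact bounds.
    canvas = cv2.warpPerspective(new, H, (w * 2, h))
    canvas[0:h, 0:w] = base
    return canvas
```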

Alice: Rapid prototyping system for virtual reality https://paper.nweon.com/1240

Title: Alice: Rapid prototyping system for virtual reality

Teams: Microsoft

Writers: Randy Pausch, Tommy Burnette, A.C. Capeheart, Matthew Conway, Dennis Cosgrove, Rob DeLine, Jim Durbin, Rich Gossweiler, Shuichi Koga, Jeff White

Publication date: May 1995

Abstract

We are developing Alice, a rapid prototyping system for virtual reality software. Alice programs are written in an object-oriented, interpreted language which allows programmers to immediately see the effects of changes. As an Alice program executes, the author can update the current state either by interactively evaluating program code fragments, or by manipulating GUI tools. Although the system is extremely flexible at runtime, we are able to maintain high interactive frame rates (typically 20-50 fps) by transparently decoupling simulation and rendering. We have been using Alice internally at Virginia for over two years, and we are currently porting a “desktop” version of Alice to Windows 95. We will distribute desktop Alice freely to all universities via the World Wide Web; for more information, see http://www.cs.virginia.edu/~alice/
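
The abstract's key systems idea is decoupling simulation from rendering so that slow interpreted simulation code does not drag down the frame rate. The toy sketch below shows that pattern with two Python threads sharing only the latest simulation state; the rates and state contents are assumptions for illustration, not Alice's architecture.

```python
import threading
import time

latest_state = {"t": 0.0}        # most recent simulation output
state_lock = threading.Lock()
running = True

def simulation_loop(rate_hz=10):
    """Slow, possibly interpreted, simulation updating shared state at its own pace."""
    t = 0.0
    while running:
        t += 1.0 / rate_hz
        with state_lock:
            latest_state["t"] = t
        time.sleep(1.0 / rate_hz)

def render_loop(rate_hz=50, frames=100):
    """Fast renderer that always draws the latest available state."""
    snapshot = dict(latest_state)
    for _ in range(frames):
        with state_lock:
            snapshot = dict(latest_state)
        # draw(snapshot) would go here; this sketch only sleeps to simulate a frame.
        time.sleep(1.0 / rate_hz)
    print("last rendered state:", snapshot)

sim = threading.Thread(target=simulation_loop, daemon=True)
sim.start()
render_loop()
running = False
```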

InLoc: Indoor Visual Localization with Dense Matching and View Synthesis https://paper.nweon.com/1238

Title: InLoc: Indoor Visual Localization with Dense Matching and View Synthesis

Teams: Microsoft

Writers: Hajime Taira Masatoshi Okutomi Torsten Sattler Mircea Cimpoi Marc Pollefeys Josef Sivic Tomas Pajdla Akihiko Torii

Publication date: April 2018

Abstract

We seek to predict the 6 degree-of-freedom (6DoF) pose of a query photograph with respect to a large indoor 3D map. The contributions of this work are three-fold. First, we develop a new large-scale visual localization method targeted for indoor environments. The method proceeds along three steps: (i) efficient retrieval of candidate poses that ensures scalability to large-scale environments, (ii) pose estimation using dense matching rather than local features to deal with textureless indoor scenes, and (iii) pose verification by virtual view synthesis to cope with significant changes in viewpoint, scene layout, and occluders. Second, we collect a new dataset with reference 6DoF poses for large-scale indoor localization. Query photographs are captured by mobile phones at a different time than the reference 3D map, thus presenting a realistic indoor localization scenario. Third, we demonstrate that our method significantly outperforms current state-of-the-art indoor localization approaches on this new challenging data.

Semantic Visual Localization https://paper.nweon.com/1236

Title: Semantic Visual Localization

Teams: Microsoft

Writers: Johannes Schönberger Marc Pollefeys Andreas Geiger Torsten Sattler

Publication date: April 2018

Abstract

Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.

If (Virtual) Reality Feels Almost Right, It’s Exactly Wrong https://paper.nweon.com/1234

Title: If (Virtual) Reality Feels Almost Right, It’s Exactly Wrong

Teams: Microsoft

Writers: Mar Gonzalez Franco, Christopher C. Berger, Ken Hinckley

Publication date: April 2018

Abstract

We can all remember the crisply beveled edges of our cheery-yellow No. 2 pencil, the cool, smooth feel of a chalk-powdered blackboard, the gritty red bricks of the schoolhouse walls. Surely that all wasn’t just an illusion? No, of course not. But—as it turns out—it kind of is.

Enabling People with Visual Impairments to Navigate Virtual Reality with a Haptic and Auditory Cane Simulation https://paper.nweon.com/1232

Title: Enabling People with Visual Impairments to Navigate Virtual Reality with a Haptic and Auditory Cane Simulation

Teams: Microsoft

Writers: Yuhang Zhao, Cynthia Bennett, Hrvoje Benko, Ed Cutrell, Christian Holz, Meredith Ringel Morris, Mike Sinclair

Publication date: April 2018

Abstract

Traditional virtual reality (VR) mainly focuses on visual feedback, which is not accessible for people with visual impairments. We created Canetroller, a haptic cane controller that simulates white cane interactions, enabling people with visual impairments to navigate a virtual environment by transferring their cane skills into the virtual world. Canetroller provides three types of feedback: (1) physical resistance generated by a wearable programmable brake mechanism that physically impedes the controller when the virtual cane comes in contact with a virtual object; (2) vibrotactile feedback that simulates the vibrations when a cane hits an object or touches and drags across various surfaces; and (3) spatial 3D auditory feedback simulating the sound of real-world cane interactions. We designed indoor and outdoor VR scenes to evaluate the effectiveness of our controller. Our study showed that Canetroller was a promising tool that enabled visually impaired participants to navigate different virtual spaces. We discuss potential applications supported by Canetroller ranging from entertainment to mobility training.

Measuring System Visual Latency through Cognitive Latency on Video See-Through AR Devices https://paper.nweon.com/947

PubDate: March 2020

Teams: Microsoft Research Lab

Writers: Robert Gruen, Eyal Ofek, Anthony Steed, Ran Gal, Mike Sinclair, Mar Gonzalez-Franco

PDF: Measuring System Visual Latency through Cognitive Latency on Video See-Through AR Devices

Project: Measuring System Visual Latency through Cognitive Latency on Video See-Through AR Devices

Abstract

Measuring visual latency in VR and AR devices has become increasingly complicated, as many of the components influence one another in multiple loops and ultimately affect human cognitive and sensory perception. In this paper we present a new method based on the idea that human performance on a rapid motor task remains constant, so that any added delay corresponds to the system latency. We ask users to perform a task inside video see-through devices to compare latency. We also calculate the latency of the systems using hardware instrumentation measurements for benchmarking. Results show that measurement through human cognitive performance can be reliable and comparable to hardware measurement.
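
A hedged sketch of the core idea as stated in the abstract: if performance on a rapid motor task is otherwise constant, the extra delay observed when the task is done through a device estimates its visual latency. The snippet below simply compares mean reaction times between a baseline condition and a through-device condition; the reaction-time data here are synthetic, not the paper's measurements.

```python
import numpy as np

def estimated_latency_ms(baseline_rt_ms, device_rt_ms):
    """Added system latency, estimated as the shift in mean reaction time."""
    baseline = np.asarray(baseline_rt_ms, float)
    device = np.asarray(device_rt_ms, float)
    return device.mean() - baseline.mean()

# Synthetic example: the same task performed directly vs. through a video see-through HMD.
rng = np.random.default_rng(1)
baseline = rng.normal(250, 20, 200)           # ms, task performed directly
through_hmd = rng.normal(250 + 90, 20, 200)   # ms, same task seen through the device
print(f"estimated visual latency: {estimated_latency_ms(baseline, through_hmd):.0f} ms")
```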

CLAW: A Multifunctional Handheld Haptic Controller for Grasping, Touching, and Triggering in Virtual Reality https://paper.nweon.com/865

Title: CLAW: A Multifunctional Handheld Haptic Controller for Grasping, Touching, and Triggering in Virtual Reality

Teams: Microsoft Research, Stanford University

Writers: Inrak Choi, Eyal Ofek, Hrvoje Benko, Mike Sinclair, Christian Holz

Publication date: Apr 2018

Abstract

CLAW is a handheld virtual reality controller that augments the typical controller functionality with force feedback and actuated movement to the index finger. Our controller enables three distinct interactions (grasping virtual object, touching virtual surfaces, and triggering) and changes its corresponding haptic rendering by sensing the differences in the user’s grasp. A servo motor coupled with a force sensor renders controllable forces to the index finger during grasping and touching. Using position tracking, a voice coil actuator at the index fingertip generates vibrations for various textures synchronized with finger movement. CLAW also supports a haptic force feedback in the trigger mode when the user holds a gun. We describe the design considerations for CLAW and evaluate its performance through two user studies. The first study obtained qualitative user feedback on the naturalness, effectiveness, and comfort when using the device. The second study investigated the ease of the transition between grasping and touching when using our device.

Supporting Responsive Cohabitation Between Virtual Interfaces and Physical Objects on Everyday Surfaces https://paper.nweon.com/734

Title: Supporting Responsive Cohabitation Between Virtual Interfaces and Physical Objects on Everyday Surfaces

Teams: University of British Columbia

Writers: Xiao, R., Schwarz, J., Throm, N., Wilson, A. and Benko, H

Publication date: March 2018

Abstract

We present MRTouch, a novel multitouch input solution for head-mounted mixed reality systems. Our system enables users to reach out and directly manipulate virtual interfaces affixed to surfaces in their environment, as though they were touchscreens. Touch input offers precise, tactile and comfortable user input, and naturally complements existing popular modalities, such as voice and hand gesture. Our research prototype combines both depth and infrared camera streams together with real-time detection and tracking of surface planes to enable robust finger-tracking even when both the hand and head are in motion. Our technique is implemented on a commercial Microsoft HoloLens without requiring any additional hardware or any user or environmental calibration. Through our performance evaluation, we demonstrate high input accuracy with an average positional error of 5.4 mm and 95% button size of 16 mm, across 17 participants, 2 surface orientations and 4 surface materials. Finally, we demonstrate the potential of our technique to enable on-world touch interactions through 5 example applications.
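
To make the "virtual touchscreen on a detected plane" idea concrete, here is a hypothetical sketch: a tracked fingertip is projected onto a detected surface plane and a touch is registered when the fingertip lies within a small hover threshold. The threshold, units and plane representation are assumptions; MRTouch's actual depth and infrared fusion pipeline is considerably more involved.

```python
import numpy as np

def project_onto_plane(point, plane_point, plane_normal):
    """Project a 3D point onto a plane given by a point on it and a normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(point - plane_point, n)      # signed distance to the plane
    return point - d * n, d

def detect_touch(fingertip, plane_point, plane_normal, hover_mm=10.0):
    """Return (is_touching, contact point on the plane)."""
    contact, dist = project_onto_plane(np.asarray(fingertip, float),
                                       np.asarray(plane_point, float),
                                       np.asarray(plane_normal, float))
    return abs(dist) <= hover_mm, contact

touching, contact = detect_touch([120.0, 55.0, 8.0],   # fingertip (mm)
                                 [0.0, 0.0, 0.0],       # point on the surface
                                 [0.0, 0.0, 1.0])       # surface normal
print(touching, contact)
```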

Remixed Reality: Manipulating Space and Time in Augmented Reality https://paper.nweon.com/732

Title: Remixed Reality: Manipulating Space and Time in Augmented Reality

Teams: TU Berlin;Microsoft Research

Writers: David Lindlbauer, Andy D. Wilson

Publication date: April 2018

Abstract

We present Remixed Reality, a novel form of mixed reality. In contrast to classical mixed reality approaches where users see a direct view or video feed of their environment, with Remixed Reality they see a live 3D reconstruction, gathered from multiple external depth cameras. This approach enables changing the environment as easily as geometry can be changed in virtual reality, while allowing users to view and interact with the actual physical world as they would in augmented reality. We characterize a taxonomy of manipulations that are possible with Remixed Reality: spatial changes such as erasing objects; appearance changes such as changing textures; temporal changes such as pausing time; and viewpoint changes that allow users to see the world from different points without changing their physical location. We contribute a method that uses an underlying voxel grid holding information like visibility and transformations, which is applied to live geometry in real time.
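
A small illustration of the kind of underlying structure the abstract describes: a sparse voxel grid keyed by integer cell coordinates that stores per-voxel visibility and a transform, which is then applied to live reconstructed points falling into each voxel. The field names, grid resolution and API are assumptions made for this sketch, not the system's actual data layout.

```python
import numpy as np

VOXEL_SIZE = 0.05  # metres, assumed grid resolution

class RemixGrid:
    """Sparse voxel grid mapping integer cells to {visible, transform} entries."""

    def __init__(self):
        self.cells = {}   # (i, j, k) -> dict(visible=bool, transform=4x4 array)

    def cell_of(self, p):
        return tuple(np.floor(np.asarray(p) / VOXEL_SIZE).astype(int))

    def set_cell(self, p, visible=True, transform=None):
        self.cells[self.cell_of(p)] = {
            "visible": visible,
            "transform": np.eye(4) if transform is None else transform,
        }

    def apply(self, points):
        """Transform or drop live points according to their voxel's entry."""
        out = []
        for p in points:
            entry = self.cells.get(self.cell_of(p))
            if entry is None:                 # untouched voxels pass through unchanged
                out.append(np.asarray(p, float))
            elif entry["visible"]:
                homog = np.append(p, 1.0)
                out.append((entry["transform"] @ homog)[:3])
            # invisible voxels are "erased": the point is dropped
        return np.array(out)

grid = RemixGrid()
grid.set_cell([0.1, 0.1, 0.1], visible=False)        # erase this region of space
pts = np.array([[0.1, 0.1, 0.1], [1.0, 0.2, 0.3]])
print(grid.apply(pts))
```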

A Simple Baseline for Multi-Object Tracking https://paper.nweon.com/654

Title: A Simple Baseline for Multi-Object Tracking

Teams: Huazhong University of Science and Technology,Microsoft Research Asia

Writers: Yifu Zhang, Chunyu Wang, Xinggang Wang, Wenjun Zeng, Wenyu Liu

PubDate: Apr 2020

Project: A Simple Baseline for Multi-Object Tracking

Abstract

There has been remarkable progress on object detection and re-identification in recent years, which are the core components of multi-object tracking. However, little attention has been paid to accomplishing the two tasks in a single network to improve inference speed. The initial attempts along this path ended up with degraded results, mainly because the re-identification branch is not appropriately learned. In this work, we study the essential reasons behind the failure and accordingly present a simple baseline that addresses the problems. It remarkably outperforms the state of the art on the public datasets while running in real time. We hope this baseline can inspire and help evaluate new ideas in this field.
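
As a schematic of "detection and re-identification in a single network" (not the authors' architecture), the sketch below shows a tiny shared backbone feeding two heads: a detection heatmap and a per-location re-identification embedding. The channel counts, layer choices and embedding dimension are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class JointDetReID(nn.Module):
    """Toy one-shot tracker: shared features -> detection heatmap + re-ID embedding."""

    def __init__(self, embed_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(                     # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.det_head = nn.Conv2d(64, 1, 1)                # object-centre heatmap
        self.reid_head = nn.Conv2d(64, embed_dim, 1)       # appearance embedding per location

    def forward(self, images):
        feats = self.backbone(images)
        heatmap = torch.sigmoid(self.det_head(feats))
        embeddings = self.reid_head(feats)
        return heatmap, embeddings

model = JointDetReID()
heatmap, embeddings = model(torch.randn(1, 3, 256, 256))
print(heatmap.shape, embeddings.shape)   # (1, 1, 64, 64) and (1, 64, 64, 64)
```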

The Ethics of Realism in Virtual and Augmented Reality https://paper.nweon.com/648

Title: The Ethics of Realism in Virtual and Augmented Reality

Teams: University of Barcelona; BBC R&D; Digital Catapult; Facebook London; NESTA; Microsoft Research; University College London

Writers: Mel Slater, Cristina Gonzalez-Liencres, Patrick Haggard, Charlotte Vinkers, Rebecca Gregory-Clarke, Steve Jelley, Zillah Watson, Graham Breen, Raz Schwarz, William Steptoe, Dalila Szostak, Shivashankar Halan, Deborah Fox, Jeremy Silver

Publication date: March 2020

Abstract

The development of increasingly realistic virtual worlds allows for advancements in XR technology to be used in training, education, psychotherapy, physical and mental rehabilitation, marketing, entertainment, and for further applications in research. The benefits of superrealism are clear: realistic virtual scenarios can make XR applications more efficacious. For example, aviators can be better trained because the virtual simulation in which they operate is more accurate and closer to reality; exposure therapy in which a patient is presented with a realistic virtual version of the agent they are afraid of (for example, a spider) may be more efficient if the agent seems real, and so on. As occurs with most things in the world, with benefits come potential misuse, abuse or neglect, all of which bring about ethical concerns.

We started with a version of the golden rule: “That which is hateful to you, do not do to your fellow. That is the whole law; the rest is the explanation; go and learn it.” This is not at all about “empathy,” but very practical guidance. When we construct experiences for others, we need to think about whether we would want to have this experience—without prior warning, education, training, and assured compliance with a generally agreed and debated code of conduct. The challenge now is for researchers, content creators, and distributors of XR systems to determine what should be within this code of conduct.

Cross View Fusion for 3D Human Pose Estimation https://paper.nweon.com/627

Title: Cross View Fusion for 3D Human Pose Estimation

Teams: Microsoft

Writers: Haibo Qiu Chunyu Wang Jingdong Wang Naiyan Wang Wenjun Zeng

Publication date: October 2019

Abstract

We present an approach to recover absolute 3D human poses from multi-view images by incorporating multi-view geometric priors in our model. It consists of two separate steps: (1) estimating the 2D poses in multi-view images and (2) recovering the 3D poses from the multi-view 2D poses. First, we introduce a cross-view fusion scheme into the CNN to jointly estimate 2D poses for multiple views. Consequently, the 2D pose estimation for each view already benefits from the other views.
Second, we present a recursive Pictorial Structure Model to recover the 3D pose from the multi-view 2D poses. It gradually improves the accuracy of the 3D pose with affordable computational cost. We test our method on two public datasets, H36M and Total Capture. The Mean Per Joint Position Errors (MPJPE) on the two datasets are 26mm and 29mm, which outperforms the state of the art remarkably (26mm vs 52mm, 29mm vs 35mm).
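
For reference, the Mean Per Joint Position Error metric quoted above can be computed as below; the joint count, frame count and units in the example are placeholders, not values from the paper.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: average Euclidean distance over joints and frames.

    pred, gt: arrays of shape (frames, joints, 3), in the same unit (e.g. mm).
    """
    return np.linalg.norm(np.asarray(pred) - np.asarray(gt), axis=-1).mean()

# Synthetic example with 17 joints and 100 frames (millimetres).
rng = np.random.default_rng(0)
gt = rng.uniform(-1000, 1000, size=(100, 17, 3))
pred = gt + rng.normal(0, 20, size=gt.shape)
print(f"MPJPE: {mpjpe(pred, gt):.1f} mm")
```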

Optimizing Network Structure for 3D Human Pose Estimation https://paper.nweon.com/624

Title: Optimizing Network Structure for 3D Human Pose Estimation

Teams: Microsoft

Writers: Hai Ci Chunyu Wang Xiaoxuan Ma Yizhou Wang

Publication date: October 2019

Abstract

Human pose is essentially a skeletal graph where the joints are the nodes and the bones linking the joints are the edges. So it is natural to apply a Graph Convolutional Network (GCN) to estimate 3D poses from 2D poses. In this work, we factor the Laplacian operator in GCN into the product of a structure matrix and a weight matrix. Based on this formulation, we show that GCN has limited representation ability when it is used for estimating 3D poses. We overcome the limitation by introducing a Locally Connected Network (LCN) which constructs the two matrices based on human anatomy. It notably improves the representation ability over GCN. In addition, since every joint is only connected to a small number of joints in its neighborhood, it has strong generalization ability. The experiments on public datasets show that it: (1) outperforms the state of the art by a notable margin; (2) is less data hungry than alternative models; (3) generalizes well to unseen actions, datasets and even noisy 2D poses.
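
To make the contrast concrete, here is a minimal numpy sketch of the two layer types as this summary describes them: a GCN-style layer in which all joints share one weight matrix, versus a locally connected layer in which each joint has its own weights but only aggregates its anatomical neighbours. The chain-shaped skeleton, adjacency and dimensions are toy assumptions, not the paper's configuration.

```python
import numpy as np

J, C_IN, C_OUT = 5, 2, 4            # toy skeleton: 5 joints, 2D input, 4 output features
A = np.array([                      # adjacency (with self-loops) of a chain skeleton
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
], dtype=float)
A_norm = A / A.sum(axis=1, keepdims=True)

def gcn_layer(X, W):
    """GCN-style layer: one weight matrix W (C_IN x C_OUT) shared by every joint."""
    return A_norm @ X @ W

def lcn_layer(X, W_per_pair):
    """Locally connected layer: joint j aggregates only its neighbours, each with its
    own weights. W_per_pair has shape (J, J, C_IN, C_OUT), masked by the adjacency."""
    out = np.zeros((J, W_per_pair.shape[-1]))
    for j in range(J):
        for k in range(J):
            if A[j, k] > 0:
                out[j] += A_norm[j, k] * (X[k] @ W_per_pair[j, k])
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(J, C_IN))                                   # 2D pose features per joint
print(gcn_layer(X, rng.normal(size=(C_IN, C_OUT))).shape)        # (5, 4)
print(lcn_layer(X, rng.normal(size=(J, J, C_IN, C_OUT))).shape)  # (5, 4)
```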

Learning to Refine 3D Human Pose Sequences https://paper.nweon.com/622

Title: Learning to Refine 3D Human Pose Sequences

Teams: Microsoft

Writers: Jieru Mei Xingyu Chen Chunyu Wang Wenjun Zeng

Publication date: September 2019

Abstract

We present a basis approach to refine noisy 3D human pose sequences by jointly projecting them onto a non-linear pose manifold, which is represented by a number of basis dictionaries, each covering a small manifold region. We learn the dictionaries by jointly minimizing the distance between the original poses and their projections on the dictionaries, along with the temporal jittering of the projected poses. During testing, given a sequence of noisy poses which are probably off the manifold, we project them onto the manifold using the same strategy as in training for refinement. We apply our approach to the monocular 3D pose estimation and long-term motion prediction tasks. The experimental results on the benchmark dataset show that the estimated 3D poses are notably improved in both tasks. In particular, the smoothness constraint helps generate more robust refinement results even when some poses in the original sequence have large errors.
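
The projection idea can be illustrated in a heavily simplified, linear form: fit basis coefficients to each noisy pose by least squares, then smooth the coefficients over time to suppress jitter. The dictionary here is random, the smoothing window is arbitrary, and the poses are synthetic, so this is only an intuition-building sketch, not the learned multi-dictionary manifold the paper describes.

```python
import numpy as np

def refine_sequence(noisy, basis, smooth_window=5):
    """Project each noisy pose onto a linear basis, then smooth the coefficients
    over time.

    noisy: (T, D) flattened poses; basis: (D, K) dictionary of basis poses.
    """
    # Per-frame least-squares projection onto the span of the basis.
    coeffs, *_ = np.linalg.lstsq(basis, noisy.T, rcond=None)    # (K, T)
    coeffs = coeffs.T                                           # (T, K)

    # Temporal smoothing of the coefficients (moving average along time).
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, coeffs)

    return smoothed @ basis.T                                   # refined (T, D) poses

# Toy example: 17-joint poses (D = 51), random basis with K = 20 atoms,
# and clean poses that vary smoothly over time (sinusoidal coefficients).
rng = np.random.default_rng(0)
basis = rng.normal(size=(51, 20))
t = np.linspace(0, 4 * np.pi, 200)[:, None]
true_coeffs = np.sin(rng.uniform(0.5, 2.0, size=(1, 20)) * t
                     + rng.uniform(0, 2 * np.pi, size=(1, 20)))
clean = true_coeffs @ basis.T
noisy = clean + rng.normal(0, 0.5, size=clean.shape)
refined = refine_sequence(noisy, basis)
print(np.abs(refined - clean).mean() < np.abs(noisy - clean).mean())  # expected: True
```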
