Nweon Paper (https://paper.nweon.com): 映维网, an information and data platform for the virtual reality (VR) and augmented reality (AR) industries. Feed last updated Wed, 19 Jan 2022 08:01:18 +0000.

A Hat-shaped Pressure-Sensitive Multi-Touch Interface for Virtual Reality (https://paper.nweon.com/11672)
PubDate: December 2021

Teams: University of Tsukuba

Writers: Kazuki Sakata;Buntarou Shizuki;Ikkaku Kawaguchi;Shin Takahashi

PDF: A Hat-shaped Pressure-Sensitive Multi-Touch Interface for Virtual Reality

Abstract

We developed a hat-shaped touch interface for virtual reality viewpoint control. The hat is made of conductive fabric and thus is lightweight. The user can touch, drag, and push the surface, enabling three-dimensional viewpoint control.
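
As an illustration only of how touch, drag, and push input on such a surface could drive three-dimensional viewpoint control (the abstract does not specify the mapping; the gains and pressure threshold below are assumptions), a minimal sketch:

```python
# Hypothetical mapping from hat-surface input to viewpoint changes.
# PRESS_THRESHOLD and the 0.5 gains are illustrative assumptions.

PRESS_THRESHOLD = 0.6  # normalized pressure above which a touch counts as a "push"

def viewpoint_delta(dx, dy, pressure):
    """Map a drag (dx, dy) and touch pressure to (yaw, pitch, zoom) deltas.

    Dragging on the surface rotates the viewpoint; pressing hard
    (a "push") triggers a zoom step, giving a third control dimension.
    """
    yaw = 0.5 * dx      # horizontal drag -> yaw change
    pitch = 0.5 * dy    # vertical drag -> pitch change
    zoom = 1.0 if pressure > PRESS_THRESHOLD else 0.0
    return yaw, pitch, zoom
```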

A Perceptual Evaluation of the Ground Inclination with a Simple VR Walking Platform (https://paper.nweon.com/11670)
PubDate: December 2021

Teams: Hiroshima City University

Writers: Keito Morisaki;Wataru Wakita

PDF: A Perceptual Evaluation of the Ground Inclination with a Simple VR Walking Platform

Abstract

We evaluate how realistically the inclination of the ground can be perceived with our simple VR walking platform. First, we prepared seven maps with ground inclinations ranging from -30 to 30 degrees in 10-degree increments. We then conducted a perception experiment on the feeling of inclination with both a treadmill and our proposed platform, along with a questionnaire evaluation of presence, fatigue, and exhilaration. The results clarified that our proposed platform not only provides a feeling of presence equivalent to that of the treadmill, but also allows upward and downward ground inclination to be perceived.

Freehand Interaction in Virtual Reality: Bimanual Gestures for Cross-Workspace Interaction (https://paper.nweon.com/11668)
PubDate:

Teams: Rochester Institute of Technology

Writers: Chao Peng;Yangzi Dong;Lizhou Cao

PDF: Freehand Interaction in Virtual Reality: Bimanual Gestures for Cross-Workspace Interaction

Abstract

This work presents the design and evaluation of three bimanual interaction modalities for cross-workspace interaction in virtual reality (VR), in which the user can move items between a personal workspace and a shared workspace. We conducted an empirical study to understand three modalities and their suitability for cross-workspace interaction in VR.

Swaying Locomotion: A VR-based Locomotion System through Head Movements (https://paper.nweon.com/11666)
PubDate: December 202

Teams: Waseda University

Writers: Masahiro Shimizu;Tatsuo Nakajima

PDF: Swaying Locomotion: A VR-based Locomotion System through Head Movements

Abstract

Locomotion systems used in virtual reality (VR) content have a significant impact on the content user experience. One of the most important factors of a walking system in VR is whether it can provide a plausible walking sensation because it is considered directly related to the user’s sense of presence. However, joystick-based and teleportation-based locomotion systems, which are commonly used today, can hardly provide an appropriate sense of presence to a user. To solve this problem, we present Swaying Locomotion, which is a novel VR-based locomotion system that uses head movements to support a user walking in a VR space while actually sitting in real space. Our user study suggests that Swaying Locomotion provides a better walking sensation than the traditional joystick-based approach.
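
As a rough illustration of how head sway might be turned into forward motion (the deadzone, gain, and speed cap below are assumptions, not the authors' published transfer function):

```python
# Illustrative sketch of head-sway-driven locomotion. All parameters
# are assumptions for demonstration, not the paper's design.

DEADZONE = 0.01   # metres of lateral sway ignored as sensor/posture noise
GAIN = 5.0        # forward speed (m/s) per metre of sway beyond the deadzone
MAX_SPEED = 1.5   # cap near typical walking speed (m/s)

def forward_speed(sway_amplitude):
    """Convert lateral head-sway amplitude (m) into forward speed (m/s)."""
    effective = max(0.0, abs(sway_amplitude) - DEADZONE)
    return min(MAX_SPEED, GAIN * effective)
```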

A Pilot Study Examining the Unexpected Vection Hypothesis of Cybersickness (https://paper.nweon.com/11664)
PubDate: December 2021

Teams: University of Wollongong

Writers: Joel Anthony Teixeira;Sebastien Miellet;Stephen Palmisano

PDF: A Pilot Study Examining the Unexpected Vection Hypothesis of Cybersickness

Abstract

The relationship between vection (illusory self-motion) and cybersickness is complex. This pilot study examined whether only unexpected vection provokes sickness during head-mounted display (HMD) based virtual reality (VR). Twenty participants ran through the tutorial of Mission: ISS (an HMD VR app) until they experienced notable sickness (maximum exposure was 15 minutes). We found that: 1) cybersickness was positively related to vection strength; and 2) cybersickness appeared more likely to occur during unexpected vection. Given the implications of these findings, future studies should attempt to replicate them and confirm the unexpected vection hypothesis with larger samples and more rigorous experimental designs.

Using Gaze Behavior and Head Orientation for Implicit Identification in Virtual Reality (https://paper.nweon.com/11662)
PubDate: December 2021

Teams: University of Duisburg-Essen

Writers: Jonathan Liebers;Patrick Horn;Christian Burschik;Uwe Gruenefeld;Stefan Schneegass

PDF: Using Gaze Behavior and Head Orientation for Implicit Identification in Virtual Reality

Abstract

Identifying users of a Virtual Reality (VR) headset provides designers of VR content with the opportunity to adapt the user interface, set user-specific preferences, or adjust the level of difficulty in games or training applications. While most identification methods currently rely on explicit input, implicit user identification is less disruptive and does not impact the users’ immersion. In this work, we introduce a biometric identification system that employs the user’s gaze behavior as a unique, individual characteristic. In particular, we focus on the user’s gaze behavior and head orientation while following a moving stimulus. We verify our approach in a user study. A hybrid post-hoc analysis results in an identification accuracy of up to 75% for an explainable machine learning algorithm and up to 100% for a deep learning approach. We conclude by discussing application scenarios in which our approach can be used to implicitly identify users.
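
To make the idea of gaze-based identification concrete, here is a deliberately simplified sketch: summarize a gaze trace with two statistics and match it against the nearest enrolled user profile. The features and nearest-centroid matching are stand-ins, not the paper's explainable-ML or deep-learning pipelines:

```python
# Toy illustration of implicit identification from gaze traces.
# Real systems use far richer features and trained classifiers.
import statistics

def gaze_features(angles):
    """Summarize a gaze-angle trace (degrees) as (mean, population stdev)."""
    return (statistics.fmean(angles), statistics.pstdev(angles))

def identify(trace, enrolled):
    """Return the enrolled user whose feature centroid is closest.

    `enrolled` maps user names to (mean, stdev) centroids built from
    earlier traces of the same moving-stimulus task.
    """
    f = gaze_features(trace)

    def dist(user):
        c = enrolled[user]
        return (f[0] - c[0]) ** 2 + (f[1] - c[1]) ** 2

    return min(enrolled, key=dist)
```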

Non-isomorphic Interaction Techniques for Controlling Avatar Facial Expressions in VR (https://paper.nweon.com/11660)
PubDate: December 2021

Teams: Inria

Writers: Marc Baloup;Thomas Pietrzak;Martin Hachet;Géry Casiez

PDF: Non-isomorphic Interaction Techniques for Controlling Avatar Facial Expressions in VR

Abstract

The control of an avatar’s facial expressions in virtual reality is mainly based on the automated recognition and transposition of the user’s facial expressions. These isomorphic techniques are limited to what users can convey with their own face and suffer from recognition issues. To overcome these limitations, non-isomorphic techniques rely on interaction techniques using input devices to control the avatar’s facial expressions. Such techniques need to be designed to quickly and easily select and control an expression without disrupting a main task such as talking. We present the design of a set of new non-isomorphic interaction techniques for controlling an avatar’s facial expressions in VR using a standard VR controller. These techniques were evaluated through two controlled experiments to help design an interaction technique that combines the strengths of each approach. This technique was then evaluated in a final ecological study showing it can be used in contexts such as social applications.

Avatar Tracking Control with Generations of Physically Natural Responses on Contact to Reduce Performers’ Loads (https://paper.nweon.com/11658)
PubDate: December 2021

Teams: Tokyo Institute of Technology

Writers: Ken Sugimori;Hironori Mitake;Hirohito Sato;Kensho Oguri;Shoichi Hasegawa

PDF: Avatar Tracking Control with Generations of Physically Natural Responses on Contact to Reduce Performers’ Loads

Abstract

The real-time performance of motion-captured avatars in virtual space is becoming increasingly popular, especially within applications including social virtual realities (VRs), virtual performers (e.g., virtual YouTubers), and VR games. Such applications often include contact between multiple avatars or between avatars and objects as communication or gameplay. However, most current applications do not handle the effects of contact on avatars, so penetration or unnatural behavior occurs. In reality, no contact with the player’s body occurs; nevertheless, the player must perform as if contact occurred. While physics simulation can solve the contact issue, naive use of physics simulation causes tracking delay. We propose a novel avatar tracking controller with feedforward control. Our method enables quick, accurate tracking and flexible motion in response to contacts. Furthermore, the technique frees avatar performers from the load of performing as if contact occurred. We implemented our method and experimentally evaluated the naturalness of the resulting motions and our approach’s effectiveness in reducing performers’ loads.
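
The core idea, tracking that stays tight yet yields naturally on contact, hinges on adding a feedforward term to a feedback tracking controller. A one-dimensional sketch (the gains are illustrative assumptions, and the real controller works on full-body articulated dynamics):

```python
# 1-D sketch of tracking control with a feedforward term.
# KP/KD are illustrative assumptions, not the paper's gains.

KP, KD = 400.0, 40.0  # tracking stiffness and damping

def control_force(mass, pos, vel, target_pos, target_vel, target_acc):
    """PD tracking force plus feedforward of the target's acceleration.

    The feedforward term (mass * target_acc) supplies the force needed
    to follow the motion-capture trajectory, so the PD feedback only
    corrects residual error; this is what reduces tracking delay
    compared to pure feedback, while contacts simply add external
    forces the simulation resolves.
    """
    feedback = KP * (target_pos - pos) + KD * (target_vel - vel)
    feedforward = mass * target_acc
    return feedback + feedforward
```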

Flyables: Haptic Input Devices for Virtual Reality using Quadcopters (https://paper.nweon.com/11656)
PubDate: December 202

Teams: University of Duisburg-Essen;LMU Munich

Writers: Jonas Auda;Nils Verheyen;Sven Mayer;Stefan Schneegass

PDF: Flyables: Haptic Input Devices for Virtual Reality using Quadcopters

Abstract

Virtual Reality (VR) has made its way into everyday life. While VR delivers an ever-increasing level of immersion, controls and their haptics are still limited. Current VR headsets come with dedicated controllers that are used to control every virtual interface element. However, the controller input mostly differs from the virtual interface. This reduces immersion. To provide a more realistic input, we present Flyables, a toolkit that provides matching haptics for virtual user interface elements using quadcopters. We took five common virtual UI elements and built their physical counterparts. We attached them to quadcopters to deliver on-demand haptic feedback. In a user study, we compared Flyables to controller-based VR input. While controllers still outperform Flyables in terms of precision and task completion time, we found that Flyables present a more natural and playful way to interact with VR environments. Based on the results from the study, we outline research challenges that could improve interaction with Flyables in the future.

VR Natural Walking in Impossible Spaces (https://paper.nweon.com/11654)
PubDate: November 2021

Teams: University of Cape Town

Writers: Daniel Christopher Lochner;James Edward Gain

PDF: VR Natural Walking in Impossible Spaces

Abstract

Locomotion techniques in Virtual Reality (VR) are the means by which users traverse a Virtual Environment (VE) and are considered an integral and indispensable part of user interaction.

This paper investigates the potential that natural walking in impossible spaces provides as a viable locomotion technique in VR when compared to conventional alternatives, such as teleportation, arm-swinging and touchpad/joystick. In this context, impossible spaces are locally Euclidean orbit-manifolds — subspaces separated by portals that are individually consistent but are able to impossibly overlap in space without interacting.

A quantitative user experiment was conducted with n = 25 participants, who were asked to complete a set of tasks inside four houses, in each case using a different locomotion technique to navigate. After completing all tasks for a given house, participants were then asked to complete a set of three questionnaires regarding the technique used, namely the Simulator Sickness Questionnaire (SSQ), Game Experience Questionnaire (GEQ) and System Usability Scale (SUS). Time for task completion was also recorded.

It was found that natural walking in impossible spaces significantly improves (α = 0.05) immersion (as compared to teleportation and touchpad/joystick, r > 0.7) and system usability (over touchpad/joystick and arm-swinging, r ≥ 0.38), but seems to lead to slower task completion.

ALiSE: Through the mirrored space, and what user interacts with avatars naturally (https://paper.nweon.com/11652)
PubDate: November 2021

Teams: University of Tsukuba

Writers: Hiroki Uchida;Tadashi Ebihara;Naoto Wakatsuki;Keiichi Zempo

PDF: ALiSE: Through the mirrored space, and what user interacts with avatars naturally

Abstract

Augmented Reality (AR) and Virtual Reality (VR) interfaces, such as a conventional head-mounted display (HMD), have the problem of being unable to share the content experience with people who are not wearing the device. To solve this problem, we focus on AR mirrors and propose ALiSE (Augmented Layer Interweaved Semi-Reflecting Existence), a display that presents images in a mirrored space using half-mirrors and gaps. We compared the service quality of this device with that of an ordinary display and an HMD. As a result, we were able to confirm the superiority of ALiSE over conventional displays on several items. The results suggest that the connection generated between the service provider and the user through ALiSE is equivalent to the experience in VR. In other words, our proposed display method can provide the same level of satisfaction as services provided in a conventional VR space. In addition, it can share the content experience with accessibility equivalent to observing digital signage, without wearing an HMD.

Does Synthetic Voice alter Social Response to a Photorealistic Character in Virtual Reality? (https://paper.nweon.com/11650)
PubDate: November 2021

Teams: Mimetic;Trinity College Dublin

Writers: Katja Zibrek;Joao Cabral;Rachel McDonnell

PDF: Does Synthetic Voice alter Social Response to a Photorealistic Character in Virtual Reality?

Abstract

In this paper, we investigate the effect of a realism mismatch in the voice and appearance of a photorealistic virtual character in virtual reality. While many studies have investigated voice attributes for robots, not much is known about the effect voice naturalness has on the perception of realistic virtual characters. We conducted an experiment in Virtual Reality (VR) with over two hundred participants investigating the mismatch between realistic appearance and unrealistic voice on the feeling of presence, and the emotional response of the user to the character expressing a strong negative emotion (sadness, guilt). We predicted that the mismatched voice would lower social presence and cause users to have a negative emotional reaction and feelings of discomfort towards the character. We found that the concern for the virtual character was indeed altered by the unnatural voice, though interestingly it did not affect social presence.

Boundaries facilitate spatial orientation in virtual environments (https://paper.nweon.com/11648)
PubDate: November 2021

Teams: Iowa State University

Writers: Jonathan W. Kelly;Jason Terrill;Moriah Zimmerman;Taylor A. Doty;Lucia A. Cherep;Melynda T. Hoover;Nicole R. Powell;Owen J. Perrin;Stephen B. Gilbert

PDF: Boundaries facilitate spatial orientation in virtual environments

Abstract

Teleporting is a popular interface for locomotion through virtual environments (VEs). However, teleporting can cause disorientation. Spatial boundaries, such as room walls, are effective cues for reducing disorientation. This experiment explored the characteristics that make a boundary effective. All boundaries tested reduced disorientation, and boundaries representing navigational barriers (e.g., a fence) were no more effective than those defined only by texture changes (e.g., flooring transition). The findings indicate that boundaries need not be navigational barriers to reduce disorientation, giving VE designers greater flexibility in the spatial cues to include.

Toward Predicting User Waist Location From VR Headset and Controllers Through Machine Learning (https://paper.nweon.com/11646)
PubDate: November 2021

Teams: University of Southern California

Writers: Adityan Jothi;Powen Yao;Andrew Zhao;Mark Miller;Sloan Swieso;Michael Zyda

PDF: Toward Predicting User Waist Location From VR Headset and Controllers Through Machine Learning

Abstract

Commercial VR Headsets typically include a headset and two motion controllers. From this VR setup, we have access to the user’s head and hands, but lack information about other parts of the user’s body without using additional equipment. Accurate position of other body parts such as the waist would expand the user’s interaction space. In this paper, we describe our efforts at using machine learning to predict the position and rotation of the user’s waist using only the headset and two motion controllers with an additional tracker at the waist for training.
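
For contrast with the learned model, a common non-learned baseline for this problem places the waist a fixed distance below the head, pulled slightly toward the hands. The drop distance and blend weight below are assumptions for illustration, not values from the paper:

```python
# Heuristic waist-pose baseline from headset and controller positions.
# WAIST_DROP and HAND_WEIGHT are illustrative assumptions.

WAIST_DROP = 0.45   # metres below the head (assumed)
HAND_WEIGHT = 0.2   # horizontal pull toward the hands' midpoint (assumed)

def estimate_waist(head, left_hand, right_hand):
    """Estimate waist (x, y, z) from headset and two controller positions.

    The waist is placed below the head, blended horizontally toward the
    hands' midpoint, since leaning users tend to move hands with torso.
    """
    hx = (left_hand[0] + right_hand[0]) / 2
    hz = (left_hand[2] + right_hand[2]) / 2
    x = (1 - HAND_WEIGHT) * head[0] + HAND_WEIGHT * hx
    z = (1 - HAND_WEIGHT) * head[2] + HAND_WEIGHT * hz
    return (x, head[1] - WAIST_DROP, z)
```

A learned model, as in the paper, replaces these hand-tuned constants with a mapping fitted against ground truth from an additional waist tracker worn during training.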

Motion Sickness Conditioning to Reduce Cybersickness (https://paper.nweon.com/11644)
PubDate: November 2021

Teams: Carleton University

Writers: Assem Kroma;Naz Al Kassm;Robert J Teather

PDF: Motion Sickness Conditioning to Reduce Cybersickness

Abstract

We present a remote longitudinal experiment to assess the effectiveness of a common motion sickness conditioning technique (MSCT), the Puma method, on cybersickness in VR. Our goal was to evaluate the benefits of conditioning techniques as an alternative to visual cybersickness reduction methods (e.g., viewpoint restriction) or habituation approaches that “train” the user to become acclimatized to cybersickness. We compared three techniques – habituation, the Puma method conditioning exercise, and a placebo (Tai Chi) – in a cybersickness-inducing navigation task over 10 sessions. Preliminary results indicate promising effects.

Visuo-haptic Illusions for Motor Skill Acquisition in Virtual Reality (https://paper.nweon.com/11642)
PubDate: November 2021

Teams: Sorbonne Université;SEGULA Technologies

Writers: Benoît Geslain;Gilles Bailly;Sinan D Haliyo;Corentin Duboc

PDF: Visuo-haptic Illusions for Motor Skill Acquisition in Virtual Reality

Abstract

In this article, we investigate the potential of using visuo-haptic illusions in a Virtual Reality environment to learn motor skills for a real environment. We report on an empirical study in which 20 participants performed a multi-object pick-and-place task. The results show that although users do not perform the same motion trajectories in the virtual and real environments, skills acquired in VR augmented with visuo-haptic illusions can be successfully reused in a real environment: there is a high degree of skill transfer (78.5%), similar to that obtained in an optimal real training environment (82.4%). Finally, participants did not notice the illusion and were enthusiastic about the VR environment. Our findings invite designers and researchers to consider visuo-haptic illusions to help operators learn motor skills in a cost-effective environment.

Introduce Floor Vibration to Virtual Reality (https://paper.nweon.com/11640)
PubDate: November 2021

Teams: The Hong Kong Polytechnic University;Kennesaw State University;University of Canterbury;UNC Chapel Hill

Writers: Richard Chen Li;Sungchul Jung;Ryan Douglas McKee;Mary C. Whitton;Robert W. Lindeman

PDF: Introduce Floor Vibration to Virtual Reality

Abstract

Floor vibration, a type of whole-body tactile stimulation, could mitigate cybersickness during virtual reality (VR) exposure. This study aims to further investigate its effects on cybersickness, as well as on presence and emotional arousal, by introducing floor vibration as a proxy for different virtual ground surfaces. For the investigation, a realistic walking-on-the-beach scenario was implemented, and floor vibrations were introduced in synchrony with the footsteps. Three conditions were designed based on the same scenario with different floor vibrations. The user study involving 26 participants found no significant difference in presence and cybersickness across the three conditions, but the introduction of floor vibration (regardless of the vibration type) had a mixed impact on emotional arousal, as measured by changes in pupil size and skin conductance. Also, participants generally preferred the matched vibration most.

Don’t Block the Ground: Reducing Discomfort in Virtual Reality with an Asymmetric Field-of-View Restrictor (https://paper.nweon.com/11638)
PubDate: November 2021

Teams: University of Minnesota

Writers: Fei Wu;George S Bailey;Thomas Stoffregen;Evan Suma Rosenberg

PDF: Don’t Block the Ground: Reducing Discomfort in Virtual Reality with an Asymmetric Field-of-View Restrictor

Abstract

Although virtual reality has been gaining in popularity, users continue to report discomfort during and after use of VR applications, and many experience symptoms associated with motion sickness. To mitigate this problem, dynamic field-of-view restriction is a common technique that has been widely implemented in commercial VR games. Although artificially reducing the field-of-view during movement can improve comfort, the standard restrictor is typically implemented using a symmetric circular mask that blocks imagery in the periphery of the visual field. This reduces users’ visibility of the virtual environment and can negatively impact their subjective experience. In this paper, we propose and evaluate a novel asymmetric field-of-view restrictor that maintains visibility of the ground plane during movement. We conducted a remote user study that sampled from the population of VR headset owners. The experiment used a within-subjects design that compared the ground-visible restrictor, the traditional symmetric restrictor, and a control condition without FOV restriction. Participation required navigating through a complex maze-like environment using a controller during three separate virtual reality sessions conducted at least 24 hours apart. Results showed that ground-visible FOV restriction offers benefits for user comfort, postural stability, and subjective sense of presence. Additionally, we found no evidence of negative drawbacks to maintaining visibility of the ground plane during FOV restriction, suggesting that the proposed technique is superior for experienced users compared to the widely used symmetric restrictor.
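
The asymmetric restrictor can be pictured as a per-direction visibility test: during movement, peripheral directions are masked unless they look down toward the ground plane. The cone and ground angles below are illustrative assumptions, not the authors' parameters:

```python
# Sketch of an asymmetric (ground-visible) FOV restrictor as a
# visibility predicate over view directions. Angles are assumptions.
import math

def visible(yaw_deg, pitch_deg, fov_limit_deg=40.0, ground_pitch_deg=-20.0):
    """Return True if a view direction stays unmasked during movement.

    yaw/pitch are angular offsets from the gaze direction; negative
    pitch looks downward. A symmetric restrictor keeps only the inner
    cone; the asymmetric variant additionally keeps directions that
    look down at the ground, preserving the stable ground-plane cue.
    """
    inside_cone = math.hypot(yaw_deg, pitch_deg) <= fov_limit_deg
    looks_at_ground = pitch_deg <= ground_pitch_deg  # keep ground visible
    return inside_cone or looks_at_ground
```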

Instant Reality: Gaze-Contingent Perceptual Optimization for 3D Virtual Reality Streaming (https://paper.nweon.com/11636)
PubDate: Jan 2022

Teams: New York University;Adobe Research

Writers: Shaoyu Chen;Budmonde Duinkharjav;Xin Sun;Li-Yi Wei;Stefano Petrangeli;Jose Echevarria;Claudio Silva;Qi Sun

PDF: Instant Reality: Gaze-Contingent Perceptual Optimization for 3D Virtual Reality Streaming

Abstract

Media streaming has been adopted for a variety of applications such as entertainment, visualization, and design. Unlike video/audio streaming, where the content is usually consumed sequentially, 3D applications such as gaming require streaming 3D assets to facilitate client-side interactions such as object manipulation and viewpoint movement. Compared to audio and video streaming, 3D streaming often requires larger data sizes yet lower latency to ensure sufficient rendering quality, resolution, and responsiveness for perceptual comfort. Thus, streaming 3D assets can be even more challenging than streaming audio/video, and existing solutions often suffer from long loading times or limited quality.
To address this critical and timely issue, we propose a perceptually-optimized progressive 3D streaming method for spatial quality and temporal consistency in immersive interactions. Based on the human visual mechanisms in the frequency domain, our model selects and schedules the streaming dataset for optimal spatial-temporal quality. We also train a neural network for our model to accelerate this decision process for real-time client-server applications. We evaluate our method via subjective studies and objective analysis under varying network conditions (from 3G to 5G) and client devices (HMD and traditional displays), and demonstrate better visual quality and temporal consistency than alternative solutions.
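
For intuition only, the selection/scheduling step can be approximated by a greedy heuristic that, under a per-frame byte budget, streams the asset refinements with the best perceptual-benefit-per-byte first. This is a simplification; the paper instead optimizes a frequency-domain perceptual model and accelerates the decision with a neural network:

```python
# Greedy benefit-per-byte scheduling of progressive 3D asset chunks.
# A stand-in for the paper's perceptual optimization, not its method.

def schedule(chunks, budget_bytes):
    """Pick chunks, given as (name, perceptual_benefit, size_bytes),
    greedily by benefit per byte until the budget is spent."""
    chosen = []
    for name, benefit, size in sorted(
            chunks, key=lambda c: c[1] / c[2], reverse=True):
        if size <= budget_bytes:
            chosen.append(name)
            budget_bytes -= size
    return chosen
```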

In-Device Feedback in Immersive Head-Mounted Displays for Distance Perception During Teleoperation of Unmanned Ground Vehicles (https://paper.nweon.com/11634)
PubDate: Jan 2022

Teams: Xi’an Jiaotong-Liverpool University;The University of Liverpool

Writers: Yiming Luo;Jialin Wang;Rongkai Shi;Hai-Ning Liang;Shan Luo

PDF: In-Device Feedback in Immersive Head-Mounted Displays for Distance Perception During Teleoperation of Unmanned Ground Vehicles

Abstract

In recent years, Virtual Reality (VR) Head-Mounted Displays (HMD) have been used to provide an immersive, first-person view in real time for the remote control of Unmanned Ground Vehicles (UGV). One critical issue is that it is challenging to perceive the distance of obstacles surrounding the vehicle from the 2D views in the HMD, which deteriorates control of the UGV. Conventional distance indicators used in HMDs take up screen space, which leads to clutter on the display and can further reduce situational awareness of the physical environment. To address this issue, in this paper we propose off-screen in-device feedback using vibro-tactile and/or light-visual cues to provide real-time distance information for the remote control of UGVs. Results from a study show significantly better performance with either feedback type, reduced workload, and improved usability in a driving task that requires continuous perception of the distance between the UGV and surrounding objects or obstacles. Our findings make a solid case for in-device vibro-tactile and/or light-visual feedback to support remote operation of UGVs, which relies heavily on distance perception.
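
A distance-to-feedback mapping of this kind is often a simple ramp: no feedback beyond a warning radius, full-strength vibration inside a danger radius, and a linear ramp in between. The radii below are assumptions for illustration, not the authors' values:

```python
# Illustrative distance-to-vibration mapping for obstacle feedback.
# WARN_DIST and DANGER_DIST are assumed values, not from the paper.

WARN_DIST = 2.0    # metres: feedback starts (assumed)
DANGER_DIST = 0.3  # metres: full-strength vibration (assumed)

def vibration_intensity(distance):
    """Map obstacle distance (m) to a vibration intensity in [0, 1]."""
    if distance >= WARN_DIST:
        return 0.0
    if distance <= DANGER_DIST:
        return 1.0
    return (WARN_DIST - distance) / (WARN_DIST - DANGER_DIST)
```

The same scalar could drive a light-visual cue (e.g., LED brightness) instead of a vibration motor.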

A Review of Deep Learning Techniques for Markerless Human Motion on Synthetic Datasets (https://paper.nweon.com/11632)
PubDate: Jan 2022

Teams: Bishop’s University

Writers: Doan Duy Vo;Russell Butler

PDF: A Review of Deep Learning Techniques for Markerless Human Motion on Synthetic Datasets

Abstract

Markerless motion capture has become an active field of research in computer vision in recent years. It has extensive applications in a great variety of fields, including computer animation, human motion analysis, biomedical research, virtual reality, and sports science. Estimating human posture has recently gained increasing attention in the computer vision community, but it remains a challenging task due to depth uncertainty and the lack of synthetic datasets. Various approaches have recently been proposed to solve this problem, many of them based on deep learning; they primarily focus on improving performance on existing benchmarks, with significant advances especially on 2D images. Based on powerful deep learning techniques and recently collected real-world datasets, we explore a model that can predict the skeleton of an animation based solely on 2D images. Frames were generated from different real-world datasets with synthesized poses, using body shapes from simple to complex. The implementation uses DeepLabCut on its own dataset to perform the necessary steps, then uses the input frames to train the model. The output is an animated skeleton of human movement. The composite dataset and other results serve as the “ground truth” for the deep model.

Security Considerations for Virtual Reality Systems https://paper.nweon.com/11630 Wed, 12 Jan 2022 05:01:20 +0000

PubDate: Jan 2022

Teams: Kennesaw State University;University of Guelph

Writers: Karthik Viswanathan, Abbas Yazdinejad

PDF: Security Considerations for Virtual Reality Systems

Abstract

There is a growing need for authentication methodology in virtual reality applications. Current systems assume that the immersive experience technology is a collection of peripheral devices connected to a personal computer or mobile device; hence there is complete reliance on the computing device, with traditional authentication mechanisms, to handle authentication and authorization decisions. Using the virtual reality controllers and headset poses a different set of challenges, as it is subject to unauthorized observation that goes unnoticed by the user, given that the headset completely covers the field of vision in order to provide an immersive experience. As the commercial demand for virtual reality experiences increases, there is a need to provide alternative mechanisms for secure authentication. In this paper, we analyze several proposed authentication systems and conclude that a multidimensional approach to authentication is needed to address the granular nature of the authentication and authorization needs of commercial virtual reality applications.

De-rendering 3D Objects in the Wild https://paper.nweon.com/11628 Wed, 12 Jan 2022 04:22:24 +0000

PubDate: Jan 2022

Teams: University of Oxford

Writers: Felix Wimbauer, Shangzhe Wu, Christian Rupprecht

PDF: De-rendering 3D Objects in the Wild

Abstract

With increasing focus on augmented and virtual reality applications (XR) comes the demand for algorithms that can lift objects from images and videos into representations that are suitable for a wide variety of related 3D tasks. Large-scale deployment of XR devices and applications means that we cannot solely rely on supervised learning, as collecting and annotating data for the unlimited variety of objects in the real world is infeasible. We present a weakly supervised method that is able to decompose a single image of an object into shape (depth and normals), material (albedo, reflectivity and shininess) and global lighting parameters. For training, the method only relies on a rough initial shape estimate of the training objects to bootstrap the learning process. This shape supervision can come for example from a pretrained depth network or – more generically – from a traditional structure-from-motion pipeline. In our experiments, we show that the method can successfully de-render 2D images into a decomposed 3D representation and generalizes to unseen object categories. Since in-the-wild evaluation is difficult due to the lack of ground truth data, we also introduce a photo-realistic synthetic test set that allows for quantitative evaluation.
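The decomposition into shape, material, and lighting inverts a rendering model. As a minimal illustration (not the paper's actual model, which also recovers normals, reflectivity, and shininess), a Lambertian forward shader for a single pixel looks like this:

```python
def shade(albedo, normal, light_dir, ambient=0.1):
    """Forward Lambertian shading for one pixel.

    De-rendering inverts a model like this: given the observed
    color, recover albedo, normal, and lighting. The ambient term
    is an illustrative assumption; specular effects are omitted.
    """
    # Diffuse term: clamped dot product of unit normal and light direction.
    dot = sum(n * l for n, l in zip(normal, light_dir))
    diffuse = max(0.0, dot)
    return [a * (ambient + diffuse) for a in albedo]

# A surface facing the light is fully lit; a back-facing one
# receives only the ambient term.
print(shade([0.5, 0.4, 0.3], (0, 0, 1), (0, 0, 1)))
print(shade([0.5, 0.4, 0.3], (0, 0, -1), (0, 0, 1)))
```

The learning problem is hard precisely because many (albedo, normal, lighting) combinations explain the same observed pixel, which is why the paper bootstraps with a rough shape estimate.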

Stay in Touch! Shape and Shadow Influence Surface Contact in XR Displays https://paper.nweon.com/11626 Wed, 12 Jan 2022 02:43:28 +0000

PubDate: Jan 2022

Teams: Vanderbilt University;University of Utah

Writers: Haley Adams, Holly Gagnon, Sarah Creem-Regehr, Jeanine Stefanucci, Bobby Bodenheimer

PDF: Stay in Touch! Shape and Shadow Influence Surface Contact in XR Displays

Abstract

The information provided to a person’s visual system by extended reality (XR) displays is not a veridical match to the information provided by the real world. Due in part to graphical limitations in XR head-mounted displays (HMDs), which vary by device, our perception of space may be altered. However, we do not yet know which properties of virtual objects rendered by HMDs – particularly augmented reality displays – influence our ability to understand space. In the current research, we evaluate how immersive graphics affect spatial perception across three unique XR displays: virtual reality (VR), video see-through augmented reality (VST AR), and optical see-through augmented reality (OST AR). We manipulated the geometry of the presented objects as well as the shading techniques for objects’ cast shadows. Shape and shadow were selected for evaluation as they play an important role in determining where an object is in space by providing points of contact between an object and its environment – be it real or virtual. Our results suggest that a non-photorealistic (NPR) shading technique, in this case for cast shadows, may be used to improve depth perception by enhancing perceived surface contact in XR. Further, the benefit of NPR graphics is more pronounced in AR than in VR displays. One’s perception of ground contact is influenced by an object’s shape, as well. However, the relationship between shape and surface contact perception is more complicated.

ENI: Quantifying Environment Compatibility for Natural Walking in Virtual Reality https://paper.nweon.com/11624 Wed, 12 Jan 2022 02:13:20 +0000

PubDate: Jan 2022

Teams: University of Maryland

Writers: Niall L. Williams, Aniket Bera, Dinesh Manocha

PDF: ENI: Quantifying Environment Compatibility for Natural Walking in Virtual Reality

Abstract

We present a novel metric to analyze the similarity between the physical environment and the virtual environment for natural walking in virtual reality. Our approach is general and can be applied to any pair of physical and virtual environments. We use geometric techniques based on conforming constrained Delaunay triangulations and visibility polygons to compute the Environment Navigation Incompatibility (ENI) metric that can be used to measure the complexity of performing simultaneous navigation. We demonstrate applications of ENI for highlighting regions of incompatibility for a pair of environments, guiding the design of the virtual environments to make them more compatible with a fixed physical environment, and evaluating the performance of different redirected walking controllers. We validate the ENI metric using simulations and two user studies. Results of our simulations and user studies show that in the environment pair that our metric identified as more navigable, users were able to walk for longer before colliding with objects in the physical environment. Overall, ENI is the first general metric that can automatically identify regions of high and low compatibility in physical and virtual environments. Our project website is available at this https URL.
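The paper computes ENI from conforming constrained Delaunay triangulations and visibility polygons. As a far cruder stand-in that only conveys the idea of measuring physical/virtual incompatibility, one can compare two occupancy grids cell by cell (the function and grid encoding below are illustrative assumptions, not the ENI metric):

```python
def grid_incompatibility(physical, virtual):
    """Fraction of cells whose free/occupied status differs.

    A crude stand-in for ENI: 0.0 means the two occupancy grids
    agree everywhere, 1.0 means they disagree everywhere. The real
    metric instead reasons about visibility polygons over a
    constrained Delaunay triangulation.
    """
    cells = [(p, v) for prow, vrow in zip(physical, virtual)
             for p, v in zip(prow, vrow)]
    mismatches = sum(1 for p, v in cells if p != v)
    return mismatches / len(cells)

physical = [[0, 0, 1],
            [0, 0, 1]]   # 1 = obstacle
virtual  = [[0, 0, 0],
            [0, 1, 1]]
print(grid_incompatibility(physical, virtual))  # 2 of 6 cells differ
```

A per-region version of such a score is what lets the metric highlight which parts of a virtual environment are hardest to navigate given a fixed physical room.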

The Time Perception Control and Regulation in VR Environment https://paper.nweon.com/11622 Tue, 11 Jan 2022 06:37:20 +0000

PubDate: Dec 2021

Teams: University of Electronic Science and Technology of China;Glasgow College

Writers: Zhitao Liu, Jinke Shi, Junhao He, Yu Wu, Ning Xie, Ke Xiong, Yutong Liu

PDF: The Time Perception Control and Regulation in VR Environment

Abstract

To adapt to different environments, human circadian rhythms are constantly adjusted as the environment changes, following the principle of survival of the fittest. According to this principle, objective factors (such as circadian rhythms and light intensity) can be utilized to control time perception. The subjective judgment of the estimation of elapsed time is called time perception. In the physical world, factors that can affect time perception, represented by illumination, are called zeitgebers. In recent years, with the development of Virtual Reality (VR) technology, effective control of zeitgebers has become possible, which is difficult to achieve in the physical world. Building on previous studies, this paper explores the actual performance in a VR environment of four types of zeitgebers (music, color, cognitive load, and concentration) that have been proven to have a certain impact on time perception in the physical world. It also discusses the measurement of the difference between human time perception and objectively elapsed time in the physical world.

Bottom-up approaches for multi-person pose estimation and it’s applications: A brief review https://paper.nweon.com/11620 Tue, 11 Jan 2022 04:25:19 +0000

PubDate: Dec 2021

Teams: Norwegian University of Science and Technology

Writers: Milan Kresović, Thong Duy Nguyen

PDF: Bottom-up approaches for multi-person pose estimation and it’s applications: A brief review

Abstract

Human Pose Estimation (HPE) is one of the fundamental problems in computer vision, with applications ranging from virtual reality, human behavior analysis, video surveillance, anomaly detection, and self-driving to medical assistance. The main objective of HPE is to obtain the person’s posture from the given input. Among the different paradigms for HPE, one is called bottom-up multi-person pose estimation. In the bottom-up approach, all the key points of the targets are detected first; later, in the optimization stage, the detected key points are associated with the corresponding targets. This review paper discusses recent advancements in bottom-up approaches for HPE and lists high-quality datasets used to train the models. Additionally, a discussion of the prominent bottom-up approaches and their quantitative results on the standard performance metrics is given. Finally, the limitations of the existing methods are highlighted, and guidelines for future research directions are given.
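The detect-then-group pipeline described above can be sketched with a toy association step. Real bottom-up methods use richer grouping cues (e.g. part affinity fields or associative embeddings); the greedy nearest-center rule, threshold, and names below are illustrative assumptions:

```python
def group_keypoints(keypoints, centers, max_dist=50.0):
    """Associate detected keypoints with person centers.

    A minimal stand-in for the optimization stage of bottom-up
    multi-person pose estimation: each keypoint is greedily
    assigned to the nearest person center within max_dist;
    keypoints with no nearby center are discarded.
    """
    groups = {i: [] for i in range(len(centers))}
    for kp in keypoints:
        dists = [((kp[0] - c[0]) ** 2 + (kp[1] - c[1]) ** 2) ** 0.5
                 for c in centers]
        best = min(range(len(centers)), key=lambda i: dists[i])
        if dists[best] <= max_dist:
            groups[best].append(kp)
    return groups

centers = [(100, 100), (300, 100)]           # two detected people
keypoints = [(95, 90), (110, 105), (305, 98)]
print(group_keypoints(keypoints, centers))
# keypoints near (100, 100) go to person 0, the last to person 1
```

The hard cases the surveyed methods address (overlapping people, occlusion) are exactly where such a purely distance-based rule breaks down.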

Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects https://paper.nweon.com/11618 Tue, 11 Jan 2022 03:04:20 +0000

PubDate: Dec 2021

Teams: NVIDIA; The University of Tokyo; RIKEN

Writers: Atsuhiro Noguchi, Umar Iqbal, Jonathan Tremblay, Tatsuya Harada, Orazio Gallo

PDF: Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects

Abstract

Rendering articulated objects while controlling their poses is critical to applications such as virtual reality or animation for movies. Manipulating the pose of an object, however, requires the understanding of its underlying structure, that is, its joints and how they interact with each other. Unfortunately, assuming the structure to be known, as existing methods do, precludes the ability to work on new object categories. We propose to learn both the appearance and the structure of previously unseen articulated objects by observing them move from multiple views, with no additional supervision, such as joints annotations, or information about the structure. Our insight is that adjacent parts that move relative to each other must be connected by a joint. To leverage this observation, we model the object parts in 3D as ellipsoids, which allows us to identify joints. We combine this explicit representation with an implicit one that compensates for the approximation introduced. We show that our method works for different structures, from quadrupeds, to single-arm robots, to humans.

Pseudo-Haptic Button for Improving User Experience of Mid-Air Interaction in VR https://paper.nweon.com/11616 Tue, 11 Jan 2022 02:25:27 +0000

PubDate: Dec 2021

Teams: Korea Advanced Institute of Science and Technology

Writers: Woojoo Kim, Shuping Xiong

PDF: Pseudo-Haptic Button for Improving User Experience of Mid-Air Interaction in VR

Abstract

Mid-air interaction is one of the promising interaction modalities in virtual reality (VR) due to its merits in naturalness and intuitiveness, but the interaction suffers from the lack of haptic feedback as no force or vibrotactile feedback can be provided in mid-air. As a breakthrough to compensate for this insufficiency, the application of pseudo-haptic features which create the visuo-haptic illusion without actual physical haptic stimulus can be explored. Therefore, this study aimed to investigate the effect of four pseudo-haptic features: proximity feedback, protrusion, hit effect, and penetration blocking on user experience for free-hand mid-air button interaction in VR. We conducted a user study on 21 young subjects to collect user ratings on various aspects of user experience while users were freely interacting with 16 buttons with different combinations of four features. Results indicated that all investigated features significantly improved user experience in terms of haptic illusion, embodiment, sense of reality, spatiotemporal perception, satisfaction, and hedonic quality. In addition, protrusion and hit effect were more beneficial in comparison with the other two features. It is recommended to utilize the four proposed pseudo-haptic features in 3D user interfaces (UIs) to make users feel more pleased and amused, but caution is needed when using proximity feedback together with other features. The findings of this study could be helpful for VR developers and UI designers in providing better interactive buttons in the 3D interfaces.
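One of the four features, penetration blocking, can be sketched in a few lines; the function name and travel distance below are illustrative assumptions, not values from the study:

```python
def pseudo_haptic_depth(real_depth_mm, travel_mm=4.0):
    """Clamp the rendered press depth of a virtual button.

    Illustrates 'penetration blocking': however far the real hand
    pushes past the button surface, the rendered finger stops at
    the button's travel distance, creating the visuo-haptic
    illusion of a rigid surface without any physical feedback.
    """
    return min(max(real_depth_mm, 0.0), travel_mm)

# The rendered finger follows the real one until full travel,
# then stays pinned at the surface.
for real in (0.0, 2.5, 4.0, 9.0):
    print(real, "->", pseudo_haptic_depth(real))
```

The other three features (proximity feedback, protrusion, hit effect) are likewise purely visual manipulations layered on the same hand-tracking signal.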

Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study https://paper.nweon.com/11613 Thu, 06 Jan 2022 07:13:23 +0000

PubDate: Dec 2021

Teams: DeepMind Technologies;University of Alberta

Writers: Dylan J. A. Brenneis, Adam S. Parker, Michael Bradley Johanson, Andrew Butcher, Elnaz Davoodi, Leslie Acker, Matthew M. Botvinick, Joseph Modayil, Adam White, Patrick M. Pilarski

PDF: Assessing Human Interaction in Virtual Reality With Continually Learning Prediction Agents Based on Reinforcement Learning Algorithms: A Pilot Study

Abstract

Artificial intelligence systems increasingly involve continual learning to enable flexibility in general situations that are not encountered during system training. Human interaction with autonomous systems is broadly studied, but research has hitherto under-explored interactions that occur while the system is actively learning, and can noticeably change its behaviour in minutes. In this pilot study, we investigate how the interaction between a human and a continually learning prediction agent develops as the agent develops competency. Additionally, we compare two different agent architectures to assess how representational choices in agent design affect the human-agent interaction. We develop a virtual reality environment and a time-based prediction task wherein learned predictions from a reinforcement learning (RL) algorithm augment human predictions. We assess how a participant’s performance and behaviour in this task differs across agent types, using both quantitative and qualitative analyses. Our findings suggest that human trust of the system may be influenced by early interactions with the agent, and that trust in turn affects strategic behaviour, but limitations of the pilot study rule out any conclusive statement. We identify trust as a key feature of interaction to focus on when considering RL-based technologies, and make several recommendations for modification to this study in preparation for a larger-scale investigation. A video summary of this paper can be found at this https URL.
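The prediction agents learn in the spirit of temporal-difference learning. As a generic illustration (a tabular sketch, not the study's actual agent architecture or task), one TD(0) update looks like this:

```python
def td0_update(v, s, s_next, reward, alpha=0.1, gamma=0.9):
    """One tabular TD(0) update of a value-function estimate.

    The estimate for state s is nudged toward the one-step
    bootstrapped target reward + gamma * V(s'). The step size
    and discount are illustrative defaults.
    """
    old = v.get(s, 0.0)
    target = reward + gamma * v.get(s_next, 0.0)
    v[s] = old + alpha * (target - old)
    return v

v = {}
# Repeatedly observing reward 1.0 after "cue" drives V("cue") upward,
# which is the sense in which the agent's competency develops over time.
for _ in range(3):
    td0_update(v, "cue", "after", 1.0)
print(round(v["cue"], 3))  # 0.271
```

Because such estimates visibly improve within minutes, a participant's trust can shift during the session itself, which is the interaction dynamic the pilot study probes.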
