Nweon Paper
https://paper.nweon.com
Nweon (映维网) is an information and data platform covering the virtual reality (VR) and augmented reality (AR) industries.

ARRay-Tracing – A Middleware to Provide Ray Tracing Capabilities to Augmented Reality Libraries
https://paper.nweon.com/8204

PubDate: November 2020

Teams: Federal University of Juiz de Fora

Writers: Lidiane T. Pereira; Wellingston Cataldo R. Junior; Jairo F. Souza; Rodrigo L. S. Silva

PDF: ARRay-Tracing – A Middleware to Provide Ray Tracing Capabilities to Augmented Reality Libraries

Abstract

In recent years, we have seen the growth and popularization of Augmented Reality applications. However, the visual mismatch between real and virtual elements produces a lack of realism, discouraging the use of these applications. Rendering techniques such as Ray Tracing can produce highly photorealistic scenes. Despite their high computational cost, recent advances in graphics hardware allow the use of these techniques in real-time applications, such as those of Augmented Reality. In the literature, we can find some works combining these two technologies, AR and Ray Tracing, but rigidly, without modularization, making the solutions dependent on particular frameworks. In this work, we propose a middleware to integrate Augmented Reality and Ray Tracing in a modularized way, allowing developers to switch between existing libraries and frameworks to better fit their needs and expertise. To evaluate the proposal, we built an application using the artoolkitX library and the OptiX framework. Through our middleware, we verified that it was possible to integrate these tools in a simple way, maintaining performance and photorealism in our AR application.
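
As a rough illustration of the modular design the abstract describes, the sketch below shows one way such a middleware could hide both the AR library and the ray tracer behind swappable interfaces; all class and method names here are hypothetical, not the paper's actual API.

```python
# Hypothetical middleware sketch: the AR library and the ray tracer sit
# behind swappable interfaces. Names are illustrative only.
from abc import ABC, abstractmethod

class Tracker(ABC):
    """Wraps an AR library (e.g. artoolkitX) behind a common interface."""
    @abstractmethod
    def camera_frame(self): ...   # current RGB camera image
    @abstractmethod
    def camera_pose(self): ...    # 4x4 world-from-camera matrix

class RayTracer(ABC):
    """Wraps a ray-tracing framework (e.g. OptiX) behind a common interface."""
    @abstractmethod
    def render(self, background, camera_pose): ...

class ARRayTracingMiddleware:
    """Glues any Tracker to any RayTracer, so either side can be swapped."""
    def __init__(self, tracker: Tracker, renderer: RayTracer):
        self.tracker, self.renderer = tracker, renderer

    def next_frame(self):
        # Ray trace the virtual content over the live camera image, using
        # the tracked pose so real and virtual content stay registered.
        return self.renderer.render(background=self.tracker.camera_frame(),
                                    camera_pose=self.tracker.camera_pose())
```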

Walking With Augmented Reality: A Preliminary Assessment of Visual Feedback With a Cable-Driven Active Leg Exoskeleton (C-ALEX)
https://paper.nweon.com/8202

PubDate: July 2019

Teams: Columbia University;Quadrus Medical Technologies

Writers: Rand Hidayah; Siddharth Chamarthy; Avni Shah; Matthew Fitzgerald-Maguire; Sunil K. Agrawal

PDF: Walking With Augmented Reality: A Preliminary Assessment of Visual Feedback With a Cable-Driven Active Leg Exoskeleton (C-ALEX)

Abstract

Visual and force feedback are common elements in rehabilitation robotics, but visual feedback is difficult to provide in over-ground mobile exoskeleton systems. This letter aims to provide a method to integrate visual feedback using an augmented reality HoloLens headset with our mobile C-ALEX system. A preliminary study was carried out to assess the effects of providing force-only (Haptic), force and visual (HoloHapt), or visual-only (Visual) feedback to three independent groups, each containing eight participants. The groups showed an increase in normalized step height, nSH (HoloHapt: 1.10 ± 0.13, Haptic: 1.03 ± 0.23, Visual: 1.61 ± 0.52), and decreased normalized trajectory tracking error, TE (HoloHapt: 42.8% ± 23.4%, Haptic: 47.6% ± 18.4%, Visual: 114.2% ± 60.0%). Visual nSH differed significantly from HoloHapt and Haptic nSH (p < 0.005). Lap-wise normalized tracking error differed significantly (p < 0.005) within participants. The results show the feasibility of and differences between each form of feedback for overground gait training. This information is useful for future studies targeted at patients with gait impairments.

Sound Field Translation Methods for Binaural Reproduction
https://paper.nweon.com/8200

PubDate: December 2019

Teams: The Australian National University

Writers: Lachlan Birnie; Thushara Abhayapala; Prasanga Samarasinghe; Vladimir Tourbabin

PDF: Sound Field Translation Methods for Binaural Reproduction

Abstract

Virtual-reality reproduction of real-world acoustic environments often fixes the listener position to that of the microphone. In this paper, we propose a method for listener translation in a virtual reproduction that incorporates a mix of near-field and far-field sources. Compared to conventional plane-wave techniques, the mixed-source method offers stronger near-field reproduction and translation capabilities in the case of a sparse virtualization.
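
To make the near-field/far-field distinction concrete, here is a small numpy sketch (our own illustration, not the paper's method) of how a listener translation affects the two source types: a point source changes in both amplitude and phase with the new distance, while a plane wave only acquires a phase shift.

```python
import numpy as np

def translated_weights(t, point_srcs, plane_dirs, k):
    """Toy sketch: complex re-weighting of each source when the listener
    moves from the origin to position t. point_srcs: (N, 3) near-field
    source positions; plane_dirs: (M, 3) unit propagation directions of
    far-field plane waves; k: wavenumber."""
    t = np.asarray(t, dtype=float)
    srcs = np.asarray(point_srcs, dtype=float)
    # Near-field point source ~ exp(-jkr)/r: amplitude and phase both
    # change with the listener-to-source distance.
    r0 = np.linalg.norm(srcs, axis=1)        # distance to origin
    r1 = np.linalg.norm(srcs - t, axis=1)    # distance to moved listener
    near = (r0 / r1) * np.exp(-1j * k * (r1 - r0))
    # Far-field plane wave: translation only adds a phase shift, which is
    # why plane-wave-only models reproduce nearby sources poorly.
    far = np.exp(-1j * k * (np.asarray(plane_dirs, dtype=float) @ t))
    return near, far
```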

Multimodal interface for temporal pattern based interactive large volumetric visualization
https://paper.nweon.com/8198

PubDate: December 2017

Teams: Indian Institute of Information Technology;Nanyang Technological University

Writers: Piyush Kumar; Anupam Agrawal; Shitala Prasad

PDF: Multimodal interface for temporal pattern based interactive large volumetric visualization

Abstract

Scientific data visualization is a prominent area of research in the development of Virtual Reality applications, with the aim of making them more interactive and robotic. However, efficient interaction with large medical datasets remains a challenging task for physicians learning in virtual surgical environments. In this paper, we propose a multimodal interface for GPU-accelerated, interactive, large-scale volumetric data rendering to overcome this limitation. The large dataset is pre-processed using an octree method, and an improved raycasting algorithm is used together with a transfer-function classification method for effective rendering. Temporal patterns retrieved from a wearable device are used to define gestures, providing multimodal interaction with the rendered data. These patterns form a gesture vocabulary for navigating the large-scale medical data, consisting of five interactive postures used for Normal, Picking, Rotation, Dragging, and Zooming gestures. The gesture vocabulary is classified with the kNN pattern-recognition method. Experimental results of the proposed approach are analyzed with ANOVA and t-tests in SPSS version 20, including confidence intervals for interaction with the hand-gesture vocabulary. The results are further compared with existing approaches that used the Microsoft Kinect and the P5 dataglove. The proposed system is operated with the DG5 VHand 2.0 Bluetooth hand dataglove as a wearable assistive device to achieve effective interaction, and has been tested on 10 volume datasets ranging in size from 10 MB to 3.15 GB. The broader scope of this work is to develop training systems with a robotic arm in the medical domain.
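
As an illustration of the classification step, the snippet below sketches a plain kNN vote over dataglove feature vectors; the five gesture labels come from the abstract, while the feature layout and distance metric are assumptions.

```python
import numpy as np

# Illustrative kNN gesture classifier; labels follow the abstract, the
# feature representation is assumed (e.g. smoothed finger-bend readings).
GESTURES = ["Normal", "Picking", "Rotation", "Dragging", "Zooming"]

def classify_gesture(sample, train_x, train_y, k=5):
    """sample: 1-D feature vector; train_x: (N, D) labelled patterns;
    train_y: (N,) int indices into GESTURES. Returns the majority label
    among the k nearest training patterns."""
    dists = np.linalg.norm(train_x - sample, axis=1)   # Euclidean distances
    nearest = np.asarray(train_y)[np.argsort(dists)[:k]]
    votes = np.bincount(nearest, minlength=len(GESTURES))
    return GESTURES[int(votes.argmax())]
```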

Human Pose Tracking from RGB Inputs
https://paper.nweon.com/8196

PubDate: August 2019

Teams: UFPE

Writers: Ricardo R. Barioni; Lucas Figueiredo; Kelvin Cunha; Veronica Teichrieb

PDF: Human Pose Tracking from RGB Inputs

Abstract

In the context of Virtual and Augmented Reality, obtaining the configuration of human poses is fundamental for systems to provide natural interaction through gestures and a general understanding of user body behavior. Obtaining such poses from RGB camera images opens up a wide range of applications in security (e.g., local activity monitoring), healthcare (e.g., postural analysis), and entertainment (e.g., motion capture for games and animations). However, acquiring human poses solely from RGB images is still considered a challenge, since pure visual data does not explicitly provide the locations of the human body joints (keypoints in pixels) in the image. In this work, we propose a machine learning method, specifically deep learning based on convolutional neural networks, capable of tackling this problem.
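
A common concrete form of this approach (a generic sketch, not necessarily the authors' exact network) is to have the CNN output one heatmap per joint and decode each keypoint as the rescaled heatmap argmax:

```python
import numpy as np

def decode_keypoints(heatmaps, image_w, image_h):
    """Generic decoding step for heatmap-based 2D pose CNNs: the network
    emits one heatmap per joint; each keypoint is the heatmap argmax
    rescaled to image coordinates, with the peak value as confidence."""
    n_joints, hm_h, hm_w = heatmaps.shape
    keypoints = []
    for j in range(n_joints):
        y, x = divmod(int(np.argmax(heatmaps[j])), hm_w)
        keypoints.append((x * image_w / hm_w,        # pixel x
                          y * image_h / hm_h,        # pixel y
                          float(heatmaps[j, y, x])))  # confidence score
    return keypoints
```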

Creating an ambient intelligence network using insight and merged reality technologies
https://paper.nweon.com/8194

PubDate: January 2018

Teams: Middlesex University

Writers: Ralph Moseley

PDF: Creating an ambient intelligence network using insight and merged reality technologies

Abstract

Humans live and work in environments which are essentially “dumb”, though recently, due to information networks, devices within these areas have increasingly become connected. The system presented here builds on previous work to create an ambient intelligence zone using facets of a merged reality system and a new process based on recognition/insight patterns. When combined, agents within the system communicate and react as one to form a responsive ambient intelligence at a given location.

An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans
https://paper.nweon.com/8192

PubDate: October 2018

Teams: University of Toronto

Writers: Michael Nixon; Steve DiPaola; Ulysses Bernardet

PDF: An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans

Abstract

Designing highly believable characters remains a major concern within digital games. Matching a chosen personality and other dramatic qualities to displayed behavior is an important part of improving overall believability. Gaze is a critical component of social exchanges and serves to make characters engaging or aloof, as well as to establish a character's role in a conversation. In this paper, we investigate the communication of status-related social signals by means of a virtual human's eye gaze. We constructed a cross-domain verbal-conceptual computational model of gaze for virtual humans to facilitate the display of social status. We describe the validation of the model's parameters, including the length of eye contact and gazes, movement velocity, equilibrium response, and head and body posture. In a first set of studies, conducted on Amazon Mechanical Turk using prerecorded video clips of animated characters, we found statistically significant differences in how the characters' status was rated based on the variation in social status. In a second step, based on these empirical findings, we designed an interactive system that incorporates dynamic eye tracking and spoken dialog, along with real-time control of a virtual character. We evaluated the model using a presential, interactive scenario of a simulated hiring interview. Corroborating our previous findings, the interactive study yielded significant differences in the perception of status (p = .046). Thus, we believe status is an important aspect of dramatic believability, and accordingly, this paper presents our social eye gaze model for realistic procedurally animated characters and shows its efficacy.

Comparison of Multi-Layer Perceptron and Cascade Feed-Forward Neural Network for Head-Related Transfer Function Interpolation
https://paper.nweon.com/8190

PubDate: June 2019

Teams: Vilnius Gediminas Technical University

Writers: Mantas Tamulionis; Artūras Serackis

PDF: Comparison of Multi-Layer Perceptron and Cascade Feed-Forward Neural Network for Head-Related Transfer Function Interpolation

Abstract

Acoustic Virtual Reality (AVR) is a popular field of today's research, and the technologies it explores allow users to experience virtual reality even more interactively, creating a sense of being truly immersed in a virtual acoustic field. Auralization is one of the most interesting and useful AVR techniques. This procedure makes it possible to simulate how sound waves will behave in a particular environment, including how the listener will perceive them. This is achieved by taking into account the Head-Related Transfer Function (HRTF), which is essential for creating the main auralization product, the Binaural Room Impulse Response (BRIR). It is common to use pre-recorded HRTF databases, but the required HRTF values can also be modeled using artificial neural networks (ANNs). This article presents an investigation of ANN application to HRTF interpolation from discrete measured functions. Two types of neural networks are investigated: a Multi-Layer Perceptron and a Cascade Feed-Forward Network. The experimental investigation showed that the additional feed of inputs to the hidden layers in the cascade network does not improve interpolation performance. The best results were obtained using a Multi-Layer Perceptron with two hidden layers of 32 and 16 neurons, respectively.
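
For reference, the best-performing configuration reported above (two hidden layers with 32 and 16 neurons) can be reproduced in spirit with an off-the-shelf regressor; the placeholder data, activation choice, and iteration budget below are our assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training data: inputs are measured directions (azimuth,
# elevation), outputs are HRTF magnitude spectra (here 128 bins). Real use
# would load a measured HRTF database instead.
rng = np.random.default_rng(0)
measured_dirs = rng.uniform(size=(100, 2))
measured_hrtfs = rng.uniform(size=(100, 128))

# Two hidden layers with 32 and 16 neurons, as reported best above.
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), activation="relu",
                   max_iter=2000, random_state=0)
mlp.fit(measured_dirs, measured_hrtfs)

# The trained network interpolates the HRTF at an unmeasured direction:
hrtf_estimate = mlp.predict(np.array([[0.3, 0.7]]))
```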

Stereoscopic visualization of 3D model using OpenGL
https://paper.nweon.com/8188

PubDate: March 2018

Teams: Institute of Information and Control, Hangzhou Dianzi University;Chinese Academy of Sciences

Writers: Zunjie Zhu; Chenggang Yan; Liang Li; Yongning Ren; Qiqi Luo; Jun Li

PDF: Stereoscopic visualization of 3D model using OpenGL

Abstract

Three-dimensional display technology is both the key to and the foundation of Virtual Reality (VR) systems, and depth perception, achieved through binocular disparity, provides a significant benefit to 3D display. In the current market, the 3D effect is typically produced with a double-viewpoint method that encodes the distance information of the scene. Based on stereo vision and OpenGL, this paper extracts multi-view images from a virtual 3D model and transforms them into a stereoscopic disparity map to address the 3D display problem. The visualization system comprises two parts: reading the 3D model and creating the binocular disparity map. The former covers the process of reading vertex information and drawing the vertices; the latter consists of the monocular transformation algorithm and the drawing of the double-viewpoint map.
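
A minimal sketch of the double-viewpoint idea, assuming the common parallel-axis approximation (the paper's exact monocular transformation algorithm may differ):

```python
import numpy as np

def stereo_view_matrices(mono_view, eye_separation=0.065):
    """Derive left/right view matrices from a monocular one by offsetting
    each eye half the interocular distance along the camera x axis. In view
    space, moving the eye right means translating the world left, hence the
    sign convention below."""
    half = eye_separation / 2.0
    left_offset, right_offset = np.eye(4), np.eye(4)
    left_offset[0, 3] = +half    # left eye at -half: world shifts +half
    right_offset[0, 3] = -half   # right eye at +half: world shifts -half
    return left_offset @ mono_view, right_offset @ mono_view
```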

The Design and Implementation of a VR Gun Controller with Haptic Feedback
https://paper.nweon.com/8186

PubDate: March 2019

Teams: Rutgers University

Writers: Ali Rahimi; Het Patel; Hammad Ajmal; Sasan Haghani

PDF: The Design and Implementation of a VR Gun Controller with Haptic Feedback

Abstract

Virtual reality is often interpreted as an experience where the user is immersed in a responsive virtual world. Gaming in a virtual world often utilizes peripherals in order to enhance user immersion. This paper presents the design and implementation of a VR gun controller with haptic feedback for the HTC Vive. Compared to current gun controllers with recoil systems that cost between $260 and $2,400, the proposed design costs $150 while providing an immersive experience to the users.

Augmented Reality Dynamic Image Recognition Technology Based on Deep Learning Algorithm
https://paper.nweon.com/8184

PubDate: July 2020

Teams: Zhengzhou University

Writers: Qiuyun Cheng; Sen Zhang; Shukui Bo; Dengxi Chen; Haijun Zhang

PDF: Augmented Reality Dynamic Image Recognition Technology Based on Deep Learning Algorithm

Abstract

Augmented reality is a research hotspot developed on the basis of virtual reality, and its friendly human-computer interaction gives the technology very broad application prospects. Convolutional neural networks in deep learning have been widely used in the field of computer vision and have become an important tool for dynamic image recognition tasks. Combining deep learning and traditional machine learning techniques, this article uses convolutional neural networks to extract features from image data; the last layer of features is fed to a softmax classifier for recognition. The article combines a convolutional neural network, which learns good feature representations, with ensemble learning, which offers good recognition performance. In recognition tasks on the MNIST and CIFAR-10 databases, comparison experiments were performed by adjusting the improved convolutional neural network's hierarchical structure, activation function, descent algorithm, data augmentation, pooling selection, and number of feature maps. The network uses a pooling size of 3×3, more kernels (above 64), small receptive fields (2×2), and a deeper hierarchical structure. In addition, the ReLU activation function, gradient descent with momentum, and an augmented dataset are used. The results show that, under the stated experimental conditions, dynamic image recognition reaches a very low error rate on MNIST, and the error rate on CIFAR-10 is also acceptable.
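
The hyper-parameters listed in the abstract translate roughly into the following PyTorch sketch; the depth, channel growth beyond "above 64", and learning rate are our assumptions, not the paper's exact network.

```python
import torch.nn as nn
import torch.optim as optim

# 2x2 convolutions ("small receptive fields"), 3x3 pooling, >=64 feature
# maps, ReLU, and momentum SGD, per the abstract. Use in_channels=1 for
# MNIST and 3 for CIFAR-10.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=2), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(64, 128, kernel_size=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.LazyLinear(10),  # both MNIST and CIFAR-10 have 10 classes
)
# Softmax is folded into the cross-entropy loss during training.
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```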

Exploring Visuo-haptic Feedback Congruency in Virtual Reality
https://paper.nweon.com/8182

PubDate: October 2020

Teams: University of Lincoln

Writers: Benjamin Williams; Alexandra E. Garton; Christopher J. Headleand

PDF: Exploring Visuo-haptic Feedback Congruency in Virtual Reality

Abstract

Visuo-haptic feedback is an important aspect of virtual reality experiences, with several previous works investigating its benefits and effects. A key aspect of this domain is congruency of crossmodal feedback and how it affects users. However, an important sub-domain which has received surprisingly little focus is visuo-haptic congruency in an interactive multisensory setting. This is especially important given that multisensory integration is crucial to player immersion in the context of virtual reality video games. In this paper, we attempt to address this lack of research. To achieve this, a total of 50 participants played a virtual reality racing game with either congruent or incongruent visuo-haptic feedback. Specifically, these users engaged in a driving simulator with physical gear shift interfaces, with one treatment group using a stick-shift gearbox, and the other using a paddle-shift setup. The virtual car they drove (a Formula Rookie race car) was only visually congruent with the stick-shift setup. A motion simulator was also used to provide synchronous vestibular cues and diversify the range of modalities in multisensory integration. The racing simulator used was Project CARS 2, one of the world's most popular commercial racing simulators. Our findings showed no significant differences between the groups in measures of user presence or in-game performance, counter to previous work regarding visuo-haptic congruency. However, the Self-evaluation of Performance PQ subscale was notably close to significance. Our results can be used to better inform games and simulation developers, especially those targeting virtual reality.

Multi-User Redirected Walking and Resetting Using Artificial Potential Fields
https://paper.nweon.com/8180

PubDate: February 2019

Teams: Miami University

Writers: Eric R. Bachmann; Eric Hodgson; Cole Hoffbauer; Justin Messinger

PDF: Multi-User Redirected Walking and Resetting Using Artificial Potential Fields

Abstract

Head-mounted displays (HMDs) and large area position tracking systems can enable users to navigate virtual worlds through natural walking. Redirected walking (RDW) imperceptibly steers immersed users away from physical world obstacles allowing them to explore unbounded virtual worlds while walking in limited physical space. In cases of imminent collisions, resetting techniques can reorient them into open space. This work introduces categorically new RDW and resetting algorithms based on the use of artificial potential fields that “push” users away from obstacles and other users. Data from human subject experiments indicate that these methods reduce potential single-user resets by 66% and increase the average distance between resets by 86% compared to previous techniques. A live multi-user study demonstrates the viability of the algorithm with up to 3 concurrent users, and simulation results indicate that the algorithm scales efficiently up to at least 8 users and is effective with larger groups.
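
The core of the artificial-potential-field idea can be sketched in a few lines: each obstacle and each other user contributes a repulsive vector whose sum tells the controller where to steer. The inverse-square falloff and unit gain below are illustrative assumptions, not the paper's tuned formulation.

```python
import numpy as np

def apf_push(user_pos, other_users, wall_points, gain=1.0):
    """Sum of repulsive vectors from nearby users and wall sample points;
    a redirection/reset controller would steer the user along the result.
    Falloff and gain are illustrative choices."""
    user_pos = np.asarray(user_pos, dtype=float)
    push = np.zeros(2)
    for p in list(other_users) + list(wall_points):
        d = user_pos - np.asarray(p, dtype=float)
        dist = np.linalg.norm(d)
        if dist > 1e-6:
            push += (d / dist) * (gain / dist**2)  # stronger when closer
    return push  # e.g. rotate the virtual world so walking follows `push`
```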

FingerTouch: Touch Interaction Using a Fingernail-Mounted Sensor on a Head-Mounted Display for Augmented Reality
https://paper.nweon.com/8178

PubDate: May 2020

Teams: Korea University of Science and Technology;Korea Institute of Science and Technology

Writers: Ju Young Oh; Ji-Hyung Park; Jung-Min Park

PDF: FingerTouch: Touch Interaction Using a Fingernail-Mounted Sensor on a Head-Mounted Display for Augmented Reality

Abstract

This study proposes FingerTouch, a method of touch interaction using a head-mounted display for mobile augmented reality. FingerTouch allows users to manipulate virtual content with one-finger touch interaction regardless of the material or tilt of the plane the finger is touching. In addition, users can interact freely with virtual content using FingerTouch. As the prototype developed in this study uses only one inertial measurement unit sensor attached to a fingernail, it features high mobility and allows users to feel natural tactile feedback, which is important for performing everyday tasks. The user evaluation of FingerTouch indicated that it provides high accuracy, with an average cursor navigation error of 2.15 mm and an average finger gesture recognition accuracy of 95% across 22 participants, two surface orientations, and three surface materials.

Virtual bumps display based on electrical muscle stimulation
https://paper.nweon.com/8176

PubDate: May 2020

Teams: Kumamoto University

Writers: Takaya Ishimaru; Satoshi Saga

PDF: Virtual bumps display based on electrical muscle stimulation

Abstract

With the development of Virtual Reality (VR) technology, it is becoming possible to generate haptic sensations for virtual objects. However, in Augmented Reality (AR) environments with conventional haptic displays, it is difficult for the user to touch an object directly because the device comes between the user and the touched object. Therefore, we propose using electrical muscle stimulation (EMS) to present haptic sensations seamlessly in an AR environment. In this study, we constructed a system that reproduces bumps on a flat display using EMS. In addition, we conducted three psychophysical experiments to evaluate the effectiveness of our system.

Pano: Design and evaluation of a 360° through-the-lens technique
https://paper.nweon.com/8174

PubDate: April 2017

Teams: Inria Université de Bordeaux;Université de Bordeaux

Writers: Damien Clergeaud; Pascal Guitton

PDF: Pano: Design and evaluation of a 360° through-the-lens technique

Abstract

Virtual Reality experiments enable immersed users to perform virtual tasks in a Virtual Environment (VE). Before beginning a task, however, users must locate and select the different objects they will need in the VE. This first step is important and affects global performance on the virtual task. If a user takes too long to locate and select an object, the duration of the task increases. Moreover, both the comfort and global efficiency of users deteriorate as search and selection times increase. We have developed Pano, a technique which reduces this time by increasing the user's natural field of view. More precisely, we provide a 360° panoramic virtual image which is displayed in a specific window, called the PanoWindow. Thanks to the PanoWindow, users can perceive and interact with the part of the VE that is behind them without any additional head or body movement. In this paper, we present two user studies with 30 and 21 participants in different VEs. In the first study, participants were invited to perform object-finding tasks with and without Pano. The second study involved position-estimating tasks to determine whether the PanoWindow image enables users to build an accurate representation of the environment. First, the results show that Pano both reduces task duration and improves user comfort. Second, they demonstrate that good object-localization accuracy can be achieved using Pano.

Next-Generation Networking and Edge Computing for Mixed Reality Real-Time Interactive Systems
https://paper.nweon.com/8172

PubDate: July 2020

Teams: Tennessee Technological University;University of Nebraska Omaha;Colorado State University

Writers: Susmit Shannigrahi; Spyridon Mastorakis; Francisco R. Ortega

PDF: Next-Generation Networking and Edge Computing for Mixed Reality Real-Time Interactive Systems

Abstract

With the proliferation of head-mounted displays, cloud computing platforms, and machine learning algorithms, the next generation of AR/VR applications requires research in several directions: more capable hardware, more proficient software and algorithms, and novel network protocols. While the first two problems have received considerable attention, the networking component is the least explored of the three. This paper discusses the networking challenges encountered by the AR/VR community when experimenting with novel hardware, software, and computing platforms in a real-world environment. In this collaborative work, we discuss the current networking challenges both quantitatively (by analyzing the AR/VR network interactions of head-mounted displays) and qualitatively (by distributing a targeted community survey among AR/VR researchers). We show that cloud-provided network services are not ideal for next-generation AR/VR applications. We then present a Named Data Networking (NDN) based framework that can address these challenges by offering a hybrid edge-cloud model for the execution of AR/VR computational tasks.

A Low-cost Approach Towards Streaming 3D Videos of Large-scale Sport Events to Mixed Reality Headsets in Real-time
https://paper.nweon.com/8170

PubDate: May 2020

Teams: Auto-ID Labs MIT & ETHZ

Writers: Kevin Marty; Prithvi Rajasekaran; Yongbin Sun; Klaus Fuchs

PDF: A Low-cost Approach Towards Streaming 3D Videos of Large-scale Sport Events to Mixed Reality Headsets in Real-time

Abstract

Watching sports events via 3D rather than two-dimensional video streaming allows for increased immersion, e.g., via mixed reality headsets in comparison to traditional screens. So far, capturing 3D video of sports events has required expensive outside-in tracking with numerous cameras. This study demonstrates the feasibility of streaming sports content to mixed reality headsets as holograms in real time using inside-out tracking and low-cost equipment only. We demonstrate our system by streaming a race car on an indoor track as a 3D model, which is then rendered in a Magic Leap One headset. An onboard camera mounted on the race car provides the video stream used to localize the car via computer vision. The localization is estimated by an end-to-end convolutional neural network (CNN). The study compares three state-of-the-art CNN models in terms of accuracy and execution time, with PoseNet+LSTM achieving position and orientation accuracies of 0.35 m and 3.95°. The total streaming latency in this study was 1041 ms, suggesting the technical feasibility of streaming 3D sports content, e.g., on large playgrounds, in near real time onto mixed reality headsets.

Closed-Loop Calibration for Optical See-Through Near Eye Display with Infinity Focus
https://paper.nweon.com/8168

PubDate: April 2019

Teams: University of Pisa

Writers: Umberto Fontana; Fabrizio Cutolo; Nadia Cattari; Vincenzo Ferrari

PDF: Closed – Loop Calibration for Optical See-Through Near Eye Display with Infinity Focus

Abstract

In wearable augmented reality systems, optical see-through near-eye displays (OST NEDs) based on waveguides are becoming a standard, as they are generally preferred over solutions based on semi-reflective curved mirrors. This is mostly due to their ability to ensure reduced image distortion and a sufficiently wide eye motion box without the need for bulky optical and electronic components placed in front of the user's face and/or in the user's line of sight. In OST head-mounted displays (HMDs), the user's own view is augmented by optically combining it with the virtual content rendered on a two-dimensional (2D) microdisplay. To achieve a perfect combination of the light field of the real 3D world and the computer-generated 2D graphics projected on the display, an accurate alignment between real and virtual content must be achieved at the level of the NED imaging plane. To this end, we must know the exact position of the user's eyes within the HMD reference system. State-of-the-art methods model the eye-NED system as an off-axis pinhole camera and therefore include the contribution of the eye positions in the modelling of the intrinsic matrix of the eye-NED. In this paper, we describe a method for robustly calibrating OST NEDs that explicitly drops this assumption. To verify the accuracy of our method, we conducted a set of experiments in a setup comprising a commercial waveguide-based OST NED and a camera in place of the user's eye. We tested a set of different camera (or eye) positions within the eye box of the NED. The obtained results demonstrate that the proposed method yields accurate results in terms of real-to-virtual alignment, regardless of the position of the eyes within the eye box of the NED (Figure 1). The achieved viewing accuracy was 1.85 ± 1.37 pixels.
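
For context, the baseline eye-NED model the abstract contrasts against can be sketched as an off-axis pinhole whose principal point shifts with the eye position inside the eye box; the first-order shift formula below is a standard approximation, not taken from the paper.

```python
import numpy as np

def eye_ned_intrinsics(f_x, f_y, c_x0, c_y0, eye_offset, screen_dist):
    """Off-axis pinhole model of the eye-NED system: a lateral/vertical eye
    offset (ex, ey) inside the eye box, at distance screen_dist from the
    virtual image plane, shifts the principal point away from its nominal
    position (c_x0, c_y0). All symbols here are illustrative."""
    ex, ey = eye_offset
    c_x = c_x0 - f_x * ex / screen_dist   # principal-point shift in x
    c_y = c_y0 - f_y * ey / screen_dist   # principal-point shift in y
    return np.array([[f_x, 0.0, c_x],
                     [0.0, f_y, c_y],
                     [0.0, 0.0, 1.0]])
```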

Torso-mounted Vibrotactile Interface to Experimentally Induce Illusory Own-body Perceptions
https://paper.nweon.com/8166

PubDate: January 2020

Teams: Ecole Polytechnique Federale de Lausanne;Saitama University

Writers: Atena Fadaei Jouybari; Giulio Rognini; Masayuki Hara; Hannes Bleuler; Olaf Blanke

PDF: Torso-mounted Vibrotactile Interface to Experimentally Induce Illusory Own-body Perceptions

Abstract

Recent developments in virtual reality and robotic technologies have allowed investigation of the behavioural and brain mechanisms that ground self-consciousness in the multisensory (e.g., vision and touch) and sensorimotor processing of bodily signals. Yet, previous technological solutions for applying tactile stimuli to induce body illusions limit participants' movements, do not allow stimulation in dynamic environments (e.g., while the subject is walking), and can hardly be integrated into real-life settings and complex, interactive virtual reality environments. Here, we present the development and first validation of a new semi-wearable haptic system, based on vibration technology, to induce a range of bodily illusions that are of relevance for research in psychiatry. This is a first step towards the development of wearable haptic systems able to administer touch and induce specific bodily illusions under dynamic conditions and in real-life settings.

Realistic Interaction System for Human Hand in Virtual Environments
https://paper.nweon.com/8164

PubDate: June 2020

Teams: Changchun University of Science and Technology

Writers: Wei Quan; He Yang; Cheng Han; Yinong Li

PDF: Realistic Interaction System for Human Hand in Virtual Environments

Abstract

In order to enhance the diversity of interaction between human hands and virtual objects, we adopt a physics-based method. Using the Coulomb friction model, we can interact with virtual objects in a variety of ways, such as pushing, pulling, and grasping, without any predefined data. We define two interaction states, non-interactive and interactive, and adopt different virtual hand posture update strategies in each state to effectively solve the problem of the virtual hand penetrating virtual objects during interaction. In the interactive state, we use an inverse kinematics method to adjust the virtual hand posture. Following the physiological constraints of the real hand, we establish motion constraints for the virtual fingers that make the virtual hand posture natural and realistic. Finally, an interaction system is developed. Experiments have shown that the proposed method supports real-time interaction and diverse interactive operations, and that the virtual hand posture presented during interaction is natural and realistic.
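
The Coulomb friction test at the heart of such physics-based grasping can be sketched as follows (a generic version of the named model, not the authors' implementation):

```python
import numpy as np

def contact_sticks(contact_force, surface_normal, mu):
    """Coulomb friction test used in physics-based grasping: the contact
    holds (the object can be pushed, pulled, or gripped) while the
    tangential force stays inside the friction cone, |f_t| <= mu * |f_n|."""
    n = surface_normal / np.linalg.norm(surface_normal)
    f_n = float(np.dot(contact_force, n))   # normal component
    f_t = contact_force - f_n * n           # tangential component
    return f_n > 0.0 and np.linalg.norm(f_t) <= mu * f_n
```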

Reconstructing Human Hand Pose and Configuration using a Fixed-Base Exoskeleton
https://paper.nweon.com/8162

PubDate: August 2019

Teams: German Aerospace Centre;Delft University of Technology

Writers: A. Pereira; G. Stillfried; T. Baker; A. Schmidt; A. Maier; B. Pleintinger; Z. Chen; T. Hulin; N. Y. Lii

PDF: Reconstructing Human Hand Pose and Configuration using a Fixed-Base Exoskeleton

Abstract

Accurate real-time estimation of the pose and configuration of the human hand attached to a dexterous haptic input device is crucial to improve the interaction possibilities for teleoperation and in virtual and augmented reality. In this paper, we present an approach to reconstruct the pose of the human hand and the joint angles of the fingers when wearing a novel fixed-base (grounded) hand exoskeleton. Using a kinematic model of the human hand built from MRI data, we can reconstruct the hand pose and joint angles without sensors on the human hand, from attachment points on the first three fingers and the palm. We test the accuracy of our approach using motion capture as a ground truth. This reconstruction can be used to determine contact geometry and force-feedback from virtual or remote objects in virtual reality or teleoperation.

A Lightweight Real-Time Semantic Segmentation Network for Equipment Images in Space Capsule
https://paper.nweon.com/8160

PubDate: October 2020

Teams: Space Engineering University;China Astronaut Research and Training Center

Writers: Zhongkai Ma; Jin Yang; Jiangang Chao; Wanhong Lin

PDF: A Lightweight Real-Time Semantic Segmentation Network for Equipment Images in Space Capsule

Abstract

The combination of semantic segmentation technology and augmented reality technology can provide auxiliary information when astronauts train in augmented reality mode, which will greatly improve training efficiency and reduce mishandling by astronauts. However, the equipment in a space capsule is irregularly shaped, similar in texture, and small in size, while mixed reality applications require high real-time performance; these factors challenge the context consistency, accuracy, and real-time performance of semantic segmentation. In response, and referring to [3], one of the best lightweight real-time segmentation networks, a new network is specially designed for our application. Experimental results show that the designed network obtains competitive segmentation results on the target dataset and better real-time performance than classic networks such as [3]. Overall, the designed network meets the requirements.

Feedback control of stable force output with evoked sEMG based on virtual hand prosthesis
https://paper.nweon.com/8158

PubDate: January 2019

Teams: Chongqing University

Writers: Yun Zhao; Xiao Y. Wu; Xu D. Wu; Wen Qu; Man Q. Wang; Lin Chen; Ning Hu; Wen S. Hou

PDF: Feedback control of stable force output with evoked sEMG based on virtual hand prosthesis

Abstract

Sustained, stable control can improve the performance and user experience of myoelectric hand prostheses, and a common task is generating stable force output for the prosthesis. However, muscle fatigue and attention distraction usually weaken surface electromyography (sEMG) signals and reduce the stability of the force output. Thus, this paper proposes a new method to produce stable force based on sEMG signals evoked by neuromuscular electrical stimulation (NMES) of the ulnar nerve. To assess the feasibility of this feedback control method, we explored the impact of attention distraction on the sEMG intensity of the flexor muscle during a stable grip. The results showed that sEMG intensity declined under attention distraction, and that electrical stimulation of the ulnar nerve can evoke the expected sEMG signals in the forearm muscle. To further verify the stability of the method, a virtual hand prosthesis system with feedback control of evoked sEMG signals was implemented using the Virtual Reality Toolbox for MATLAB. Offline EMG was then collected to control the force intensity of the virtual hand prosthesis. The results demonstrated that the feedback control method can generate stable force output and can support further research on the biomimetic control of myoelectric hand prostheses.

Assessment of Optical See-Through Head Mounted Display Calibration for Interactive Augmented Reality
https://paper.nweon.com/8156

PubDate: March 2020

Teams: University of Genoa

Writers: Giorgio Ballestin; Manuela Chessa; Fabio Solari

PDF: Assessment of Optical See-Through Head Mounted Display Calibration for Interactive Augmented Reality

Abstract

Interaction in Augmented Reality environments requires the precise alignment of virtual elements added to the real scene. This can be achieved if the egocentric perception of the augmented scene is coherent in both the virtual and the real reference frames. To this aim, a proper calibration of the complete system, composed of the Augmented Reality device, the user, and the environment, should be performed. Over the years, several calibration techniques have been proposed, and objectively evaluating their performance has proven troublesome. Since only the user can assess hologram alignment fidelity, most researchers quantify the calibration error with subjective data from user studies. This paper describes the calibration process of an optical see-through device, based on a visual alignment method, and proposes a technique to objectively quantify the residual misalignment error.

Kinetic Skin: Feasibility and Implementation of Bare Skin Tracking of Hand and Body Joints for 3D User Interfaces
https://paper.nweon.com/8154

PubDate: May 2020

Teams: University of Wyoming;ATLAS Institute University of Colorado

Writers: Amy Banić; Erik Horwitz; Clement Zheng

PDF: Kinetic Skin: Feasibility and Implementation of Bare Skin Tracking of Hand and Body Joints for 3D User Interfaces

Abstract

Kinetic Skin is a thin, adhesive patch that employs resistive sensing via a meandering carbon ink trace, where the circuit designs are printed on temporary tattoo material similar to that worn by children for decorative play. Each tattoo sensor is worn on the body across a joint (i.e., finger, wrist, or elbow joints). In this paper, we present implementation details of prototyping Kinetic Skin worn on the finger and wrist joints for relative tracking to control 6-DoF manipulation. We demonstrate translation, rotation, and scaling of virtual objects, including camera navigation, and describe four interaction techniques that map finger joint input to 6-DoF control. This technology could potentially be used for long-term, lightweight tracking of body movements to inform rehabilitation; for dance, music, or other performing arts; for gaming; and for other 3D user interaction in Virtual and Augmented Reality applications.

A Novel Pseudo-Random Scan Method for Silicon-Based Microdisplay
https://paper.nweon.com/8152

PubDate: July 2019

Teams: Shanghai University

Writers: Wendong Chen; Chunyan Zhang; Yuan Ji; Tingzhou Mu; Feng Ran

PDF: A Novel Pseudo-Random Scan Method for Silicon-Based Microdisplay

Abstract

The continued growth of virtual reality (VR) displays pushes microdisplays toward high resolution and high refresh rates. However, the limited bandwidth of the microdisplay makes it difficult to carry the massive image data of the virtual world. In order to reduce temporal redundancy, improve transmission efficiency, and solve the problem of low linearity in the imaging process, this paper introduces a novel pseudo-random scan method built on the traditional fractal scan model. A silicon-based OLED microdisplay verification platform with a resolution of 1.6K × 3 × 1.6K is built. The pseudo-random scan method achieves 100% transmission efficiency and 94.1% linearity, making it suitable for ultra-high-definition, high-resolution microdisplays.

Multi-user predictive rendering on remote multi-GPU clusters
https://paper.nweon.com/8150

PubDate: February 2019

Teams: Université de Reims Champagne-Ardenne;PSA Peugeot Citroën;PSL-Research University

Writers: J. Randrianandrasana; A. Chanonier; H. Deleau; T. Muller; P. Porral; M. Krajecki; L. Lucas

PDF: Multi-user predictive rendering on remote multi-GPU clusters

Abstract

Many stages of the industry workflow have benefited from CAD software applications and real-time computer graphics for decades, allowing manufacturers to perform team project reviews and assessments while decreasing the need for expensive physical mockups. However, when it comes to the perceived quality of the final product, more sophisticated physically based engines are often preferred, though they involve huge computation times. In this context, our work aims at reducing this gap by providing a predictive rendering solution that leverages the computing resources offered by modern multi-GPU supercomputers. To that end, we propose a simple static load balancing approach leveraging the stochastic nature of Monte Carlo rendering. Our solution efficiently exploits the available computing resources and addresses the industry's collaboration needs by providing real-time multi-user web access to the virtual mockup.
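
Because Monte Carlo samples are independent, a static split is straightforward: each GPU renders the full image with its own share of the samples, and the partial images are averaged into one unbiased result. The speed-weighted split below is our own illustration of such a scheme, not the paper's exact policy.

```python
def split_samples(total_spp, gpu_speeds):
    """Static split of Monte Carlo samples per pixel across GPUs in
    proportion to their relative throughput; rounding residue goes to the
    first GPU so the total stays exact."""
    total_speed = sum(gpu_speeds)
    shares = [round(total_spp * s / total_speed) for s in gpu_speeds]
    shares[0] += total_spp - sum(shares)
    return shares

# 1024 spp over four equal GPUs -> [256, 256, 256, 256]; the final image
# is the sample-count-weighted mean of the partial images.
print(split_samples(1024, [1.0, 1.0, 1.0, 1.0]))
```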

Motion Compensated Prediction for Translational Camera Motion in Spherical Video Coding
https://paper.nweon.com/8148

PubDate: November 2018

Teams: University of California

Writers: Bharath Vishwanath; Tejaswi Nanjundaswamy; Kenneth Rose

PDF: Motion Compensated Prediction for Translational Camera Motion in Spherical Video Coding

Abstract

Spherical video is the key driving factor for the growth of virtual reality and augmented reality applications, as it offers a truly immersive experience by capturing the entire 3D surroundings. However, it represents an enormous amount of data for storage/transmission, and the success of all related applications depends critically on efficient compression. A frequently encountered type of content in this video format arises from translational motion of the camera (e.g., a camera mounted on a moving vehicle). Existing approaches simply project this video onto a plane and use a block-based translational motion model to capture the motion of objects between frames. This ad hoc simplification completely ignores the complex deformations of objects caused by the combined effect of the moving camera and the projection onto a plane, rendering it significantly suboptimal. In this paper, we provide an efficient solution tailored to this problem. Specifically, we propose to perform motion compensated prediction by translating pixels along their geodesics, which intersect at the poles corresponding to the camera velocity vector. This setup not only captures the surrounding objects' motion exactly along the geodesics of the sphere, but also accurately accounts for the deformations caused by projection onto the sphere. Experimental results demonstrate that the proposed framework achieves very significant gains over existing motion models.
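
The geodesic idea can be sketched directly on unit vectors: a pixel direction p is rotated within the great-circle plane spanned by p and the epipole e (the point where the camera velocity vector pierces the sphere). This is our own illustration of the stated principle, with the motion magnitude left as a free parameter.

```python
import numpy as np

def move_along_geodesic(p, epipole, angle):
    """Slide a spherical pixel direction p along the great circle through
    p and the epipole; `angle` plays the role of the searched, per-block
    motion magnitude. Assumes p does not coincide with the epipole."""
    p = np.asarray(p, dtype=float); p /= np.linalg.norm(p)
    e = np.asarray(epipole, dtype=float); e /= np.linalg.norm(e)
    tangent = e - np.dot(e, p) * p       # direction toward e along the
    tangent /= np.linalg.norm(tangent)   # geodesic, tangent to the sphere
    return np.cos(angle) * p + np.sin(angle) * tangent
```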

Compact Light Field Augmented Reality Display with Eliminated Stray Light using Discrete Structures
https://paper.nweon.com/8146

PubDate: January 2020

Teams: Beijing Institute of Technology

Writers: Cheng Yao; Yue Liu; Dewen Cheng; Yongtian Wang

PDF: Compact Light Field Augmented Reality Display with Eliminated Stray Light using Discrete Structures

Abstract

This paper discusses the design of a wearable display in the form of compact eyeglasses, supporting a fair field of view, correct focus cues, and optical see-through capability. Based on integral imaging, our proposal comprises a discrete transparent microdisplay array as the image source and a discrete lenslet array as the spatial light modulator, without the need for a pre-imaging system or a special prism. We designed an annular aperture array to eliminate stray light, keeping it within an imperceptible limit. Through a stray light simulation and an imaging simulation, the system was shown to provide good image quality for both virtual and real information.
