Gesture2Text: A Generalizable Decoder for Word-Gesture Keyboards in XR Through Trajectory Coarse Discretization and Pre-training

Editor: 广东客   |   Category: HCI   |   March 27, 2025

Note: We are not able to review papers.

PubDate: Oct 2024

Teams: Meta, University of Bristol

Writers: Junxiao Shen, Khadija Khaldi, Enmin Zhou, Hemant Bhaskar Surale, Amy Karlson

PDF: Gesture2Text: A Generalizable Decoder for Word-Gesture Keyboards in XR Through Trajectory Coarse Discretization and Pre-training

Abstract

Text entry with word-gesture keyboards (WGK) is emerging as a popular method and becoming a key interaction for Extended Reality (XR). However, the diversity of interaction modes, keyboard sizes, and visual feedback in these environments produces divergent word-gesture trajectory data patterns, complicating the decoding of trajectories into text. Template-matching decoding methods, such as SHARK2, are commonly used for these WGK systems because they are easy to implement and configure. However, these methods are susceptible to decoding inaccuracies for noisy trajectories. While conventional neural-network-based decoders (neural decoders) trained on word-gesture trajectory data have been proposed to improve accuracy, they have their own limitations: they require extensive data for training and deep-learning expertise for implementation. To address these challenges, we propose a novel solution that combines ease of implementation with high decoding accuracy: a generalizable neural decoder enabled by pre-training on large-scale, coarsely discretized word-gesture trajectories. This approach produces a ready-to-use WGK decoder that generalizes across mid-air and on-surface WGK systems in augmented reality (AR) and virtual reality (VR), as evidenced by a robust average Top-4 accuracy of 90.4% on four diverse datasets. It significantly outperforms SHARK2 by 37.2% and surpasses the conventional neural decoder by 7.4%. Moreover, the Pre-trained Neural Decoder is only 4 MB after quantization, without sacrificing accuracy, and it runs in real time, executing in just 97 milliseconds on Quest 3.
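The abstract does not spell out the discretization scheme, but the core idea it names, mapping raw trajectory points onto a small keyboard-relative grid so that traces from different devices and keyboard sizes share one token vocabulary, can be illustrated with a short sketch. The Python below is a minimal illustration under those assumptions; coarse_discretize, GRID_W, and GRID_H are hypothetical names, not the paper's actual API or parameters.

```python
# Minimal sketch of coarse trajectory discretization (hypothetical
# names and grid size; the abstract does not specify the exact scheme).
from typing import List, Tuple

GRID_W, GRID_H = 6, 3  # assumed coarse grid over the keyboard plane

def coarse_discretize(traj: List[Tuple[float, float]]) -> List[int]:
    """Map a gesture trajectory to a sequence of coarse grid-cell tokens.

    traj: (x, y) points already normalized to keyboard-relative
    coordinates in [0, 1] x [0, 1], which removes differences in
    keyboard size and placement across AR/VR setups.
    """
    tokens: List[int] = []
    for x, y in traj:
        # Clamp each coordinate, then snap it to a coarse grid cell.
        col = min(max(int(x * GRID_W), 0), GRID_W - 1)
        row = min(max(int(y * GRID_H), 0), GRID_H - 1)
        cell = row * GRID_W + col
        # Collapse consecutive duplicates so noisy jitter inside one
        # cell does not inflate the token sequence.
        if not tokens or tokens[-1] != cell:
            tokens.append(cell)
    return tokens

# Example: a noisy mid-air trace reduces to a short token sequence
# that a pre-trained sequence decoder can map to candidate words.
print(coarse_discretize([(0.55, 0.4), (0.56, 0.42), (0.78, 0.1)]))  # [9, 4]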

Article link: https://paper.nweon.com/16249
