
VLPose: Bridging the Domain Gap in Pose Estimation with Language-Vision Tuning

Note: We do not have the ability to review papers.

PubDate: Feb 2024

Teams: The Chinese University of Hong Kong; SmartMore

Writers: Jingyao Li, Pengguang Chen, Xuan Ju, Hong Xu, Jiaya Jia

PDF: VLPose: Bridging the Domain Gap in Pose Estimation with Language-Vision Tuning

Abstract

Thanks to advances in deep learning techniques, Human Pose Estimation (HPE) has achieved significant progress in natural scenarios. However, these models perform poorly in artificial scenarios such as painting and sculpture due to the domain gap, constraining the development of virtual reality and augmented reality. With the growth of model size, retraining the whole model on both natural and artificial data is computationally expensive and inefficient. Our research aims to bridge the domain gap between natural and artificial scenarios with efficient tuning strategies. Leveraging the potential of language models, we enhance the adaptability of traditional pose estimation models across diverse scenarios with a novel framework called VLPose. VLPose leverages the synergy between language and vision to extend the generalization and robustness of pose estimation models beyond the traditional domains. Our approach has demonstrated improvements of 2.26% and 3.74% on HumanArt and MSCOCO, respectively, compared to state-of-the-art tuning strategies.
