Human Pose Tracking from RGB Inputs

Note: We don't have the ability to review this paper.

PubDate: August 2019

Teams: UFPE

Writers: Ricardo R. Barioni; Lucas Figueiredo; Kelvin Cunha; Veronica Teichrieb

PDF: Human Pose Tracking from RGB Inputs

Abstract

In the context of Virtual and Augmented Reality, obtaining the configuration of human poses is fundamental for systems that provide natural interaction through gestures and a general understanding of user body behavior. Recovering such poses from RGB camera images opens up a wide range of applications in security (e.g., local activity monitoring), healthcare (e.g., postural analysis) and entertainment (e.g., motion capture for games and animations). However, acquiring human poses solely from RGB images is still considered a challenge, since pure visual data does not explicitly provide the localization of the human body joints (keypoints in pixels) in the image. In this work we propose a machine learning method, more specifically deep learning based on convolutional neural networks, capable of tackling this problem.
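The abstract does not detail the network itself, so the following is only a minimal sketch of the general CNN-based approach to 2D keypoint localization from RGB images, not the authors' architecture. The PyTorch model `TinyPoseNet`, the joint count (17), the 256x256 input resolution, and the heatmap-argmax decoding are all illustrative assumptions.

```python
# Minimal sketch of CNN-based 2D keypoint estimation from an RGB image.
# NOT the paper's method: network size, joint count, and input resolution
# below are assumptions chosen only to illustrate the technique.
import torch
import torch.nn as nn

NUM_JOINTS = 17   # assumption: COCO-style body joints
INPUT_RES = 256   # assumption: square RGB input, 256x256 pixels


class TinyPoseNet(nn.Module):
    """Downsample with strided convolutions, then predict one heatmap per joint."""

    def __init__(self, num_joints: int = NUM_JOINTS):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 convolution maps features to one heatmap channel per joint
        self.head = nn.Conv2d(128, num_joints, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))  # (B, num_joints, H/8, W/8)


def heatmaps_to_keypoints(heatmaps: torch.Tensor) -> torch.Tensor:
    """Take the argmax of each heatmap and scale back to input-pixel coordinates."""
    b, j, h, w = heatmaps.shape
    flat = heatmaps.view(b, j, -1).argmax(dim=-1)
    ys = torch.div(flat, w, rounding_mode="floor")
    xs = flat % w
    scale = INPUT_RES / h  # undo the downsampling factor of the backbone
    return torch.stack([xs * scale, ys * scale], dim=-1)  # (B, J, 2) in pixels


if __name__ == "__main__":
    model = TinyPoseNet()
    image = torch.rand(1, 3, INPUT_RES, INPUT_RES)  # dummy RGB frame
    keypoints = heatmaps_to_keypoints(model(image))
    print(keypoints.shape)  # torch.Size([1, 17, 2])
```

In practice such a network is trained by regressing Gaussian heatmaps centered on the annotated joint pixels; the argmax decoding above is just one simple way to read 2D keypoints back out of the predicted heatmaps.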
