arXiv Preprint
Motion capture (mocap) and time-of-flight based sensing of human actions are
becoming increasingly popular modalities to perform robust activity analysis.
Applications range from action recognition to quantifying movement quality for
health assessment. While marker-less motion capture has made great progress,
marker-based systems, especially those using active markers, are still
considered the gold standard in critical applications such as healthcare.
However, there are several
practical challenges in both modalities, such as visibility, tracking errors,
and the need to keep the marker setup convenient, which often means that
movements are recorded with a reduced marker set. This implies that certain
joint locations are not marked at all, making downstream analysis of full-body
movement challenging. To address this gap, we first pose the problem of
reconstructing the unmarked joint data as an ill-posed linear inverse problem.
We recover the missing joints for a given action by projecting it onto the
manifold of human actions; this is achieved by optimizing the latent-space
representation of a deep autoencoder. Experiments on both mocap and Kinect
datasets clearly demonstrate that the proposed method performs very well in
recovering both the semantics of the actions and the dynamics of the missing
joints. We will release all the code and
models publicly.
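
For intuition, below is a minimal sketch of the latent-space optimization idea described in the abstract, not the authors' released code. It assumes a pretrained decoder `decoder` that maps a latent code to a full-body action of shape (T, J, 3), an observed action `x_obs` with unmarked joints zeroed out, and a binary `mask` selecting the recorded joints; these names, the latent dimension, and the optimizer settings are illustrative assumptions.

```python
# Sketch: recover unmarked joints by optimizing the latent code of a
# pretrained action autoencoder so the decoded action matches the
# observed (marked) joints. Interfaces and hyperparameters are assumed.
import torch

def recover_missing_joints(decoder, x_obs, mask, latent_dim=64,
                           steps=500, lr=1e-2):
    """
    decoder : pretrained decoder mapping a latent code z -> action of
              shape (T, J, 3)  [hypothetical interface]
    x_obs   : observed action, shape (T, J, 3); unmarked joints set to 0
    mask    : binary tensor, shape (T, J, 3); 1 where a joint was recorded
    """
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = decoder(z).squeeze(0)          # candidate full-body action
        # penalize disagreement only on the observed joints
        loss = ((mask * (x_hat - x_obs)) ** 2).mean()
        loss.backward()
        opt.step()
    # the decoded action at the optimized latent code supplies the
    # trajectories of the unmarked joints
    return decoder(z).squeeze(0).detach()
```

Because the reconstruction is constrained to the decoder's output space, the optimized action stays on the learned manifold of human motions while agreeing with the joints that were actually recorded.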
Suhas Lohit, Rushil Anirudh, Pavan Turaga
2020-12-03