Markerless Motion Capture Using Appearance and Inertial Data
Current monitoring techniques for biomechanical analysis typically capture only a snapshot of the subject's state because of the challenges associated with long-term monitoring. Continuous long-term capture of biomechanics can be used to assess performance in the workplace and rehabilitation at home. Noninvasive motion capture using small, low-power wearable sensors and camera systems has been explored; however, drift and occlusions have limited the ability of such systems to reliably capture motion over long durations. In this paper, we propose to combine 3D pose estimation from inertial motion capture with 2D pose estimation from vision to obtain more robust posture tracking. To handle the changing appearance of the human body due to pose variations and illumination changes, our implementation is based upon Least Soft-Threshold Squares Tracking. Constraints on the variation of the appearance model and on the pose estimated by an inertial motion capture system are used to correct the 2D and 3D estimates simultaneously. We evaluate the performance of our method against three state-of-the-art trackers: Incremental Visual Tracking, Multiple Instance Learning, and Least Soft-Threshold Squares Tracking. In our experiments, we track the movement of the upper limbs. While the results indicate improved tracking accuracy at some joint locations, they also show that there is room for further improvement. Conclusions and the further work required to improve our results are discussed.
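The core fusion idea, combining a 2D joint estimate from a visual tracker with the camera projection of a 3D joint position from inertial motion capture, can be sketched as follows. This is an illustrative simplification, not the paper's exact formulation: the pinhole projection, the intrinsics matrix `K`, the joint coordinates, and the fixed fusion weight `w_vision` are all assumptions introduced for the example (a real system would weight by tracker confidence and accumulated inertial drift).

```python
# Illustrative sketch only (assumed names and values, not the paper's method):
# fuse a 2D joint estimate from a visual tracker with the pinhole projection
# of a 3D joint position obtained from an inertial motion capture chain.
import numpy as np

def project_point(K, X):
    """Project a 3D point X (in camera coordinates) to 2D pixels
    using the pinhole model with intrinsics matrix K."""
    x = K @ X
    return x[:2] / x[2]

def fuse_estimates(x_vision, x_inertial, w_vision=0.5):
    """Weighted average of two 2D joint estimates. A fixed weight is an
    assumption; in practice it could reflect tracker confidence vs. drift."""
    return w_vision * x_vision + (1.0 - w_vision) * x_inertial

# Hypothetical camera intrinsics and measurements for one wrist joint.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
X_imu = np.array([0.1, -0.2, 2.0])   # 3D position from the inertial system (m)
x_cam = np.array([355.0, 165.0])     # 2D estimate from the visual tracker (px)

x_fused = fuse_estimates(x_cam, project_point(K, X_imu))
```

With these assumed numbers, the inertial estimate projects to (360, 160) pixels, so the equally weighted fusion lands midway between the two measurements, pulling the visual track toward the drift-free-in-2D inertial projection.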