Description of human movement based on the Frenet-Serret frame and MOCAP-type data
DOI: https://doi.org/10.33571/rpolitec.v17n34a11

Keywords: Human movement; Dynamics; Kinect; Motion capture; Physical rehabilitation; Frenet-Serret

Abstract
Classifying human movement has become a technological necessity: defining a subject's position requires identifying the trajectories of the limbs and trunk of the body and being able to distinguish that position from those of other subjects or movements, which creates the need for data and algorithms that support such classification. This work therefore evaluates the discriminant capacity of motion capture data in physical rehabilitation, where the position of the subjects is acquired with the Microsoft Kinect and with optical markers, and attributes of the movement are generated with the Frenet-Serret frame. The discriminant capacity of these attributes is evaluated by means of support vector machine, neural network, and k-nearest-neighbor algorithms. The results show an accuracy of 93.5% in the classification with data obtained from the Kinect, and 100% for movements where the position is defined with optical markers.
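The Frenet-Serret attributes mentioned in the abstract can be sketched numerically: given a 3-D joint trajectory (e.g. from a Kinect skeleton or an optical marker), curvature and torsion follow from the first three derivatives of the curve. The snippet below is a minimal illustration of that idea, not the paper's exact feature pipeline; the function name and the finite-difference scheme are assumptions.

```python
import numpy as np

def frenet_serret_features(traj, dt=1.0):
    """Per-sample curvature and torsion of a 3-D trajectory.

    traj: (N, 3) array of x, y, z positions over time.
    Returns (curvature, torsion), each of shape (N,).
    """
    d1 = np.gradient(traj, dt, axis=0)   # velocity  r'
    d2 = np.gradient(d1, dt, axis=0)     # acceleration r''
    d3 = np.gradient(d2, dt, axis=0)     # jerk r'''
    cross = np.cross(d1, d2)
    cross_norm = np.linalg.norm(cross, axis=1)
    speed = np.linalg.norm(d1, axis=1)
    eps = 1e-12                          # guard against division by zero
    # kappa = |r' x r''| / |r'|^3 ,  tau = (r' x r'') . r''' / |r' x r''|^2
    curvature = cross_norm / np.maximum(speed**3, eps)
    torsion = np.einsum('ij,ij->i', cross, d3) / np.maximum(cross_norm**2, eps)
    return curvature, torsion

# Sanity check on a helix, which has constant curvature and torsion:
# for r(t) = (cos t, sin t, 0.5 t), kappa = 0.8 and tau = 0.4.
t = np.linspace(0, 4 * np.pi, 400)
helix = np.stack([np.cos(t), np.sin(t), 0.5 * t], axis=1)
kappa, tau = frenet_serret_features(helix, dt=t[1] - t[0])
```

Per-frame curvature and torsion vectors like these can then be fed as features to the classifiers named in the abstract (SVM, neural network, k-NN).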