AR Oriented Pose Matching Mechanism from Motion Capture Data

  • Authors

    • Javid Iqbal
    • Manjit Singh Sidhu
    • Mutahir Bin Mohamed Ariff
  • Published: 2018-11-30
  • DOI: https://doi.org/10.14419/ijet.v7i4.35.22749
  • Keywords: Augmented Reality, Dance training, Human action, Motion capture, Kinect sensor
  • Abstract: Pose matching and skeletal mapping methods are an integral part of Augmented Reality (AR) based learning technology. This paper presents a pose matching mechanism based on the extraction of skeletal data from the dance trainer’s physical movements, captured by Kinect as color-defined images, where each pose is modelled by a sequence of key movements and continuous data frames. To extract the exact matched pose, the frame sequence is divided into pose feature frames and skeletal data frames by the pose matching dance training movement recognition (PMDTMR) algorithm. The proposed algorithm is compared with other published methods in terms of frame-level accuracy and dance session learning time. The experimental results show that it outperforms state-of-the-art techniques in identifying and recognizing matched poses between the dance trainer and the expert in the pre-recorded video through the Kinect sensor.
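
  • Illustrative Sketch

    The paper does not publish source code. As a rough sketch of the frame-level matching step summarized in the abstract, the Python snippet below compares a trainer skeleton against an expert skeleton from a pre-recorded frame. The joint count, the SpineBase/SpineShoulder indices, the torso-length normalization, and the match threshold are assumptions made here for illustration, not the authors' PMDTMR algorithm.

    ```python
    import numpy as np

    # Kinect v2 exposes 25 tracked joints; the paper does not specify its exact
    # feature set, so this sketch assumes raw 3D joint positions per frame.
    NUM_JOINTS = 25
    SPINE_BASE = 0       # assumed index of the hip-centre joint
    SPINE_SHOULDER = 20  # assumed index of the upper-spine joint

    def normalize_skeleton(joints: np.ndarray) -> np.ndarray:
        """Translate to a hip-centre origin and scale by torso length so that
        trainer and expert skeletons of different body sizes are comparable."""
        centred = joints - joints[SPINE_BASE]
        torso = np.linalg.norm(centred[SPINE_SHOULDER])
        return centred / (torso + 1e-8)

    def pose_match_score(trainer: np.ndarray, expert: np.ndarray) -> float:
        """Mean per-joint Euclidean distance after normalization; lower means
        a closer match. Both inputs are (NUM_JOINTS, 3) position arrays."""
        t = normalize_skeleton(trainer)
        e = normalize_skeleton(expert)
        return float(np.mean(np.linalg.norm(t - e, axis=1)))

    def is_matched_pose(trainer: np.ndarray, expert: np.ndarray,
                        threshold: float = 0.15) -> bool:
        """Flag a matched pose when the score falls below a tolerance; the
        threshold here is illustrative, not taken from the paper."""
        return pose_match_score(trainer, expert) < threshold
    ```

    A full pipeline in the spirit of the paper would apply such a comparison per frame over the captured sequence, separating pose feature frames from skeletal data frames as PMDTMR does, and smoothing scores over a window before declaring a matched pose.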

  • How to Cite

    Iqbal, J., Sidhu, M. S., & Ariff, M. B. M. (2018). AR Oriented Pose Matching Mechanism from Motion Capture Data. International Journal of Engineering & Technology, 7(4.35), 294-298. https://doi.org/10.14419/ijet.v7i4.35.22749