Experimenting Hand-Gesture Image Recognition using Simple Deep Neural Network

  • Abstract

    Traditionally, humans interact with a computer using a keyboard and mouse. People with disabilities affecting the wrist or fingers, or with amputated wrists or fingertips, need an alternative such as voice or hand gestures. This work focuses on hand-gesture image recognition. Two main issues should be considered: static hand-gesture recognition offers less interactivity, while dynamic hand-gesture recognition offers less accuracy. This paper attempts to improve the accuracy of hand-gesture image recognition by experimenting with a simple deep learning neural network (DLNN). Because this work uses a simple DLNN, the relations between the hidden layers are not considered. The number of hidden layers in the proposed DLNN architecture varies from one to five in the experiments.

    To understand the effect of the number of neurons in the hidden layers, the DLNN is tested with different numbers of hidden neurons. Six types of hand gestures are considered. 800 hand-gesture videos taken from the Vision for Intelligent Vehicles and Applications (VIVA) portal are used in the experiments. The data is divided into two parts: one for training and the other for testing. The best result is achieved when the DLNN uses two hidden layers, with 250 neurons in the first hidden layer and 100 neurons in the second; the average accuracy achieved is 77.56%. Experimental results also show that adding more hidden layers causes over-fitting and does not improve recognition. It is also observed that increasing the number of hidden layers and hidden neurons only raises accuracy on the trained dataset and does not improve recognition of untrained data, because the interrelations among the hidden layers are not considered.
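    The best-performing configuration reported above (two hidden layers of 250 and 100 neurons, six gesture classes) can be sketched as a plain feed-forward network. This is a minimal NumPy sketch of the forward pass only; the input dimensionality, weight initialisation, and ReLU/softmax activations are assumptions, since the abstract does not specify them:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Layer widths: the 250 and 100 hidden neurons come from the abstract;
    # the 1024-dim input (e.g. a flattened 32x32 frame) is an illustrative
    # assumption, and 6 outputs correspond to the six gesture types.
    layer_sizes = [1024, 250, 100, 6]

    # Random He-style initialisation (an assumption; the paper gives no details).
    weights = [rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
    biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

    def forward(x):
        """Forward pass: ReLU on the hidden layers, softmax on the output."""
        for W, b in zip(weights[:-1], biases[:-1]):
            x = np.maximum(x @ W + b, 0.0)           # ReLU hidden layer
        logits = x @ weights[-1] + biases[-1]
        e = np.exp(logits - logits.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)     # per-class probabilities

    probs = forward(rng.normal(size=(1, 1024)))      # shape (1, 6), rows sum to 1
    ```

    Note that the layers here are independent, fully connected stages with no cross-layer links, matching the abstract's remark that inter-layer relations are not modelled.
    
    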



  • Keywords

    deep learning; hand gesture images; human computer interface; neural network





Article ID: 18403
DOI: 10.14419/ijet.v7i3.32.18403

Copyright © 2012-2015 Science Publishing Corporation Inc. All rights reserved.