Research on Human Activity Identification Based on Image Processing and Artificial Intelligence

  • Authors

    • Praywin Moses Dass Alex
    • Akash Ravikumar
    • Jerritta Selvaraj
    • Arun Sahayadhas

    Published: 2018-08-15
    DOI: https://doi.org/10.14419/ijet.v7i3.27.17754
  • Keywords: behavior analysis and monitoring, machine learning, HOG descriptor, bag of visual words, local binary pattern, support vector machine, naïve Bayes, random forest, MLP
  • Abstract: Recognizing human activities through computer vision techniques is an important area of research, with applications such as patient monitoring, fall detection, surveillance, and human-computer interfaces. The capability to recognize these acts lays the foundation for developing highly intelligent, decision-making systems. Most of the mentioned applications require automatic recognition of high-level activities composed of the simple actions of multiple persons, and the system behaves intelligently only if these activities are properly classified. This paper addresses machine learning algorithms used to classify such activities: Multi-Layer Perceptron, Random Forest, Naïve Bayes, and SVM. It provides a classification of general to complex human activities through a comparison study and performance evaluation of these algorithms on a very large set of images. This review provides much-needed information for further research in more productive areas.
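    The comparison described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: it trains the four classifiers named in the paper (MLP, Random Forest, Naïve Bayes, SVM) on synthetic feature vectors standing in for per-image descriptors such as HOG or bag-of-visual-words histograms; dataset size, feature dimensionality, and hyperparameters are all assumptions.

    ```python
    # Hedged sketch: comparing the four classifiers the paper evaluates
    # on synthetic feature vectors. The real study uses image descriptors
    # (HOG, bag of visual words, LBP); here make_classification stands in.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for per-image descriptors (4 activity classes).
    X, y = make_classification(n_samples=600, n_features=64, n_informative=20,
                               n_classes=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Illustrative hyperparameters, not the paper's settings.
    classifiers = {
        "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
        "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "Naive Bayes": GaussianNB(),
        "SVM": SVC(kernel="rbf", random_state=0),
    }

    # Train each classifier and report held-out accuracy.
    scores = {}
    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        scores[name] = accuracy_score(y_test, clf.predict(X_test))
        print(f"{name}: {scores[name]:.3f}")
    ```

    On real data, the feature-extraction step (HOG or visual-word histograms per image) would replace the synthetic `X`, while the classifier-comparison loop stays the same.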


  • References

      [1] Aggarwal JK & Ryoo MS, “Human Activity Analysis”, ACM Computing Surveys, (2011).

      [2] Bulling A, Blanke U & Schiele B, “A tutorial on human activity recognition using body-worn inertial sensors”, ACM Computing Surveys (CSUR), Vol.46, No.3, (2014).

      [3] Zhu X, Liu Z & Zhang J, “Human Activity Clustering for Online Anomaly Detection”, Journal of Computers, Vol.6, No.6, (2011), pp.1071-1079.

      [4] Gupta P & Dallas T, “Feature Selection and Activity Recognition System using a Single Tri-axial Accelerometer”, IEEE Trans. Biomed. Eng., (2014), pp.1780-1786.

      [5] Kiruthiga S, Kalaiselvi Geetha M & Arunnehru J, “Visual Words for Human Activity Recognition in Surveillance Video”, IOSR Journal of Computer Engineering (IOSR-JCE), pp.37-43.

      [6] Surendar A, Arun M & Basha AM, “Micro sequence identification of bioinformatics data using pattern mining techniques in FPGA hardware implementation”, Asian Journal of Information Technology, Vol.15, No.1, (2016), pp.76-81.

      [7] Dubois A & Charpillet F, “Human activities recognition with RGB-Depth camera using HMM”, Conf. Proc. IEEE Eng. Med. Biol. Soc., (2013).

      [8] Stuliene A & Paulauskaite-Taraseviciene A, “Research on human activity recognition based on image classification methods”, IVUS International Conference on Information Technology, (2017).

      [9] Samuvel SG & Alex PMD, “Investigation on road-sign recognition”, IVUS International Conference on Information Technology, (2017).

      [10] Ahuja S & Goel A, “Scene Recognition using Bag-of-Words”, (2011).

      [11] Cózar JR, González-Linares JM, Guil N, Hernández R & Heredia Y, “Visual words selection for human action classification”, International Conference on High Performance Computing and Simulation (HPCS), (2012), pp.188-194.

      [12] Zhang M & Sawchuk AA, “Motion primitive-based human activity recognition using a bag-of-features approach”, ACM SIGHIT International Health Informatics Symposium (IHI), (2012), pp.631-640.

      [13] Niebles JC & Wang H, “Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words”, International Journal of Computer Vision, Vol.79, No.3, (2008), pp.299-318.

      [14] De Campos T, Barnard M, Mikolajczyk K, Kittler J, Yan F, Christmas W & Windridge D, “An evaluation of bags-of-words and spatio-temporal shapes for action recognition”, IEEE Workshop on Applications of Computer Vision, (2011), pp.344-351.

      [15] Ullah MM, Parizi SN & Laptev I, “Improving Bag-of-Features Action Recognition with Non-Local Cues”, Proceedings of the British Machine Vision Conference, (2010), pp.1-11.

      [16] Ho TK, “Random Decision Forests”, Proceedings of the 3rd International Conference on Document Analysis and Recognition, Montreal, (1995), pp.278-282.

      [17] Kleinberg E, “Stochastic Discrimination”, Annals of Mathematics and Artificial Intelligence, Vol.1, (1990), pp.207-239.

      [18] Kleinberg E, “An Overtraining-Resistant Stochastic Modeling Method for Pattern Recognition”, Annals of Statistics, Vol.24, No.6, (1996), pp.2319-2349.

      [19] Kleinberg E, “On the Algorithmic Implementation of Stochastic Discrimination”, IEEE Transactions on PAMI, Vol.22, No.5, (2000).

      [20] Hsu CW & Lin CJ, “A comparison of methods for multiclass support vector machines”, IEEE Transactions on Neural Networks, Vol.13, No.2, (2002), pp.415-425.

      [21] Talukdar J & Mehta B, “Human Action Recognition System using Good Features and Multilayer Perceptron Network”, ICCSP, (2017).

      [22] Lara OD & Labrador MA, “A survey on human activity recognition using wearable sensors”, IEEE Communications Surveys & Tutorials, Vol.15, No.3, (2013), pp.1192-1209.

      [23] Vrigkas M, Nikou C & Kakadiaris IA, “A Review of Human Activity Recognition Methods”, Frontiers in Robotics and AI, Vol.2, (2015).

      [24] Kassimbekova B, Tulekova G & Korvyakov V, “Problems of development of aesthetic culture at teenagers by means of the Kazakh decorative and applied arts”, Opción, Año 33, (2018), pp.170-186.

      [25] Pallarès Piquer M & Chiva Bartoll O, “La teoría de la educación desde la filosofía de Xavier Zubiri” [The theory of education from the philosophy of Xavier Zubiri], Opción, Año 33, No.82, (2017), pp.91-113.

  • How to Cite

    Moses Dass Alex, P., Ravikumar, A., Selvaraj, J., & Sahayadhas, A. (2018). Research on Human Activity Identification Based on Image Processing and Artificial Intelligence. International Journal of Engineering & Technology, 7(3.27), 174-178. https://doi.org/10.14419/ijet.v7i3.27.17754