Facial Action Units Analysis using Rule-Based Algorithm

  • Authors

    • Hamimah Ujir
    • Irwandi Hipiny
    • D.N.F. Awang Iskandar

  • Published

    2018-09-01

  • DOI

    https://doi.org/10.14419/ijet.v7i3.20.19167
  • Keywords

    facial action units, temporal analysis, facial expressions, dynamic analysis
  • Abstract

    Most work on quantifying facial deformation is based on action units (AUs) from the Facial Action Coding System (FACS), which describes facial expressions in terms of forty-six component movements. An AU corresponds to the movement of individual facial muscles. This paper presents a rule-based approach to classifying AUs that depends on certain facial features. The work covers only deformation of facial features for the posed Happy and Sad expressions obtained from the BU-4DFE database. Different studies refer to different combinations of AUs that form the Happy and Sad expressions. According to the FACS rules outlined in this work, an AU has more than one facial property that needs to be observed. An intensity comparison and analysis of the AUs involved in the Sad and Happy expressions are presented. Additionally, a dynamic analysis of the AUs is carried out to determine the temporal segments of an expression, i.e. the durations of the onset, apex and offset phases. Our findings show that AU15 (for the Sad expression) and AU12 (for the Happy expression) exhibit consistent facial feature deformation across all properties during the expression period, whereas for AU1 and AU4 the intensity of the properties differs during the expression period.
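    For readers who want a concrete picture of what such a rule-based analysis can look like, the sketch below is a minimal, hypothetical Python illustration, not the authors' implementation. The feature properties (lip_corner_pull, lip_corner_depress, inner_brow_raise, brow_lower), the thresholds, the single-property AU rules and the threshold-based onset/apex/offset segmentation are all assumptions introduced here for illustration; FACS-style rules typically involve several observed properties per AU.

    ```python
    # Minimal sketch (not the paper's implementation): rule-based AU intensity
    # scoring from per-frame facial feature measurements, plus a simple
    # onset/apex/offset segmentation of the resulting intensity curve.
    # Property names, thresholds and rules below are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class FrameFeatures:
        """Hypothetical per-frame, normalised facial feature displacements."""
        lip_corner_pull: float      # observed for AU12 (Happy)
        lip_corner_depress: float   # observed for AU15 (Sad)
        inner_brow_raise: float     # observed for AU1
        brow_lower: float           # observed for AU4

    # Illustrative rules: an AU is treated as active only if every observed
    # property exceeds its (assumed) threshold; its intensity is their mean.
    AU_RULES = {
        "AU12": [("lip_corner_pull", 0.20)],
        "AU15": [("lip_corner_depress", 0.20)],
        "AU1":  [("inner_brow_raise", 0.15)],
        "AU4":  [("brow_lower", 0.15)],
    }

    def au_intensity(frame: FrameFeatures, au: str) -> float:
        values = []
        for prop, threshold in AU_RULES[au]:
            value = getattr(frame, prop)
            if value < threshold:
                return 0.0          # one property below threshold -> AU not active
            values.append(value)
        return sum(values) / len(values)

    def temporal_segments(curve, active=0.1, apex_ratio=0.9):
        """Split an AU intensity curve into onset, apex and offset durations (frames)."""
        idx = [i for i, v in enumerate(curve) if v > active]
        if not idx:
            return {"onset": 0, "apex": 0, "offset": 0}
        start, end = idx[0], idx[-1]
        peak = max(curve[start:end + 1])
        apex = [i for i in range(start, end + 1) if curve[i] >= apex_ratio * peak]
        return {"onset": apex[0] - start,
                "apex": apex[-1] - apex[0] + 1,
                "offset": end - apex[-1]}

    # Toy Happy sequence: AU12 rises, plateaus and decays over eight frames.
    frames = [FrameFeatures(v, 0.0, 0.0, 0.0)
              for v in (0.0, 0.1, 0.3, 0.6, 0.6, 0.4, 0.1, 0.0)]
    au12_curve = [au_intensity(f, "AU12") for f in frames]
    print(au12_curve, temporal_segments(au12_curve))
    ```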

     

  • References

    1. [1] Berretti, S., Bimbo, A.D. and Pala, P. 2012. Real-time Expression Recognition from Dynamic Sequences of 3D Facial Scans. In the Proceedings of the 5th Eurographics Conference on 3D Object Retrieval (EG 3DOR'12), pp. 85-92.

      [2] Deng, Z. and Noh, J. 2008. Computer Facial Animation: A Survey. In Data-Driven 3D Facial Animation, London: Springer-Verlag, ch. 1, pp. 1-28.

      [3] Ekman, P. and Friesen, W. 1978. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, CA.

      [4] Fasel, L.R., Fortenberry, B. and Movellan, J.R. 2005. A Generative Framework for Real-Time Object Detection and Classification. Computer Vision and Image Understanding, Vol. 98, No. 1, pp. 181-210.

      [5] Gökberk, B., İrfanoğlu, M. O. and Akarun, L., 2006. 3D Shape-based Face Representation and Feature Extraction for Face Recognition. Journal of Image and Vision Computing, 24(8), pp. 857–869.

      [6] Hess, U. and Kleck, R. 1990. Differentiating Emotion Elicited and Deliberate Emotional Facial Expressions. European Journal of Social Psychology, pp. 369-385.

      [7] Hjelmås, E. and Low, B.K. 2001. Face Detection: A Survey. Computer Vision and Image Understanding, pp. 236-274.

      [8] Jiang, B., Valstar, M., Martinez, B. and Pantic, M. 2011. A Dynamic Appearance Descriptor Approach to Facial Actions Temporal Modelling. IEEE Transactions on Cybernetics, Vol. 44, pp. 161-174.

      [9] Lien, J.J.-J., Kanade, T., Cohn, J.F. and Li, C.C. 2000. Detection, Tracking, and Classification of Action Units in Facial Expression. Journal of Robotics and Autonomous Systems, pp. 131-146.

      [10] Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z. and Matthews, I. 2010. The Extended Cohn-Kanade Dataset (CK+): A Complete Dataset for Action Unit and Emotion-Specified Expression. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 94-101.

      [11] Maalej, A., Ben Amor, B., Daoudi, M., Srivastava, A. and Berretti, S. 2010. Local 3D Shape Analysis for 3D Facial Expression Recognition. International Conference on Pattern Recognition, pp. 4129-4132.

      [12] Mahoor, M.H., Cadavid, S., Messinger, D.S. and Cohn, J.F. 2009. A Framework for Automated Measurement of the Intensity of Non-Posed Facial Action Units. Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 20-25 June 2009, pp. 74-80.

      [13] Pantic, M. and Rothkrantz, L.J.M. 2000. An Expert System for Recognition of Facial Actions and Their Intensity. Proceedings of the Twelfth Conference on Innovative Applications of Artificial Intelligence (IAAI-00), Austin, USA, August 2000, pp. 1026-1033.

      [14] Raouzaiou, A., Tsapatsoulis, N., Karpouzis, K. and Kollias, S. 2002. Parameterized Facial Expression Synthesis based on MPEG-4. EURASIP Journal on Applied Signal Processing, pp. 1021-1038.

      [15] Russell, J.A. and Bullock, M. 1986. Fuzzy Concepts and the Perception of Emotion in Facial Expressions. Social Cognition, 4(3), pp. 309-341.

      [16] Sandbach, G., Zafeiriou, S. and Pantic, M. 2012. Local Normal Binary Patterns for 3D Facial Action Unit Detection. Proceedings of the IEEE International Conference on Image Processing (ICIP 2012). Orlando, FL, USA, October 2012, pp. 1813 – 1816.

      [17] Savran, A., Sankur, B., Bilge, M. T., 2012. Comparative Evaluation of 3D versus 2D Modality for Automatic Detection of Facial Action Units. Pattern Recognition, 45(2), pp. 767-782.

      [18] Soyel, H. and Demirel, H. 2007. Facial Expression Recognition using 3D Facial Feature Distances. Image Analysis and Recognition, LNCS, Springer, pp. 831-838.

      [19] Suja, P., Kalyan Kumar V.P. and Tripathi, S. 2015. Dynamic Facial Emotion Recognition from 4D Video Sequences. The Eighth International Conference on Contemporary Computing (IC3), pp. 348-353.

      [20] Tian, Y., Kanade, T. and Cohn, J.F. 2001. Recognizing Action Units for Facial Expression Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), pp. 97-115.

      [21] Ujir, H. 2013. 3D Facial Expression Classification Using a Statistical Model of Surface Normals and a Modular Approach, Ph.D. Dissertation, University of Birmingham, United Kingdom.

      [22] Ujir, H., Spann, M. and Hipiny, I. 2014. 3D Facial Expression Classification Using 3D Facial Surface Normals. The 8th International Conference on Robotic, Vision, Signal Processing & Power Applications (ROVISP), pp. 245-253.

      [23] Valstar, M. and Pantic, M. 2006. Fully Automatic Facial Action Unit Detection and Temporal Analysis. IEEE Conference on Computer Vision and Pattern Recognition (CVPR'06), pp. 149-157.

      [24] Velusamy, S., Kannan, H., Anand, B., Sharma, A. and Navathe, B. 2011. A Method to Infer Emotions from Facial Action Units. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2028-2031.

      [25] Wang, J., Yin, L., Wei, X. and Sun, Y. 2006. 3D Facial Expression Recognition Based on Primitive Surface Feature Distribution. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1399-1406.

      [26] Yin, L., Chen, X., Sun, Y., Worm, T. and Reale, M. 2008. A High-Resolution 3D Dynamic Facial Expression Database. The 8th International Conference on Automatic Face and Gesture Recognition.

      [27] Zhang, Y., Ji, Q., Zhu, Z. and Yi, B. 2008. Dynamic Facial Expression Analysis and Synthesis with MPEG-4 Facial Animation Parameters. IEEE Transactions on Circuits and Systems for Video Technology, 18(10), pp. 1383-1396.

  • How to Cite

    Ujir, H., Hipiny, I., & N.F. Awang Iskandar, D. (2018). Facial Action Units Analysis using Rule-Based Algorithm. International Journal of Engineering & Technology, 7(3.20), 284-290. https://doi.org/10.14419/ijet.v7i3.20.19167