Performance comparisons of artificial neural network algorithms in facial expression recognition
Published: 2015-09-13 | DOI: https://doi.org/10.14419/ijet.v4i4.5069
Keywords: Fisher’s Linear Discriminant Function, Wavelet Gabor Filter, Artificial Neural Network.
Abstract
This paper presents methods for identifying facial expressions. The objective is to combine a texture-oriented feature extraction method with dimensionality reduction and to use the resulting features to train a Single-Layer Neural Network (SLN), a Back-Propagation Algorithm (BPA) network and a Cerebellar Model Articulation Controller (CMAC) for identifying facial expressions. The proposed methods are intelligent in the sense that they accommodate variations in facial expressions and therefore generalize better to untrained expressions, whereas conventional methods require the expressions to satisfy certain constraints. To achieve good detection accuracy, Gabor wavelets at different orientations are used to extract the texture of the facial expression. The high-dimensional texture features are then reduced with Fisher’s linear discriminant function, which transforms each feature vector into a two-dimensional vector used to train the proposed algorithms, thereby increasing accuracy. Six facial emotions are considered: angry, disgust, happy, sad, surprise and fear. Performance comparisons of the proposed algorithms are presented.
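As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below extracts Gabor texture features at several orientations, projects them to two dimensions with Fisher's linear discriminant, and trains a back-propagation network on the result. It assumes OpenCV and scikit-learn, uses synthetic stand-in images rather than a real expression dataset, and covers only the BPA variant; the SLN and CMAC classifiers would be trained on the same two-dimensional features.

import numpy as np
import cv2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

def gabor_features(gray, orientations=8, ksize=21):
    # Convolve the face image with Gabor kernels at several angles and
    # summarise each response by its mean and variance (texture descriptor).
    feats = []
    for theta in np.linspace(0.0, np.pi, orientations, endpoint=False):
        kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        response = cv2.filter2D(gray, cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.var()])
    return np.array(feats)

# Synthetic stand-in data: 60 random 64x64 "faces", 6 emotion classes
# (angry, disgust, happy, sad, surprise, fear); replace with a real dataset.
rng = np.random.default_rng(0)
images = rng.random((60, 64, 64)).astype(np.float32)
labels = np.repeat(np.arange(6), 10)

X = np.vstack([gabor_features(img) for img in images])                    # Gabor texture features
X2 = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, labels)  # Fisher projection to 2-D
bpa = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X2, labels)  # back-propagation net
print("training accuracy:", bpa.score(X2, labels))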
How to Cite
Tayfour Ahmed, A., Mohammed, A., & Yahia, M. (2015). Performance comparisons of artificial neural network algorithms in facial expression recognition. International Journal of Engineering & Technology, 4(4), 465-471. https://doi.org/10.14419/ijet.v4i4.5069
Received date: 2015-07-16
Accepted date: 2015-09-13
Published date: 2015-09-13