Review on 3D Facial Animation Techniques

  • Authors

    • Noor Adibah Najihah Mat Noor
    • Norhaida Mohd Suaib
    • Muhammad Anwar Ahmad
    • Ibrahim Ahmad
  • DOI: https://doi.org/10.14419/ijet.v7i4.36.28996
  • Keywords: facial animation, graphical visualization, facial modeling, data acquirement
  • Generating facial animation has long been a challenge in the graphical visualization field. Numerous efforts have been made to achieve high realism in facial animation. This paper surveys techniques applied in facial animation, targeting realistic results. We discuss facial modeling techniques from different viewpoints: geometric-based manipulation (which can be further categorized into interpolation, parameterization, muscle-based and pseudo-muscle-based models) and facial animation techniques involving speech-driven, image-based and data-captured approaches. The paper summarizes and describes the related theories and the strengths and weaknesses of each technique.
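The geometric interpolation family surveyed above (e.g. the blend-shape work in [4] and [40]) reduces, in its simplest linear form, to a weighted sum of per-vertex offsets between a neutral mesh and sculpted target shapes. A minimal sketch of that idea (the meshes, weights, and function name here are invented for illustration, not taken from any cited system):

```python
import numpy as np

def blend_shapes(neutral, targets, weights):
    """Linear blend-shape interpolation: start from the neutral mesh and
    add a weighted sum of per-target vertex offsets (delta blendshapes)."""
    neutral = np.asarray(neutral, dtype=float)
    result = neutral.copy()
    for target, w in zip(targets, weights):
        result += w * (np.asarray(target, dtype=float) - neutral)
    return result

# Toy 2-vertex "mesh" (hypothetical data): a neutral pose and one "smile" target.
neutral = [[0.0, 0.0], [1.0, 0.0]]
smile = [[0.0, 0.5], [1.0, 0.5]]

# A weight of 0.5 moves every vertex halfway toward the smile target (y = 0.25).
half_smile = blend_shapes(neutral, [smile], [0.5])
print(half_smile)
```

With several targets (smile, brow raise, jaw open, ...) the same weighted sum produces mixed expressions, which is why interference between overlapping targets, as addressed in [40], becomes an issue.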

  • References

      [1] Parke, F. I. (1972). “Computer generated animation of faces,” in Proc. ACM Annual Conference, vol. 1, pp. 451-457.

      [2] Arai, K., Kurihara, T., & Anjyo, K. (1996). “Bilinear interpolation for facial expression and metamorphosis in real time animation,” The Visual Computer, pp. 105-116.

      [3] Ping, H. Y., Abdullah, L. N., Sulaiman, P. S., & Halin, A. A. (2013). “Computer facial animation: A review,” International Journal of Computer Theory and Engineering, vol. 5, no. 4, pp. 658-662.

      [4] Acquaah, K., Agada, R., & Yan, J. (2015). “Example-based facial animation for blend shape interpolation,” 2015 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT), pp. 1-7.

      [5] Li, L., Liu, Y., & Zhang, H. (2012). “A survey of computer facial animation techniques,” 2012 International Conference on Computer Science and Electronics Engineering, vol. 3, pp. 434-438.

      [6] Cohen, M. M., & Massaro, D. W. (1993). “Modeling coarticulation in synthetic visual speech,” in Models and Techniques in Computer Animation, Springer, Tokyo, pp. 139-156.

      [7] Radovan, M., & Pretorius, L. (2006). “Facial animation in a nutshell: past, present and future,” in Proc. SAICSIT 2006, pp. 71-79.

      [8] Parke, F. I. (1974). “A parametric model for human faces,” Ph.D. dissertation, University of Utah.

      [9] Waters, K., & Frisbie, J. (1995). “A coordinated muscle model for speech animation,” Graphics Interface, pp. 163-170.

      [10] Du, Y., & Lin, X. (2008). “Mapping emotional status to facial expressions,” Department of Computer Science and Technology, Tsinghua University.

      [11] Arya, G., Kumar, K. S., & Rajasree, R. (2014). “Synthesize of emotional facial expressions through manipulating facial parameters,” 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), pp. 911-916.

      [12] Lee, Y., Terzopoulos, D., & Waters, K. (1995). “Realistic modeling for facial animation,” in Proc. SIGGRAPH '95, pp. 55-62.

      [13] Choe, B., Lee, H., & Ko, H. (2001). “Performance-driven muscle-based facial animation,” in Proc. Computer Animation, vol. 12, pp. 67-79.

      [14] Platt, S. M. (1985). A Structural Model of the Human Face, Ph.D. thesis, University of Pennsylvania.

      [15] Waters, K. (1987). “A muscle model for animating three-dimensional facial expression,” in M. C. Stone (ed.), Computer Graphics (SIGGRAPH '87 Proceedings), vol. 21, pp. 17-24.

      [16] Noh, J., & Neumann, U. (1998). “A survey of facial modeling and animation techniques,” Technical Report 99-705, University of Southern California.

      [17] Wu, Y., Magnenat-Thalmann, N., & Thalmann, D. (1994). “A plastic-visco-elastic model for wrinkles in facial animation and skin aging,” in Proc. 2nd Pacific Conference on Computer Graphics and Applications (Pacific Graphics '94).

      [18] Kubo, H., Maejima, A., & Morishima, S. (2010). “Facial animation reflecting personal characteristics by automatic head modeling and facial muscle adjustment,” 2010 10th International Symposium on Communications and Information Technologies, pp. 7-12.

      [19] Platt, S. M., & Badler, N. I. (1981). “Animating facial expression,” Computer Graphics, pp. 245-252.

      [20] Ekman, P., & Friesen, W. V. (1978). Facial Action Coding System. Consulting Psychologists Press, Palo Alto, CA.

      [21] Armstrong, E. (2011). “Articulation: Facial muscles,” Journey of the Voice. http://www.yorku.ca/earmstro/journey/facial.html (accessed 2 December 2011).

      [22] Waters, K. (1987). “A muscle model for animating three-dimensional facial expressions,” Computer Graphics, vol. 21, no. 4, pp. 17-24.

      [23] Magnenat-Thalmann, N., Primeau, E., & Thalmann, D. (1988). “Abstract muscle action procedures for human face animation,” The Visual Computer, vol. 3, no. 5, pp. 290-297.

      [24] Viaud, M. L., & Yahia, H. (1992). “Facial animation with wrinkles,” in D. Forsey and G. Hegron (eds.), Proc. Third Eurographics Workshop on Animation and Simulation.

      [25] Wang, C. L. Y., & Forsey, D. R. (1994). “Langwidere: A new facial animation system,” in Proc. Computer Animation, pp. 59-68.

      [26] Singh, K., & Fiume, E. (1998). “Wires: A geometric deformation technique,” SIGGRAPH '98 Proceedings, pp. 405-414.

      [27] Zhang, Y., Prakash, E. C., & Sung, E. (2004). “A new physical model with multilayer architecture for facial expression animation using dynamic adaptive mesh.”

      [28] Lewis, J. P., Cordner, M., & Fong, N. (2000). “Pose space deformation: A unified approach to shape interpolation and skeleton-driven deformation,” in Proc. 27th Annual Conference on Computer Graphics and Interactive Techniques, pp. 165-172.

      [29] Parke, F. I. (1975). “A model for human faces that allows speech synchronized animation,” Computers and Graphics, vol. 1, no. 1, pp. 1-4.

      [30] Lewis, J. P., & Parke, F. I. (1987). “Automated lip-synch and speech synthesis for character animation,” Proc. CHI, pp. 143-147.

      [31] Massaro, D. W., & Cohen, M. M. (1990). “Perception of synthesized audible and visible speech,” Psychological Science, vol. 1, pp. 55-63.

      [32] Pelachaud, C., Badler, N. I., & Steedman, M. (1991). “Linguistic issues in facial animation,” in Proc. Computer Animation, Tokyo, pp. 15-30; Pelachaud, C., Badler, N. I., & Steedman, M. (1996). “Generating facial expressions for speech,” Cognitive Science, vol. 20, no. 1, pp. 1-46.

      [33] Cao, Y., Faloutsos, P., & Pighin, F. (2005). “Expressive speech-driven facial animation,” ACM Trans. on Graphics, vol. 24, pp. 1283-1302.

      [34] Chen, Y. M., Huang, F. C., Guan, S. H., & Chen, B. Y. (2012). “Animating lip-sync characters with dominated animeme models,” IEEE Trans. on Circuits and Systems for Video Technology, pp. 1344-1353.

      [35] Yehia, H., Rubin, P., & Vatikiotis-Bateson, E. (1998). “Quantitative association of vocal-tract and facial behavior,” Speech Communication, vol. 26, no. 1-2, pp. 23-43.

      [36] Pighin, F., Hecker, J., Lischinski, D., Szeliski, R., & Salesin, D. (1998). “Synthesizing realistic facial expressions from photographs,” SIGGRAPH 98 Conference Proceedings, pp. 75-84.

      [37] Pighin, F. (1999). “Modeling and animating realistic faces from images,” Ph.D. dissertation.

      [38] Bregler, C., Covell, M., & Slaney, M. (1997). “Video rewrite: Driving visual speech with audio,” ACM SIGGRAPH, pp. 353-360.

      [39] Borshukov, G., Piponi, D., Larsen, O., Lewis, J. P., & Lietz, C. T. (2005). “Universal capture - image-based facial animation for ‘The Matrix Reloaded’,” ACM SIGGRAPH 2005 Courses, July 31-Aug. 4, 2005.

      [40] Lewis, J. P., Mooser, J., Deng, Z., & Neumann, U. (2005). “Reducing blendshape interference by selected motion attenuation,” in Proc. SIGGRAPH 2005 Symposium on Interactive 3D Graphics and Games, pp. 25-29.

      [41] Deng, Z., Chiang, P. Y., Fox, P., & Neumann, U. (2006). “Animating blendshape faces by cross-mapping motion capture data,” ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, pp. 43-48.

      [42] Kasat, D., Jain, S., & Thakare, V. (2014). “A survey of face morphing techniques,” IJCA RTINFOSEC, pp. 14-18.

      [43] Beier, T., & Neely, S. (1992). “Feature-based image metamorphosis,” Computer Graphics (SIGGRAPH '92 Proceedings), vol. 26, pp. 35-42.

      [44] Liang, Y. (2009). “Image based face replacement in video,” Master's thesis, CSIE Department, National Taiwan University.

      [45] Cheng, Y. T., Tzeng, V., Liang, Y., Wang, C. C., Chen, B. Y., Chuang, Y. Y., & Ouhyoung, M. (2009). “3D-model-based face replacement in video,” in SIGGRAPH 2009 Posters, ACM.

      [46] Min, F., Sang, N., & Wang, Z. (2010). “Automatic face replacement in video based on 2D morphable model,” in Proc. 20th International Conference on Pattern Recognition (ICPR '10), IEEE Computer Society, Washington, DC, USA, pp. 2250-2253.

      [47] Dale, K., Sunkavalli, K., Johnson, M. K., Vlasic, D., Matusik, W., & Pfister, H. (2011). “Video face replacement,” ACM Transactions on Graphics (Proc. SIGGRAPH Asia), vol. 30, no. 6.

      [48] Niswar, A., Ong, E. P., & Huang, Z. (2012). “Face replacement in video from a single image,” in SIGGRAPH Asia 2012 Posters, ACM.

      [49] Afifi, M., Hussain, K. F., Ibrahim, H. M., & Omar, N. M. (2014, December). “Video face replacement system using a modified Poisson blending technique,” in 2014 International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), IEEE, pp. 205-210.

      [50] Parke, F. I., & Waters, K. (2008). Computer Facial Animation. Wellesley, MA: A K Peters Ltd, pp. 85-362.

      [51] Williams, L. (1990). “Performance-driven facial animation,” in Proc. 17th Annual Conference on Computer Graphics and Interactive Techniques, pp. 235-242.

      [52] Guenter, B., Grimm, C., Wood, D., Malvar, H., & Pighin, F. (1998). “Making faces,” in Proc. SIGGRAPH 1998, pp. 55-66.

      [53] Kouadio, C., Poulin, P., & Lachapelle, P. (1998). “Real-time facial animation based upon a bank of 3D facial expressions,” in Proc. Computer Animation, pp. 128-136.

      [54] Arif, M., Khan, M. A., & Kamal, A. (2017). “Modeling and compression of motion capture data,” 2017 Learning and Technology Conference (L&T) - The MakerSpace: from Imagining to Making!, pp. 7-13.

      [55] Unuma, M., Anjyo, K., & Takeuchi, R. (1995). “Fourier principles for emotion-based human figure animation,” in Proc. SIGGRAPH 95, pp. 91-96.

      [56] Chai, J., Xiao, J., & Hodgins, J. (2003). “Vision-based control of 3D facial animation,” in Proc. ACM SIGGRAPH 2003, pp. 193-206.

      [57] Wang, Y., Huang, X., Lee, C. S., Zhang, S., Li, Z., Samaras, D., Metaxas, D., Elgammal, A., & Huang, P. (2004). “High resolution acquisition, learning and transfer of dynamic 3-D facial expressions,” Computer Graphics Forum, vol. 23, no. 3, pp. 677-686.

      [58] Wang, J., Liu, Z., Wu, Y., & Yuan, J. (2012). “Mining actionlet ensemble for action recognition with depth cameras,” in International Conference on Computer Vision and Pattern Recognition (CVPR 2012), IEEE Computer Society, pp. 1290-1297.

      [59] Chen, X., & Koskela, M. (2013). “Classification of RGB-D and motion capture sequences using extreme learning machine,” Image Analysis, pp. 640-651.

      [60] Sedmidubsky, J., Valcik, J., & Zezula, P. (2013). “A key-pose similarity algorithm for motion data retrieval,” in Advanced Concepts for Intelligent Vision Systems (ACIVS 2013), Springer, vol. 8192, pp. 669-681.

      [61] Poppe, R., Van Der Zee, S., Heylen, D. J., & Taylor, P. (2014). “AMAB: Automated measurement and analysis of body motion,” Behavior Research Methods, vol. 46, no. 3, pp. 625-633.

      [62] Thanh, T. T., Chen, F., Kotani, K., & Le, B. (2013). “Automatic extraction of semantic action features,” in International Conference on Signal-Image Technology and Internet-Based Systems (SITIS 2013), pp. 148-155.

      [63] Zanfir, M., Leordeanu, M., & Sminchisescu, C. (2013). “The moving pose: An efficient 3D kinematics descriptor for low-latency action recognition and detection,” in International Conference on Computer Vision (ICCV 2013), pp. 2752-2759.

      [64] Liu, M., Liu, H., & Chen, C. (2017). “Enhanced skeleton visualization for view invariant human action recognition,” Pattern Recognition, vol. 68, pp. 346-362.

      [65] Wang, P., Li, Z., Hou, Y., & Li, W. (2016). “Action recognition based on joint trajectory maps using convolutional neural networks,” in Proc. 2016 ACM Multimedia Conference, pp. 102-106.

      [66] Weise, T., Bouaziz, S., Li, H., & Pauly, M. (2011). “Realtime performance-based facial animation,” ACM Transactions on Graphics, vol. 30, no. 4.

      [67] Chuang, E. S., & Bregler, C. (2004). Analysis, Synthesis, and Retargeting of Facial Expressions. Doctoral dissertation, Stanford University.

      [68] Lee, Y., Terzopoulos, D., & Waters, K. (1995, September). “Realistic modeling for facial animation,” in Proc. 22nd Annual Conference on Computer Graphics and Interactive Techniques, ACM, pp. 55-62.

      [69] Parke, F. I., & Waters, K. (2008). Computer Facial Animation. Wellesley, MA: A K Peters Ltd, pp. 85-362.

      [70] Huang, Y., Fan, Y., Liu, W., & Wang, L. (2014, July). “3D human face modeling for facial animation using regional adaptation,” in 2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP), pp. 394-398.

      [71] Fratarcangeli, M. (2013). Computational Models for Animating 3D Virtual Faces. Doctoral dissertation, Linköping University Electronic Press.

      [72] Gunanto, S. G., Hariadi, M., & Yuniarno, E. M. (2016, October). “Computer facial animation with synthesize marker on 3D faces surface,” in International Conference of Industrial, Mechanical, Electrical, and Chemical Engineering (ICIMECE), IEEE, pp. 260-263.

  • How to Cite

    Adibah Najihah Mat Noor, N., Mohd Suaib, N., Anwar Ahmad, M., & Ahmad, I. (2018). Review on 3D Facial Animation Techniques. International Journal of Engineering & Technology, 7(4.36), 1406-1412. https://doi.org/10.14419/ijet.v7i4.36.28996