Facial region detection robust to changing backgrounds

  • Authors

    • Seok-Woo Jang
    • Siwoo Byun

    Published: 2018-04-03
    DOI: https://doi.org/10.14419/ijet.v7i2.12.11028
  • Keywords

    Intelligent Robot, Facial Region, Dynamic Environment, Changing Background, Ground Truth, Skin Tone, Algorithm Performance
  • Abstract

    Background/Objectives: In recent years, many studies have been actively conducted on intelligent robots capable of providing human-friendly services. To enable natural interaction between humans and robots, mobile-robot-based technology is needed for robustly detecting human facial regions in dynamically changing real backgrounds.

    Methods/Statistical analysis: This paper proposes a method for detecting facial regions adaptively through mobile-robot-based monitoring of backgrounds in a dynamic real environment. In the proposed method, the camera-to-object distance and color changes in the object's background are monitored, and the skin color extraction algorithm best suited to the measured distance and color is applied. In the face detection step, if the searched range is valid, the most suitable skin color detection method is selected to detect facial regions.
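
    A rough, non-authoritative sketch of this selection step is given below; the paper itself provides no pseudocode, so the distance threshold, background-color categories, and detector names here are hypothetical placeholders rather than values taken from the paper.

    ```python
    # Illustrative sketch only: the thresholds, color categories, and detector
    # names below are hypothetical assumptions, not taken from the paper.

    def classify_background_color(mean_rgb):
        """Roughly label the dominant background color of the monitored scene."""
        r, g, b = mean_rgb
        if r > 180 and 130 < g < 200 and 130 < b < 200:
            return "pink-like"    # close to skin tone, the hardest case
        if r > 180 and g > 180 and b < 120:
            return "yellow-like"  # also close to skin tone
        return "neutral"

    def select_skin_detector(distance_cm, mean_rgb):
        """Pick the skin color detection method assumed to suit the measured
        camera-to-object distance and background color."""
        background = classify_background_color(mean_rgb)
        if distance_cm >= 320 or background in ("pink-like", "yellow-like"):
            # Long distances or skin-like backgrounds: fall back to a learned
            # (neural-network) skin model, which the paper reports as stable.
            return "neural_network_skin_model"
        # Otherwise a simple fixed-rule skin classifier is assumed to suffice.
        return "fixed_rule_skin_classifier"

    # Example: object at 150 cm against a neutral gray background.
    print(select_skin_detector(150, (128, 128, 128)))  # fixed_rule_skin_classifier
    ```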

    Findings: The experimental results show that the algorithms differ in performance depending on the distance and the background color. Overall, the algorithms using a neural network produced stable results. The Kismet-based algorithm achieved a good detection rate for the ground-truth part of the original image, but its skin color detection rate was strongly affected by pink and yellow background colors similar to skin tone, so its false detection rate for the background was considerably high. With regard to performance as a function of distance, as the camera-to-object distance approached 320 cm, the false detection rate for the background increased sharply. To analyze the performance of each skin color detection algorithm applied to face detection, we examined how much of the skin color in the original image was detected by each algorithm. The skin color detection rate was obtained by first establishing the ground truth for the skin in the original image and then counting the number of skin-color pixels detected by each algorithm. Here, the ground truth means the range of skin color in the original image that is to be detected.
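
    The measurement just described reduces to counting detected skin pixels against a ground-truth skin mask. A minimal sketch of that computation, assuming binary NumPy masks of equal size (the function name and the background false-detection metric are our own illustration, not code from the paper), follows.

    ```python
    import numpy as np

    def skin_detection_rates(ground_truth, detected):
        """Compare a binary ground-truth skin mask with a detection mask.

        ground_truth, detected: boolean arrays of the same shape, True where
        a pixel is (or is claimed to be) skin.
        Returns (detection_rate, false_detection_rate):
          detection_rate       - fraction of ground-truth skin pixels detected
          false_detection_rate - fraction of background pixels wrongly
                                 labeled as skin
        """
        ground_truth = ground_truth.astype(bool)
        detected = detected.astype(bool)
        skin_pixels = max(int(ground_truth.sum()), 1)
        background_pixels = max(int((~ground_truth).sum()), 1)
        detection_rate = int((detected & ground_truth).sum()) / skin_pixels
        false_detection_rate = int((detected & ~ground_truth).sum()) / background_pixels
        return detection_rate, false_detection_rate

    # Toy example: the left two columns of a 4x4 image are skin in the ground
    # truth, while the detector marks the left three columns (over-detection).
    gt = np.zeros((4, 4), dtype=bool); gt[:, :2] = True
    det = np.zeros((4, 4), dtype=bool); det[:, :3] = True
    print(skin_detection_rates(gt, det))  # (1.0, 0.5)
    ```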

    Improvements/Applications: We expect that the proposed approach of detecting facial regions in a dynamic real environment will be used in a variety of application areas related to computer vision and image processing.

  • How to Cite

    Jang, S.-W., & Byun, S. (2018). Facial region detection robust to changing backgrounds. International Journal of Engineering & Technology, 7(2.12), 25-26. https://doi.org/10.14419/ijet.v7i2.12.11028

    Received date: 2018-04-03

    Accepted date: 2018-04-03

    Published date: 2018-04-03