Interrelationship identification between humans from images using two class classifier

  • Authors

    • Amit Verma
    • T Meenpal
    • B Acharya
    2018-04-20
    https://doi.org/10.14419/ijet.v7i2.21.11823
  • Keywords

    Bag of words, SVM, confusion matrix, SURF, FAST
  • Abstract

    This paper proposes an automatic interrelationship identification algorithm for human beings in images. The image database contains two interrelationship classes: two people hugging and two people shaking hands. Feature detection and extraction are performed with the bag-of-words algorithm, using SURF and FAST as feature detectors. The extracted features are then fed to an SVM for classification. The classifier is tested against a set of test images for both feature detectors; finally, its accuracy is calculated and the confusion matrix is plotted.

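    The pipeline described in the abstract (local keypoint descriptors → visual-word histogram → two-class SVM) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the synthetic Gaussian descriptors, the 16-word vocabulary size, and scikit-learn's LinearSVC are all assumptions standing in for real SURF/FAST descriptors and whatever SVM configuration the paper actually used.

    ```python
    # Bag-of-visual-words + two-class SVM sketch. Synthetic descriptors
    # stand in for SURF/FAST keypoint features extracted from images.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)

    def fake_descriptors(center, n=40, dim=8):
        """Stand-in for the SURF/FAST descriptors of one image."""
        return center + 0.3 * rng.standard_normal((n, dim))

    # Two synthetic "interrelationship" classes: hugging (0) vs. handshaking (1).
    c0 = rng.standard_normal(8)
    c1 = rng.standard_normal(8) + 3.0
    images = [fake_descriptors(c0) for _ in range(10)] + \
             [fake_descriptors(c1) for _ in range(10)]
    labels = np.array([0] * 10 + [1] * 10)

    # 1) Build the visual vocabulary by clustering all descriptors (k-means).
    vocab = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(images))

    # 2) Encode each image as a normalized histogram of visual-word counts.
    def bow_histogram(desc):
        words = vocab.predict(desc)
        hist = np.bincount(words, minlength=16).astype(float)
        return hist / hist.sum()

    X = np.array([bow_histogram(d) for d in images])

    # 3) Train the two-class SVM on the histograms, then score it and
    #    compute the confusion matrix, as the abstract describes.
    clf = LinearSVC().fit(X, labels)
    accuracy = clf.score(X, labels)
    cm = confusion_matrix(labels, clf.predict(X))
    ```

    On real data, steps 1–3 would run on descriptors from a held-out test set rather than the training images, which is how the reported accuracy and confusion matrix would be obtained.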
  • How to Cite

    Verma, A., Meenpal, T., & Acharya, B. (2018). Interrelationship identification between humans from images using two class classifier. International Journal of Engineering & Technology, 7(2.21), 5-8. https://doi.org/10.14419/ijet.v7i2.21.11823

    Received date: 2018-04-21

    Accepted date: 2018-04-21

    Published date: 2018-04-20