Joint graph regularization based semantic analysis for cross-media retrieval: a systematic review

  • Authors

    • Monelli Ayyavaraiah
• Bondu Venkateswarlu
    2018-03-18
    https://doi.org/10.14419/ijet.v7i2.7.10592
  • Keywords

    Cross-Media Retrieval, Joint Graph Regularization, MAP, Heterogeneous Data, Single Media Retrieval.
  • Abstract

    Heterogeneous data are growing rapidly on the internet, and most of this data consists of audio, video, text, and images. Searching such large collections for the required data is difficult and time-consuming. Single-media retrieval can fetch the needed data from a large dataset, but it has an inherent drawback: it returns results only in the medium of the query, so a text query yields only text results. Users therefore demand cross-media retrieval, which answers a query in one medium with relevant results from the other media and so gives them richer information about their queries. Measuring similarity between heterogeneous data, however, is a complex problem, and many studies have tackled cross-media retrieval with different methods and reported different results. The aim of this review is to analyze cross-media retrieval techniques based on joint graph regularization (JGR) in order to understand the various approaches. Most of the surveyed studies evaluate their methods using MAP, precision, and recall.
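    Since the abstract invokes joint graph regularization (JGR) and evaluation by MAP, precision, and recall without defining them, a minimal sketch may help fix ideas. The Python snippet below is illustrative only and is not the formulation of any surveyed paper: it fuses per-media k-NN graphs into one joint graph, learns a graph-Laplacian-regularized linear projection per medium into a shared semantic (label) space, and scores a toy image-to-text retrieval run with MAP. All names (knn_graph, jgr_projection, and so on), the closed-form objective, and the synthetic data are assumptions chosen for brevity.

```python
import numpy as np

def knn_graph(X, k=5):
    """Symmetric k-NN affinity graph over the rows of X (cosine similarity)."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)            # exclude self-matches from the top-k
    W = np.zeros_like(S)
    idx = np.argsort(-S, axis=1)[:, :k]     # k most similar items per row
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = S[rows, idx.ravel()]
    return np.clip(np.maximum(W, W.T), 0.0, None)  # symmetrise, keep weights >= 0

def jgr_projection(X, Y, L, alpha=0.1, beta=0.01):
    """Hypothetical graph-regularized ridge regression, solved in closed form:
    min_P ||XP - Y||^2 + alpha * tr((XP)^T L (XP)) + beta * ||P||^2."""
    d = X.shape[1]
    A = X.T @ X + alpha * (X.T @ L @ X) + beta * np.eye(d)
    return np.linalg.solve(A, X.T @ Y)

def average_precision(ranked_relevance):
    """AP of one ranked 0/1 relevance list; MAP is the mean of AP over queries."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)  # precision at each relevant position
    return float(np.mean(precisions)) if precisions else 0.0

# Toy data: 60 items seen as both an image and a text, 3 semantic classes.
rng = np.random.default_rng(0)
n, c = 60, 3
labels = rng.integers(0, c, n)
Y = np.eye(c)[labels]                                  # one-hot semantic labels
X_img = Y @ rng.normal(size=(c, 40)) + 0.3 * rng.normal(size=(n, 40))
X_txt = Y @ rng.normal(size=(c, 20)) + 0.3 * rng.normal(size=(n, 20))

# Joint graph: fuse the two single-media graphs, then take one shared Laplacian.
W = 0.5 * (knn_graph(X_img) + knn_graph(X_txt))
L = np.diag(W.sum(axis=1)) - W

P_img = jgr_projection(X_img, Y, L)
P_txt = jgr_projection(X_txt, Y, L)
F_img, F_txt = X_img @ P_img, X_txt @ P_txt            # common semantic space

# Image->text retrieval: rank all texts for each image query, score with MAP.
aps = []
for q in range(n):
    order = np.argsort(-(F_txt @ F_img[q]))
    aps.append(average_precision((labels[order] == labels[q]).astype(int)))
print(f"image->text MAP: {np.mean(aps):.3f}")
```

    In this sketch the joint graph is simply the average of the two single-media k-NN graphs; the surveyed JGR methods differ chiefly in how that joint graph is built (e.g., patch-level links or modality-dependent graphs) and in which regularization terms accompany it.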

  • References

    [1] O. Allani, H.B. Zghal, N. Mellouli, and H. Akdag, "Pattern graph-based image retrieval system combining semantic and visual features", Multimedia Tools and Applications, pp.1-30, 2017.

    [2] Y. Yang, F. Nie, D. Xu, J. Luo, Y. Zhuang, and Y. Pan, "A multimedia retrieval framework based on semi-supervised ranking and relevance feedback", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 4, pp.723-742, 2012.

    [3] Y. Yang, Y.T. Zhuang, F. Wu, and Y.H. Pan, "Harmonizing hierarchical manifolds for multimedia document semantics understanding and cross-media retrieval", IEEE Transactions on Multimedia, vol. 10, no. 3, pp.437-446, 2008.

    [4] Y.T. Zhuang, Y. Yang, and F. Wu, "Mining semantic correlation of heterogeneous multimedia data for cross-media retrieval", IEEE Transactions on Multimedia, vol. 10, no. 2, pp.221-229, 2008.

    [5] M. Li, L. Li, and F. Nie, "Ranking with adaptive neighbors", Tsinghua Science and Technology, vol. 22, no. 6, pp.733-738, 2017.

    [6] Y. Peng, X. Zhai, Y. Zhao, and X. Huang, "Semi-supervised cross-media feature learning with unified patch graph regularization", IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 3, pp.583-596, 2016.

    [7] Z. Pan, W. Chen, M. Zhang, J. Liu, and G. Wu, "Virtual reality in the digital Olympic museum", IEEE Computer Graphics and Applications, vol. 29, no. 5, 2009.

    [8] R. Ren, and J. Collomosse, "Visual sentences for pose retrieval over low-resolution cross-media dance collections", IEEE Transactions on Multimedia, vol. 14, no. 6, pp.1652-1661, 2012.

    [9] X. Zhai, Y. Peng, and J. Xiao, "Learning cross-media joint representation with sparse and semisupervised regularization", IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 6, pp.965-978, 2014.

    [10] F. Wu, X. Jiang, X. Li, S. Tang, W. Lu, Z. Zhang, and Y. Zhuang, "Cross-modal learning to rank via latent joint representation", IEEE Transactions on Image Processing, vol. 24, no. 5, pp.1497-1509, 2015.

    [11] Y.X. Peng, W.W. Zhu, Y. Zhao, C.S. Xu, Q.M. Huang, H.Q. Lu, Q.H. Zheng, T.J. Huang, and W. Gao, "Cross-media analysis and reasoning: advances and directions", Frontiers of Information Technology & Electronic Engineering, vol. 18, no. 1, pp.44-57, 2017.

    [12] F. Sun, Y. Xu, and J. Zhou, "Active learning SVM with regularization path for image classification", Multimedia Tools and Applications, vol. 75, no. 3, pp.1427-1442, 2016.

    [13] S. Wang, P. Pan, Y. Lu, and L. Xie, "Improving cross-modal and multi-modal retrieval combining content and semantics similarities with probabilistic model", Multimedia Tools and Applications, vol. 74, no. 6, pp.2009-2032, 2015.

    [14] P. Arulmozhi, and S. Abirami, "A comparative study of hash based approximate nearest neighbor learning and its application in image retrieval", Artificial Intelligence Review, pp.1-33, 2017.

    [15] J. Qi, X. Huang, and Y. Peng, "Cross-media similarity metric learning with unified deep networks", Multimedia Tools and Applications, pp.1-19, 2017.

    [16] X. Zhai, Y. Peng, and J. Xiao, "Effective heterogeneous similarity measure with nearest neighbors for cross-media retrieval", Advances in Multimedia Modeling, pp.312-322, 2012.

    [17] X. Zhai, Y. Peng, and J. Xiao, "Cross-media retrieval by intra-media and inter-media correlation mining", Multimedia Systems, vol. 19, no. 5, pp.395-406, 2013.

    [18] B. Lu, G.R. Wang, and Y. Yuan, "A novel approach towards large scale cross-media retrieval", Journal of Computer Science and Technology, vol. 27, no. 6, pp.1140-1149, 2012.

    [19] B. Lu, G. Wang, and Y. Yuan, "Towards large scale cross-media retrieval via modeling heterogeneous information and exploring an efficient indexing scheme", In Computational Visual Media, pp. 202-209, Springer, Berlin, Heidelberg, 2012.

    [20] H. Zhang, Y.Y. Wang, H. Pan, and F. Wu, "Understanding visual-auditory correlation from heterogeneous features for cross-media retrieval", Journal of Zhejiang University SCIENCE A, vol. 9, no. 2, pp.241-249, 2008.

    [21] H. Zhang, and J. Weng, "Measuring multi-modality similarities via subspace learning for cross-media retrieval", In Pacific-Rim Conference on Multimedia, pp. 979-988, Springer, Berlin, Heidelberg, November 2006.

    [22] K. Liu, S. Wei, Y. Zhao, Z. Zhu, Y. Wei, and C. Xu, "Accumulated reconstruction error vector (AREV): a semantic representation for cross-media retrieval", Multimedia Tools and Applications, vol. 74, no. 2, pp.561-576, 2015.

    [23] P. Bellini, D. Cenni, and P. Nesi, "Optimization of information retrieval for cross media contents in a best practice network", International Journal of Multimedia Information Retrieval, vol. 3, no. 3, pp.147-159, 2014.

    [24] D. Damm, C. Fremerey, V. Thomas, M. Clausen, F. Kurth, and M. Müller, "A digital library framework for heterogeneous music collections: from document acquisition to cross-modal interaction", International Journal on Digital Libraries, pp.1-19, 2012.

    [25] J. Yan, H. Zhang, J. Sun, Q. Wang, P. Guo, L. Meng, W. Wan, and X. Dong, "Joint graph regularization based modality-dependent cross-media retrieval", Multimedia Tools and Applications, pp.1-19, 2017.

    [26] Y. Zhuang, Q. Li, and L. Chen, "A Unified Indexing Structure for Efficient Cross-Media Retrieval", In DASFAA, pp. 677-692, April 2009.

    [27] Y. Zhuang, and Y. Yang, "Boosting cross-media retrieval by learning with positive and negative examples", In International Conference on Multimedia Modeling, pp. 165-174, Springer, Berlin, Heidelberg, January 2007.

    [28] K. Song, Y. Tian, and T. Huang, "Improving the image retrieval results via topic coverage graph", Advances in Multimedia Information Processing-PCM 2006, pp.193-200, 2006.

    [29] H. Zhang, X. Gao, P. Wu, and X. Xu, "A cross-media distance metric learning framework based on multi-view correlation mining and matching", World Wide Web, vol. 19, no. 2, pp.181-197, 2016.

    [30] A.K. Smith, K.H. Cheung, K.Y. Yip, M. Schultz, and M.B. Gerstein, "LinkHub: a Semantic Web system that facilitates cross-database queries and information retrieval in proteomics", BMC Bioinformatics, vol. 8, no. 3, p.S5, 2007.

    [31] Y. Peng, X. Zhai, Y. Zhao, and X. Huang, "Semi-supervised cross-media feature learning with unified patch graph regularization", IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, no. 3, pp.583-596, 2016.

    [32] X. Zhai, Y. Peng, and J. Xiao, "Heterogeneous Metric Learning with Joint Graph Regularization for Cross-Media Retrieval", In AAAI, June 2013.

    [33] L. Xie, L. Zhu, and G. Chen, "Unsupervised multi-graph cross-modal hashing for large-scale multimedia retrieval", Multimedia Tools and Applications, vol. 75, no. 15, pp.9185-9204, 2016.

    [34] J. Deng, L. Du, and Y.D. Shen, "Heterogeneous Metric Learning for Cross-Modal Multimedia Retrieval", In WISE (1), pp. 43-56, October 2013.

    [35] J. Wang, G. Li, P. Pan, and X. Zhao, "Semi-supervised semantic factorization hashing for fast cross-modal retrieval", Multimedia Tools and Applications, pp.1-19, 2017.

    [36] X. Cao, H. Zhang, X. Guo, S. Liu, and X. Chen, "Image retrieval and ranking via consistently reconstructing multi-attribute queries", In European Conference on Computer Vision, pp. 569-583, Springer, Cham, September 2014.

    [37] L. Huang, and Y. Peng, "Cross-Media Retrieval via Semantic Entity Projection", In International Conference on Multimedia Modeling, pp. 276-288, Springer, Cham, January 2016.

    [38] J. Qi, X. Huang, and Y. Peng, "Cross-Media Retrieval by Multimodal Representation Fusion with Deep Networks", In International Forum of Digital TV and Wireless Multimedia Communication, pp. 218-227, Springer, Singapore, November 2016.

    [39] L. Xie, P. Pan, Y. Lu, and S. Jiang, "Cross-modal self-taught learning for image retrieval", In International Conference on Multimedia Modeling, pp. 257-268, Springer, Cham, January 2015.

    [40] K.I. Kim, J. Tompkin, M. Theobald, J. Kautz, and C. Theobalt, "Match graph construction for large image databases", In European Conference on Computer Vision, pp. 272-285, Springer, Berlin, Heidelberg, October 2012.

    [41] X. Zhai, Y. Peng, and J. Xiao, "Learning cross-media joint representation with sparse and semisupervised regularization", IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 6, pp.965-978, 2014.

  • How to Cite

Ayyavaraiah, M., & Venkateswarlu, B. (2018). Joint graph regularization based semantic analysis for cross-media retrieval: a systematic review. International Journal of Engineering & Technology, 7(2.7), 257-261. https://doi.org/10.14419/ijet.v7i2.7.10592