Multi-view videos plus depth assessment using novel saliency detection method

  • Authors

    • M. Sowjanya
    • B. Ramu
    • S. Krishna Priya
    https://doi.org/10.14419/ijet.v7i4.21599
  • Abstract

    Multi-view video plus depth (MVD) is a popular 3D video representation in which per-pixel depth information is exploited to generate additional views and provide a 3D experience. Quality assessment of MVD data is of paramount importance, since recent research shows that existing 2D quality metrics are not suitable for MVD. This paper focuses on depth quality assessment and presents a novel algorithm to estimate the distortion in depth videos induced by compression. A novel saliency detection model is introduced that utilizes low-level features obtained in the Stationary Wavelet Transform (SWT) domain. First, the wavelet transform is employed to create multi-scale feature maps that capture different features, from edges to texture. Then, we propose a computational model that derives the saliency map from these features. The model modulates local contrast at each location by its global saliency, computed from the likelihood of the features, and also incorporates local centre-surround differences and global contrast into the final saliency map. Experimental evaluation shows promising results, with the proposed model outperforming relevant state-of-the-art saliency detection models.
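    The pipeline described above (undecimated wavelet feature maps, then a saliency map that weights local contrast by the global rarity of feature responses) might be sketched roughly as follows. This is an illustrative approximation only: the à trous box filtering, the histogram-based likelihood estimate, and the additive combination of terms are assumptions for the sketch, not the authors' exact formulation.

    ```python
    import numpy as np

    def swt_detail_maps(img, levels=3):
        """Stationary (à trous) multi-scale detail maps: at each level the
        smoothing kernel is dilated by 2**level and there is no downsampling,
        so every map keeps the input resolution.  The detail (approx - smooth)
        also acts as a simple centre-surround difference."""
        approx = img.astype(float)
        details = []
        for lvl in range(levels):
            step = 2 ** lvl
            shifted = [np.roll(np.roll(approx, dy * step, axis=0), dx * step, axis=1)
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            smooth = np.mean(shifted, axis=0)          # dilated 3x3 box filter
            details.append(np.abs(approx - smooth))     # high-frequency residue
            approx = smooth
        return details

    def rarity_weight(fmap, bins=16):
        """Global saliency of feature values: responses with low likelihood
        (rare feature values) receive a higher weight, following the
        likelihood idea in the abstract."""
        hist, edges = np.histogram(fmap, bins=bins)
        p = hist / hist.sum()
        idx = np.clip(np.digitize(fmap, edges[1:-1]), 0, bins - 1)
        return -np.log(p[idx] + 1e-12)                  # self-information

    def saliency_map(img, levels=3):
        details = swt_detail_maps(img, levels)
        # Local contrast modulated by its global, likelihood-based saliency.
        sal = sum(d * rarity_weight(d) for d in details)
        # Global contrast term: distance from the global mean intensity.
        sal += np.abs(img - img.mean())
        sal -= sal.min()
        return sal / (sal.max() + 1e-12)                # normalise to [0, 1]

    # Toy usage: a bright square on a dark background should pop out.
    img = np.zeros((64, 64))
    img[24:40, 24:40] = 1.0
    s = saliency_map(img)
    ```

    On the toy input, the square's edges produce strong, rare detail responses at several scales, so they dominate the normalised map, which is the qualitative behaviour a saliency detector should show.
    
    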


  • How to Cite

    Sowjanya, M., Ramu, B., & Priya, S. K. (2018). Multi-view videos plus depth assessment using novel saliency detection method. International Journal of Engineering & Technology, 7(4), 3340-3345. https://doi.org/10.14419/ijet.v7i4.21599

    Received date: 2018-11-25

    Accepted date: 2018-11-25