Classification of Images on Furniture and Household Goods by Using Transfer Learning and Fine Tuning
2018-11-30 | https://doi.org/10.14419/ijet.v7i4.25.26933
Keywords: Convolutional neural networks (CNN), texture classification, transfer learning, computer vision.
Abstract
Automatic product recognition for online shoppers is a challenging task because, for the same product, pictures can be taken under different lighting intensities, angles, backgrounds, and levels of occlusion. This makes different fine-grained categories look very similar, and many general-purpose recognition systems in use nowadays cannot perceive such subtle differences between photos, even though these differences can be important for shopping decisions. In this paper, a novel approach based on deep learning and artificial neural networks (ANN) is proposed that accurately assigns category labels to images of furniture and household goods. This is done by classifying texture patterns, with the aim of helping to push the state of the art in automatic image classification. Within deep learning, transfer learning is used, in which two pre-trained convolutional neural network (CNN) models are retrained. The CNN models used in this experiment are VGG-16 and Inception V3. The experiment is carried out on a dataset taken from Kaggle, and classification is made among five items: bed, sofa, table, chair, and swivel chair. The experimental results are measured by performance metrics in terms of training accuracy, validation accuracy, training loss, and validation loss. The results demonstrate that the Inception V3 transfer-learning model, with an accuracy of 97.3%, outperforms VGG-16 and the ANN, which reach accuracies of 92% and 86%, respectively.
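To make the described pipeline concrete, below is a minimal sketch of the transfer-learning and fine-tuning setup the abstract outlines, assuming a Keras/TensorFlow workflow. The directory paths ("data/train", "data/val"), image size, optimizer, and epoch counts are illustrative assumptions, not the authors' actual configuration; only the choice of Inception V3 pre-trained weights, a new five-class head, and a retraining stage follow the abstract.

```python
# Sketch of transfer learning with Inception V3 for five furniture classes
# (bed, sofa, table, chair, swivel chair). Hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# Load Inception V3 pre-trained on ImageNet, without its classification head.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the convolutional base for the transfer-learning stage

# New classifier head for the five furniture categories.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Image generators; directory layout and preprocessing are hypothetical.
gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)
train = gen.flow_from_directory("data/train", target_size=(299, 299), batch_size=32)
val = gen.flow_from_directory("data/val", target_size=(299, 299), batch_size=32)

# Stage 1: transfer learning -- train only the new head on the frozen base.
model.fit(train, validation_data=val, epochs=5)

# Stage 2: fine-tuning -- unfreeze the base and retrain with a small learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train, validation_data=val, epochs=5)
```

The same head-then-fine-tune recipe would apply to VGG-16 by swapping the base model; the two-stage split keeps the pre-trained features intact while the randomly initialized head stabilizes.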
How to Cite
Das Bakshi, K., & Gagan Deep, D. (2018). Classification of Images on Furniture and Household Goods by Using Transfer Learning and Fine Tuning. International Journal of Engineering & Technology, 7(4.25), 250-255. https://doi.org/10.14419/ijet.v7i4.25.26933
Received date: 2019-01-31
Accepted date: 2019-01-31
Published date: 2018-11-30