VGG16 for Plant Image Classification with Transfer Learning and Data Augmentation

  • Abstract


    This paper discusses the potential of applying the VGG16 architecture to plant classification. Flower images are used instead of leaf images, as in other plant recognition models, because leaves tend to be similar in shape and color across species; relying on leaf images as the sole feature for classifying species can therefore be disadvantageous. Previous work has demonstrated the effectiveness of transfer learning, dropout, and data augmentation in reducing overfitting in convolutional neural network models trained on limited image data. We have successfully built and trained a VGG16 model on 2,800 flower images. The model achieves classification accuracies of 96.25% on the training set, 93.93% on the validation set, and 89.96% on the test set.
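    The abstract names dropout and data augmentation as regularizers for training on a small dataset. As a minimal, framework-free sketch (the paper itself uses VGG16, presumably in a deep-learning framework; the function names and the flip-only augmentation below are illustrative assumptions, not the authors' code), inverted dropout and horizontal-flip augmentation could look like:

    ```python
    import random

    def inverted_dropout(activations, rate=0.5, training=True, seed=None):
        # During training, zero each activation with probability `rate` and
        # scale the survivors by 1/(1 - rate) so the expected layer output
        # is unchanged; at inference time, pass activations through untouched.
        if not training or rate == 0.0:
            return list(activations)
        rng = random.Random(seed)
        keep = 1.0 - rate
        return [a / keep if rng.random() < keep else 0.0 for a in activations]

    def augment_flip(image):
        # Horizontal flip of an image stored as a list of pixel rows --
        # one label-preserving transform commonly used in data augmentation.
        return [row[::-1] for row in image]

    # Example: a 2x3 "image" and a 4-unit activation vector.
    img = [[1, 2, 3],
           [4, 5, 6]]
    flipped = augment_flip(img)                     # [[3, 2, 1], [6, 5, 4]]
    acts = [1.0, 2.0, 3.0, 4.0]
    dropped = inverted_dropout(acts, rate=0.5, seed=0)
    kept = inverted_dropout(acts, training=False)   # identity at inference
    ```

    The 1/(1 - rate) rescaling is what makes this "inverted" dropout: it keeps the expected magnitude of the layer's output constant, so no extra scaling is needed at test time.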


  • Keywords


    convolutional neural network; transfer learning; dropout; data augmentation; deep learning.

  • References


      [1] S. H. Lee, C. S. Chan, P. Wilkin, & P. Remagnino, “Deep-plant: Plant identification with convolutional neural networks,” Proceedings of the IEEE International Conference on Image Processing, pp. 452-456, 2015.

      [2] S. Aich, & I. Stavness, “Leaf counting with deep convolutional and deconvolutional networks,” Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 22-29, 2017.

      [3] I. Heredia, “Large-scale plant classification with deep neural networks,” Proceedings of the Comput. Front. Conf., pp. 259–262, 2017.

      [4] C. Wick, & F. Puppe, “Leaf identification using a deep convolutional neural network,” 2017, https://arxiv.org/pdf/1712.00967.pdf.

      [5] N. Kumar, P. N. Belhumeur, A. Biswas, D. W. Jacobs, W. J. Kress, I. C. Lopez, & J. VB Soares, “Leafsnap: A computer vision system for automatic plant species identification,” Proceedings of the Computer Vision ECCV, pp. 502–516, 2012.

      [6] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, & A. Y. Ng, “Large scale distributed deep networks,” Proceedings of the 25th International Conference on Neural Information Processing Systems, pp. 1223–1231, 2012.

      [7] F. Chollet, Deep learning with Python. Manning Publications, 2017.

      [8] L. Torrey, & J. Shavlik, “Transfer learning,” in E. Olivas, J. Guerrero, M. Martinez-Sober, J. Magdalena-Benedito, & A. Serrano López (Eds.), Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods, and Techniques. Pennsylvania: IGI Global, pp. 242-264, 2010.

      [9] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A Simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res., 15, 1929–1958, 2014.

      [10] L. Perez and J. Wang, “The effectiveness of data augmentation in image classification using deep learning,” 2017, https://arxiv.org/pdf/1712.04621.pdf.

      [11] S. Aich, A. Josuttes, I. Ovsyannikov, K. Strueby, I. Ahmed, H. S. Duddu, C. Pozniak, S. Shirtliffe, & I. Stavness, “DeepWheat: Estimating phenotypic traits from crop images with deep learning,” Proceedings of the IEEE Winter Conference on Applications of Computer Vision, pp. 323-332, 2017.

      [12] M. Sadeghi, A. Zakerolhosseini, & A. Sonboli, “Architecture-based classification of plant leaf images,” 2018, https://arxiv.org/ftp/arxiv/papers/1801/1801.02121.pdf.

      [13] K. Simonyan, & A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 2014, https://arxiv.org/abs/1409.1556.

      [14] M. Sokolova, & G. Lapalme, “A systematic analysis of performance measures for classification tasks,” Inf. Process. Manag., 45(4), 427–437, 2009.

      [15] J. D. Kelleher, B. Mac Namee, & A. D’Arcy, Fundamentals of machine learning for predictive data analytics, MIT Press, 2015.


Article ID: 20781
DOI: 10.14419/ijet.v7i4.11.20781
