Evaluation of Deep Convolutional Neural Network Architectures for Strawberry Quality Inspection

  • Authors

    • Rika Sustika
    • Agus Subekti
    • Hilman F. Pardede
    • Endang Suryawati
    • Oka Mahendra
    • Sandra Yuwana
  • Published: 2018-12-16
  • DOI: https://doi.org/10.14419/ijet.v7i4.40.24080
  • Keywords: CNN, deep learning, quality inspection, strawberry
  • Abstract: Fruit quality inspection is an important task in the agriculture industry. Automated inspection based on machine vision technology has been widely adopted to increase accuracy and reduce labor costs. The Convolutional Neural Network (CNN) is a type of deep learning model that has achieved great success in large-scale image and video recognition. In this research, we investigate how different deep CNN architectures affect the accuracy of a strawberry grading (quality inspection) system. We evaluate several existing deep CNN architectures, namely AlexNet, MobileNet, GoogLeNet, VGGNet, and Xception, and compare them against a two-layer CNN architecture as our baseline. We conduct two experiments: two-class strawberry classification and four-class strawberry classification. Results show that VGGNet achieves the best accuracy, while GoogLeNet is the most computationally efficient architecture. These results are consistent across both the two-class and four-class classification tasks.
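
    The kind of comparison described in the abstract can be sketched in code. Below is a minimal, illustrative Keras sketch, not the authors' implementation: the image size, class count, and training setup are assumptions, and MobileNet stands in here for any of the evaluated pretrained architectures.

        # Illustrative sketch: two-layer CNN baseline vs. a pretrained
        # architecture, for 2- or 4-class strawberry grading. All
        # hyperparameters below are assumptions, not taken from the paper.
        import tensorflow as tf
        from tensorflow.keras import layers, models

        NUM_CLASSES = 2            # 2 or 4, matching the two experiments
        IMG_SHAPE = (224, 224, 3)  # assumed input resolution

        def baseline_two_layer_cnn():
            """Baseline: two convolutional layers plus a dense classifier."""
            return models.Sequential([
                layers.Conv2D(32, 3, activation="relu", input_shape=IMG_SHAPE),
                layers.MaxPooling2D(),
                layers.Conv2D(64, 3, activation="relu"),
                layers.MaxPooling2D(),
                layers.Flatten(),
                layers.Dense(128, activation="relu"),
                layers.Dense(NUM_CLASSES, activation="softmax"),
            ])

        def pretrained_cnn(base_cls=tf.keras.applications.MobileNet):
            """ImageNet-pretrained feature extractor with a new class head."""
            base = base_cls(input_shape=IMG_SHAPE, include_top=False,
                            weights="imagenet")
            base.trainable = False  # train only the new head in this sketch
            return models.Sequential([
                base,
                layers.GlobalAveragePooling2D(),
                layers.Dense(NUM_CLASSES, activation="softmax"),
            ])

        # Compare parameter counts as a rough proxy for computational cost.
        for name, build in [("baseline", baseline_two_layer_cnn),
                            ("mobilenet", pretrained_cnn)]:
            model = build()
            model.compile(optimizer="adam",
                          loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
            print(f"{name}: {model.count_params():,} parameters")

    Swapping tf.keras.applications.MobileNet for VGG16 or Xception would let one run the same accuracy-versus-cost comparison; AlexNet and GoogLeNet (Inception v1) have no stock Keras constructors and would need custom definitions or third-party weights.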

  • References

      [1] N. Pardo-Mates, A. Vera, S. Barbosa, M. Hidalgo-Serrano, O. Núñez, J. Saurina, S. Hernández-Cassou, and L. Puignou, “Characterization, classification and authentication of fruit-based extracts by means of HPLC-UV chromatographic fingerprints, polyphenolic profiles and chemometric methods,” Food Chemistry, vol. 221, pp. 29–38, 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0308814616316508

      [2] W. Shao, Y. Li, S. Diao, J. Jiang, and R. Dong, “Rapid classification of Chinese quince (Chaenomeles speciosa Nakai) fruit provenance by near-infrared spectroscopy and multivariate calibration,” Analytical and Bioanalytical Chemistry, vol. 409, no. 1, pp. 115–120, Jan 2017. [Online]. Available: https://doi.org/10.1007/s00216-016-9944-7

      [3] Radi, S. Ciptohadijoyo, W. Litananda, M. Rivai, and M. Purnomo, “Electronic nose based on partition column integrated with gas sensor for fruit identification and classification,” Computers and Electronics in Agriculture, vol. 121, pp. 429–435, 2016. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0168169915003622

      [4] Y. Zhang and L. Wu, “Classification of fruits using computer vision and a multiclass support vector machine,” Sensors, vol. 12, no. 9, pp. 12489–12505, 2012. [Online]. Available: http://www.mdpi.com/1424-8220/12/9/12489

      [5] M. F. Adak and N. Yumusak, “Classification of e-nose aroma data of four fruit types by ABC-based neural network,” Sensors, vol. 16, no. 3, 2016. [Online]. Available: http://www.mdpi.com/1424-8220/16/3/304

      [6] Y. Zhang, P. Phillips, S. Wang, G. Ji, J. Yang, and J. Wu, “Fruit classification by biogeography-based optimization and feedforward neural network,” Expert Systems, vol. 33, no. 3, pp. 239–253, 2016. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1111/exsy.12146

      [7] F. Garcia, J. Cervantes, A. Lopez, and M. Alvarado, “Fruit classification by extracting color chromaticity, shape and texture features: Towards an application for supermarkets,” IEEE Latin America Transactions, vol. 14, no. 7, pp. 3434–3443, July 2016.

      [8] S. Wang, Z. Lu, J. Yang, Y. Zhang, J. Liu, L. Wei, S. Chen, P. Phillips, and Z. Dong, “Fractional Fourier entropy increases the recognition rate of fruit type detection,” BMC Plant Biology, vol. 16, p. 85, October 2016.

      [9] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, December 2015.

      [10] E. A. Smirnov, D. M. Timoshenko, and S. N. Andrianov, “Comparison of regularization methods for ImageNet classification with deep convolutional neural networks,” AASRI Procedia, vol. 6, pp. 89–94, 2014, 2nd AASRI Conference on Computational Intelligence and Bioinformatics. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S2212671614000146

      [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25, F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, Eds. Curran Associates, Inc., 2012, pp. 1097–1105. [Online]. Available: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

      [12] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015, pp. 1–9.

      [13] F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017, pp. 1800–1807.

      [14] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” CoRR, vol. abs/1409.1556, 2014.

      [15] A. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.

      [16] M. Cicero, A. Bilbily, E. Colak, T. Dowdell, B. Gray, K. Perampaladas, and J. Barfett, “Training and validating a deep convolutional neural network for computer-aided detection and classification of abnormalities on frontal chest radiographs,” Investigative Radiology, vol. 52, no. 5, pp. 281–287, May 2017.

      [17] S. Li, H. Jiang, and W. Pang, “Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading,” Computers in Biology and Medicine, vol. 84, pp. 156–167, May 2017.

      [18] Y.-D. Zhang, Z. Dong, X. Chen, W. Jia, S. Du, K. Muhammad, and S.-H. Wang, “Image based fruit category classification by 13-layer deep convolutional neural network and data augmentation,” Multimedia Tools and Applications, September 2017.

      [19] T. Nishi, S. Kurogi, and K. Matsuo, “Grading fruits and vegetables using RGB-D images and convolutional neural network,” in 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Nov 2017, pp. 1–6.

      [20] W. Ouyang, X. Wang, X. Zeng, S. Qiu, P. Luo, Y. Tian, H. Li, S. Yang, Z. Wang, C.-C. Loy et al., “DeepID-Net: Deformable deep convolutional neural networks for object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 2403–2412.

      [21] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.

      [22] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici, “Beyond short snippets: Deep networks for video classification,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 4694–4702.

      [23] P. Pawara, E. Okafor, O. Surinta, L. Schomaker, and M. Wiering, “Comparing local descriptors and bags of visual words to deep convolutional neural networks for plant recognition,” in ICPRAM, 2017.

  • How to Cite

    Sustika, R., Subekti, A., Pardede, H. F., Suryawati, E., Mahendra, O., & Yuwana, S. (2018). Evaluation of Deep Convolutional Neural Network Architectures for Strawberry Quality Inspection. International Journal of Engineering & Technology, 7(4.40), 75–80. https://doi.org/10.14419/ijet.v7i4.40.24080