Digital Weighing Scale with Fruit and Vegetable Identification Using Deep Learning Technique
Abstract
Weighing is a step performed before fruits and vegetables are packed for sale in supermarkets. A staff member places the product on a digital scale and enters its product code, after which the machine displays the product name, price, and weight. However, errors can occur when the wrong product code is entered. This research aims to develop a digital weighing scale that identifies the type of fruit or vegetable from an image captured by a camera instead of requiring a product code. The system consists of a Raspberry Pi computer board, a Pi Camera V2.1 for image acquisition, a load cell for weighing, an HX711 analog-to-digital converter module, and a touch screen. The software is developed in Python 3, and the YOLOv3-tiny algorithm is used through the Darknet framework to build an image-recognition model. To test the system, five types of fruit and vegetable were used to train the model; the training set included 379 pictures of bananas, 277 pictures of carrots, 232 pictures of grapes, and 443 pictures of onions. The model was trained for 15,000 epochs with an average loss of 0.1623, and its precision, recall, and F1-score are 1.00, 0.99, and 0.99, respectively. The experimental results show that the system can identify bananas, carrots, grapes, and onions with 100% accuracy when the items are placed without overlapping; however, errors may occur when items overlap. The weighing error of the developed scale is less than 10 grams.
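As a rough illustration of the identification step described in the abstract, the sketch below loads a Darknet-trained YOLOv3-tiny model through OpenCV's DNN module, runs it on a camera image, and combines the top detection with a weight reading. The file names, class list, confidence threshold, and the HX711 helper are assumptions made for illustration only, not the authors' actual implementation.

# Minimal sketch (assumed file names, class order, and HX711 helper).
import cv2
import numpy as np

CLASSES = ["banana", "carrot", "grape", "onion"]  # assumed label order

# Load the Darknet YOLOv3-tiny configuration and trained weights.
net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny_final.weights")

def detect_produce(image, conf_threshold=0.5):
    """Return (class name, confidence) pairs detected in a BGR image."""
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    detections = []
    for output in outputs:
        for row in output:            # row = [cx, cy, w, h, objectness, scores...]
            scores = row[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > conf_threshold:
                detections.append((CLASSES[class_id], confidence))
    return detections

def read_weight_grams():
    """Placeholder for the HX711 load-cell reading; the real call depends on
    the HX711 driver library and wiring used on the Raspberry Pi."""
    raise NotImplementedError

if __name__ == "__main__":
    frame = cv2.imread("scale_camera.jpg")        # e.g. a Pi camera capture
    items = detect_produce(frame)
    if items:
        name, conf = max(items, key=lambda d: d[1])
        print(f"{name} ({conf:.2f}), weight: {read_weight_grams()} g")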
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The published articles represent the opinions of the authors only. The authors are responsible for any legal consequences that may arise from their articles.
References
J. Balbuena, J. Hilario, I. Vargas, R. Manzanares, and F. Cuellar, “Design of a 2-DOF delta robot for packaging and quality control of processed meat products,” in Latin American Robotic Symposium, Brazilian Symposium on Robotics (SBR) and Workshop on Robotics in Education (WRE), November 2018, pp. 201–206.
L. Pauly, M. V. Baiju, P. Viswanathan, P. Jose, D. Paul, and D. Sankar, “A new gray level based method for visual inspection of frying food items,” in International Conference on Soft Computing Techniques and Implementations (ICSCTI), October 2015, pp. 57–60.
Keshavamurthy, S. J. Mariyam, M. Meghamala, M. Meghashree, and Neha, “Automatized food quality detection and processing system using neural networks,” in 4th International Conference on Recent Trends on Electronics, Information, Communication & Technology (RTEICT), May 2019, pp. 1442–1446.
B. Guanjun, C. Shibo, Q. Liyong, X. Yi, Z. Libin, and Y. Qinghua, “Multi-template matching algorithm for cucumber recognition,” Computers and Electronics in Agriculture, vol. 127, pp. 754–762, 2016.
A. B. Payne, K. B. Walsh, P. Subedi, and D. Jarvis, “Estimation of mango crop yield using image analysis–segmentation method,” Computers and Electronics in Agriculture, vol. 91, pp. 57–64, 2013.
F. Kurtulmuş and I. Kavdir, “Detecting corn tassels using computer vision and support vector machines,” Expert Systems with Applications, vol. 41, no. 16, pp. 7390–7397, 2014.
J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788.
B. Liu, W. Zhao, and Q. Sun, “Study of object detection based on faster R-CNN,” in Chinese Automation Congress (CAC), 2017, pp. 6233–6236.
H. Kai, L. Feiyu, L. Meixia, D. Zhiliang, and L. Yunping, “A marine object detection algorithm based on SSD and feature enhancement,” Complexity, vol. 2020, pp. 1–14, 2020.
Y. Li, K. He, and J. Sun, “R-FCN: Object detection via region-based fully convolutional networks,” in Advances in Neural Information Processing Systems, 2016, pp. 379–387.
J. Redmon. (2019). YOLO v3. [Online]. Available: https://pjreddie.com/darknet/yolov3
Google. (2019). Google Colaboratory. [Online]. Available: https://colab.research.google.com
N. Manoi, A. Bunjanda, and C. Rattanapoka, “A system for cooking recipe sharing and cooking recipe finding by an image of ingredients using deep learning technique,” The Journal of Industrial Technology, vol. 15, no. 2, pp. 97–111, 2019 (in Thai).
R. Zhang, X. Li, L. Zhu, M. Zhong, and Y. Gao, “Target detection of banana string and fruit stalk based on YOLOv3 deep learning network,” in International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE), 2021, pp. 346–349.