Disc brake pad identification is a difficult task that normally requires the experience of disc brake experts. Nevertheless, disc brake pad retailers must identify parts correctly to deliver the right product to customers. This research proposes deep learning algorithms and object detection technologies to help identify disc brake pads, using a disc brake pad company as a case study. The goal is to implement a disc brake pad identification system that finds the correct brake pad model instantly. We select two deep learning algorithms that are well known in object detection, YOLOv5 and Faster R-CNN, and compare their disc brake pad detection performance on four measures: precision, detection speed, loss (regression loss and classification loss), and training time. We use both algorithms to detect and classify five disc brake pad models. The results show that YOLOv5 achieves better precision, detection speed, and loss, while Faster R-CNN requires less training time.
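Of the four measures compared above, precision is the most straightforward to reproduce from raw detection counts. The following is a minimal sketch, not the authors' evaluation code: the function name, the five model labels, and the detection counts are all hypothetical, chosen only to illustrate how per-class precision would be computed for a five-class brake pad detector.

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of detections that match the correct brake pad model."""
    total = true_positives + false_positives
    return true_positives / total if total else 0.0


# Hypothetical (true positive, false positive) counts for five pad models.
detections = {
    "pad_model_1": (48, 2),
    "pad_model_2": (45, 5),
    "pad_model_3": (50, 0),
    "pad_model_4": (44, 6),
    "pad_model_5": (47, 3),
}

for model, (tp, fp) in detections.items():
    print(f"{model}: precision = {precision(tp, fp):.2f}")
```

In a real evaluation, a detection counts as a true positive only when its bounding box overlaps a ground-truth box above an IoU threshold and the predicted class matches; the sketch abstracts that matching step away.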
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Article Accepting Policy
The editorial board of Thai-Nichi Institute of Technology is pleased to receive articles from lecturers and experts in the fields of business administration, languages, engineering, and technology, written in Thai or English. Academic work submitted for publication must not have been published elsewhere and must not be under consideration by any other journal. Those interested in disseminating their work and knowledge may therefore submit an article to the editorial board, which will forward it to the screening committee for consideration for publication in the journal. Only research articles are eligible for publication. Interested persons can prepare their articles by reviewing the recommendations for article authors.
Responsibility for copyright infringement rests solely with the author(s) of the article. Articles accepted for publication must be screened and reviewed for quality by qualified experts approved by the editorial board.
The views expressed in each article published in this journal are the personal opinions of its author(s) and are in no way connected with Thai-Nichi Institute of Technology or other faculty members of the institution. Responsibility for the content and accuracy of each article rests with its author(s); if there is any mistake, each author is responsible for his or her own article(s).
The editorial board reserves the right not to publish any content, views, or comments from articles in the journal until written permission has been received from the authorized author(s). Published work is the copyright of the Journal of Thai-Nichi Institute of Technology.