Method Enhancement of Quality Control in Brake Pads Manufacturing
Abstract
Disc brake pad production is coupled with a defect detection process to control quality. Although current manufacturing quality is high, detection relies on visual inspection by operators, which poses challenges in accuracy and time: relying on human labor for inspection is time-consuming and labor-intensive. This research explores the feasibility of applying deep-learning object detection to in-line defect detection on disc brake pads. Faster R-CNN, Scaled-YOLOv4, and YOLOv5s were compared on the two most common defects of brake pad model A. The detection must satisfy two main criteria: (a) the detection time must be shorter than the residence time of a brake pad at the detection station on the conveyor, 498 milliseconds, and (b) the precision must exceed 70%. The detection time and precision are 13.9 milliseconds and 83% for YOLOv5s, 20.0 milliseconds and 83% for Scaled-YOLOv4, and 20.2 milliseconds and 92% for Faster R-CNN. The detection time of every algorithm investigated in this study is far shorter than the residence time at the checking station, and the precision of each exceeds the criterion. The training time of Faster R-CNN, 220 minutes, is roughly five times longer than that of YOLOv5s (49 min) and Scaled-YOLOv4 (41 min). All three algorithms are capable of real-time detection and yield consistent results on both split and poorly consolidated friction-material workpieces. Faster R-CNN was chosen because it has the highest precision.
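The model-selection rule described in the abstract can be expressed as a simple screening check. The sketch below encodes the two thresholds (498 ms conveyor residence time, 70% minimum precision) and applies them to the reported results; the numbers come from the abstract, while the function and variable names are illustrative, not from the paper.

```python
# Screening rule from the study: a detector qualifies if its per-image
# detection time is below the 498 ms residence time on the conveyor and
# its precision exceeds 70%. Among qualifiers, the most precise model wins.
# Names here are illustrative; the figures are taken from the abstract.

RESIDENCE_TIME_MS = 498.0  # time a brake pad spends at the checking station
MIN_PRECISION = 0.70       # minimum acceptable precision

# Reported results: detector -> (detection time in ms, precision)
results = {
    "YOLOv5s": (13.9, 0.83),
    "Scaled-YOLOv4": (20.0, 0.83),
    "Faster R-CNN": (20.2, 0.92),
}

def qualifies(time_ms: float, precision: float) -> bool:
    """Both the real-time criterion and the precision criterion must hold."""
    return time_ms < RESIDENCE_TIME_MS and precision > MIN_PRECISION

# Keep only detectors meeting both criteria, then pick the most precise one.
eligible = {name: r for name, r in results.items() if qualifies(*r)}
best = max(eligible, key=lambda name: eligible[name][1])
print(best)  # Faster R-CNN, the qualifier with the highest precision (92%)
```

All three detectors pass both thresholds, so the tie-break falls to precision, reproducing the paper's choice of Faster R-CNN.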
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Article Accepting Policy
The editorial board of Thai-Nichi Institute of Technology is pleased to receive articles from lecturers and experts in the fields of business administration, languages, engineering, and technology, written in Thai or English. Work submitted for publication must not have been published previously and must not be under consideration by any other journal. Those interested in disseminating their work and knowledge can submit an article to the editorial board, which will forward it to the screening committee to consider for publication in the journal. Only research articles are published. Interested authors can prepare their articles by reviewing the recommendations for article authors.
Copyright infringement is solely the responsibility of the author(s) of the article. Articles accepted for publication must be screened and reviewed for quality by qualified experts approved by the editorial board.
The views expressed in each article published in this journal are the personal opinions of the respective author and are in no way connected to Thai-Nichi Institute of Technology or to other faculty members of the institution. Each author is responsible for the content and accuracy of his or her own article(s), including any errors therein.
The editorial board reserves the right not to reproduce any content, views, or comments from articles in the Journal of Thai-Nichi Institute of Technology before receiving written permission from the authorized author(s). Published work is the copyright of the Journal of Thai-Nichi Institute of Technology.