Rubber Tree Tapping Position Detection on Trunk-Covered RGB-D Images for Automation Platform

Rattachai Wongtanawijit
Thanate Khaorapapong

Abstract

Recently, autonomous vehicles equipped with 3D sensors have been presented for navigation in rubber tree (Hevea brasiliensis) plantations. Building on this, we present a machine-vision method for detecting the tapping position and the latex-collecting cup in images, suitable for deployment on such an autonomous platform. First, we describe an RGB-D image acquisition technique that uses artificial lighting to capture a rubber tree under low-light conditions. We then present two tapping-position detection algorithms: a color-feature-based sliding-window detector and a deep object detector. For the deep detector, we fine-tune a Faster R-CNN with a pre-trained MobileNetV2 backbone on our custom dataset. The results show that the deep detector outperforms the conventional detector, achieving an average precision of 0.92 on our dataset.
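The color-feature-based sliding-window approach mentioned above could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the window size, stride, and the RGB range for tapped-bark tones are all assumed values chosen for the example.

```python
import numpy as np

def sliding_window_detect(img, win=(32, 32), stride=16,
                          lo=(60, 20, 10), hi=(140, 90, 70)):
    """Return (row, col, score) of the window whose pixels best match
    a target color range (hypothetical tapped-bark tones).

    img:   H x W x 3 uint8 RGB array.
    score: fraction of in-range pixels inside the window.
    """
    lo, hi = np.array(lo), np.array(hi)
    # Per-pixel mask: True where all three channels fall inside [lo, hi].
    mask = np.all((img >= lo) & (img <= hi), axis=2)
    best = (0, 0, -1.0)
    h, w = mask.shape
    # Slide the window over the mask and keep the highest-scoring position.
    for r in range(0, h - win[0] + 1, stride):
        for c in range(0, w - win[1] + 1, stride):
            score = mask[r:r + win[0], c:c + win[1]].mean()
            if score > best[2]:
                best = (r, c, score)
    return best
```

A deep detector such as Faster R-CNN replaces the hand-chosen color thresholds with learned features, which is consistent with the reported gap in average precision.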

Article Details

How to Cite
[1] R. Wongtanawijit and T. Khaorapapong, “Rubber Tree Tapping Position Detection on Trunk-Covered RGB-D Images for Automation Platform,” ECTI-CIT Transactions, vol. 15, no. 1, pp. 13–23, Dec. 2020.
Section
Research Article

References

P. Thala, N. Kaewhgam, K. Kumnornaew, and K. Satjawattana, “Effects of tapping time and tapping system on latex yield of para rubber, University of Phayao, Phayao Province, Thailand,” 2014.

C. Zhang, L. Yong, Y. Chen, S. Zhang, L. Ge, S. W., and W. L., “A Rubber-Tapping Robot Forest Navigation and Information Collection System Based on 2D LiDAR and a Gyroscope,” Sensors, 2019.

“Automatic integrated rubber tapping and collecting method based on image identification and automatic integrated rubber tapping and collecting device based on image identification,” CN105494031A, 02-Feb-2016.

“Rubber garden cable type automatic rubber cutting and harvesting device and method,” CN108029500A, 07-Dec-2017.

L. Keselman, J. I. Woodfill, A. Grunnet-Jepsen, and A. Bhowmik, “Intel(R) RealSense(TM) Stereoscopic Depth Cameras,” IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Work., vol. 2017-July, no. 1, pp. 1267–1276, 2017.

R. Wongtanawijit and T. Khaorapapong, “Rubber Tapped Path Detection using K-means Color Segmentation and Distance to Boundary Feature,” in 2018 15th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2018, pp. 126–129.

M. R. Sethuraj and N. M. Mathew, Natural Rubber: Biology, Cultivation, and Technology. Elsevier, 1992.

S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, Jun. 2017.

J. Deng et al., “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255.

H. Caesar, J. R. R. Uijlings, and V. Ferrari, “COCO-Stuff: Thing and Stuff Classes in Context,” CoRR, vol. abs/1612.0, 2016.

M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 4510–4520, 2018.

R. Fernández, H. Montes, J. Surdilovic, D. Surdilovic, P. Gonzalez-De-Santos, and M. Armada, “Automatic detection of field-grown cucumbers for robotic harvesting,” IEEE Access, vol. 6, pp. 35512–35526, 2018.

A. Silwal, J. R. Davidson, M. Karkee, C. Mo, Q. Zhang, and K. Lewis, “Design, integration, and field evaluation of a robotic apple harvester,” J. Field Robot., vol. 34, no. 6, pp. 1140–1159, 2017.

A. Leu et al., “Robotic green asparagus selective harvesting,” IEEE/ASME Trans. Mechatronics, vol. 22, no. 6, pp. 2401–2410, 2017.

L. Shao, X. Chen, B. Milne, and P. Guo, “A novel tree trunk recognition approach for forestry harvesting robot,” Proc. 2014 9th IEEE Conf. Ind. Electron. Appl. ICIEA 2014, pp. 862–866, 2014.

J. Lv, D. A. Zhao, W. Ji, Y. Chen, and H. Shen, “Design and research on vision system of apple harvesting robot,” in Proceedings - 2011 3rd International Conference on Intelligent Human-Machine Systems and Cybernetics, IHMSC 2011, 2011, vol. 1, pp. 177–180.

Y. Zhao, L. Gong, Y. Huang, and C. Liu, “A review of key techniques of vision-based control for harvesting robot,” Comput. Electron. Agric., vol. 127, pp. 311–323, 2016.

C. Lehnert, I. Sa, C. McCool, B. Upcroft, and T. Perez, “Sweet pepper pose detection and grasping for automated crop harvesting,” Proc. - IEEE Int. Conf. Robot. Autom., vol. 2016-June, pp. 2428–2434, 2016.

K. Kusumam, T. Krajník, S. Pearson, T. Duckett, and G. Cielniak, “3D-vision based detection, localization, and sizing of broccoli heads in the field,” J. Field Robot., vol. 34, no. 8, pp. 1505–1518, 2017.

K. R. Vijayakumar et al., “Revised international notation for latex harvest technology,” J. Rubber Res., vol. 12, pp. 103–115, Nov. 2009.

S. J. Soumya, R. S. Vishnu, R. N. Arjun, and R. R. Bhavani, “Design and testing of a semi automatic rubber tree tapping machine (SART),” in IEEE Region 10 Humanitarian Technology Conference 2016, R10-HTC 2016 - Proceedings, 2017, pp. 1–4.

T. Veena and R. Kadadevaramath, “Design and Fabrication of Automatic Rubber Tapping Machine,” IOP Conf., 2016.

J. V. C. Maliackal, K. A. Asif, P. A. Sajith, and S. K. Joseph, “Advanced Rubber Tree Tapping Machine,” vol. 2, no. 5, pp. 253–263, 2017.

S. Simon, “Autonomous navigation in rubber plantations,” ICMLC 2010 - 2nd Int. Conf. Mach. Learn. Comput., pp. 309–312, 2010.

R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Region-Based Convolutional Networks for Accurate Object Detection and Segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 38, no. 1, pp. 142–158, Jan. 2016.

R. Girshick, “Fast R-CNN,” in 2015 IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1440–1448.

J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 779–788.

W. Liu et al., “SSD: Single Shot MultiBox Detector,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 9905 LNCS, pp. 21–37, 2016.

S. Bianco, R. Cadene, L. Celona, and P. Napoletano, “Benchmark Analysis of Representative Deep Neural Network Architectures,” pp. 1–8, 2018.

J. Huang et al., “Speed/accuracy trade-offs for modern convolutional object detectors,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.