Using the Convolution Neural Network to Predict Human Body Languages for Two Wheel Drive Robot Control

Dumrongsak Kijdech
Weerapun Duangthongsuk

Abstract

Currently, artificial intelligence (AI) contributes to many applications, and in many cases a camera is used together with the AI to achieve the working target. In general, robots are controlled by automatic control or remote control systems in which the degrees of freedom are quite limited. Thus, this research presents a convolutional neural network (YOLOv5) that predicts human body language in order to control a robot. The convolutional neural network predicts human postures, and the resulting signals are sent to the robot via a wireless computer system to carry out the commanded action. In the experiments, images from several cameras were used for AI training, and the system was then validated in real-time operation. The results indicated that each prediction took about 0.05 seconds on average, with an accuracy of 80 percent.
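The pipeline described in the abstract (camera frame → YOLOv5 body-language prediction → wireless drive command to a two-wheel robot) can be sketched as below. This is a minimal illustrative sketch only: the gesture class names, wheel-speed values, and function names are assumptions, not the authors' actual implementation, and the YOLOv5 inference step is indicated only in a comment (e.g. a model loaded via the `ultralytics/yolov5` PyTorch Hub would supply the label and confidence).

```python
# Minimal sketch: map a predicted body-language class to (left, right) wheel
# speeds for a two-wheel drive robot. All class names and speed values below
# are illustrative assumptions, not taken from the paper.

GESTURE_TO_WHEELS = {
    "arms_up":       (0.5, 0.5),  # drive forward
    "left_arm_out":  (0.2, 0.5),  # turn left (slower left wheel)
    "right_arm_out": (0.5, 0.2),  # turn right (slower right wheel)
    "arms_crossed":  (0.0, 0.0),  # stop
}

def command_from_prediction(label, confidence, threshold=0.8):
    """Return (left, right) wheel speeds; stop when the prediction is uncertain.

    The 0.8 gate loosely mirrors the ~80 % accuracy reported in the abstract,
    but the exact gating strategy is an assumption for this sketch.
    """
    if confidence < threshold or label not in GESTURE_TO_WHEELS:
        return (0.0, 0.0)
    return GESTURE_TO_WHEELS[label]

# In the real system a YOLOv5 model would produce (label, confidence) from a
# camera frame, e.g. model = torch.hub.load("ultralytics/yolov5", "yolov5s"),
# and the wheel command would then be sent over the wireless link.
print(command_from_prediction("arms_up", 0.93))      # (0.5, 0.5) -> forward
print(command_from_prediction("left_arm_out", 0.40)) # (0.0, 0.0) -> stop
```

A confidence gate like this is one simple way to keep a gesture-controlled robot safe: an uncertain or unknown prediction defaults to stopping rather than moving.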

Article Details

How to Cite
Kijdech, D., & Duangthongsuk, W. (2022). Using the Convolution Neural Network to Predict Human Body Languages for Two Wheel Drive Robot Control. SAU JOURNAL OF SCIENCE & TECHNOLOGY, 8(1), 28–39. Retrieved from https://ph01.tci-thaijo.org/index.php/saujournalst/article/view/248434
Section
Research Article

References

Kijdech D. (2020). "Weed Classification by Using Convolution Neural Network for Studying and Weed Eliminate Robot," The 7th SAU National Interdisciplinary Conference 2020, 29-30 May 2020.

Kijdech D. and Vongbunyong S. (2021). "Artificial Intelligence in Localization and Classification of Cactus for Automatic Watering Works," The 8th SAU National Interdisciplinary Conference 2021, 4-5 June 2021.

Kuznetsova A., Maleva T. and Soloviev V., (2020). “Detecting Apples in Orchards Using YOLOv3 and YOLOv5 in General and Close-Up Images,” Advances in Neural Networks – ISNN, pp. 233-243.

Yang G., Feng W., Jin J., Lei Q., Li X., Gui G. and Wang W., (2020). “Face Mask Recognition System with YOLOV5 Based on Image Recognition,” IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, pp. 1398-1404.

Benjdira B., Khursheed T., Koubaa A., Ammar A. and Ouni K., (2019). “Car Detection using Unmanned Aerial Vehicles: Comparison between Faster R-CNN and YOLOv3,” Proceedings of the 1st International Conference on Unmanned Vehicle Systems (UVS), Muscat, Oman, 5-7 February.

Tian Y., Yang G., Wang Z., Wang H., Li E. and Liang Z., (2019). “Apple detection during different growth stages in orchards using the improved YOLO-V3 model,” Computers and Electronics in Agriculture 157, pp. 417-426.

Jiang Z., Zhao L., Li S. and Jia Y., (2020). “Real-time object detection method based on improved YOLOv4-tiny,” Computer Vision and Pattern Recognition.

He K., Gkioxari G., Dollar P. and Girshick R., (2017). "Mask R-CNN," Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 2961-2969.

Myint C. and Nu Win N., (2016). "Position and Velocity Control for Two-Wheel Differential Drive Mobile Robot," International Journal of Science, Engineering and Technology Research (IJSETR), 5(9), September 2016.

Mondada F., Franzi E. and Ienne P., (2005) “Mobile robot miniaturisation: A tool for investigation in control algorithms,” Experimental Robotics III, pp. 501–513.

Desouza G. N. and Kak A. C., (2002). “Vision for mobile robot navigation: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(2), pp. 237–267.

Dlužnevskij D., Stefanovič P. and Ramanauskaite S., (2021). "Investigation of YOLOv5 Efficiency in iPhone Supported Systems," Baltic Journal of Modern Computing, 9(3), pp. 333-344.
