Fast obstacle detection system for the blind using depth image and machine learning

Surapol Vorapatratorn
Atiwong Suchato
Proadpran Punyabukkana

Abstract

Our research proposes a novel obstacle detection and navigation system for the blind using a stereo camera and machine learning techniques. The obstacle classification result guides users through different directional sound patterns delivered via bone-conduction stereo headphones. In the first stage, the Semi-Global Block Matching technique transforms the stereo image pair into a depth image, which identifies the depth level of each pixel. Next, a fast 2D ground-plane estimation separates the obstacle image from the depth image using our Horizontal Depth Accumulative Information (H-DAI). The obstacle image is then converted into our Vertical Depth Accumulative Information (V-DAI), from which a feature vector is extracted to train the obstacle model. Our dataset consists of 34,325 stereo gray-scale images in 7 obstacle classes. Our experiments compared the classification accuracy and prediction speed of several machine learning algorithms (ANN, SVM, Naïve Bayes, decision tree, k-NN, and deep learning). The results show that an ANN with our H-DAI and V-DAI features reaches 96.45% obstacle classification accuracy at 23.76 images per second, which is 6.75 times faster than a recent ground-plane estimation technique.
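As a rough illustration of the depth-accumulation idea described in the abstract, the sketch below quantizes a normalized depth map into discrete depth levels and counts, for each image column (or row), how many pixels fall at each level. This is only our interpretation of the H-DAI/V-DAI features, not the authors' exact definition; the depth map would in practice come from a stereo matcher such as OpenCV's `StereoSGBM`, whereas here a synthetic array stands in for it.

```python
import numpy as np

def depth_accumulative_information(depth, n_levels=16, axis=0):
    """Accumulate a histogram of quantized depth levels along one image axis.

    depth    : 2D array of depth values normalized to [0, 1]
               (in practice, obtained from a stereo matcher such as SGBM).
    n_levels : number of discrete depth levels to quantize into.
    axis=0   : accumulate over rows -> one histogram per column
               (a plausible reading of H-DAI).
    axis=1   : accumulate over columns -> one histogram per row
               (a plausible reading of V-DAI).
    Returns a (lines, n_levels) array of per-line depth histograms,
    which can be flattened into a feature vector for a classifier.
    """
    # Quantize normalized depth into integer levels 0 .. n_levels-1.
    levels = np.clip((depth * n_levels).astype(int), 0, n_levels - 1)
    n_lines = depth.shape[1 - axis]
    dai = np.zeros((n_lines, n_levels), dtype=int)
    for i in range(n_lines):
        line = levels[:, i] if axis == 0 else levels[i, :]
        dai[i] = np.bincount(line, minlength=n_levels)
    return dai
```

Flattening the two resulting matrices and concatenating them would yield the kind of compact, fixed-length feature vector that the fast classifiers compared in the paper (ANN, SVM, k-NN, etc.) can consume directly.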

Article Details

How to Cite
Vorapatratorn, S., Suchato, A., & Punyabukkana, P. (2021). Fast obstacle detection system for the blind using depth image and machine learning. Engineering and Applied Science Research, 48(5), 593-603. Retrieved from https://ph01.tci-thaijo.org/index.php/easr/article/view/242952
Section
ORIGINAL RESEARCH
