Improving the Background Awareness for Identifying Child Begging by Integrating Foreground and Background Dual Image Classifiers
Abstract
Image recognition technologies have found widespread application in domains such as object detection, medical imaging, and autonomous driving. Beyond these applications, they offer promising potential for tackling social challenges, such as identifying children involved in begging through intelligent analysis of visual data. Existing data on child begging are scarce and often outdated. Our research demonstrates that image classification models such as CNN, VGG16, and EfficientNet can be effectively trained on images captured from public cameras to identify children engaged in begging, enabling quicker and more effective interventions. To further improve detection, we integrated background learning into our approach, because classification models may confuse similar features across environments (for example, misidentifying a poorly dressed child in a ghetto as a beggar); learning the background provides the contextual understanding needed to mitigate such errors. We further propose an Integrated Dual Image Classifier that learns the background and foreground separately and then combines the two models' prediction probabilities, so that background context informs the foreground prediction during recognition. The integrated dual-model approach reduced both false negatives and false positives (failures to detect actual instances and incorrect identification of non-begging scenes as begging, respectively) and achieved test accuracy above 70%.
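
As a minimal sketch of the fusion step described above, assuming a Keras setup with two separately trained classifiers and a weighted average of their class probabilities (the fusion rule, weights, model files, and function names below are illustrative assumptions, not details taken from the article):

import numpy as np
from tensorflow.keras.models import load_model

# Assumed relative weights for the two models; the abstract does not specify them.
FOREGROUND_WEIGHT = 0.6
BACKGROUND_WEIGHT = 0.4

# Hypothetical model files: one classifier trained on foreground (person) crops,
# one trained on the surrounding background/scene.
foreground_model = load_model("foreground_classifier.h5")
background_model = load_model("background_classifier.h5")

def integrated_predict(foreground_batch, background_batch):
    # Each model outputs per-class probabilities for its own view of the image.
    p_fg = foreground_model.predict(foreground_batch)
    p_bg = background_model.predict(background_batch)
    # Merge the two probability distributions; a weighted average is one plausible choice.
    combined = FOREGROUND_WEIGHT * p_fg + BACKGROUND_WEIGHT * p_bg
    # The final label is the class with the highest combined probability.
    return np.argmax(combined, axis=1), combined

In this sketch the two models are queried independently on the person crop and the background crop of the same frame, and their outputs are merged before the final decision, mirroring the integrated dual-classifier idea in the abstract.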
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.