Facial Expression Recognition using Convolutional Neural Networks Based on Half Facial Sections

Panamet Yanthitirat
Charun Sanrach
Soradech Krootjohn

Abstract

This research presents facial expression recognition using convolutional neural networks. The hypothesis was that, because the positions of human facial organs are nearly symmetric about the centered nose, half facial sections are sufficient for recognizing the emotional expression of a face. Expressions were classified into three emotional groups: negative emotion, normal emotion, and positive emotion, using the data set from the FER2013 competition for the training and test sets. The faces within the data set were divided into left-half and right-half images. The researchers began by selecting three convolutional neural network architectures, namely LeNet-5, Mememoji, and 3CNNs, then developed and compared models based on them to find those best suited to classifying the left and right halves. Once efficient models were obtained, the researchers merged them into a single model suitable for classifying both half images together. This merged model achieved the highest accuracy, 82.77%.
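As a rough illustration of the half-face pipeline described in the abstract, the sketch below splits 48x48 FER2013 faces down the vertical midline, feeds each half to a small convolutional branch, and merges the two branches into one three-class classifier. This is a minimal sketch only: the layer sizes, the mirroring of the right half, and the concatenation-based merge are assumptions for illustration, not the paper's LeNet-5, Mememoji, or 3CNNs architectures or its actual model-merging procedure.

```python
# Minimal sketch (not the authors' code): two half-face CNN branches
# merged into a single three-class expression classifier.
from tensorflow.keras import layers, Model

NUM_CLASSES = 3  # negative, normal, positive emotion groups

def split_halves(faces):
    """faces: (N, 48, 48, 1) grayscale FER2013 images.
    Returns left and right halves; the right half is mirrored
    horizontally so both halves share one orientation (an
    assumption, not necessarily the paper's preprocessing)."""
    left = faces[:, :, :24, :]
    right = faces[:, :, 24:, :][:, :, ::-1, :]
    return left, right

def half_branch(name):
    """A small LeNet-style convolutional branch for one 48x24 half."""
    inp = layers.Input(shape=(48, 24, 1), name=f"{name}_input")
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    return inp, layers.Flatten()(x)

# Build the two branches and merge them into one classifier head.
left_in, left_feat = half_branch("left")
right_in, right_feat = half_branch("right")
merged = layers.Dense(128, activation="relu")(
    layers.Concatenate()([left_feat, right_feat]))
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(merged)

model = Model(inputs=[left_in, right_in], outputs=outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# Training would then call, e.g.:
# model.fit([left_train, right_train], y_train, ...)
```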

Article Details

Section
Research Article

References

P. McCorduck. Machines who think: A personal inquiry into the history and prospects of artificial intelligence. CRC Press, 2004.

A. Dzedzickis, A. Kaklauskas, and V. Bucinskas, “Human Emotion Recognition: Review of Sensors and Methods.” Sensors (Basel), Vol. 20, No. 3, p. 592, January 2020.

J. Yang, F. Zhang, B. Chen, and S. U. Khan, “Facial Expression Recognition Based on Facial Action Unit.” in 2019 Tenth International Green and Sustainable Computing Conference (IGSC), VA, USA, pp. 1–6, 2019.

C. Darwin. The expression of the emotions in man and animals. London, England: John Murray, 1872.

P. Ekman, and W. V. Friesen, "Facial Action Coding System: A Technique for the Measurement of Facial Movement." Consulting Psychologists Press, Vol. 1, 1978.

S. Z. Li, and A. K. Jain. Handbook of Face Recognition, 2nd ed. Springer Publishing Company, Incorporated, 2011.

R. E. Kleck, and M. Mendolia, “Decoding of profile versus full-face expressions of affect.” J. Nonverbal Behav., Vol. 14, No. 1, pp. 35–49, 1990.

M. Pantic, and L. Rothkrantz, “Expert System for Automatic Analysis of Facial Expression.” Image Vis. Comput., Vol. 18, pp. 881–905, August 2000.

Z. Sufyanu, F. Mohamad, A. Yusuf, and A. Nuhu, “Feature Extraction Methods for Face Recognition.” Int. J. Appl. Eng. Res., Vol. 5, pp. 5658–5668, October 2016.

C. Wang, “Human Emotional Facial Expression Recognition.” March 2018.

N. Mousavi, H. Siqueira, P. Barros, B. Fernandes, and S. Wermter, “Understanding how deep neural networks learn face expressions.” in 2016 International Joint Conference on Neural Networks (IJCNN), pp. 227–234, July 2016.

X. Chen, X. Yang, M. Wang, and J. Zou, “Convolution neural network for automatic facial expression recognition.” in 2017 International Conference on Applied System Innovation (ICASI), pp. 814–817, May 2017.

S. Xie, H. Hu, and Y. Wu, “Deep multi-path convolutional neural network joint with salient region attention for facial expression recognition.” Pattern Recognit., Vol. 92, pp. 177–191, 2019.

A. T. Lopes, E. de Aguiar, and T. Oliveira-Santos, “A Facial Expression Recognition System Using Convolutional Networks.” in 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images, pp. 273–280, August 2015.

P. Yanthitirat, C. Sanrach, and S. Krootjohn, “Facial Expression Recognition using Convolutional Neural Networks Based on 18 Variations of Image Processing.” Inf. Technol. J., Vol. 16, No. 1, pp. 98–109, 2020.

B. S. Bloom, D. R. Krathwohl, and B. B. Masia. Taxonomy of educational objectives. Allyn and Bacon, Pearson Education, 1984.

P. L. Carrier, and A. Courville, “Challenges in Representation Learning: Facial Expression Recognition Challenge.” Kaggle, 2013. Available online at https://www.kaggle.com, accessed on April 2018.

C. M. Whissell, “The dictionary of affect in language.” in R. Plutchik, and H. Kellerman (Eds.), Emotion: Theory, research and experience, Vol. 4: The measurement of emotions. New York: Academic Press, 1989.

D. H. Kim, W. J. Baddar, J. Jang, and Y. M. Ro, “Multi-Objective Based Spatio-Temporal Feature Representation Learning Robust to Expression Intensity Variations for Facial Expression Recognition.” IEEE Trans. Affect. Comput., Vol. 10, No. 2, pp. 223–236, April 2019.

L. Nwosu, H. Wang, J. Lu, I. Unwala, X. Yang, and T. Zhang, “Deep Convolutional Neural Network for Facial Expression Recognition Using Facial Parts,” in 2017 IEEE 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), pp. 1318–1321, 2017.

A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks.” in NIPS'12 Proceedings of the 25th International Conference on Neural Information Processing Systems, Nevada, 2012.

G. Wang, and J. Gong, “Facial Expression Recognition Based on Improved LeNet-5 CNN.” in 2019 Chinese Control and Decision Conference (CCDC), pp. 5655–5660, 2019.

P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression.” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, pp. 94–101, 2010.

Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition.” Proc. IEEE, Vol. 86, No. 11, pp. 2278–2324, 1998.

J. Ho, “Mememoji.” GitHub, 2016. Available online at https://github.com/JostineHo, accessed on February 2018.

U. Ozbulak, A. Eliasson, A. Hayrabedian, L. Weiss, and A. Young, “Facial Expression Recognition.” GitHub, 2016. Available online at https://github.com/utkuozbulak, accessed on 7 February 2018.

C. Sagonas, G. Tzimiropoulos, and S. Zafeiriou. "A Semi-Automatic Methodology for Facial Landmark Annotation," in 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Portland, 2013.

E. Kuramoto, S. Yoshinaga, H. Nakao, S. Nemoto, and Y. Ishida, “Characteristics of facial muscle activity during voluntary facial expressions: Imaging analysis of facial expressions based on myogenic potential data.” Neuropsychopharmacology Reports, Vol. 39, No. 3, pp. 183–193, 2019.