Facial Expression Recognition using Convolutional Neural Networks Based on 18 Variations of Image Processing

พนเมษ ญาณฐิติรัตน์
จรัญ แสนราช
สรเดช ครุฑจ้อน


This paper proposes the design of a facial expression recognition system using convolutional neural networks based on 18 variations of image processing applied to the FER2013 Challenge dataset. The dataset is grouped into three emotion classes: negative, neutral, and positive. The network is trained for classification on the training subset and evaluated on the testing subset. The 18 image-processing variations are: Original, Alignment, Crop20-80, Crop30-70, Crop40-60, Crop50-50, Crop60-40, Crop70-30, and Crop80-20, plus each of these nine with a horizontal Flip. Eight convolutional neural network models are applied in this experiment: 3CNNs, Mememoji CNNs, DenseNet121, DenseNet169, DenseNet201, InceptionResNetV2, MobileNet, and MobileNetV2. Combining the eight models with the 18 dataset variations yields 144 model-dataset pairs for comparison. The proposed model, 3CNNs trained on the Alignment with Flip dataset, achieves the highest accuracy at 82.41% and is suitable for online learning.
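As a rough illustration of how the 18 dataset variations combine with the eight models to give 144 experimental runs, the sketch below (not the authors' code) enumerates the variations for a single FER2013 image. The crop geometry is an assumption: the abstract does not specify what "CropX-Y" means, so here it is interpreted as a horizontal cut at X% of the image height; the alignment step is left as a placeholder.

```python
import numpy as np

# Hypothetical sketch of the 18 image-processing variations.
# Assumptions (not stated in the abstract): "CropX-Y" cuts the image at
# X% of its height, and "Flip" is a left-right mirror of each variation.
CROP_SPLITS = [(20, 80), (30, 70), (40, 60), (50, 50),
               (60, 40), (70, 30), (80, 20)]

def crop_split(img: np.ndarray, top_pct: int) -> np.ndarray:
    """Keep the region below a cut at top_pct% of the image height."""
    cut = img.shape[0] * top_pct // 100
    return img[cut:, :]

def hflip(img: np.ndarray) -> np.ndarray:
    """Mirror the image left to right."""
    return img[:, ::-1]

def build_variations(img: np.ndarray) -> dict:
    # Nine base variations: Original, Alignment, and the seven crops.
    base = {"Original": img,
            "Alignment": img}  # placeholder: face alignment not shown here
    for top, bottom in CROP_SPLITS:
        base[f"Crop{top}-{bottom}"] = crop_split(img, top)
    # Each base variation also gets a horizontally flipped counterpart.
    flipped = {f"{name} with Flip": hflip(v) for name, v in base.items()}
    return {**base, **flipped}

sample = np.zeros((48, 48), dtype=np.uint8)  # FER2013 images are 48x48
variations = build_variations(sample)
print(len(variations))          # 18 variations
print(len(variations) * 8)      # x 8 CNN models = 144 model-dataset pairs
```

Training each of the eight networks once per variation is what produces the 144 compared configurations reported in the paper.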