Facial thermal imaging has in recent years been shown to be an effective modality for facial emotion recognition. However, deep learning in this field remains underexploited given the small number and size of the available datasets. The goal of this work is to improve the performance of existing deep networks in thermal facial emotion recognition by synthesizing new thermal images from images in the visible spectrum (RGB). To address this challenging problem, we propose an emotion-guided thermal CycleGAN (ET-CycleGAN). This Generative Adversarial Network (GAN) regularizes training with facial and emotion priors, extracted as features from Convolutional Neural Networks (CNNs) trained for face recognition and facial emotion recognition, respectively. To assess this approach, we synthesized images from the training set of the USTC-NVIE dataset and added them to the training set as a data augmentation strategy. Including images generated with the ET-CycleGAN increased emotion recognition accuracy by 10.9%. Our initial findings highlight the importance of adding priors related to training-set image attributes (in our case, face and emotion priors) to ensure those attributes are maintained in the generated images.
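The abstract describes regularizing CycleGAN training with face and emotion priors computed by fixed, pretrained CNN feature extractors. A minimal sketch of such a feature-matching prior loss is shown below; the tiny stand-in networks, function names, and loss weights are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the pretrained face-recognition and
# emotion-recognition CNNs used as fixed feature extractors
# (illustrative only; the paper's actual networks are larger).
def tiny_feature_extractor():
    return nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    )

face_net = tiny_feature_extractor().eval()
emotion_net = tiny_feature_extractor().eval()
for p in list(face_net.parameters()) + list(emotion_net.parameters()):
    p.requires_grad_(False)  # priors stay frozen during GAN training

def prior_loss(real_rgb, fake_thermal, lambda_face=1.0, lambda_emotion=1.0):
    """L1 distance between features of the source RGB image and the
    generated thermal image, encouraging face identity and emotion
    attributes to survive the spectral translation."""
    l_face = nn.functional.l1_loss(face_net(fake_thermal), face_net(real_rgb))
    l_emo = nn.functional.l1_loss(emotion_net(fake_thermal), emotion_net(real_rgb))
    return lambda_face * l_face + lambda_emotion * l_emo

# Usage: add prior_loss to the standard CycleGAN generator objective.
rgb = torch.rand(2, 3, 64, 64)           # batch of visible-spectrum images
fake = torch.rand(2, 3, 64, 64)          # generator output G(rgb), stubbed here
loss = prior_loss(rgb, fake)
```

In a full training loop this term would be summed with the usual adversarial and cycle-consistency losses before backpropagating through the generator.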

https://doi.org/10.1145/3395035.3425258
ACM International Conference on Multimodal Interaction
Distributed and Interactive Systems

Pons, G., El Ali, A., & César Garcia, P. S. (2020). ET-CycleGAN: Generating thermal images from images in the visible spectrum for facial emotion recognition. In ICMI 2020 Companion - Companion Publication of the 2020 International Conference on Multimodal Interaction (pp. 87–91). doi:10.1145/3395035.3425258