Multi-modal fusion methods for robust emotion recognition using body-worn physiological sensors in mobile environments
High-accuracy physiological emotion recognition typically requires participants to wear or attach obtrusive sensors (e.g., electroencephalography). To achieve precise emotion recognition using only wearable body-worn physiological sensors, my doctoral work focuses on researching and developing a robust fusion system that combines different physiological signals. Developing such a fusion system poses three challenges: 1) how to pre-process signals with different temporal characteristics and noise models, 2) how to train the fusion system with limited labeled data, and 3) how to fuse multiple signals with inaccurate and inexact ground truth. To overcome these challenges, I plan to explore semi-supervised, weakly supervised, and unsupervised machine learning methods to obtain precise emotion recognition in mobile environments. By developing such techniques, we can measure user engagement with larger numbers of participants and apply emotion recognition in a variety of scenarios, such as mobile video watching and online education.
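The pre-processing and fusion steps named in the abstract could be sketched as follows. This is an illustrative outline only, not the author's method: the helper names (`resample_to_common_rate`, `zscore`, `late_fusion`), the example sampling rates, and the choice of decision-level (weighted-average) fusion are all assumptions for the sake of the example.

```python
import numpy as np

def resample_to_common_rate(signal, src_hz, dst_hz):
    # Hypothetical helper for challenge 1: align signals whose
    # temporal characteristics differ by linear interpolation
    # onto a shared sampling rate.
    duration = len(signal) / src_hz
    src_t = np.arange(len(signal)) / src_hz
    dst_t = np.arange(int(duration * dst_hz)) / dst_hz
    return np.interp(dst_t, src_t, signal)

def zscore(signal):
    # Per-modality normalisation so differing noise scales
    # become comparable before fusion.
    return (signal - signal.mean()) / (signal.std() + 1e-8)

def late_fusion(prob_per_modality, weights=None):
    # Decision-level fusion (one possible scheme): weighted
    # average of per-modality class-probability vectors.
    probs = np.asarray(prob_per_modality, dtype=float)
    if weights is None:
        weights = np.ones(len(probs)) / len(probs)
    fused = np.average(probs, axis=0, weights=weights)
    return fused / fused.sum()

# Example: a 1 Hz heart-rate trace and a 4 Hz skin-conductance
# trace (made-up rates) resampled to a common 4 Hz timeline.
hr = zscore(resample_to_common_rate(np.sin(np.arange(10)), 1, 4))
gsr = zscore(np.random.default_rng(0).standard_normal(40))
features = np.stack([hr, gsr])          # shape (2, 40)

# Fuse two modality classifiers' class probabilities.
fused = late_fusion([[0.8, 0.2], [0.6, 0.4]])  # -> [0.7, 0.3]
```

Decision-level fusion is shown here because it tolerates modality-specific classifiers; feature-level fusion (concatenating the normalised signals) is the other common option and would use the `features` array directly.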
|Emotion recognition, Machine learning, Mobile environments, Physiological sensors|
|International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction|
|Organisation||Distributed and Interactive Systems|
Zhang, T. (2019). Multi-modal fusion methods for robust emotion recognition using body-worn physiological sensors in mobile environments. In ICMI 2019 - Proceedings of the 2019 International Conference on Multimodal Interaction (pp. 463–467). doi:10.1145/3340555.3356089