Representation learning for emotion recognition from smartphone keyboard interactions
Characteristics of typing on smartphone keyboards vary across individuals and can convey emotion, much like speech prosody or facial expressions. Existing work on typing-based emotion recognition relies on feature engineering to build machine learning models, while recent speech- and facial-expression-based techniques have shown the efficacy of learning the features automatically. Therefore, in this work, we explore the effectiveness of such learning models in keyboard-interaction-based emotion detection. We propose an end-to-end framework that first uses a sequence-based encoding method to automatically learn a representation from raw keyboard interaction patterns and subsequently uses this representation to train a multi-task learning based neural network (MTL-NN) to identify different emotions. We carry out a 3-week in-the-wild study involving 24 participants using a custom keyboard capable of tracing users' interaction patterns during text entry. During the study we collect interaction details such as touch speed, error rate, and pressure, along with self-reported emotions (happy, sad, stressed, relaxed). Our analysis of the collected dataset reveals that the representation learnt from the interaction patterns has an average correlation of 0.901 within the same emotion and 0.811 between different emotions. As a result, the representation is effective in distinguishing different emotions, with an average accuracy (AUCROC) of 84%.
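The multi-task setup described above can be sketched as a shared trunk feeding one binary head per emotion. This is a minimal illustrative forward pass only; the layer sizes, activations, and the 64-dimensional session representation are assumptions for the sketch, not details taken from the paper, which learns the representation from raw keyboard traces with a sequence-based encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a 64-d learnt
# representation of a typing session and a 32-unit shared layer.
EMOTIONS = ["happy", "sad", "stressed", "relaxed"]
D_REPR, D_SHARED = 64, 32

# Shared trunk weights, plus one task-specific output head per emotion.
W_shared = rng.normal(scale=0.1, size=(D_REPR, D_SHARED))
heads = {e: rng.normal(scale=0.1, size=(D_SHARED, 1)) for e in EMOTIONS}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mtl_forward(repr_batch):
    """Forward pass: one shared hidden layer feeds every emotion head."""
    hidden = np.tanh(repr_batch @ W_shared)  # representation shared by all tasks
    return {e: sigmoid(hidden @ W).ravel() for e, W in heads.items()}

# A batch of 5 random session representations stands in for the
# encoder output; each head scores its emotion in (0, 1).
scores = mtl_forward(rng.normal(size=(5, D_REPR)))
```

In training, each head would receive its own binary loss and the gradients would update the shared trunk jointly, which is the usual benefit of the multi-task formulation: the emotions share a common typing representation while keeping separate decision boundaries.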
International Conference on Affective Computing & Intelligent Interaction
Organisation: Distributed and Interactive Systems
Ghosh, S., Goenka, S., Ganguly, N., Mitra, B., & De, P. (2019). Representation learning for emotion recognition from smartphone keyboard interactions. In Proceedings of the International Conference on Affective Computing & Intelligent Interaction (pp. 704–710). doi:10.1109/ACII.2019.8925518