Collecting accurate and precise emotion ground truth labels for mobile video watching is essential for ensuring meaningful predictions. However, video-based emotion annotation techniques either rely on post-stimulus discrete self-reports, or support real-time, continuous emotion annotation (RCEA) only in desktop settings. Following a user-centric approach, we designed an RCEA technique for mobile video watching, and validated its usability and reliability in a controlled indoor study (N=12) and a subsequent outdoor study (N=20). Drawing on physiological measures, interaction logs, and subjective workload reports, we show that (1) RCEA is perceived as usable for annotating emotions while watching mobile videos, without increasing users' mental workload, and (2) the resulting time-variant annotations are comparable with the intended emotion attributes of the video stimuli (classification error for valence: 8.3%; arousal: 25%). We contribute a validated annotation technique and an associated annotation fusion method suitable for collecting fine-grained emotion annotations while users watch mobile videos.
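The abstract does not spell out how individual annotations are fused or how the classification error is computed, but a minimal sketch can make the pipeline concrete. The Python snippet below assumes per-user continuous valence/arousal traces in [-1, 1], fuses them by a simple per-sample mean across users, and scores each dimension by sign agreement with the stimulus's intended label. The function names, the mean-fusion rule, and the sign-based scoring are illustrative assumptions, not the paper's actual fusion method.

```python
import numpy as np

def fuse_annotations(annotations):
    """Fuse continuous annotations from several viewers into one trace.

    annotations: array of shape (n_users, n_samples, 2) holding
    valence/arousal values in [-1, 1]. Returns the per-sample mean
    across users, shape (n_samples, 2). (Mean fusion is an assumption;
    the paper's fusion method may differ.)
    """
    return annotations.mean(axis=0)

def dimension_error(fused, intended):
    """Per-dimension classification error: the fraction of samples where
    the sign of the fused value disagrees with the intended label
    (+1 for high, -1 for low)."""
    return float(np.mean(np.sign(fused) != np.sign(intended)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: 12 users annotating 60 s of video at 10 Hz,
    # biased toward high valence / low arousal to match an intended
    # high-valence, low-arousal clip.
    ann = rng.normal(loc=[0.4, -0.3], scale=0.5, size=(12, 600, 2))
    fused = fuse_annotations(ann)
    print("valence error:", dimension_error(fused[:, 0], +1.0))
    print("arousal error:", dimension_error(fused[:, 1], -1.0))
```

Fusing before scoring (rather than scoring each user separately) smooths out individual noise in the continuous traces, which is one plausible reason a time-variant aggregate can track the stimulus's intended attributes closely.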

DOI: https://doi.org/10.1145/3313831.3376808
ACM CHI Conference on Human Factors in Computing Systems (CHI 2020)
Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands

Zhang, T., El Ali, A., Wang, C., Hanjalic, A., & César Garcia, P. S. (2020). RCEA: Real-time, Continuous Emotion Annotation for Collecting Precise Mobile Video Ground Truth Labels. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–15). doi:10.1145/3313831.3376808