Precise emotion ground truth labels for 360° virtual reality (VR) video watching are essential for fine-grained predictions under varying viewing behavior. However, current annotation techniques either rely on post-stimulus discrete self-reports, or on real-time, continuous emotion annotations (RCEA), the latter only for desktop/mobile settings. We present RCEA for 360° VR videos (RCEA-360VR), where we evaluate in a controlled study (N=32) the usability of two peripheral visualization techniques: HaloLight and DotSize. We furthermore develop a method that considers head movements when fusing labels. Using physiological, behavioral, and subjective measures, we show that (1) both techniques do not increase users' workload or sickness, nor break presence, (2) our continuous valence and arousal annotations are consistent with discrete within-VR and original stimuli ratings, and (3) users exhibit high similarity in viewing behavior, where fused ratings perfectly align with intended labels. Our work contributes usable and effective techniques for collecting fine-grained viewport-dependent emotion labels in 360° VR.
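To make the fusion idea concrete: a minimal sketch of viewport-dependent label fusion is shown below. This is not the paper's actual implementation; the sample fields, rating scale, and binning parameters (time_bin, yaw_bin) are illustrative assumptions. The core idea is that each user's continuous annotation stream carries both a valence/arousal rating and the head orientation at annotation time, so fusing amounts to grouping samples by time window and yaw sector and averaging within each cell.

```python
# Hypothetical sketch of head-movement-aware label fusion; names, fields,
# and bin sizes are assumptions, not the method described in the paper.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class Sample:
    t: float        # seconds into the 360° video
    valence: float  # continuous valence rating, assumed scale [-1, 1]
    arousal: float  # continuous arousal rating, assumed scale [-1, 1]
    yaw_deg: float  # head yaw at annotation time, in [0, 360)

def fuse_labels(samples, time_bin=1.0, yaw_bin=90.0):
    """Group samples by (time window, yaw sector) and average ratings,
    so each fused label reflects what users rated while facing that
    part of the 360° scene."""
    buckets = defaultdict(list)
    for s in samples:
        key = (int(s.t // time_bin), int(s.yaw_deg % 360 // yaw_bin))
        buckets[key].append(s)
    return {
        key: (mean(s.valence for s in group), mean(s.arousal for s in group))
        for key, group in buckets.items()
    }

# Two users facing the same yaw sector in the same second are fused;
# the third sample falls into a different (time, viewport) cell.
labels = fuse_labels([
    Sample(t=0.2, valence=0.6, arousal=0.4, yaw_deg=10.0),
    Sample(t=0.7, valence=0.8, arousal=0.2, yaw_deg=30.0),
    Sample(t=1.1, valence=-0.3, arousal=0.9, yaw_deg=200.0),
])
print(labels)  # fused (valence, arousal) per (time bin, yaw sector) key
```

Weighting samples by viewport overlap or dwell time, rather than hard yaw bins, would be a natural refinement of this sketch.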

doi.org/10.1145/3411764.3445487
CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
Distributed and Interactive Systems

Xue, T., El Ali, A., Zhang, T., Ding, G., & César Garcia, P. S. (2021). RCEA-360VR: Real-time, continuous emotion annotation in 360° VR videos for collecting precise viewport-dependent ground truth labels. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–15). doi:10.1145/3411764.3445487