2010-09-01
Creating and Sharing Personalized Time-Based Annotations of Videos on the Web
Publication
Presented at the ACM Symposium on Document Engineering, Manchester
This paper introduces a multimedia document model that can structure community comments about media. In particular, we describe a set of temporal transformations for multimedia documents that allow end-users to create and share personalized timed-text comments on third-party videos. The benefit over current approaches lies in the use of a rich captioning format that is not embedded in a specific video encoding format. Using a Web-based video annotation tool as an example, this paper describes how video clips from different video providers can be merged into a single logical unit to be captioned, and how annotations can be tailored to specific friends or family members. In addition, the described transformations allow for selective viewing and navigation through temporal links based on end-users' comments. We also report on a predictive timing model for synchronizing unstructured comments with specific events within a video. The contributions described in this paper have significant implications for the analysis of rich media social networking sites and the design of next-generation video annotation tools.
Additional Metadata | |
---|---|
Publisher | ACM |
DOI | doi.org/10.1145/1860559.1860567 |
Project | Together Anywhere, Together Anytime |
Conference | ACM Symposium on Document Engineering |
Organisation | Distributed and Interactive Systems |
Guimarães, R., César Garcia, P. S., & Bulterman, D. (2010). Creating and Sharing Personalized Time-Based Annotations of Videos on the Web. In Proceedings of the ACM Symposium on Document Engineering (ACM DocEng 2010) (pp. 27–36). ACM. doi:10.1145/1860559.1860567