Using SMIL to Encode Interactive, Peer-Level Multimedia Annotations
Presented at the ACM Symposium on Document Engineering, Grenoble, France
This paper discusses applying facilities in SMIL 2.0 to the problem of annotating multimedia presentations. Rather than viewing annotations as collections of (abstract) meta-information for use in indexing, retrieval, or semantic processing, we view annotations as a set of peer-level content items with temporal and spatial relationships that are important in presenting a coherent story to a user. The composite nature of the collection of media is essential to peer-level annotation: one would typically annotate a single media item quite differently from the same media item in the context of a complete presentation. This paper focuses on the document engineering aspects of the annotation system. We do not consider any particular user interface for creating annotations, nor any back-end storage architecture for saving or searching them. Instead, we focus on how annotations can be represented within a common document architecture, and we consider means of providing document facilities that meet the requirements of our user model. We present our work in the context of a medical patient dossier example.
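To make the notion of a peer-level annotation concrete, the following is a minimal SMIL 2.0 sketch in the spirit of the patient-dossier example: a base video is played in parallel with an audio comment and a text note that have explicit temporal (`begin`, `dur`) and spatial (`region`) relationships to it. All element names, file names, and region ids are illustrative assumptions, not taken from the paper itself.

```xml
<!-- Hypothetical sketch: annotations as temporal/spatial peers of the
     base media, not as external metadata. File names and region ids
     are invented for illustration. -->
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
    <layout>
      <region id="main"    top="0"   left="0" width="320" height="240"/>
      <region id="caption" top="240" left="0" width="320" height="40"/>
    </layout>
  </head>
  <body>
    <par>
      <!-- base media item: the original presentation content -->
      <video src="scan.mpg" region="main"/>
      <!-- annotation as a temporal peer: an audio comment
           starting 5 seconds into the base video -->
      <audio src="comment.wav" begin="5s"/>
      <!-- annotation with both temporal and spatial placement -->
      <text src="note.txt" region="caption" begin="5s" dur="10s"/>
    </par>
  </body>
</smil>
```

Because the annotations are scheduled and laid out by the same `<par>` container as the base video, a SMIL player renders them as part of one composite presentation rather than as detached metadata.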
Organisation: Distributed and Interactive Systems
Bulterman, D.C.A. (2003). Using SMIL to Encode Interactive, Peer-Level Multimedia Annotations. In Proceedings of the ACM Symposium on Document Engineering 2003 (pp. 32–41). ACM.