Advances in Generative Artificial Intelligence (AI) are resulting in AI-generated media output that is (nearly) indistinguishable from human-created content. This can drastically impact users and the media sector, especially given global risks of misinformation. While the currently discussed European AI Act aims to address these risks through Article 52's AI transparency obligations, its interpretation and implications remain unclear. In this early work, we adopt a participatory AI approach to derive key questions based on Article 52's disclosure obligations. We ran two workshops with researchers, designers, and engineers across disciplines (N=16), in which participants deconstructed Article 52's relevant clauses using the 5W1H framework. We contribute a set of 149 questions clustered into five themes and 18 sub-themes. We believe these can not only help inform future legal developments and interpretations of Article 52, but also provide a starting point for Human-Computer Interaction research to (re-)examine disclosure transparency from a human-centered AI lens.

doi.org/10.1145/3613905.3650750
2024 CHI Conference on Human Factors in Computing Systems, CHI EA 2024
Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands

El Ali, A., Venkatraj, K., Morosoli, S., Naudts, L., Klocke, P., & César Garcia, P. S. (2024). Transparent AI disclosure obligations: Who, what, when, where, why, how. In CHI EA: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 342:1–342:11). doi:10.1145/3613905.3650750