Fact-checking is a common journalistic practice adopted to verify the truthfulness of claims and information items. Because fact-checking is demanding, a significant amount of research has been devoted to crowdsourcing as a way to scale up this practice. Using laypeople to fact-check information gives access to a vast pool of human computation resources, but it raises a reliability concern: when these tasks are performed by laypeople instead of experts, the quality of the resulting assessments may be questioned. In this paper, we introduce an ontology for modeling crowdsourced datasets of information quality assessments. The ontology models not only the items evaluated but also important metadata, such as the authors of the assessments. The goal of this model is to foster interoperability among datasets of this kind and to support internal analyses of the datasets themselves in terms of the bias and reliability of the collected assessments.
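To make the modeling idea concrete, the following is a minimal sketch in Python using rdflib of how such a dataset might be represented: an information item, a crowd worker, and a quality assessment linking the two. The namespace IRI and all class and property names (InformationItem, Assessor, QualityAssessment, evaluates, assessedBy, truthfulnessScore) are assumptions for illustration only, not the ontology's actual vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import FOAF, XSD

# Hypothetical namespace for the ontology; the real IRI and terms
# are defined in the paper, not here.
CIQ = Namespace("http://example.org/crowdiq#")

g = Graph()
g.bind("ciq", CIQ)
g.bind("foaf", FOAF)

# An information item (e.g., a claim) under evaluation.
item = URIRef("http://example.org/items/claim-42")
g.add((item, RDF.type, CIQ.InformationItem))

# A crowd worker who authored an assessment.
worker = URIRef("http://example.org/workers/w-007")
g.add((worker, RDF.type, CIQ.Assessor))
g.add((worker, FOAF.name, Literal("Worker 007")))

# The quality assessment itself, linking item, author, and a score.
assessment = URIRef("http://example.org/assessments/a-1")
g.add((assessment, RDF.type, CIQ.QualityAssessment))
g.add((assessment, CIQ.evaluates, item))
g.add((assessment, CIQ.assessedBy, worker))
g.add((assessment, CIQ.truthfulnessScore,
       Literal(0.8, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```

Keeping the assessor as a first-class resource, rather than a plain literal, is what enables the bias and reliability analyses mentioned above, since assessments by the same worker can be grouped and compared across items.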

Davide Ceolin, Dafne van Kuppevelt, Ji Qi
CEUR Workshop Proceedings
23rd International Conference on Knowledge Engineering and Knowledge Management (EKAW 2022), Companion Volume (EKAW-C 2022)
Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands

Ceolin, D., van Kuppevelt, D., & Qi, J. (2022). CrowdIQ: An ontology for crowdsourced information quality assessments. In EKAW 2022 Companion Volume (EKAW-C 2022).