The algorithmic detection of disinformation online currently relies on two strategies: on the one hand, research focuses on automated fact-checking; on the other, models are being developed to assess the trustworthiness of information sources, drawing on both empirical and theoretical research on credibility and content quality. In debates among experts, in particular, it can be hard to discern less reliable information, as all actors are by definition qualified. In these cases, applying trustworthiness metrics to sources is a useful proxy for establishing the truthfulness of content. We introduce an algorithmic model for automatically generating a dynamic trustworthiness hierarchy among information sources based on several parameters, including fact-checking. The method is novel and significant in two respects: first, the generated hierarchy offers laypeople a helpful tool for navigating experts’ debates; second, it makes it possible to identify and overcome biases arising from the intuitive rankings agents hold at the outset of a debate. We provide an experimental analysis of our algorithmic model applied to the debate on the SARS-CoV-2 virus that took place among Italian medical specialists between 2020 and 2021.
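To make the idea of a dynamic trustworthiness hierarchy concrete, the following is a minimal illustrative sketch, not the paper’s actual model: each source carries a trust score that is nudged up or down by fact-checking outcomes, and the hierarchy is simply the ranking by current score. All names (`Source`, `update_trust`, `hierarchy`) and the update rule are hypothetical assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    trust: float = 0.5  # assumed prior trustworthiness in [0, 1]

def update_trust(source: Source, fact_check_passed: bool, weight: float = 0.2) -> None:
    """Toy update rule: move the score toward 1 on a passed fact-check, toward 0 on a failed one."""
    target = 1.0 if fact_check_passed else 0.0
    source.trust += weight * (target - source.trust)

def hierarchy(sources: list[Source]) -> list[str]:
    """Return source names ranked by current trustworthiness, highest first."""
    return [s.name for s in sorted(sources, key=lambda s: s.trust, reverse=True)]

# Toy run: two hypothetical experts start with equal priors,
# then diverge after one fact-check each.
a, b = Source("expert_A"), Source("expert_B")
update_trust(a, True)    # A's claim passes a fact-check -> trust rises
update_trust(b, False)   # B's claim fails one -> trust falls
print(hierarchy([a, b]))  # prints ['expert_A', 'expert_B']
```

Because the scores are updated after every fact-check, the resulting ranking is dynamic: it can depart from whatever intuitive ordering agents held at the start of the debate, which is the bias-correcting effect the abstract refers to.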

Journal of Experimental & Theoretical Artificial Intelligence
Centrum Wiskunde & Informatica, Amsterdam (CWI), The Netherlands

Primiero, G., Ceolin, D., & Doneda, F. (2023). A computational model for assessing experts’ trustworthiness. Journal of Experimental & Theoretical Artificial Intelligence. doi:10.1080/0952813X.2023.2183272