Fostering explainable online review assessment through computational argumentation
Explainable methods have received increasing attention within artificial intelligence. Whenever an automated system makes a decision, an explanation is required to convince a user of that decision. Furthermore, online information quality assessment is crucial to help users navigate information. However, explaining the assessment of online information has not yet been well addressed. The current work provides explanations to users about the assessment of online information and, specifically, provides explanations for the quality assessments of online reviews. We construct an abstract argumentation framework (AF) based on a set of given reviews. We use the grounded semantics of AFs to assess each topic. Then, we discuss the question of why a given score can be assigned to a topic of a product. Furthermore, we derive a proper score for a review based on the scores of the topics within the review in question. We also collect arguments that can support the chosen score of a review.
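As a rough illustration of the machinery the abstract refers to, the grounded semantics of an abstract argumentation framework can be computed as the least fixed point of the characteristic function F(S) = {a | S defends a}. The sketch below is a minimal, generic implementation of that fixed-point computation, not the paper's actual system; the example arguments and attacks are hypothetical.

```python
# Minimal sketch: grounded extension of an abstract argumentation
# framework (AF), computed as the least fixed point of the
# characteristic function F(S) = {a | S defends a}.
# This is a generic illustration, not the paper's implementation.

def grounded_extension(arguments, attacks):
    """arguments: set of argument labels; attacks: set of (attacker, target) pairs."""
    # Precompute the attackers of each argument.
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s):
        # a is defended by s if every attacker of a is itself
        # attacked by some member of s.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s)
                       for b in attackers[a])}

    # Iterate F from the empty set until a fixed point is reached.
    s = set()
    while True:
        nxt = defended(s)
        if nxt == s:
            return s
        s = nxt

# Hypothetical example: b attacks a, c attacks b. The unattacked
# argument c defends a, so the grounded extension is {a, c}.
print(grounded_extension({"a", "b", "c"}, {("b", "a"), ("c", "b")}))
```

Unattacked arguments enter the extension first, and each iteration adds the arguments they defend, which matches the skeptical, uniquely determined character of grounded semantics used here for topic assessment.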
CEUR Workshop Proceedings
1st International Workshop on Argumentation for eXplainable AI, ArgXAI 2022
Keshavarzi Zafarghandi, A., & Ceolin, D. (2022). Fostering explainable online review assessment through computational argumentation. In ArgXAI-22: Argumentation for eXplainable AI.