Search-based AI agents are state of the art in many challenging sequential decision-making domains. However, contemporary approaches lack the ability to explain, summarize, or visualize their plans and decisions, and how these are derived from traversing complex spaces of possible futures, contingencies, and eventualities spanned by the agent's available actions. This limits human trust in high-stakes scenarios, as well as effective human-AI collaboration. In this paper, we propose and motivate the new research direction of explainable search. We discuss how it differs from existing approaches in explainable AI, and outline important related research challenges with concrete examples, focusing in particular on online interactions and the resulting understanding of explanations in an ongoing process of mutual collaboration towards human goals.
IJCAI-PRICAI 2020 Workshop on Explainable Artificial Intelligence (XAI)
Baier, H. J. S., & Kaisers, M. (2021). Explainable search. In IJCAI-PRICAI 2020 Workshop on Explainable Artificial Intelligence (XAI).