2019-05-13
Preference Learning in Automated Negotiation Using Gaussian Uncertainty Models
Publication
Presented at the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '19), May 2019, Montreal, QC, Canada
In this paper, we propose a general two-objective Markov Decision Process (MDP) modeling paradigm for automated negotiation with incomplete information, in which preference elicitation alternates with negotiation actions, with the objective of optimizing negotiation outcomes. The key ingredient in our MDP framework is a stochastic utility model governed by a Gaussian law, which formalizes the agent's belief (uncertainty) about the user's preferences. Our belief model is fairly general and can be updated in real time as new data becomes available, which makes it a fundamental modeling tool.
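The abstract does not spell out the update rule, but a Gaussian belief over linear utility weights admits a closed-form conjugate update when elicitation queries return noisy scalar responses. The sketch below is an illustrative assumption, not the paper's exact model: it treats the user's utility as `y = x @ w + noise` with `w ~ N(mu, Sigma)` and applies a Kalman-style posterior update.

```python
import numpy as np

def update_gaussian_belief(mu, Sigma, x, y, noise_var):
    """Conjugate Bayesian update of a Gaussian belief N(mu, Sigma)
    over utility weights w, given one noisy scalar observation
    y = x @ w + eps, with eps ~ N(0, noise_var).
    (Hypothetical helper; the paper's concrete model may differ.)"""
    x = np.asarray(x, dtype=float)
    # Predictive variance of the observation under the current belief.
    s = x @ Sigma @ x + noise_var
    # Kalman-style gain vector.
    k = (Sigma @ x) / s
    # Posterior mean shifts toward the observed response;
    # posterior covariance shrinks along the queried direction.
    mu_new = mu + k * (y - x @ mu)
    Sigma_new = Sigma - np.outer(k, x @ Sigma)
    return mu_new, Sigma_new

# Example: two negotiation issues, vague prior belief.
mu = np.zeros(2)
Sigma = np.eye(2)
mu, Sigma = update_gaussian_belief(mu, Sigma,
                                   x=np.array([1.0, 0.5]),
                                   y=0.8, noise_var=0.1)
```

Because the update is a few matrix-vector products, it can run inside the negotiation loop, which is what makes interleaving elicitation and bidding tractable in an MDP.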
Additional Metadata | |
---|---|
DOI | https://dl.acm.org/doi/10.5555/3306127.3332019 |
Project | Representing Users in a Negotiation (RUN): An Autonomous Negotiator Under Preference Uncertainty |
Conference | The 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '19) |
Organisation | Intelligent and autonomous systems |
Citation | Leahu, H., Kaisers, M., & Baarslag, T. (2019). Preference Learning in Automated Negotiation Using Gaussian Uncertainty Models. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (pp. 2087–2089). https://dl.acm.org/doi/10.5555/3306127.3332019 |