In this paper we study space debris removal from a game-theoretic perspective. In particular, we focus on the question of whether and how self-interested agents can cooperate in this dilemma, which resembles a tragedy of the commons scenario. We compare centralised and decentralised solutions and the corresponding price of anarchy, which measures the extent to which competition approximates cooperation. In addition, we investigate whether agents can learn optimal strategies by reinforcement learning. To this end, we improve on an existing high-fidelity orbital simulator and use this simulator to obtain a computationally efficient surrogate model for our subsequent game-theoretic analysis. We study both single- and multi-agent approaches using stochastic (Markov) games and reinforcement learning. The main finding is that the cost of a decentralised, competitive solution can be significant, which should be taken into consideration when forming debris removal strategies.
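As a point of reference, the price of anarchy is commonly defined (in standard game-theoretic notation, not notation taken from this paper) as the ratio between the social cost of the worst Nash equilibrium and the cost of the socially optimal, centralised solution:

\[
  \mathrm{PoA} \;=\; \frac{\max_{s \in \mathcal{E}} C(s)}{\min_{s \in S} C(s)} \;\ge\; 1,
\]

where $S$ denotes the set of joint strategies, $\mathcal{E} \subseteq S$ the set of Nash equilibria, and $C(s)$ the social cost of joint strategy $s$. A value close to 1 indicates that decentralised, competitive behaviour closely approximates the centralised optimum.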

Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands

Klima, R., Bloembergen, D., Savani, R., Tuyls, K., Wittig, A., Sapera, A., & Izzo, D. (2018). Space debris removal: Learning to cooperate and the price of anarchy. Frontiers in Robotics and AI, 5, 54:1–54:22. doi:10.3389/frobt.2018.00054