Welcome to tidnabbil! This repository provides algorithms for learning in structured bandit models, in both the fixed-confidence and regret-minimization settings. The methods are based on iterated saddle-point solvers and come with guarantees that in particular imply asymptotic optimality. The library is made available in the hope that it is useful to others. The code for the experiments in our structured bandit papers is included to ensure reproducibility and to provide examples to get you started. Cheers!
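To give a flavor of the iterated saddle-point idea (this is an illustrative sketch, not tidnabbil's actual API): a two-player zero-sum game can be solved approximately by pitting a no-regret learner against best responses and averaging the iterates. In game-based bandit algorithms, the maximizing player's strategy plays the role of a sampling allocation and the minimizing player's the role of a worst-case alternative model; here we just solve a small matrix game with Hedge (exponential weights) to show the mechanics.

```python
import numpy as np

def solve_saddle_point(A, iters=2000, lr=0.1):
    """Approximate max_x min_y x^T A y over the simplices by iterating
    a Hedge learner (x-player) against best responses (y-player).
    Illustrative only; not code from the tidnabbil library."""
    n, m = A.shape
    x = np.full(n, 1.0 / n)   # Hedge player's current mixed strategy
    x_sum = np.zeros(n)
    y_sum = np.zeros(m)
    for _ in range(iters):
        # Minimizing player best-responds to the current x
        j = int(np.argmin(x @ A))
        y_sum[j] += 1.0
        # Maximizing player takes an exponential-weights step on column j
        x = x * np.exp(lr * A[:, j])
        x /= x.sum()
        x_sum += x
    # Averaged iterates approximate a saddle point of the game
    return x_sum / iters, y_sum / iters

if __name__ == "__main__":
    # Rock-paper-scissors: game value 0, uniform equilibrium
    A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
    x_bar, y_bar = solve_saddle_point(A)
    print(np.round(x_bar, 2), np.round(y_bar, 2))
```

The averaged strategies converge to an approximate equilibrium at the usual no-regret rate; the bandit algorithms in the papers run an analogous loop online, with estimated rather than known payoffs.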