Neural ODE and SDE Models for Adaptation and Planning in Model-Based Reinforcement Learning
Publication
Transactions on Machine Learning Research, Volume 10/2025, pp. 1–22
We investigate neural ordinary and stochastic differential equations (neural ODEs and SDEs) as models of stochastic dynamics in fully and partially observed environments within a model-based reinforcement learning (RL) framework. Through a sequence of simulations, we show that neural SDEs more effectively capture the inherent stochasticity of transition dynamics, enabling high-performing policies with improved sample efficiency in challenging scenarios. We further leverage neural ODEs and SDEs for efficient policy adaptation to changes in environment dynamics via inverse models, requiring only limited interaction with the new environment. To address partial observability, we introduce a latent SDE model that combines an ODE with a GAN-trained stochastic component in latent space. Policies derived from this model provide a strong baseline, outperforming or matching general model-based and model-free approaches across stochastic continuous-control benchmarks. This work demonstrates the applicability of action-conditional latent SDEs to RL planning in environments with stochastic transitions. Our code is available at: https://github.com/ChaoHan-UoS/NeuralRL.
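
To make the modeling idea concrete, below is a minimal sketch of an action-conditional neural SDE transition model integrated with the Euler–Maruyama scheme. All names and hyperparameters (`NeuralSDEDynamics`, the hidden width, the step size `dt`, the substep count) are illustrative assumptions, not the authors' implementation; see the repository linked above for their code.

```python
import torch
import torch.nn as nn

class NeuralSDEDynamics(nn.Module):
    """Action-conditional neural SDE transition model (hypothetical sketch):
    ds = f(s, a) dt + g(s, a) dW, integrated with Euler-Maruyama."""

    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        in_dim = state_dim + action_dim
        # Drift network f(s, a): deterministic part of the dynamics
        self.drift = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, state_dim))
        # Diffusion network g(s, a): state/action-dependent noise scale
        self.diffusion = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim), nn.Softplus())  # keep scale positive

    def forward(self, state, action, dt=0.05, steps=4):
        """Roll the SDE forward over one environment step with Euler-Maruyama."""
        s = state
        for _ in range(steps):
            x = torch.cat([s, action], dim=-1)
            dW = torch.randn_like(s) * dt ** 0.5  # Brownian increment
            s = s + self.drift(x) * dt + self.diffusion(x) * dW
        return s

# Sample a stochastic next-state prediction (dimensions are placeholders)
model = NeuralSDEDynamics(state_dim=11, action_dim=3)
next_state = model(torch.randn(1, 11), torch.randn(1, 3))
```

Dropping the diffusion term reduces the same model to a neural ODE, which is the sense in which the ODE and SDE variants form a natural pair of transition models for planning.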
| Additional Metadata | |
|---|---|
| Journal | Transactions on Machine Learning Research |
| Citation | Han, C., Ioannou, S., Manneschi, L., Hayward, T. J., Mangan, M., Gilra, A., & Vasilaki, E. (2025). Neural ODE and SDE Models for Adaptation and Planning in Model-Based Reinforcement Learning. *Transactions on Machine Learning Research*, 10/2025, 1–22. |