Continuous-time spike-based reinforcement learning for working memory tasks
The brain purportedly employs on-policy reinforcement learning compatible with SARSA, and most interesting cognitive tasks require some form of memory while taking place in continuous time. Recent work has developed plausible reinforcement learning schemes that meet these requirements, but a formulation of both computation and learning in terms of spiking neurons is still lacking. Such a formulation maps more closely to biology and expresses learning in terms of asynchronous, sparse neural computation. We present a spiking neural network with memory that learns cognitive tasks in continuous time. Learning is implemented in a biologically plausible manner using the AuGMeNT framework, and we show that separate spiking feedforward and feedback networks suffice to learn the tasks just as fast as the analog CT-AuGMeNT counterpart, while computing efficiently with very few spikes: 1–20 Hz on average.
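For context, the on-policy SARSA update the abstract refers to can be sketched in its simplest tabular form (an illustrative sketch only; the paper itself implements this with spiking networks under the AuGMeNT framework, and the variable names and parameter values below are our assumptions, not taken from the paper):

```python
# Minimal tabular SARSA update (illustrative; not the paper's spiking implementation).
from collections import defaultdict

alpha, gamma = 0.1, 0.9  # learning rate and discount factor (assumed values)
Q = defaultdict(float)   # Q[(state, action)] -> estimated return, initialized to 0

def sarsa_update(s, a, r, s_next, a_next):
    """On-policy TD update: bootstraps from the action actually taken next."""
    td_error = r + gamma * Q[(s_next, a_next)] - Q[(s, a)]
    Q[(s, a)] += alpha * td_error
    return td_error

# One transition with reward 1.0 from an all-zero Q-table:
delta = sarsa_update("s0", "left", 1.0, "s1", "right")
```

The defining on-policy property is that the target uses `a_next`, the action the current policy actually selects, rather than a maximum over actions as in Q-learning.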
Keywords: Reinforcement learning, Spiking neurons, Working memory
Series: Lecture Notes in Computer Science
Project: Deep Spiking Vision: Better, Faster, Cheaper
Conference: International Conference on Artificial Neural Networks
Karamanis, M., Zambrano, D., & Bohte, S. M. (2018). Continuous-time spike-based reinforcement learning for working memory tasks. In Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence (pp. 250–262). doi:10.1007/978-3-030-01421-6_25