The brain is thought to employ on-policy reinforcement learning akin to SARSA, and most interesting cognitive tasks require some form of memory while taking place in continuous time. Recent work has developed biologically plausible reinforcement learning schemes that are compatible with these requirements; still lacking, however, is a formulation of both computation and learning in terms of spiking neurons. Such a formulation maps more closely to biology and expresses learning in terms of asynchronous and sparse neural computation. We present a spiking neural network with memory that learns cognitive tasks in continuous time. Learning is implemented in a biologically plausible manner using the AuGMeNT framework, and we show how separate spiking feedforward and feedback networks suffice to learn the tasks just as fast as the analog CT-AuGMeNT counterpart, while computing efficiently using very few spikes: 1–20 Hz on average.

Karamanis, M., Zambrano, D., & Bohte, S. (2018). Continuous-time spike-based reinforcement learning for working memory tasks. In ICANN 2018: Artificial Neural Networks and Machine Learning (pp. 250–262). doi:10.1007/978-3-030-01421-6_25