Can we enable biologically plausible neural networks to learn complex cognitive functions? In this thesis, I developed novel learning rules and training methods inspired by neuroscience and investigated how well artificial neural networks trained with these techniques align with the behaviour and neural activity of animals performing the same tasks. In the second chapter, I designed a simplified model trained with a local learning rule to perform tasks that require the flexible use of memory, both within trials and across learning experiences through meta-learning. I demonstrated that these networks exhibit important characteristics also observed in animals trained on the same tasks. In the third chapter, I extended this learning rule to deeper architectures to investigate how memories are represented and maintained across the layers of the network. In the fourth chapter, I accelerated and improved the learning dynamics of networks trained with reinforcement learning so that they scale to larger, more complex problems such as ImageNet. Finally, I summarise these findings, place them in a broader context, and delineate the challenges and opportunities that remain for the field. Overall, the chapters in this thesis contribute to the advancement of more flexible and scalable biologically plausible neural networks for deep cognitive control.

S.M. Bohte (Sander), P.R. Roelfsema (Pieter)
Universiteit van Amsterdam
hdl.handle.net/11245.1/6f887b16-813b-4909-be94-3d2a040619d9
Machine Learning

van den Berg, S. (2025, July). Biologically plausible reinforcement learning of deep cognitive processing. Retrieved from http://hdl.handle.net/11245.1/6f887b16-813b-4909-be94-3d2a040619d9