Humans have a remarkable capacity for learning, yet neuronal learning is constrained by locality in time and space and by limited feedback. While neural learning rules have been designed that adhere to these principles and constraints, they struggle to scale to deep networks and complex datasets. BrainProp is a biologically plausible learning rule that learns from trial-and-error feedback through reinforcement learning, generalises to deep networks, and achieves good performance on traditional machine learning benchmarks. It does, however, falter on problems with a large number of output categories, such as the classical ImageNet vision benchmark: while standard BrainProp eventually succeeds, learning is not robust and is highly sensitive to hyper-parameter optimisation and proper initialisation. Here, we leverage insights from behavioural science by developing a curriculum that structures how samples are presented to a network to optimise learning. The key features of the curriculum are progressively introducing new classes to the dataset based on performance metrics, and using a recency bias to protect recently acquired classes. We demonstrate that our curriculum approach makes BrainProp-style learning robust and more rapid, while substantially improving classification accuracy. We also show that the curriculum similarly improves performance for networks trained using error-backpropagation. We thus establish a new state-of-the-art performance for large-scale deep reinforcement learning. Our results show the potential of curriculum learning in local learning settings with limited feedback and further bridge the gap between biologically plausible learning rules and error-backpropagation.
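The two curriculum features described above (adding classes once current ones are learned, and oversampling newly added classes) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the accuracy threshold, recency weight, and class-introduction schedule are assumptions chosen for clarity.

```python
import random

class ClassCurriculum:
    """Illustrative curriculum: grow the active class set based on
    performance, and bias sampling toward recently added classes."""

    def __init__(self, num_classes, accuracy_threshold=0.7, recency_boost=3.0):
        self.num_classes = num_classes
        self.accuracy_threshold = accuracy_threshold  # assumed per-class accuracy gate
        self.recency_boost = recency_boost            # assumed extra weight for new classes
        self.active = [0]                             # start with a single class
        self.recent = []                              # classes protected by the recency bias

    def update(self, per_class_accuracy):
        """Introduce the next class once all active classes clear the threshold."""
        ready = all(per_class_accuracy.get(c, 0.0) >= self.accuracy_threshold
                    for c in self.active)
        if ready and len(self.active) < self.num_classes:
            new_class = len(self.active)
            self.active.append(new_class)
            self.recent = [new_class]  # protect the newest class while it is acquired

    def sample_class(self, rng=random):
        """Pick the class of the next training sample, favouring recent classes."""
        weights = [self.recency_boost if c in self.recent else 1.0
                   for c in self.active]
        return rng.choices(self.active, weights=weights, k=1)[0]
```

In use, `update` would be called after each evaluation round with measured per-class accuracies, and `sample_class` would drive the selection of training examples; the recency weighting keeps a freshly introduced class from being drowned out by already-mastered ones.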

doi.org/10.1109/IJCNN64981.2025.11229171
Perceptive acting under uncertainty: safety solutions for autonomous systems, Human Brain Project - SGA3
2025 International Joint Conference on Neural Networks (IJCNN)
Machine Learning

van den Berg, A., Roelfsema, P., & Bohte, S. (2025). Curriculum design for scalable biologically plausible deep reinforcement learning. In Proceedings of the IEEE International Joint Conference on Neural Networks (pp. 1–8). doi:10.1109/IJCNN64981.2025.11229171