Backpropagation through time (BPTT) is the de facto standard for training recurrent neural networks (RNNs), but it is non-causal and non-local. Real-time recurrent learning (RTRL) is a causal alternative, but it is highly inefficient. Recently, e-prop was proposed as a causal, local, and efficient practical alternative to these algorithms, approximating the exact gradient by radically pruning the recurrent dependencies carried over time. Here, we derive RTRL from BPTT using a detailed notation that brings intuition and clarifies how the two are connected. Furthermore, we place e-prop within this picture, formalising what it approximates. Finally, we derive a family of algorithms of which e-prop is a special case.
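The pruning idea the abstract refers to can be illustrated with a minimal sketch. The following is not the paper's derivation, but a hedged, e-prop-style toy example for a leaky rate-based RNN: instead of tracking the full RTRL sensitivity tensor, each input weight keeps a *local* eligibility trace that decays with the unit's own leak, dropping all cross-unit recurrent paths; the trace is then combined with an instantaneous learning signal (here, an assumed per-step loss `L_t = ||h_t||^2 / 2` chosen only for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_rec, T = 3, 4, 5
alpha = 0.8                                   # leak factor of the hidden state
W_in = rng.normal(0.0, 0.5, (n_rec, n_in))
W_rec = rng.normal(0.0, 0.5, (n_rec, n_rec))

h = np.zeros(n_rec)
# Eligibility trace for the input weights: one entry per (unit i, input j).
# It carries only the unit-local dependency on the past (the leak alpha),
# pruning the recurrent paths through W_rec -- the e-prop-style approximation.
e_in = np.zeros((n_rec, n_in))
grad_in = np.zeros((n_rec, n_in))             # accumulated gradient estimate

for t in range(T):
    x = rng.normal(0.0, 1.0, n_in)
    a = W_rec @ h + W_in @ x                  # pre-activation
    h = alpha * h + (1 - alpha) * np.tanh(a)  # leaky state update
    # Trace update: decay by the local term alpha, add the instantaneous
    # sensitivity of h_t to W_in (tanh' at the pre-activation, times the input).
    e_in = alpha * e_in + (1 - alpha) * (1 - np.tanh(a) ** 2)[:, None] * x[None, :]
    # Instantaneous learning signal dL_t/dh_t for the assumed loss above:
    learning_signal = h
    grad_in += learning_signal[:, None] * e_in
```

This sketch only shows the structure of the approximation: all information needed at step `t` is available at step `t` (causal) and per-synapse (local), at the cost of ignoring gradient contributions that flow through other units' recurrent connections.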

Lecture Notes in Computer Science
31st International Conference on Artificial Neural Networks

Martín-Sánchez, G., Bohte, S.M., & Otte, A.S.E. (2022). A taxonomy of recurrent learning rules. In Proceedings of the International Conference on Artificial Neural Networks (pp. 478–490). doi:10.1007/978-3-031-15919-0_40