2012
Streaming Parallel GPU Acceleration of Large-Scale Filter-Based Spiking Neural Networks
Publication
Network: Computation in Neural Systems, Volume 23, Issue 4, p. 183–211
The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms that fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state of each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single-precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures for the GPU’s architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate plausible spiking neural networks of up to 50,000 neurons in better-than-realtime, processing over 35 million spiking events per second.
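The key claim here, that filter-based neurons admit extra update parallelism, follows from the form of the Spike Response Model: the membrane potential is a plain sum of filtered spike contributions, roughly u_i(t) = Σ_j w_ij Σ_f ε(t − t_j^(f)) in Gerstner's textbook notation. A sum like this can be evaluated in any order and recomputed from scratch at each step rather than integrated iteratively, which is also why single-precision rounding errors do not accumulate across steps. Below is a minimal CUDA sketch of such an update; the kernel name, the exponential PSP kernel ε(dt) = exp(−dt/τ), and the flat spike-buffer layout are assumptions for illustration, not the paper's actual data structures.

```cuda
// Minimal sketch of the additive membrane update for filter-based neurons,
// assuming an exponential PSP kernel and a flat buffer of recent spikes.
// Each thread owns one postsynaptic neuron; because the contributions are
// purely additive, neurons can be updated fully independently in parallel.
// All names and the data layout here are illustrative, not the paper's code.
__global__ void update_membrane(const float* spike_times, // times of recent spikes
                                const int*   spike_src,   // presynaptic neuron per spike
                                int          n_spikes,
                                const float* weights,     // [n_pre x n_post], row-major
                                float        t,           // current simulation time
                                float        tau,         // PSP kernel time constant
                                float*       u,           // membrane potentials [n_post]
                                int          n_post)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;        // postsynaptic index
    if (i >= n_post) return;

    float acc = 0.0f;
    for (int s = 0; s < n_spikes; ++s) {
        float dt = t - spike_times[s];
        if (dt < 0.0f) continue;                          // spike not yet arrived
        // One filtered contribution: eps(dt) = exp(-dt / tau), weighted.
        acc += weights[spike_src[s] * n_post + i] * __expf(-dt / tau);
    }
    u[i] = acc;  // the potential is a plain sum of filtered spike inputs
}
```

Note that consecutive threads read consecutive entries of each weight row, so the accesses coalesce; memory-layout choices along these lines are what the abstract credits with the three-to-five-fold speed-up of the update step.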
Additional Metadata |
---|---
Publisher | Informa Healthcare
DOI | doi.org/10.3109/0954898X.2012.733842
Journal | Network: Computation in Neural Systems
Organisation | Evolutionary Intelligence
Citation | Slazynski, L., & Bohte, S. (2012). Streaming Parallel GPU Acceleration of Large-Scale Filter-Based Spiking Neural Networks. Network: Computation in Neural Systems, 23(4), 183–211. doi:10.3109/0954898X.2012.733842