The spatially embedded
Recurrent Neural Network
A model to reveal widespread links between structural and functional neuroscience findings
Work by Jascha Achterberg*, Danyal Akarca*, DJ Strouse, Cornelia Sheeran, Andrew Siyoon Ham, John Duncan, Duncan E. Astle
Main project:
Achterberg, J.*, Akarca, D.*, Strouse, D., Duncan, J., & Astle, D. E. (2023). Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence. https://www.nature.com/articles/s42256-023-00748-9
University of Cambridge press release with a summary of the project: https://www.cam.ac.uk/research/news/ai-system-self-organises-to-develop-features-of-brains-of-complex-organisms
New investigations of spiking networks and low-entropy modularity in seRNNs:
Sheeran, C.*, Ham, S.*, Astle, D., Achterberg, J., & Akarca, D. (2024). Spatial embedding promotes a specific form of modularity with low entropy and heterogeneous spectral dynamics. arXiv. https://arxiv.org/abs/2409.17693
____________________
Summary of key findings from all projects below
Github repository with example implementations of rate-based and spiking networks: https://github.com/8erberg/spatially-embedded-RNN
We are continuously expanding our model with additional biophysical constraints. Please get in touch if you are interested in collaborating on seRNN-related work.
Main project overview
Summary
RNNs optimized for task control under structural costs and communication constraints configure themselves to exhibit brain-like structural and functional properties.
These spatially embedded RNNs show:
A sparse modular small-world connectome
Spatial organization of their units according to their function
An energy-efficient mixed selective code
Convergence in parameter space
Online Lecture
Background & question
Because they are exposed to the same basic forces and optimization problems, brains commonly converge on similar features in their structural topology and function [1].
Can we observe this process when a recurrent neural network (RNN) is optimized for task control (a one-choice inference task) under structural cost and communication constraints?
Our approach
Spatially embedded recurrent neural networks (seRNNs) are characterized by a special regularization function that embeds their units in a 3D box and imposes local communication constraints; a minimal sketch of this penalty is given below.
We trained a population of 1000 seRNNs (10 epochs each) and compared them to 1000 L1-regularized RNNs as baseline models. In both populations we systematically varied the regularization strength.
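To make the constraint concrete, here is a minimal NumPy sketch of a spatial-plus-communicability penalty of this kind. The unit placement, normalization, and the exact way the term enters the training loss are simplifications on our part; the released implementations in the GitHub repository above are the reference.

```python
import numpy as np
from itertools import product
from scipy.linalg import expm

def grid_coordinates(n_side=5):
    """Place n_side**3 recurrent units on an evenly spaced 3D lattice (the "box")."""
    return np.array(list(product(range(n_side), repeat=3)), dtype=float)

def distance_matrix(coords):
    """Pairwise Euclidean distances between unit positions."""
    return np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def communicability(W):
    """Weighted communicability of the absolute recurrent weight matrix."""
    A = np.abs(W)
    s_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1) + 1e-8))
    return expm(s_inv_sqrt @ A @ s_inv_sqrt)

def spatial_penalty(W, D, gamma=0.01):
    """Regularizer added to the task loss: gamma * sum(|W| * distance * communicability)."""
    return gamma * np.sum(np.abs(W) * D * communicability(W))

coords = grid_coordinates(5)              # 125 units in a 5 x 5 x 5 box
D = distance_matrix(coords)
W = np.random.randn(125, 125) * 0.1       # stand-in for a trained recurrent matrix
print(spatial_penalty(W, D))
```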
Structural findings
As in empirical brain networks, seRNNs configured themselves to exhibit a sparse modular small-world topology [2].
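These topological properties can be quantified with standard graph measures. As an illustration only (the thresholding scheme and parameter choices here are our assumptions, not the paper's analysis pipeline), one can binarize a trained recurrent weight matrix and compute modularity and small-worldness with networkx:

```python
import numpy as np
import networkx as nx

def binarize_top_fraction(W, density=0.1):
    """Keep the strongest `density` fraction of absolute off-diagonal weights as edges."""
    A = np.abs(W.copy())
    np.fill_diagonal(A, 0)
    thresh = np.quantile(A[A > 0], 1 - density)
    return (A >= thresh).astype(int)

W = np.random.randn(100, 100) * 0.1                      # stand-in for a trained matrix
G = nx.from_numpy_array(binarize_top_fraction(W))
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

communities = nx.community.greedy_modularity_communities(G)
Q = nx.community.modularity(G, communities)              # modularity of the partition
sigma = nx.sigma(G, niter=2, nrand=2)                    # small-world coefficient (> 1 indicates small-worldness)
print(f"modularity Q = {Q:.2f}, small-worldness sigma = {sigma:.2f}")
```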
Structure-function findings
Mirroring neural tuning [3], functionally similar seRNN units clustered together in space. As in the brain, task-related information has an organized spatial layout across the network.
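A simple way to test for this kind of spatial organization (a sketch under assumed analysis choices, not necessarily the paper's exact statistic) is to ask whether units that sit close together in the 3D box also have similar tuning:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_units, n_conditions = 125, 8
coords = rng.random((n_units, 3))                        # 3D positions of units
tuning = rng.standard_normal((n_units, n_conditions))    # stand-in tuning profiles per unit

dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
func_sim = np.corrcoef(tuning)                           # unit-by-unit tuning similarity

iu = np.triu_indices(n_units, k=1)                       # unique unit pairs
rho, p = spearmanr(dist[iu], func_sim[iu])
print(f"distance vs functional similarity: rho = {rho:.2f}, p = {p:.3f}")
# a negative rho indicates that functionally similar units cluster in space
```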
Functional findings
seRNNs exhibit a mixed-selective [4] and energy-efficient code for solving the task.
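Mixed selectivity can be illustrated with a small regression example; the variables and the specific test here are placeholders rather than the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 500
choice = rng.integers(0, 2, n_trials)        # e.g. chosen response
stimulus = rng.integers(0, 2, n_trials)      # e.g. presented cue

# stand-in for a recorded unit that responds to the *conjunction* of task variables
activity = 0.5 * choice + 0.3 * stimulus + 1.2 * (choice * stimulus) \
           + 0.1 * rng.standard_normal(n_trials)

# regress activity on both variables and their interaction
X = np.column_stack([np.ones(n_trials), choice, stimulus, choice * stimulus])
beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
print("intercept, choice, stimulus, interaction:", np.round(beta, 2))
# a sizeable interaction coefficient marks nonlinear mixed selectivity
```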
Convergent outcomes
The findings emerge in unison in a subgroup of seRNNs that lie within a "sweet spot" [5] of the regularization-strength and training-duration parameter space.
Conclusions
Seemingly unrelated neuroscientific findings can be attributed to the same optimization process.
seRNNs can serve as model systems that bridge structural and functional research communities, moving neuroscientific understanding and AI forward.
[1] van den Heuvel, M. P., et al. Trends in Cognitive Sciences, 2016.
[2] Bullmore, E., & Sporns, O. Nature Reviews Neuroscience, 2012.
[3] Thompson, W. H., & Fransson, P. Scientific Reports, 2018.
[4] Fusi, S., et al. Current Opinion in Neurobiology, 2016.
[5] Akarca, D., et al. Nature Communications, 2021.
We thank UKRI MRC (JA, DA, DEA, JD), Gates Cambridge Scholarship (JA), Cambridge Vice Chancellor’s Scholarship (DA) and DeepMind (DS) for funding.
New investigations at CCN 2023
The findings below were presented by Andrew Siyoon Ham and Cornelia Sheeran at CCN 2023. Since then, we have published them as a preprint: https://arxiv.org/abs/2409.17693
Using spiking neural networks
Replication of original findings: We implemented spatially embedded recurrent spiking neural networks (seRSNNs) that solve a diverse set of neuromorphic tasks, and show that the key structural findings from the original seRNNs replicate in spiking networks.
Topologically distributed cell types: When investigating the time constant as a cell-type characteristic across the network, we find that in seRSNNs cell types are uniquely distributed across the network's structural topology. In particular, neurons that accumulate information over the task duration are spread across the entire recurrent network.
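To illustrate what "time constant as a cell-type characteristic" means in practice, here is a minimal leaky integrate-and-fire sketch with a per-neuron membrane time constant; the actual seRSNN neuron model, reset rule, and training setup may differ:

```python
import numpy as np

def lif_step(v, spikes, inputs, W_rec, tau, dt=1.0, v_th=1.0):
    """One step of a LIF layer in which each neuron has its own membrane time constant tau."""
    alpha = np.exp(-dt / tau)                             # per-neuron leak factor
    v = alpha * v + (1.0 - alpha) * (inputs + W_rec @ spikes)
    spikes = (v >= v_th).astype(float)                    # emit a spike when threshold is crossed
    v = v * (1.0 - spikes)                                # reset membrane after a spike
    return v, spikes

# 100 neurons whose time constants range from fast (short memory) to slow
# (integrating information across the whole task duration)
n = 100
tau = np.linspace(2.0, 50.0, n)
v, spikes = np.zeros(n), np.zeros(n)
W_rec = np.random.randn(n, n) * 0.05
for t in range(200):
    v, spikes = lif_step(v, spikes, np.random.rand(n) * 0.5, W_rec, tau)
```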
Led by Andrew Siyoon Ham: https://2023.ccneuro.org/view_paper.php?PaperNum=1139
Using entropy-based measures
By measuring the networks' structural complexity via Shannon entropy, we find that optimising the network's internal communicative structure creates a more robust and regular structure than in L1 baselines. This likely supports a more ordered flow of information within seRNNs.
These findings hold true for both rate-based and spiking networks.
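As a simple illustration of an entropy-based structural measure (a stand-in, not necessarily the exact quantity used in the preprint), one can compute the Shannon entropy of a network's absolute weight distribution; lower entropy indicates a more regular, ordered connectivity pattern:

```python
import numpy as np

def weight_entropy(W, n_bins=50):
    """Shannon entropy (in bits) of the binned off-diagonal absolute weight distribution."""
    w = np.abs(W[~np.eye(W.shape[0], dtype=bool)])   # off-diagonal weights only
    hist, _ = np.histogram(w, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

W_sernn = np.random.randn(100, 100) * 0.05           # stand-ins for trained recurrent matrices
W_l1 = np.random.randn(100, 100) * 0.10
print(weight_entropy(W_sernn), weight_entropy(W_l1))
```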
Led by Cornelia Sheeran: https://2023.ccneuro.org/view_paper.php?PaperNum=1318