- To compare pRNN function, we introduce a set of multisensory navigation tasks we call *multimodal mazes*. In these tasks, we simulate networks as agents with noisy sensors, which provide local cues about the shortest path through each maze. We add complexity by removing cues or walls. 🧵4/9
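As a rough illustration of what "noisy sensors providing local cues" could mean, here is a minimal sketch in which each sensory modality reports the direction of the shortest path, corrupted by Gaussian noise. The function name, encoding, and noise level are assumptions for illustration, not the package's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_cues(goal_direction, n_sensors=2, noise=0.5):
    """Illustrative sketch: each sensor reports the shortest-path
    direction (one-hot over 4 moves) plus Gaussian noise."""
    one_hot = np.zeros(4)
    one_hot[goal_direction] = 1.0
    # One noisy reading per modality (e.g. "visual", "auditory")
    return [one_hot + rng.normal(0, noise, size=4) for _ in range(n_sensors)]

cues = noisy_cues(goal_direction=2)
# Averaging across modalities gives a better estimate than any one sensor
estimate = int(np.argmax(np.mean(cues, axis=0)))
```

The point of the multisensory setup is exactly this: no single noisy reading is reliable, so the agent benefits from combining modalities.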
- We trained over 25,000 pRNNs on these tasks and measured their: 📈 Fitness (task performance) 💹 Learning speed 📉 Robustness to various perturbations (e.g. increasing sensor noise) From these data, we reach three main conclusions. 🧵5/9
- First, across tasks and functional metrics, many pRNN architectures perform as well as the fully recurrent architecture, despite having fewer pathways and as few as ¼ the number of parameters. This shows that pRNNs are efficient, yet performant. 🧵6/9
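To see why removing pathways shrinks the parameter count so much, here is a back-of-the-envelope sketch. Each pathway between populations contributes one weight matrix; dropping the hidden-hidden pathway removes the largest one. The pathway names and layer sizes below are illustrative assumptions, not the paper's actual architectures:

```python
def n_params(pathways, sizes):
    """Count weights in a pRNN sketch: each pathway (src, dst)
    contributes a sizes[src] x sizes[dst] weight matrix.
    Names and sizes are hypothetical."""
    return sum(sizes[s] * sizes[d] for s, d in pathways)

sizes = {"in": 16, "hid": 64, "out": 4}
full = [("in", "hid"), ("hid", "hid"), ("hid", "out")]   # fully recurrent
partial = [("in", "hid"), ("hid", "out")]                # hidden-hidden removed

print(n_params(full, sizes))     # 5376
print(n_params(partial, sizes))  # 1280
```

Because the hidden-hidden matrix scales quadratically in the hidden size, removing it alone can cut the parameter count by a large fraction.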
- Second, to isolate how each pathway changes network function, we compare pairs of circuits which differ by one pathway. Across pairs, we find that pathways have context-dependent effects. E.g. here hidden-hidden connections decrease learning speed in one task but accelerate it in another. 🧵7/9
- Third, to explore why different circuits function differently, we measured 3 traits from every network. We find that different architectures learn distinct sensitivities and memory dynamics which shape their function. E.g. we can predict a network’s robustness to noise from its memory. 🧵8/9
- We’re excited about this work as it: ⭐ Explores a fundamental question: how does structure sculpt function in artificial and biological networks? ⭐ Provides new models (pRNNs), tasks (multimodal mazes) and tools, in a pip-installable package: github.com/ghoshm/Multi... 🧵9/9

Aug 1, 2025 08:27