Top-down feedback is ubiquitous in the brain and computationally distinct, but rarely modeled in deep neural networks. What happens when a DNN has biologically inspired top-down feedback? 🧠📈
Our new paper explores this:
elifesciences.org/reviewed-pre...
Top-down feedback matters: Functional impact of brainlike connectivity motifs on audiovisual integration
What does it mean to have “biologically inspired top-down feedback”? In the brain, feedback does not drive pyramidal neurons directly; instead, it modulates the feedforward signal, both multiplicatively and additively, as described in Larkum et al., 2004.
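In model terms, one simple way to write that kind of modulation (a minimal sketch with names of our own choosing, not the exact update from the paper):

```python
import torch

def apply_modulation(ff, fb_gain, fb_bias):
    """Modulatory feedback: rescales (multiplicative) and shifts (additive)
    the driving feedforward signal ff. Names and exact form are illustrative."""
    return (1.0 + torch.tanh(fb_gain)) * ff + fb_bias
```

Note that with fb_gain = fb_bias = 0, the unit passes its feedforward drive through unchanged; feedback adjusts the gain and offset of that drive rather than replacing it.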
To model top-down feedback in neocortex, we built a freely available codebase that can be used to construct multi-input, topological, top-down and laterally recurrent DNNs that mimic neural anatomy. (github.com/masht18/conn...)
Each brain region is modeled as a recurrent convolutional network that can receive two different types of input: driving feedforward and modulatory feedback. With this code, users can specify macroscopic connectivity to build anatomically constrained DNNs.
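To give a flavor of the region-level structure (our own illustrative sketch in PyTorch; the repo's actual region class differs in detail):

```python
import torch
import torch.nn as nn

class RecurrentConvRegion(nn.Module):
    """One 'brain region': a conv layer with lateral recurrence whose drive
    can be modulated by top-down feedback. Class and parameter names are our
    illustration, not the repo's actual API."""
    def __init__(self, channels):
        super().__init__()
        self.ff_conv = nn.Conv2d(channels, channels, 3, padding=1)  # driving input
        self.lateral = nn.Conv2d(channels, channels, 3, padding=1)  # lateral recurrence
        self.fb_gain = nn.Conv2d(channels, channels, 1)             # feedback -> gain
        self.fb_bias = nn.Conv2d(channels, channels, 1)             # feedback -> bias

    def forward(self, ff, state, fb=None):
        drive = self.ff_conv(ff) + self.lateral(state)
        if fb is not None:
            # Modulatory feedback scales and shifts the drive it arrives at
            drive = (1.0 + torch.tanh(self.fb_gain(fb))) * drive + self.fb_bias(fb)
        return torch.relu(drive)  # becomes the regional state at the next timestep
```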
As an initial test, we wanted to see how modulatory feedback shapes computation. To do this, we built an audio-visual model based on human anatomy from the BigBrain and MICA-MICs datasets, and trained it to classify ambiguous stimuli.
To test the impact of different anatomies of modulatory feedback, we compared the performance of the model based on human anatomy against identically sized models with different configurations of feedforward/feedback connectivity.
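Concretely, that comparison amounts to swapping the macroscopic connectivity specification while keeping the regions and parameter count fixed. A hypothetical example of two such specifications (region names and format are ours, not the repo's):

```python
# Each entry maps (source, target) -> projection type:
# "ff" = driving feedforward, "fb" = modulatory feedback.
human_like = {
    ("visual", "multimodal"):   "ff",
    ("auditory", "multimodal"): "ff",
    ("multimodal", "visual"):   "fb",
    ("multimodal", "auditory"): "fb",
}

# A control with the same regions and parameter count, but with the
# top-down projections made driving rather than modulatory.
all_driving = {edge: "ff" for edge in human_like}
```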
Interestingly, compared to other models, the human brain-based model was particularly proficient at ignoring irrelevant audio stimuli that didn’t help to resolve ambiguities.
Conversely, when trained on a similar set of auditory categorization tasks, the human brain-based model was the best at integrating helpful visual information to resolve auditory ambiguity.
We found that the brain-based model still had a visual bias even after being trained on auditory tasks. But this bias didn’t hamper the model’s overall performance, and it mimics a consistently observed human visual bias (Posner et al., 1974).
The models were then trained to identify either the auditory or the visual stimulus based on an attention cue. The visual bias not only persisted but also helped the brainlike model learn to ignore distracting audio more quickly than the other models.
To summarize, we built a codebase for creating DNNs with top-down feedback, and we used it to examine the impact of top-down feedback on audio-visual integration tasks.
We found that top-down feedback, as implemented in our models, helps to determine the set of solutions available to the networks and the regional specializations that they develop.
These results show that modulatory top-down feedback has unique computational implications. As such, we believe top-down feedback should be incorporated into DNN models of the brain more often. Our codebase makes that easy!
We'd like to thank @elife.bsky.social and the reviewers for a very constructive review experience. Thanks as well to our funders, in particular HIBALL, CIFAR, and NSERC. This work was supported with computational resources by @mila-quebec.bsky.social and the Digital Research Alliance of Canada.