Gabriel Béna 🌻
PhD Student at Imperial College with Dan Goodman. Pretending to be a neuro guy. Modularity, structure-function, resource-constrained ANNs/SNNs, neuromorphic + fun stuff like Neural Cellular Automata 😎 Also working w/ SpiNNCloud on SpiNNaker2.
- Finneas is better at political messaging than 99% of the Democratic consultants in DC.
- These guys are awesome
- The most anticipated article in 5 years on Bon Pote is finally online! "Sustainable funds", "green funds", "ethical funds". In an unprecedented piece of work, we show that the market for "sustainable" funds is nothing but one big smokescreen. Free to read: bonpote.com/les-fonds-du...
- Ok I see, hard to grasp when you don't master the subject. Thanks for your work 😊
- Great article, thanks. How do these conclusions affect "green" life-insurance products like goodvest / greengot??
- How does the structure of a neural circuit shape its function? @neuralreckoning.bsky.social & I explore this in our new preprint: doi.org/10.1101/2025... 🤖🧠🧪 🧵1/9
- All hail the T-wings fleet of pRNNs
- Need a new book. Need an author suggestion. Need to read something I can't put down and read all day and night till I'm crying. I've read them all. I have needs
- Becky Chambers
- We might very well have to make do without it
- Yeah I figured. Glad to hear hope is back; kinda hoping for a similar dynamic to happen in France, although I also don't want to just be waiting for a providential candidate to emerge
- Let's gooo. What was the first if I may ask 😇?
- Damn you Ἀπόλλων
- Fair warning, you might cry reading book 4 🫠
- Psst - neuromorphic folks. Did you know that you can solve the SHD dataset with 90% accuracy using only 22 kb of parameter memory by quantising weights and delays? Check out our preprint with @pengfei-sun.bsky.social and @danakarca.bsky.social, or read the TLDR below. 👇🤖🧠🧪 arxiv.org/abs/2510.27434
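For intuition on how quantisation shrinks parameter memory, here is a minimal sketch of plain uniform quantisation in NumPy. This is an illustrative stand-in, not the preprint's actual scheme; the function name, bit-width, and array shapes are all assumptions for the example.

```python
import numpy as np

def quantize_uniform(w, n_bits=4):
    """Uniformly quantise an array to 2**n_bits levels over its own range."""
    levels = 2 ** n_bits
    lo, hi = float(w.min()), float(w.max())
    step = (hi - lo) / (levels - 1)
    codes = np.round((w - lo) / step)     # integer codes in [0, levels - 1]
    return lo + codes * step              # dequantised weights

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 128))
wq = quantize_uniform(w, n_bits=4)        # at most 16 distinct values
```

At 4 bits per parameter, each weight needs only half a byte of storage instead of four, which is the kind of saving that makes sub-100 kb models plausible.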
- Maybe 6 bits will do the trick
- But can it run doom on 22kb
- I visited Grand Street Senior Housing on the Lower East Side today and tried a little Bomba y Plena…
- You cannot not love this guy
- Thanks!
- What's the status of the original Neuralink? I've not seen anything about it in ages. With their first patient, I gotta admit it: despite it being Musk's thing and all, it was genuinely impressive stuff
- We'll be presenting this at #GECCO2025!! Come say hi if you're around ☀️
- New #Preprint Alert!! 🤖 🧠 🧪 What if we could train neural cellular automata to develop continuous universal computation through gradient descent?! We have started to chart a path toward this goal in our new preprint: arXiv: arxiv.org/abs/2505.13058 Blog: gabrielbena.github.io/blog/2025/be... 🧵⬇️
- Was easy to guess Dan's least fav if you knew of his anarchist tendencies #CommunityOfMind 😁
- NEWSOM: "It's a vulgar display. It's the kind of thing you see with Kim Jong Un, Putin, dictators that are weak… How weak do you have to be to commandeer the military for your birthday?“
- Dayuuuum
- Postdoc swagger ✨
- The REAL question on everyone's lips though... Blog: gabrielbena.github.io/blog/2025/be... Thread: bsky.app/profile/sola...
- Thanks Anand !
- Thank you very much to my co-author Maxence Faldor (maxencefaldor.github.io) and to our supervisors @neuralreckoning.bsky.social and Antoine Cully from Imperial College!! And again: arXiv: arxiv.org/abs/2505.13058 Blog: gabrielbena.github.io/blog/2025/be...
- We will also be present at GECCO 2025, specifically at the EvoSelf Workshop, to present this work: evolving-self-organisation-workshop.github.io See you there, I hope!
- Taking it even further: We're developing a graph-based "Hardware Meta-Network"! Users define tasks as intuitive graphs (nodes = regions, edges = operations), and a GNN + coordinate-MLP generates the hardware configuration! It's literally a compiler from human intent → NCA computation! 🤖
- In conclusion: continuous cellular automata could be universal computers when trained right. This might change how we should think about: - What can compute? - How to design computers? - The future of efficient AI hardware. 🚀 Let's train physics-based computers! 🚀
- Our approach also enables task composition, meaning we can chain operations together! Example: Distribute matrix → Multiply → Rotate → Return to original position It's like programming, but the "execution" is continuous dynamics! We're building a neural compiler!
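At the level of plain functions, chaining operations like this is just composition. A toy sketch of the idea, where ordinary NumPy ops stand in for trained NCA behaviours (the `compose` helper and the example pipeline are illustrative, not part of the preprint's code):

```python
import numpy as np

def compose(*ops):
    """Chain operations left to right, feeding each output into the next."""
    def run(state):
        for op in ops:
            state = op(state)
        return state
    return run

# Toy pipeline standing in for "distribute -> multiply -> rotate -> return":
pipeline = compose(lambda m: 2 * m, np.transpose, lambda m: m + 1)
out = pipeline(np.eye(3))
```

The interesting part in the NCA setting is that each "op" is not a symbolic instruction but a stretch of continuous dynamics, so the pipeline is executed by physics rather than by an interpreter.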
- I quite like this idea of a compiler! Think of it as having dual timescales: - FAST: state / neuronal dynamics (where computation happens) - SLOW: hardware reconfiguration (program flow). This separation mirrors classical computer architecture, but within a continuous, differentiable substrate!
- More on the MNIST demo: We pre-train a linear classifier, decompose the 784×10 matrix multiplication into smaller blocks, and let the NCA process them in PARALLEL! Emulated accuracy: 60% (vs 84%), not perfect due to error accumulation, but it WORKS! This is a neural network running inside a CA! 🤯
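The decomposition itself is ordinary block matrix arithmetic: splitting a matvec into independent column-block products that can run in parallel and be summed. A minimal sketch (the block size and the random classifier weights are illustrative):

```python
import numpy as np

def blockwise_matvec(W, x, block=196):
    """Split W @ x into independent column-block products that could,
    in principle, be computed in parallel, then sum the partial results."""
    parts = [W[:, i:i + block] @ x[i:i + block]
             for i in range(0, W.shape[1], block)]
    return sum(parts)

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))           # e.g. a 784 -> 10 linear classifier
x = rng.normal(size=784)                 # a flattened 28x28 input
y = blockwise_matvec(W, x)               # matches W @ x up to float error
```

In the emulated version, each partial product is computed by NCA dynamics rather than by `@`, which is where the error accumulation behind the 60% vs 84% gap comes from.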
- Btw, this isn't just academic curiosity. We're talking about: 🔸 Analogue computers that could be more efficient than digital ones (without the need to revert to binary-level operations). 🔸 #Neuromorphic computing that mimics how brains actually work. 🔸 Bypassing the von Neumann bottleneck ?
- Through this framework, we are able to successfully train on a variety of computational primitives of matrix arithmetic. Here is an example of the NCA performing matrix translation + rotation directly in its computational state (and, by design, using only local interactions to do so)!
- But here's where it gets REALLY wild... We didn't just train on computational primitives... We then used our pre-trained NCA to emulate a small neural network and solve MNIST digit classification! The entire neural network "lives" inside the CA state space!
- Think of it like having a computing substrate: - Some universal laws of physics apply to every unit of a motherboard / of a brain. - These units are (usually) set up in a fixed, meaningful manner... - But their evolving state (electrical charges / neurochemical patterns) governs the computation.
- But how do we "instruct" the NCA what to do, what task to perform, on which data? Basically, how do we interface with this dynamical substrate to "make" it do interesting computation? This is the role of the hardware! It acts as a translation layer between human intent and the dynamical substrate.
- For those of you who've missed it, quick NCA primer: - Traditional cellular automata = hand-crafted rules (like Conway's Game of Life). - Neural Cellular Automata = local rule learned by a neural network through gradient descent! distill.pub/2020/growing...
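A minimal, illustrative NCA update step in NumPy: every cell perceives its 3x3 neighbourhood with a shared kernel, then applies the same small learned rule. The scalar perception kernel and single linear map here are stand-ins for the learned components, not the paper's architecture:

```python
import numpy as np

def nca_step(state, kernel, rule):
    """One NCA update: each cell perceives its 3x3 neighbourhood with the
    same (3, 3) kernel, then applies the same linear 'rule' locally."""
    H, W, C = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)), mode="wrap")
    perceived = np.zeros_like(state)
    for dy in range(3):
        for dx in range(3):
            perceived += kernel[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return state + np.tanh(perceived @ rule)   # residual, local-only update

rng = np.random.default_rng(0)
state = rng.normal(size=(16, 16, 4))           # 16x16 grid, 4 channels/cell
kernel = rng.normal(size=(3, 3))
rule = rng.normal(size=(4, 4)) * 0.1
new_state = nca_step(state, kernel, rule)
```

The key property: the kernel and rule are shared by every cell, so gradient descent on them is exactly "learning the physics" rather than hand-crafting a rule table.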
- We propose a novel framework that disentangles the concepts of “hardware” and “state” within the NCA. For us: - Rules = "Physics" dictating state transitions. - Hardware = Immutable + heterogeneous scaffold guiding the CA behaviour. - State = Dynamic physical & computational substrate.
- Here's the gist: traditional CAs (think Conway's Game of Life) have been mathematically proven Turing-complete... but designing them is HARD. You have to hand-craft rules through arduous effort. What if instead we could just... train them to compute, offloading the burden? Enter #NCA!
- I'd be 58 by now 😁
- Just discovered King Gizzard at the Capocaccia 2025 #neuromorphic workshop and it's freaking awesome
- Fabien la saucisse has struck again
- Wow shit good to know
- Yeah fr, was just wondering whether some existing tools for scientific citations would be able to not suck at this
- Hahahaha
- What about tools specifically made to cite sources, i.e. Perplexity or others? ChatGPT != all AI
- Really cool work by Dan and Yang!! Thinking about algorithms is as important as new architectures 🔊👂
- How do babies and blind people learn to localise sound without labelled data? We propose that innate mechanisms can provide coarse-grained error signals to bootstrap learning. New preprint from @yang-chu.bsky.social. 🤖🧠🧪 arxiv.org/abs/2001.10605
- I would actually have said the opposite: life is local pockets of computational reducability, like a glider in the game of life
- The Wide Angle: Understanding TESCREAL — the Weird Ideologies Behind Silicon Valley’s Rightward Turn washingtonspectator.org/understandin...
- Much needed to understand the chaos unfolding today 🙏 Older articles from Gebru/Torres are a must read as well
- LFI is now being slandered almost hourly on the news channels. And it's no longer just CNews: this lynching is now happening on BFM and France Info too... We are in the first phase of the fascisation of our country. Let's be aware of it.
- Ah, that... War is peace
- Yes, yes, I agree, especially since the accusations are slanderous, and we saw what doing a Corbyn and apologising brings, but here he's alienating everyone. Especially since there's an inconsistency: why take down the poster if there wasn't even at least a misstep to acknowledge...
- I usually agree, but the old man's latest response on the AI poster was truly catastrophic: indignation as a last resort