Arna Ghosh
Research Scientist at Google Research, working on Bio-inspired AI • PhD at Mila & McGill University, Vanier scholar • Ex-RealityLabs, Meta AI • Comedy+Cricket enthusiast
- Reposted by Arna Ghosh: New paper out at PNAS: www.pnas.org/doi/10.1073/... Revisiting the high-dimensional geometry of population responses in the visual cortex with @jpillowtime.bsky.social. The review took forever because a reviewer was doubtful our new estimator can infer eigenvalues beyond the rank of the data! (1/6)
- Reposted by Arna Ghosh: Are you thinking about doing neuroscience outreach but want to make it more exciting or hands-on? Check out RetINaBox! (A collab led by the Trenholm lab.) We tried to bring the experience of experimental neuroscience to a classroom setting: www.eneuro.org/content/13/1... #neuroscience 🧪
- Whoaaa!! This is a fantastic effort, and an amazing resource. Huge congratulations to the authors! 🎉
- Need more fMRI data (beyond the amazing NSD)? Introducing MOSAIC! Incredible effort led expertly by Ben Lahner, with help from grad student Mayukh Deb. Work in collaboration with the amazing Aude Oliva! @neurosky.bsky.social. More below..
- Reposted by Arna Ghosh: Last day of poster sessions and presentations at @neuripsconf.bsky.social. Full schedule featuring Mila-affiliated researchers presenting their work at #NeurIPS2025 here mila.quebec/en/news/foll...
- In San Diego attending #NeurIPS2025? Come to our poster to talk more about representation geometry in LLMs. 😃 🗓️ Friday 4:30-7:30 pm session 📍 Exhibit Hall C, D, E 🏁 Poster # 2502
- Reposted by Arna Ghosh: (1/n) We are excited to share our new paper in Nature Communications, by Hagar Lavian (@hlavian.bsky.social) and team, revealing how the zebrafish brain integrates visual navigation signals! www.nature.com/articles/s41...
- Reposted by Arna Ghosh: 1/ Why does RL struggle with social dilemmas? How can we ensure that AI learns to cooperate rather than compete? Introducing our new framework: MUPI (Embedded Universal Predictive Intelligence), which provides a theoretical basis for new cooperative solutions in RL. Preprint🧵👇 (Paper link below.)
- Population coding 🙌
- “I will die on the hill that population coding is the relevant level of encoding information in the brain.” In the latest “This paper changed my life,” Nancy Padilla-Coreano discusses a paper on mixed selectivity neurons. #neuroskyence www.thetransmitter.org/this-paper-c...
- Reposted by Arna Ghosh: How I contributed to rejecting one of my favorite papers of all time. Yes, I teach it to students daily, and refer to it in lots of papers. Sorry. open.substack.com/pub/kording/...
- This is an excellent blueprint for a very fascinating use of an AI scientist! And the results are super cool and interesting! 🤩 I have been asked this when talking about our work on using power laws to study representation quality in deep neural networks; glad to have a more concrete answer now! 😃
- 1. New preprint resolving a conundrum in systems neuroscience with an AI scientist, and humans Reilly Tilbury, Dabin Kwon, @haydari.bsky.social, @jacobmratliff.bsky.social, @bio-emergent.bsky.social, @carandinilab.net, @kevinjmiller.bsky.social, @neurokim.bsky.social www.biorxiv.org/content/10.1...
- Reposted by Arna Ghosh: I’m looking for interns to join our lab for a project on foundation models in neuroscience. Funded by @ivado.bsky.social and in collaboration with the IVADO regroupement 1 (AI and Neuroscience: ivado.ca/en/regroupem...). Interested? See the details in the comments. (1/3) 🧠🤖
- Reposted by Arna Ghosh: A tad late (announcements coming), but very happy to share the latest developments in my previous preprint! Previously, we showed that neural representations for control of movement are largely distinct following supervised or reinforcement learning. The latter most closely matches NHP recordings.
- Here’s our latest work at @glajoie.bsky.social and @mattperich.bsky.social ‘s labs! Excited to see this out. We used a combination of neural recordings & modelling to show that RL yields neural dynamics closer to biology, with useful continual learning properties. www.biorxiv.org/content/10.1...
- LLMs are trained to compress data by mapping sequences to high-dim representations! How does the complexity of this mapping change across LLM training? How does it relate to the model’s capabilities? 🤔 Announcing our #NeurIPS2025 📄 that dives into this. 🧵below #AIResearch #MachineLearning #LLM
- 📐We measured representation complexity using the #eigenspectrum of the final layer representations. We used 2 spectral metrics: - Spectral Decay Rate, αReQ: Fraction of variance in non-dominant directions. - RankMe: Effective Rank; #dims truly active. ⬇️αReQ ⇒ ⬆️RankMe ⇒ More complex! 🧵1/9
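A minimal sketch of how two such eigenspectrum metrics could be computed from a matrix of final-layer representations, assuming the common definitions: RankMe as the exponential of the entropy of the normalized singular-value spectrum, and αReQ as the power-law decay exponent of the sorted covariance eigenvalues (the paper's exact estimators may differ; `spectral_metrics` is a hypothetical helper, not code from the paper).

```python
import numpy as np

def spectral_metrics(Z):
    """Eigenspectrum-based complexity metrics for a representation
    matrix Z of shape (n_samples, n_features).
    Returns (alpha, rankme): slower decay (lower alpha) and higher
    effective rank both indicate a more complex mapping.
    """
    # Center features and take singular values of the data matrix
    Zc = Z - Z.mean(axis=0)
    s = np.linalg.svd(Zc, compute_uv=False)
    eig = s**2 / (Z.shape[0] - 1)  # eigenvalues of the feature covariance

    # RankMe: effective rank = exp(entropy of the normalized spectrum)
    p = s / s.sum() + 1e-12
    rankme = np.exp(-np.sum(p * np.log(p)))

    # alpha-ReQ: fit eig_i ~ i^(-alpha) via log-log linear regression
    idx = np.arange(1, len(eig) + 1)
    alpha = -np.polyfit(np.log(idx), np.log(eig + 1e-12), 1)[0]
    return alpha, rankme
```

As a sanity check, an isotropic Gaussian representation has a nearly flat spectrum (effective rank close to the feature dimension), while a low-rank representation collapses to an effective rank near its true rank and a steeper decay.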
- Very cool study, with interesting insights about theta sequences and learning!
- 1/ 🚨 New preprint! 🚨 Excited and proud (& a little nervous 😅) to share our latest work on the importance of #theta-timescale spiking during #locomotion in #learning. If you care about how organisms learn, buckle up. 🧵👇 📄 www.biorxiv.org/content/10.1... 💻 code + data 🔗 below 🤩 #neuroskyence
- Reposted by Arna Ghosh: Together with @repromancer.bsky.social, I have been musing for a while that the exponentiated gradient algorithm we've advocated for comp neuro would work well with low-precision ANNs. This group got it working! arxiv.org/abs/2506.17768 May be a great way to reduce AI energy use!!! #MLSky 🧪
- This looks like a very cool result! 😀 Can't wait to read in detail.
- Does the brain learn by gradient descent? It's a pleasure to share our paper at @cp-cell.bsky.social, showing how mice learning over long timescales display key hallmarks of gradient descent (GD). The culmination of my PhD supervised by @laklab.bsky.social, @saxelab.bsky.social and Rafal Bogacz!
- Fantastic work on Multi-agent RL from @dvnxmvlhdf5.bsky.social & @tyrellturing.bsky.social! 🤩
- Also, big shoutout to @quentin-garrido.bsky.social+gang and @aggieinca.bsky.social+gang for developing RankMe and LiDAR, respectively. Reptrix incorporates these representation quality metrics. 🚀 Let's make it easier to select good SSL/foundation models. 💪
- Are you training self-supervised/foundation models, and worried if they are learning good representations? We got you covered! 💪 🦖Introducing Reptrix, a #Python library to evaluate representation quality metrics for neural nets: github.com/BARL-SSL/rep... 🧵👇[1/6] #DeepLearning
- Inspired by conversations after our α-ReQ paper (NeurIPS 2022) and subsequent work, we created Reptrix as an open-source library for assessing representation quality across models of vision, language… and more. Check out our @mila-quebec.bsky.social blogpost: mila.quebec/en/article/a... [2/6]
- Super cool paper! It formalizes a lot of ideas I have been mulling over the past year, and connects tons of historical ideas neatly. Definitely worth a read if you are working/interested in mechanistic interp and neural representations.
- Just over a week since I defended my 🤖+🧠PhD thesis, and the feeling is just sinking in. Extremely grateful to @tyrellturing.bsky.social for supporting me through this amazing journey! 🙏 Big thanks to all members of the LiNC lab, and colleagues at McGill University and @mila-quebec.bsky.social. ❤️😁
- Reposted by Arna Ghosh: Just giving this a boost for those who may not have seen it yet... we have a PI position (molecular and cellular basis of cognition) at The Hospital for Sick Children (Toronto). The position comes with an appointment at Assist/Assoc Prof level at U of T. Share widely! can-acn.org/scientist-se...
- Come say hi at the noon poster session today in the East hall, poster #2201. 🚀 #NeurIPS2024
- The problem with current SSL? It's hungry. Very hungry. 🤖 Training time: Weeks Dataset size: Millions of images Compute costs: 💸💸💸 Our #NeurIPS2024 poster makes SSL pipelines 2x faster and achieves similar accuracy at 50% pretraining cost! 💪🏼✨ 🧵 1/8