Guillaume Lajoie
Professor at Université de Montréal & Mila -- Québec AI Institute
mathematics - neuroscience - artificial intelligence
- Reposted by Guillaume Lajoie🧠 Launch of the #IVADO Thematic Semester on Reasoning and AI with Aaron Courville, @glajoie.bsky.social and @alisongopnik.bsky.social who welcomed a large number of participants this morning to the first workshop on The Cognitive Basis of Reasoning (in Minds and AI).
- Come join us for this first workshop of a three-part series on the computational ingredients of reasoning in minds and AI. Reasoning is a complex term, especially in light of an exploding category of methods in LLMs. These workshops will explore reasoning’s multiple facets.
- IVADO unveils the schedule of the first workshop "Cognitive Basis of #Reasoning (in Minds and #AI)", Jan 27-29, 2026, spearheaded by @taylorwwebb.bsky.social and Dhanya Sridhar. 🗓️ Schedule and speakers: ivado.ca/en/events/co... 📥 Registration: event.fourwaves.com/thematicseme...
- New eLife paper is out! We explore the link between 2-phase perception/generation learning methods like wake-sleep, and what may happen in the brain under psychedelics. Turns out hallucinations are consistent with hijacking phasic learning, essentially running both wake and sleep phases at once.
- Our paper on the "Oneirogen hypothesis" is now up in its revised form on eLife! This is the hypothesis that psychedelics induce a dream-like state, which we show via modelling could explain a variety of perceptual and learning effects from such drugs. elifesciences.org/reviewed-pre... 🧠📈 🧪
- When we learn complex tasks, we chunk them into sub-tasks that our brains orchestrate into action sequences. How we do this is not entirely understood. This work explores how to learn and internally control temporally abstracted sub-tasks in RL/AI with sequence models. arxiv.org/abs/2512.20605
- work done with amazing colleagues at Google's Paradigms of Intelligence team.
- @tyrellturing.bsky.social does a wonderful breakdown of our new theoretical results in multi-agent cooperation. I’m especially excited for the formalization of mechanisms akin to theory-of-mind and other processes that guide how agents model each other. At Paradigms of Intelligence team, Google.
- Incredibly proud of lab members and collaborators for having presented this work at #NeurIPS2025. As flexible sequence models are rapidly developed for neural data, this work demonstrates that they can be used online and substantially benefit from hybrid SSM architectures.
- Excited to share that POSSM has been accepted to #NeurIPS2025! See you in San Diego 🏖️
- Reposted by Guillaume LajoieThe CTRL-Labs decoding model paper is out! Saw this presented at Cosyne this year, very cool to see it out. I would say this is the clearest demonstration of scaling laws in neural decoding to-date. www.nature.com/articles/s41... 🧠📈 🧪
- Reposted by Guillaume Lajoiewww.programmablemutter.com/p/large-lang... Gopnikism, interactionism, structuralism and role play.
- Compositionality is a central desideratum for intelligent systems...but it's a fuzzy concept and difficult to quantify. In this blog post, lab member @ericelmoznino.bsky.social outlines ideas toward formalizing it & surveys recent work. A must-read for researchers interested in AI and neuroscience.
- Very excited to release a new blog post that formalizes what it means for data to be compositional, and shows how compositionality can exist at multiple scales. Early days, but I think there may be significant implications for AI. Check it out! ericelmoznino.github.io/blog/2025/08...
- Reposted by Guillaume Lajoie🎉 We’re featured by @mila-quebec.bsky.social for our work on immersive, real-world cognitive science. With LABO, researchers can run full XR experiments—no code needed, real behaviour captured. Special thanks to @tyrellturing.bsky.social for being with us from the start! tinyurl.com/yc4wpp3t
- Excited to share recent progress on foundation-like models for neural data. As many use cases for generalizable models demand flexible online deployment, here we focus on a design enabling low latency real time use. We use hybrid SSM architecture & demonstrate various transfer learning capabilities
- Reposted by Guillaume LajoieNew preprint! 🧠🤖 How do we build neural decoders that are: ⚡️ fast enough for real-time use 🎯 accurate across diverse tasks 🌍 generalizable to new sessions, subjects, and even species? We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes! 🧵1/7
- Reposted by Guillaume LajoieReserve your free tickets to SFI's upcoming Community Lecture! lensic.org/events/blais... Blaise Agüera y Arcas presents 'Computing, Life, and Intelligence' at the Lensic on 🗓️ May 20, 7:30pm MT in-person or online.
- Reposted by Guillaume LajoieCheck out our new paper! Vision models often struggle with learning both transformation-invariant and -equivariant representations at the same time. @hafezghm.bsky.social shows that self-supervised prediction with proper inductive biases achieves both simultaneously. (1/4) #MLSky #NeuroAI
- Reposted by Guillaume Lajoiecome see @glajoie.bsky.social presenting our poster this week @iclr-conf.bsky.social ! It will be poster #56 in poster session #3. This work was a collaboration between a bunch of us from @mila-quebec.bsky.social and Luca Mazzucato @neuroai-uoregon.bsky.social :)
- 1/7: Super excited to share our new paper! This one should be of interest to neuroscientists and deep learning theory folks. This paper was a collaboration with Alexandre Payeur, @averyryoo.bsky.social, Thomas Jiralerspong, @mattperich.bsky.social, Luca Mazzucato, @glajoie.bsky.social
- Reposted by Guillaume Lajoie🚨 New Preprint! 🚨 We explore Amortized In-Context Bayesian Posterior Estimation with Niels, @glajoie.bsky.social, Priyank Jaini & @marcusabrubaker.bsky.social ! 🔥 Amortized Conditional Modeling = key to success in large-scale models! We use it to estimate posteriors 🔑 📄 arxiv.org/abs/2502.06601
- Reposted by Guillaume Lajoie🚀 New Preprint! 🚀 In-Context Parametric Inference: Point or Distribution Estimators? Thrilled to share our work on inferring probabilistic model parameters explicitly conditioned on data, in collab with @yoshuabengio.bsky.social, Nikolay Malkin & @glajoie.bsky.social! 🔗 arxiv.org/abs/2502.11617
- Reposted by Guillaume LajoieNew preprint! Excited to share our latest work “Accelerated learning of a noninvasive human brain-computer interface via manifold geometry” ft. outstanding former undergraduate Chandra Fincke, @glajoie.bsky.social, @krishnaswamylab.bsky.social, and @wutsaiyale.bsky.social's Nick Turk-Browne 1/8
- Reposted by Guillaume LajoieVery late, but had a 🔥 time at my first Cosyne presenting my work with @nandahkrishna.bsky.social, Ximeng Mao, @mattperich.bsky.social, and @glajoie.bsky.social on real-time neural decoding with hybrid SSMs. Keep an eye out for a preprint (hopefully) soon 👀 #Cosyne2025 @cosynemeeting.bsky.social
- Fresh updates on our efforts to understand the effects of online error manipulation during learning. Turns out learning a task with assistive devices (think training wheels) changes how credit assignment mechanisms shape neural representations in the brain.
- Hot off the presses: big update to our work looking at how adaptive decoders influence neural representations. We added heroic analyses to show in both experiments & models that the structure of what the brain learns is altered by adaptive decoders. Check it out: www.biorxiv.org/content/10.1...
- If you'll be at the COSYNE workshops, we've got a capstone party planned!
- Coming to the #Cosyne2025 workshops? Wanna dance on the final night? We got you covered. @glajoie.bsky.social and I have organized a party in Tremblant. Come and get on the dance floor y'all. 🕺 April 1st 10PM-3AM Location: Le P'tit Caribou DJs Mat Moebius, Xanarelle, and Prosocial Please share!
- As sequence models and in-context conditioning for inference are being developed to perform all kinds of ML tasks, we make systematic and tractable evaluations to compare point vs. distributional estimates. IMO a key step to scale predictive modeling for general ML.
- 🚀 New Preprint! 🚀 In-Context Parametric Inference: Point or Distribution Estimators? Thrilled to share our work on inferring probabilistic model parameters explicitly conditioned on data, in collab with @yoshuabengio.bsky.social, Nikolay Malkin & @glajoie.bsky.social! 🔗 arxiv.org/abs/2502.11617
- Reposted by Guillaume LajoieThis week, we’re unveiling two members for the AI Insights for Policymakers program: @glajoie.bsky.social (Mila) and Laleh Seyyed-Kalantari (York University). Register here to partner with them and overcome your AI and policy-related challenges: mila.quebec/en/ai4humani...
- Reposted by Guillaume LajoieThe earliest studies on necessary and sufficient neural populations were performed on simple invertebrate circuits. In her latest column, @neurograce.bsky.social asks if this logic still serves us as we tackle more sophisticated outputs. www.thetransmitter.org/systems-neur...
- Reposted by Guillaume Lajoie@theguardian.com has produced an excellent recap of some of the key points of the International AI Safety Report. Full article below: www.theguardian.com/technology/2...
- Reposted by Guillaume LajoieTalk by Guillaume Lajoie at the Montreal AI and Neuroscience (MAIN) Conference on credit assignment in neural networks without plasticity. #neuroscience #neuroAI #AI #compneuro @glajoie.bsky.social www.youtube.com/watch?v=CvCq...
- Long time coming. A very cool project that showcases the advantages of single neuron adaptation in RNNs. #PLOSCompBio: Neural networks with optimized single-neuron adaptation uncover biologically plausible regulari ... dx.plos.org/10.1371/jour... Props to V. Geadah and co-authors!
- Reposted by Guillaume LajoieMila has a booth at @neuripsconf.bsky.social! Come chat with us if you want to join our research institute or to meet with some of our researchers at #104, West Exhibition Hall A.
- Reposted by Guillaume LajoieExcited to release what we’ve been working on at Amaranth Foundation, our latest whitepaper, NeuroAI for AI safety! A detailed, ambitious roadmap for how neuroscience research can help build safer AI systems while accelerating both virtual neuroscience and neurotech. 1/N
- Compositional representations are a key attribute of intelligent systems that generalize well. An issue is that there is no robust way to quantify compositionality. Below is our attempt at such a quantifiable measurement. arxiv.org/abs/2410.148... w/ E Elmoznino & T Jiralerspong & Y Bengio
- In-context learning (ICL) is one of the most exciting parts of the LLM boom. Sequence models (not just LLMs) implement on-the-fly models conditioned on inputs w/o weight updates! Q: are ICL models better than «in-weights» ones? A: sometimes ICL is better than standard opt. tinyurl.com/jbzzfyey
- How continuous neural activity learns and supports discrete, symbolic & compositional processes remains an important question for cog. sci. and AI. In this preprint we explore ways in which both symbolic and sub-symbolic processing could be achieved using attractor dynamics. arxiv.org/abs/2310.01807
- shout out to an amazing co-author gang: Andrew Nam, Eric Elmoznino, Nikolay Malkin, James McClelland, Yoshua Bengio
- New preprint where we ask whether psychedelic-induced hallucinations can be explained by the role of dendrites in learning mechanisms in the brain. In short: classical psychedelics might hijack physiological gating mechanisms in generative learning.
- 1. Hi all: I’m here to advertise our new preprint: www.biorxiv.org/content/10.1..., with Fabrice Normandin, @tyrellturing.bsky.social, and @glajoie.bsky.social!
- Investigating the experimentally-verifiable impact of different credit assignment mechanisms for learning in the brain is a crucial endeavor for computational neuroscience. Here is our take for motor learning and the RL/SL question when looking at neural representations in cortex.
- Here’s our latest work at @glajoie.bsky.social and @mattperich.bsky.social ‘s labs! Excited to see this out. We used a combination of neural recordings & modelling to show that RL yields neural dynamics closer to biology, with useful continual learning properties. www.biorxiv.org/content/10.1...
- Reposted by Guillaume LajoieAs an aside, I also just learned a new word from this paper! It is ultracrepidarianism, which is offering opinions beyond one's knowledge. Man, I know a lot of ultracrepidarian people out there... 😅😘