Guillaume Lajoie
Professor at Université de Montréal & Mila -- Québec AI Institute
mathematics - neuroscience - artificial intelligence
- Come join us for this first workshop of a three-part series on the computational ingredients of reasoning in minds and AI. Reasoning is a loaded term, especially in light of the exploding family of reasoning methods for LLMs. These workshops will explore its multiple facets.
- IVADO unveils the schedule of the first workshop "Cognitive Basis of #Reasoning (in Minds and #AI)", Jan 27-29, 2026, spearheaded by @taylorwwebb.bsky.social and Dhanya Sridhar. 🗓️ Schedule and speakers: ivado.ca/en/events/co... 📥 Registration: event.fourwaves.com/thematicseme...
- New eLife paper is out! We explore the link between two-phase perception/generation learning methods like wake-sleep and what may happen in the brain on psychedelics. Turns out hallucinations are consistent with hijacked phasic learning, essentially running both wake and sleep phases at once.
- Our paper on the "Oneirogen hypothesis" is now up in its revised form on eLife! This is the hypothesis that psychedelics induce a dream-like state, which we show via modelling could explain a variety of perceptual and learning effects from such drugs. elifesciences.org/reviewed-pre... 🧠📈 🧪
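For intuition, here is a minimal numpy sketch of the two-phase wake-sleep idea, assuming a linear one-layer recognition/generation pair; the model, learning rates, and the "mixed" rule at the end are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_x, dim_z, lr = 8, 3, 0.05

# Hypothetical toy data source the model should capture.
W_true = rng.normal(size=(dim_x, dim_z))
def sample_data(n):
    z = rng.normal(size=(n, dim_z))
    return z @ W_true.T + 0.1 * rng.normal(size=(n, dim_x))

G = rng.normal(scale=0.1, size=(dim_x, dim_z))  # generative weights (z -> x)
R = rng.normal(scale=0.1, size=(dim_z, dim_x))  # recognition weights (x -> z)

def wake_step(x):
    """Wake: infer latents with the recognition model, update *generative* weights."""
    global G
    z = x @ R.T
    x_hat = z @ G.T
    G += lr * (x - x_hat).T @ z / len(x)

def sleep_step(n=64):
    """Sleep: dream samples from the generative model, update *recognition* weights."""
    global R
    z = rng.normal(size=(n, dim_z))
    x_dream = z @ G.T
    z_hat = x_dream @ R.T
    R += lr * (z - z_hat).T @ x_dream / n

for step in range(500):
    wake_step(sample_data(64))
    sleep_step()

# Hypothesized "oneirogen" regime (illustrative only): both phases active at
# once, so dreamed content gets blended with waking input.
def mixed_step(x, mix=0.5):
    z = rng.normal(size=(len(x), dim_z))
    x_in = (1 - mix) * x + mix * (z @ G.T)  # percept blended with dream
    wake_step(x_in)
```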
- When we learn complex tasks, we chunk them into sub-tasks that our brains orchestrate into action sequences. How we do this is not entirely understood. This work explores how to learn and internally control temporally abstracted sub-tasks in RL/AI with sequence models. arxiv.org/abs/2512.20605
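As a toy illustration of temporal abstraction (not the paper's architecture), here is a sketch where a high-level policy commits to a sub-task for a few steps while a low-level policy picks primitive actions conditioned on it; the lookup tables and transition rule are hypothetical stand-ins for learned sequence models.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_subtasks, n_actions = 6, 3, 4

# Hypothetical learned tables standing in for sequence-model policies.
high_level = rng.random((n_states, n_subtasks))            # picks a sub-task
low_level = rng.random((n_subtasks, n_states, n_actions))  # acts within it

def rollout(state, horizon=10, commit=3):
    """High level re-selects a sub-task every `commit` steps (temporal
    abstraction); low level picks primitive actions conditioned on it."""
    actions = []
    for t in range(horizon):
        if t % commit == 0:
            subtask = int(np.argmax(high_level[state]))
        action = int(np.argmax(low_level[subtask, state]))
        actions.append((subtask, action))
        state = (state + action) % n_states  # toy deterministic transition
    return actions

print(rollout(state=0))
```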
- Work done with amazing colleagues on Google's Paradigms of Intelligence team.
- @tyrellturing.bsky.social does a wonderful breakdown of our new theoretical results in multi-agent cooperation. I’m especially excited about the formalization of mechanisms akin to theory-of-mind and other processes that guide how agents model each other. Work from the Paradigms of Intelligence team at Google.
- Incredibly proud of lab members and collaborators for having presented this work at #NeurIPS2025. As flexible sequence models are rapidly developed for neural data, this work demonstrates that they can be used online and substantially benefit from hybrid SSM architectures.
- Excited to share that POSSM has been accepted to #NeurIPS2025! See you in San Diego 🏖️
- Compositionality is a central desideratum for intelligent systems...but it's a fuzzy concept and difficult to quantify. In this blog post, lab member @ericelmoznino.bsky.social outlines ideas toward formalizing it & surveys recent work. A must-read for researchers interested in AI and neuroscience.
- Very excited to release a new blog post that formalizes what it means for data to be compositional, and shows how compositionality can exist at multiple scales. Early days, but I think there may be significant implications for AI. Check it out! ericelmoznino.github.io/blog/2025/08...
- Excited to share recent progress on foundation-like models for neural data. Since many use cases for generalizable models demand flexible online deployment, here we focus on a design enabling low-latency, real-time use. We use a hybrid SSM architecture & demonstrate various transfer-learning capabilities.
- Reposted by Guillaume Lajoie: New preprint! 🧠🤖 How do we build neural decoders that are: ⚡️ fast enough for real-time use 🎯 accurate across diverse tasks 🌍 generalizable to new sessions, subjects, and even species? We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes! 🧵1/7
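For readers curious what "fast enough for real-time use" can look like, here is a minimal sketch of an SSM-style online decoder, assuming a diagonal linear recurrence with a small nonlinear readout; this illustrates the constant-latency idea only, not POSSM's actual architecture, and all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_state, n_out = 32, 64, 2  # e.g. spike features -> 2D cursor velocity

# Hypothetical parameters; in practice these would be trained end to end.
A = np.exp(-rng.uniform(0.01, 0.5, n_state))    # stable diagonal recurrence
B = rng.normal(scale=0.1, size=(n_state, n_in))
W1 = rng.normal(scale=0.1, size=(n_state, n_state))
W2 = rng.normal(scale=0.1, size=(n_out, n_state))

def decode_stream(spike_features):
    """Online decoding: one O(state) recurrent update per time bin, so latency
    stays constant per step (unlike attention over a growing context)."""
    h = np.zeros(n_state)
    for x_t in spike_features:        # x_t: binned spike counts/features
        h = A * h + B @ x_t           # diagonal SSM recurrence
        yield W2 @ np.tanh(W1 @ h)    # nonlinear readout (the "hybrid" part)

stream = rng.poisson(1.0, size=(100, n_in))
outputs = list(decode_stream(stream))
```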
- Reposted by Guillaume Lajoie: Check out our new paper! Vision models often struggle with learning both transformation-invariant and -equivariant representations at the same time. @hafezghm.bsky.social shows that self-supervised prediction with proper inductive biases achieves both simultaneously. (1/4) #MLSky #NeuroAI
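A quick sketch of the invariance/equivariance distinction the paper targets, using toy 1D "images" and circular shifts; the two functions below are illustrative examples, not the paper's learned representations.

```python
import numpy as np

rng = np.random.default_rng(8)

def shift(x, k=1):
    """The transformation T: a circular shift of the input."""
    return np.roll(x, k)

# Two hypothetical representations of a 1D "image":
f_inv = lambda x: np.sort(x)   # invariant: f(T x) == f(x)
f_eq  = lambda x: x * 2.0      # equivariant: f(T x) == T f(x)

x = rng.normal(size=8)
print(np.allclose(f_inv(shift(x)), f_inv(x)))        # True: invariance
print(np.allclose(f_eq(shift(x)), shift(f_eq(x))))   # True: equivariance
```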
- Reposted by Guillaume Lajoie: New preprint! Excited to share our latest work “Accelerated learning of a noninvasive human brain-computer interface via manifold geometry” ft. outstanding former undergraduate Chandra Fincke, @glajoie.bsky.social, @krishnaswamylab.bsky.social, and @wutsaiyale.bsky.social's Nick Turk-Browne 1/8
- Reposted by Guillaume Lajoie: Very late, but had a 🔥 time at my first Cosyne presenting my work with @nandahkrishna.bsky.social, Ximeng Mao, @mattperich.bsky.social, and @glajoie.bsky.social on real-time neural decoding with hybrid SSMs. Keep an eye out for a preprint (hopefully) soon 👀 #Cosyne2025 @cosynemeeting.bsky.social
- Fresh updates on our efforts to understand the effects of online error manipulation during learning. Turns out learning a task with assistive devices (think training wheels) changes how credit assignment mechanisms shape neural representations in the brain.
- Reposted by Guillaume Lajoie: 1/7: Super excited to share our new paper! This one should be of interest to neuroscientists and deep learning theory folks. This paper was a collaboration with Alexandre Payeur, @averyryoo.bsky.social, Thomas Jiralerspong, @mattperich.bsky.social, Luca Mazzucato, @glajoie.bsky.social
- If you'll be at the COSYNE workshops, we've got a capstone party planned!
- Coming to the #Cosyne2025 workshops? Wanna dance on the final night? We got you covered. @glajoie.bsky.social and I have organized a party in Tremblant. Come and get on the dance floor y'all. 🕺 April 1st 10PM-3AM Location: Le P'tit Caribou DJs Mat Moebius, Xanarelle, and Prosocial Please share!
- As sequence models and in-context conditioning for inference are being developed to perform all kinds of ML tasks, we run systematic and tractable evaluations comparing point vs. distributional estimates. IMO a key step toward scaling predictive modeling for general ML.
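To make the point-vs-distributional distinction concrete, here is a toy evaluation sketch assuming a Gaussian predictive distribution; the models and numbers are hypothetical, not the paper's benchmarks.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=2.0, scale=1.5, size=1000)  # held-out targets

# Two hypothetical model outputs for the same data:
point_pred = 2.0                        # point estimate
dist_pred = {"mu": 2.0, "sigma": 1.5}   # distributional estimate

mse = np.mean((y - point_pred) ** 2)    # scores only the predicted mean
nll = np.mean(0.5 * np.log(2 * np.pi * dist_pred["sigma"] ** 2)
              + (y - dist_pred["mu"]) ** 2 / (2 * dist_pred["sigma"] ** 2))

print(f"MSE (point): {mse:.3f}")  # blind to any predicted uncertainty
print(f"NLL (dist.): {nll:.3f}")  # also rewards calibrated uncertainty
```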
- Long time coming. A very cool project that showcases the advantages of single-neuron adaptation in RNNs. #PLOSCompBio: Neural networks with optimized single-neuron adaptation uncover biologically plausible regularization. dx.plos.org/10.1371/jour... Props to V. Geadah and co-authors!
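The rough ingredient: each unit carries slow per-neuron variables that reshape its own activation function based on recent activity. A hedged numpy sketch follows; the adaptation rule here is illustrative, not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50  # neurons

W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
gain = np.ones(n)     # per-neuron activation gain (adaptive)
thresh = np.zeros(n)  # per-neuron threshold (adaptive)
tau_a = 20.0          # slow adaptation timescale

def step(h, x):
    """One RNN step with per-neuron adaptive activations: each unit's gain
    and threshold drift slowly with its own recent activity."""
    global gain, thresh
    u = W @ h + x
    h_new = np.tanh(gain * (u - thresh))
    # Slow homeostatic-style adaptation (illustrative rule only):
    thresh += (h_new - thresh) / tau_a
    gain += (1.0 - np.abs(h_new) - gain) / tau_a
    return h_new

h = np.zeros(n)
for t in range(200):
    h = step(h, 0.5 * rng.normal(size=n))
```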
- Compositional representations are a key attribute of intelligent systems that generalize well. One issue is that there is no robust way to quantify compositionality. Here is our attempt at such a quantifiable measure. arxiv.org/abs/2410.148... w/ E Elmoznino & T Jiralerspong & Y Bengio
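One intuition behind a complexity-based measure, sketched with compression as a crude stand-in for description length; this illustrates the general idea only, not the paper's formal definition.

```python
import zlib, random

def dl(s: str) -> int:
    """Crude description length: compressed size in bytes (a proxy for
    Kolmogorov complexity, which is uncomputable)."""
    return len(zlib.compress(s.encode()))

random.seed(0)
parts = ["red", "blue", "circle", "square"]

# Compositional data: long sequences built by recombining a few parts.
compositional = " ".join(random.choice(parts) for _ in range(200))
# Non-compositional control: same length, no reusable structure.
unstructured = "".join(random.choice("abcdefghijklmnop")
                       for _ in range(len(compositional)))

# Compositional data admits a short description (parts + combination rule),
# so it compresses far better than the unstructured control.
print(dl(compositional), dl(unstructured))
```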
- In-context learning (ICL) is one of the most exciting parts of the LLM boom. Sequence models (not just LLMs) implement on-the-fly models conditioned on inputs w/o weight updates! Q: are ICL models better than "in-weights" ones? A: sometimes ICL beats standard optimization. tinyurl.com/jbzzfyey
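A toy analogue of the ICL-vs-in-weights comparison, using least squares as the "model"; this is purely illustrative, since the paper studies trained sequence models.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 5

# "Pretraining" task and a new task seen only at inference time.
w_train, w_new = rng.normal(size=d), rng.normal(size=d)
X_pre = rng.normal(size=(200, d)); y_pre = X_pre @ w_train

# In-weights model: parameters fixed after training.
w_hat = np.linalg.lstsq(X_pre, y_pre, rcond=None)[0]

def in_context_predict(X_ctx, y_ctx, x_query):
    """ICL analogue: condition on prompt examples at inference time,
    with no updates to any stored parameters."""
    w_ctx = np.linalg.lstsq(X_ctx, y_ctx, rcond=None)[0]
    return x_query @ w_ctx

X_ctx = rng.normal(size=(20, d)); y_ctx = X_ctx @ w_new
x_q = rng.normal(size=d)

print("in-weights error:", abs(x_q @ w_hat - x_q @ w_new))
print("in-context error:", abs(in_context_predict(X_ctx, y_ctx, x_q) - x_q @ w_new))
```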
- How continuous neural activity can learn and support discrete, symbolic & compositional processes remains an important question for cog. sci. and AI. In this preprint we explore ways in which both symbolic and sub-symbolic processing could be achieved using attractor dynamics. arxiv.org/abs/2310.01807
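For intuition, a classic way continuous dynamics can realize discrete symbols is via attractors, as in this minimal Hopfield-style sketch; it illustrates the attractor idea, not the preprint's model.

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_symbols = 100, 3

# Discrete "symbols" stored as attractors of continuous-valued dynamics.
patterns = rng.choice([-1.0, 1.0], size=(n_symbols, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

def settle(x, steps=50):
    """Continuous dynamics relaxing onto a discrete attractor (a symbol)."""
    for _ in range(steps):
        x = np.tanh(3.0 * (W @ x))
    return x

# A corrupted input settles back to the nearest stored symbol.
noisy = patterns[0] + 0.8 * rng.normal(size=n)
recovered = settle(noisy)
print("overlap with symbol 0:", np.sign(recovered) @ patterns[0] / n)
```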
- New preprint where we ask whether psychedelic-induced hallucinations can be explained by the role of dendrites in the brain's learning mechanisms. In short: classical psychedelics might hijack physiological gating mechanisms in generative learning.
- 1. Hi all: I’m here to advertise our new preprint: www.biorxiv.org/content/10.1..., with Fabrice Normandin, @tyrellturing.bsky.social, and @glajoie.bsky.social!
- Investigating the experimentally verifiable impact of different credit assignment mechanisms for learning in the brain is a crucial endeavor for computational neuroscience. Here is our take for motor learning and the RL-vs-SL question when looking at neural representations in cortex.
- Here’s our latest work at @glajoie.bsky.social and @mattperich.bsky.social ‘s labs! Excited to see this out. We used a combination of neural recordings & modelling to show that RL yields neural dynamics closer to biology, with useful continual learning properties. www.biorxiv.org/content/10.1...
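To see why RL-style and SL-style credit assignment can shape solutions differently, here is a toy contrast between an error-driven (SL) update and a reward-driven node-perturbation (RL) update on a linear readout; everything below is a hypothetical stand-in for the paper's models and recordings.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 10
w_star = rng.normal(size=d)  # hypothetical target motor mapping

def trial(w, x):
    return float(w @ x)

w_sl, w_rl, lr, sigma = np.zeros(d), np.zeros(d), 0.05, 0.1
for _ in range(2000):
    x = rng.normal(size=d)
    target = w_star @ x

    # SL-style credit assignment: the signed error vector is available.
    err = trial(w_sl, x) - target
    w_sl -= lr * err * x

    # RL-style credit assignment: only a scalar reward for a perturbed try.
    noise = sigma * rng.normal(size=d)
    reward_base = -(trial(w_rl, x) - target) ** 2
    reward_pert = -(trial(w_rl + noise, x) - target) ** 2
    w_rl += lr * (reward_pert - reward_base) * noise / sigma**2

print("SL error:", np.linalg.norm(w_sl - w_star))
print("RL error:", np.linalg.norm(w_rl - w_star))
```

Both rules can reach the target mapping, but they take different paths through weight space, which is one reason the resulting neural representations can look different.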