Samuel Liebana
Research Fellow at the Gatsby Unit, UCL
Q: How do we learn?
- Honored to have a research highlight featuring our work! A comprehensive overview of our results and their impact for future research and applications: www.nature.com/articles/s41...
- Reposted by Samuel Liebana: I’m super excited to finally put my recent work with @behrenstimb.bsky.social on bioRxiv, where we develop a new mechanistic theory of how PFC structures adaptive behaviour using attractor dynamics in space and time! www.biorxiv.org/content/10.1...
- Reposted by Samuel Liebana: In our Learning Club @cmc-lab.bsky.social today (Aug 18, Thu, 2pm CET), Samuel Liebana will tell us about his paper (www.cell.com/cell/fulltex...) [joint work w/ @saxelab.bsky.social & @laklab.bsky.social]. Want to attend? Send an empty email to virtual-talk-link-request@cmclab.org to get the link!
- Reposted by Samuel Liebana: 🚨Our preprint is online!🚨 www.biorxiv.org/content/10.1... How do #dopamine neurons perform the key calculations in reinforcement #learning? Read on to find out more! 🧵
- Reposted by Samuel Liebana: Beautiful and clear results showing that temporal difference error calculation is hardwired in the dopamine/striatum microcircuits: www.biorxiv.org/content/10.1... from @malcolmgcampbell.bsky.social and @naoshigeuchida.bsky.social
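For readers unfamiliar with the quantity mentioned in the two posts above, the temporal difference (TD) error is the standard reward-prediction-error signal from reinforcement learning. Below is a minimal Python sketch of a TD(0) value update; the function and parameter names are illustrative and not taken from the preprint.

```python
# Minimal sketch of a TD(0) update for a tabular state-value estimate V.
# alpha (learning rate) and gamma (discount factor) are illustrative choices.
import numpy as np

def td_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
    """One TD(0) step: delta = r + gamma * V(s') - V(s); V(s) moves toward the target."""
    delta = r + gamma * V[s_next] - V[s]   # TD error (reward prediction error)
    V[s] += alpha * delta                  # nudge the value estimate by the error
    return V, delta

# Toy usage on a 3-state chain: the first rewarded transition yields delta = 1.0
V = np.zeros(3)
V, delta = td_update(V, s=0, r=1.0, s_next=1)
print(V, delta)
```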
- Reposted by Samuel Liebana: How does in-context learning emerge in attention models during gradient descent training? Sharing our new Spotlight paper @icmlconf.bsky.social: Training Dynamics of In-Context Learning in Linear Attention (arxiv.org/abs/2501.16265). Led by Yedi Zhang with @aaditya6284.bsky.social and Peter Latham
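As context for the post above, "linear attention" replaces the softmax in standard attention with a linear kernel. A minimal sketch of a single linear-attention layer follows; shapes, initialisation, and normalisation here are illustrative assumptions, and the paper's exact setup may differ.

```python
# Minimal sketch of one linear-attention layer: outputs = (Q K^T) V, no softmax.
# Dimensions and initialisation are arbitrary choices for illustration.
import numpy as np

def linear_attention(X, W_q, W_k, W_v):
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    return (Q @ K.T) @ V / X.shape[0]      # linear attention scores, averaged over tokens

rng = np.random.default_rng(0)
d = 4
X = rng.normal(size=(8, d))                # a context of 8 token embeddings
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
print(linear_attention(X, W_q, W_k, W_v).shape)   # (8, 4)
```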
- Reposted by Samuel Liebana: Excited to share new work @icmlconf.bsky.social by Loek van Rossem exploring the development of computational algorithms in recurrent neural networks. Hear it live tomorrow, Oral 1D, Tues 15 Jul, West Exhibition Hall C: icml.cc/virtual/2025... Paper: openreview.net/forum?id=3go... (1/11)
- Does the brain learn by gradient descent? It's a pleasure to share our paper at @cp-cell.bsky.social, showing how mice learning over long timescales display key hallmarks of gradient descent (GD). The culmination of my PhD supervised by @laklab.bsky.social, @saxelab.bsky.social and Rafal Bogacz!
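To make the gradient descent comparison in the post above concrete, here is a generic gradient descent loop on a quadratic loss. It illustrates only the textbook update rule w ← w − η ∇L(w), not the specific models or analyses in the paper.

```python
# Generic gradient descent on a mean-squared-error loss (illustration only).
import numpy as np

def grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)      # dL/dw for L = 0.5 * mean((Xw - y)^2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
w_true = np.array([1.0, -2.0])
y = X @ w_true

w = np.zeros(2)
eta = 0.1                                   # learning rate
for _ in range(200):
    w -= eta * grad(w, X, y)                # gradient descent update

print(np.round(w, 3))                       # converges toward w_true
```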