@dataonbrainmind.bsky.social starting now in Room 10 with opening remarks from
@crji.bsky.social and the first invited talk from
@dyamins.bsky.social!
2️⃣ days until
#NeurIPS2025 Data on the Brain & Mind workshop! 🧠💭🤖 Join us on Dec 7 for a full-day interactive session 8am-5pm PT.
Authors, please remember to RSVP for our mentorship lunch 🥙 generously supported by
@kavlifoundation.org and
@simonsfoundation.org (
@flatironinstitute.org)
Thrilled to start 2026 as faculty in Psych & CS
@ualberta.bsky.social +
Amii.ca Fellow! 🥳 Recruiting students to develop theories of cognition in natural & artificial systems 🤖💭🧠. Find me at
#NeurIPS2025 workshops (speaking at
coginterp.github.io/neurips2025 & organising
@dataonbrainmind.bsky.social)
Hoping you find out and share! 🤗
I've been waiting some years to make this joke and now it’s real:
I conned somebody into giving me a faculty job!
I’m starting as a W1 Tenure-Track Professor at Goethe University Frankfurt in a week (lol), in the Faculty of CS and Math
and I'm recruiting PhD students 🤗
Congrats Richard!!
I’m recruiting committee members for the Technical Program Committee at
#CCN2026.
Please apply if you want to help make submission, review & selection of contributed work (Extended Abstracts & Proceedings) more useful for everyone! 🌐
Helps to have: programming/communications/editorial experience.
Are similar representations in neural nets evidence of shared computation? In new theory work w/ Lukas Braun (
lukasbraun.com) &
@saxelab.bsky.social, we prove that representational comparisons are ill-posed in general, unless networks are efficient.
@icmlconf.bsky.social @cogcompneuro.bsky.social
many thanks to my collaborators,
@saxelab.bsky.social and especially Lukas :)
Definitely! Task constraints certainly play a role in determining representational structure, which might interact with what we consider here (efficiency of implementation). We don't explicitly study it. Someone should!
I like how Rosa Cao (
sites.google.com/site/luosha) &
@dyamins.bsky.social speculated about task constraints here (
doi.org/10.1016/j.co...). I think the Platonic Representation hypothesis is a version of their argument, for multi-modal learning.
Our theory predicts that representational alignment is consistent with *efficient* implementation of similar function. Comparing representations is ill-posed in general, but becomes well-posed under minimum-norm constraints, which we link to computational advantages (noise robustness).
Main takeaway: Valid representational comparison relies on implicit assumptions (task-optimization *plus* efficient implementation). ⚠️ More work to do on making these assumptions explicit!
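To make the minimum-norm point concrete, here is a toy NumPy sketch (my own construction, not code from the paper): the rescaling symmetry W1 → a·W1, W2 → W2/a preserves the network function, but noise robustness singles out the balanced, minimum-norm member of the family.

```python
# Toy sketch (my construction, not the paper's code): rescaling
# W1 -> a*W1, W2 -> W2/a leaves the function of a two-layer linear
# network unchanged, but weight noise hurts unbalanced solutions most,
# linking minimum-norm solutions to noise robustness.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(5, 3))   # input -> hidden
W2 = rng.normal(size=(2, 5))   # hidden -> output
X = rng.normal(size=(3, 500))  # a batch of inputs

def mean_output_error(a, sigma=0.01, trials=200):
    """Average output perturbation when i.i.d. noise hits the rescaled weights."""
    A1, A2 = a * W1, W2 / a
    Y = A2 @ A1 @ X  # identical for every a: same function
    errs = [np.linalg.norm((A2 + sigma * rng.normal(size=A2.shape)) @
                           (A1 + sigma * rng.normal(size=A1.shape)) @ X - Y)
            for _ in range(trials)]
    return np.mean(errs)

for a in (0.1, 1.0, 10.0):
    print(f"a = {a:>4}: mean output error = {mean_output_error(a):.3f}")
# errors blow up for unbalanced a; the minimum sits near the balanced solution
```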
🧠 CCN poster (today):
2025.ccneuro.org/poster/?id=w...
📄 ICML paper (July):
icml.cc/virtual/2025/poster/44890
Not all solutions are created equal: An analytical dissociation of functional and representational similarity in deep linear neural networks (ICML 2025)
We demonstrate that representation analysis and comparison are ill-posed, giving both false negatives and false positives, unless we work with *task-specific representations*. These are interpretable *and* robust to noise (i.e., representational identifiability comes with computational advantages).
Function-representation dissociations and the representation-computation link persist in deep nonlinear networks! Using function-invariant reparametrisations (
@bsimsek.bsky.social), we break representational identifiability while also degrading generalization (a computational consequence).
To analyse this dissociation in a tractable model of representation learning, we characterize *all* task solutions for two-layer linear networks. Within this solution manifold, we identify a solution hierarchy in terms of what implicit objectives are minimized (in addition to the task objective).
We parametrised this solution hierarchy to find differences in the handling of task-irrelevant dimensions: Some solutions compress them away (creating task-specific, interpretable representations), while others preserve arbitrary structure in null spaces (creating arbitrary, uninterpretable representations).
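For intuition on the null-space point, a hypothetical toy (my construction, assuming task inputs span only a subspace of the input space):

```python
# Hypothetical toy (mine, not the paper's code): when task inputs span only a
# subspace, one solution can zero out the task-irrelevant directions while
# another carries arbitrary structure there -- identical task behaviour,
# different (and less interpretable) representations.
import numpy as np

rng = np.random.default_rng(2)
d_in, d_task, d_hid = 4, 2, 3
X = np.zeros((d_in, 200))
X[:d_task] = rng.normal(size=(d_task, 200))  # task data lives in first 2 dims

W_compressed = rng.normal(size=(d_hid, d_in))
W_compressed[:, d_task:] = 0.0               # compress irrelevant dims away

W_arbitrary = W_compressed.copy()
W_arbitrary[:, d_task:] = rng.normal(size=(d_hid, d_in - d_task))  # null-space junk

print(np.allclose(W_compressed @ X, W_arbitrary @ X))   # True: same on task data
x_probe = rng.normal(size=(d_in, 1))                    # off-task probe input
print(np.allclose(W_compressed @ x_probe, W_arbitrary @ x_probe))  # False
```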
Deep networks have parameter symmetries, so we can walk through solution space, changing all weights and representations, while keeping output fixed. In the worst case, function and representation are *dissociated*.
(Networks can have the same function with the same or different representation.)
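A minimal sketch of that symmetry walk for a two-layer linear network (my illustration; any invertible G gives new weights with identical outputs but a different hidden representation):

```python
# Minimal sketch (my illustration): applying an invertible G to the weights
# of y = W2 @ W1 @ x changes the hidden representation while fixing the output.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 3))   # input -> hidden
W2 = rng.normal(size=(2, 5))   # hidden -> output
G = rng.normal(size=(5, 5))    # a generic square G is invertible

W1_new, W2_new = G @ W1, W2 @ np.linalg.inv(G)

X = rng.normal(size=(3, 100))
H, H_new = W1 @ X, W1_new @ X
print(np.allclose(W2 @ H, W2_new @ H_new))  # True: function unchanged
print(np.allclose(H, H_new))                # False: representation changed
```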
Want to contribute to this debate at
#CCN2025? Please come to our session today, fill out the anonymous survey (
forms.gle/yDBBcBZybGjogksC8), and comment on the GAC page (
sites.google.com/ccneuro.org/gac2020/gacs-by-year/2025-gacs/2025-1)! Your perspectives will shape our eventual GAC paper. 👥
Cognitive science aims for more than mere prediction: We aim to build theories. Yet, evaluations in cognitive science tend to be narrow tests of a specific theory. How can we create benchmarks to make empirical validation more systematic, while preserving our goal of theory-driven cognitive science?
This GAC focuses on three debates/questions around benchmarks in cognitive science (the what, why, and how): (1) Should data or theory come first? (2) Should we focus on replication or exploration? (3) What incentives should we build up, if we choose to invest effort as a community?
Cognitive science met computational methods sooner than many scientific domains, but hasn’t yet fully embraced *benchmarks*: Shared evaluation challenges that focus on open data and reproducible methods (
doi.org/10.1162/99608f92.b91339ef). How could we get benchmarking right for cognitive science? 🤔
Data Science at the Singularity
How about controlling sparsity of the code via task alone:
doi.org/10.1073/pnas... and our follow-up
arxiv.org/abs/2501.17284? Though the loop to experiment is not yet closed :)
Data-driven emergence of convolutional structure in neural networks | PNAS
Thrilled to announce I'll be starting my own neuro-theory lab, as an Assistant Professor at
@yaleneuro.bsky.social @wutsaiyale.bsky.social this Fall!
My group will study offline learning in the sleeping brain: how neural activity self-organizes during sleep and the computations it performs. 🧵
Congrats, Dan!!
The afternoon session is continuing now!
@iclr-conf.bsky.social ✨
Come join us at the Second edition of the Re-Align workshop @iclr_conf! 🚀🧠 The workshop explores the fascinating question of how artificial and biological systems align in their representations of the world.
#ReAlign #ICLR2025
Deadline extended till Feb. 5th!
Our representational alignment workshop returns to
#ICLR2025! Submit your work on how ML/cogsci/neuro systems represent the world & what shapes these representations 💭🧠🤖
w/
@thisismyhat.bsky.social @dotadotadota.bsky.social,
@sucholutsky.bsky.social @lukasmut.bsky.social @siddsuresh97.bsky.social
Last year, we funded 250 authors and other contributors to attend
#ICLR2024 in Vienna as part of this program. If you or your organization want to directly support contributors this year, please get in touch! Hope to see you in Singapore at
#ICLR2025!
Financial Assistance applications are now open! If you face financial barriers to attending ICLR 2025, we encourage you to apply. The program offers prepay and reimbursement options. Applications are due March 2nd with decisions announced March 9th.
iclr.cc/Conferences/...
ICLR 2024 Financial Assistance
Our representational alignment workshop returns to
#ICLR2025! Submit your work on how ML/cogsci/neuro systems represent the world & what shapes these representations 💭🧠🤖
w/
@thisismyhat.bsky.social @dotadotadota.bsky.social,
@sucholutsky.bsky.social @lukasmut.bsky.social @siddsuresh97.bsky.social
🚨Call for Papers🚨
The Re-Align Workshop is coming back to
#ICLR2025
Our CfP is up! Come share your representational alignment work at our interdisciplinary workshop at
@iclr-conf.bsky.social
Deadline is 11:59 pm AOE on Feb 3rd
representational-alignment.github.io
If you missed it at the
#NeurIPS2024 posters! Work led by
@leonlufkin.bsky.social on analytical dynamics of localization in simple neural nets, as seen in real+artificial nets and distilled by
@aingrosso.bsky.social @sebgoldt.bsky.social.
Leon is a fantastic collaborator + looking for PhD positions!
Thank you, Sebastian!!
For the Blueskyers interested in
#NeuroAI 🧠🤖,
I created a starter pack! Please comment on this if you are not on the list and working in this field 🙂
go.bsky.app/CscFTAr
🙋‍♀️
Lightweight poster submissions for this stellar RS program end tomorrow (Sept. 17th)! I hear from Marta Kwiatkowska that Rich Sutton coined the title, so expect bitter lessons alongside the longstanding debate between top-down and bottom-up approaches in CogSci :)
royalsociety.org/science-even...
Beyond the symbols vs signals debate | Royal Society
Discussion meeting organised by Professor Marta Kwiatkowska FRS, Professor Peter Dayan FRS, Professor Tom Griffiths and Professor Doina Precup