Noga Zaslavsky
Computational cognitive scientist, developing integrative models of language, perception, and action. Assistant Prof at NYU.
More info: nogsky.com
- So excited that our paper got accepted to ICLR!! Check it out 👇
- Can LLMs evolve human-like semantic categories? CDS-affiliated @nogazs.bsky.social and PhD student Nathaniel Imel show that, via simulated cultural transmission, LLMs reorganize color categories toward efficient compression. 🔗 arxiv.org/abs/2509.08093
- Super excited about our new paper!! Check this out 👇🧵
- Reposted by Noga Zaslavsky: Figuring out how the brain uses information from visual neurons may require new tools, writes @neurograce.bsky.social. Hear from 10 experts in the field. #neuroskyence www.thetransmitter.org/the-big-pict...
- If you missed us at #cogsci2025, my lab presented 3 new studies showing how efficient (lossy) compression shapes individual learners, bilinguals, and action abstractions in language, further demonstrating the extraordinary applicability of this principle to human cognition! 🧵 1/n
- Super excited to have the #InfoCog workshop this year at #CogSci2025! Join us in SF for an exciting lineup of speakers and panelists, and check out the workshop's website for more info and a detailed schedule: sites.google.com/view/infocog...
- 📣 I'm looking for a postdoc to join my lab at NYU! Come work with me on a principled, theory-driven approach to studying language, learning, and reasoning, in humans and AI agents. Apply here: apply.interfolio.com/170656 And come chat with me at #CogSci2025 if interested!
- Reposted by Noga Zaslavsky: Nathaniel Imel, Jennifer Culbertson, @simonkirby.bsky.social & @nogazs.bsky.social: Iterated language learning is shaped by a drive for optimizing lossy compression (Talks 37: Language and Computation 3, 1 August @ 16:22; blurb below) (2/)
- This month I'm celebrating a decade (!!) since my first paper was published, which now has over 2,000 citations 🥹 "Deep learning and the information bottleneck principle" with the late, great Tali Tishby ieeexplore.ieee.org/document/713...
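For context, the information bottleneck principle (due to Tishby and colleagues) asks for a compressed representation T of an input X that remains informative about a target variable Y. A standard statement of the objective, in its general form rather than anything specific to this paper's analysis, is:
$$\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)$$
where the trade-off parameter $\beta$ controls how much predictive information about Y is retained relative to the compression of X.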
- Reposted by Noga Zaslavsky: 🔆 I'm hiring! 🔆 There are two open positions: 1. Summer research position (best for a master's or graduate student); focus on computational social cognition. 2. Postdoc (currently interviewing!); focus on computational social cognition and AI safety. sites.google.com/corp/site/sy...
- Reposted by Noga Zaslavsky: Because we must build good things while we scream about the bad, I have started a "Data for Good" team @data-for-good-team.bsky.social that partners with organizations needing short-term data science help. We have three projects ongoing & will add more as our capacity grows. data-for-good-team.org
- Excited to share our new paper with @rplevy.bsky.social, "Towards Human-Like Emergent Communication via Utility, Informativeness, and Complexity": direct.mit.edu/opmi/article... Looking forward to speaking about this line of work tomorrow at @nyudatascience.bsky.social!