Rachel Ryskin
Cognitive scientist @ UC Merced
http://raryskin.github.io
PI of Language, Interaction, & Cognition (LInC) lab: http://linclab0.github.io
- Reposted by Rachel Ryskin: Imagination in bonobos! I am thrilled to share a new paper w/ Amalia Bastos, out now in @science.org We provide the first experimental evidence that a nonhuman animal can follow along a pretend scenario & track imaginary objects. Work w/ Kanzi, the bonobo, at Ape Initiative youtu.be/NUSHcQQz2Ko
- Reposted by Rachel Ryskin: How do diverse context structures reshape representations in LLMs? In our new work, we explore this via representational straightening. We found LLMs are like a Swiss Army knife: they select different computational mechanisms reflected in different representational structures. 1/
- Reposted by Rachel Ryskin: The Visual Learning Lab is hiring TWO lab coordinators! Both positions are ideal for someone looking for research experience before applying to graduate school. Application deadline is Feb 10th (approaching fast!)—with flexible summer start dates.
- Reposted by Rachel Ryskin: The cerebellum supports high-level language?? Now out in @cp-neuron.bsky.social, we systematically examined language-responsive areas of the cerebellum using precision fMRI and identified a *cerebellar satellite* of the neocortical language network! authors.elsevier.com/a/1mUU83BtfH... 1/n 🧵👇
- Reposted by Rachel Ryskin: Interpreting EEG requires understanding how the skull smears electrical fields as they propagate from the cortex. I made a browser-based simulator for my EEG class to visualize how dipole depth/orientation change the topomap. dbrang.github.io/EEG-Dipole-D... GitHub page: github.com/dbrang/EEG-D...
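The physics behind that visualization is compact enough to sketch. Below is a minimal Python illustration, assuming an infinite homogeneous conductor rather than a realistic skull model (which is exactly what the simulator adds); the electrode ring and dipole parameters are made up. Deeper sources yield weaker, more spatially smeared potentials, and orientation changes the sign pattern across electrodes.

```python
# Minimal sketch: how dipole depth and orientation shape a scalp "topomap".
# Assumes an infinite homogeneous conductor (the real simulator models the
# skull's smearing); the electrode ring and dipole parameters are invented.
import numpy as np

SIGMA = 0.33  # conductivity (S/m), a typical value for brain tissue

def dipole_potential(elec_xy, dip_xy, moment_xy):
    """Potential of a current dipole at each electrode (point-dipole formula)."""
    r = elec_xy - dip_xy                      # vectors from dipole to electrodes
    dist = np.linalg.norm(r, axis=1)
    return (r @ moment_xy) / (4 * np.pi * SIGMA * dist**3)

# 32 "electrodes" on a unit circle (a flat stand-in for the scalp)
theta = np.linspace(0, 2 * np.pi, 32, endpoint=False)
electrodes = np.column_stack([np.cos(theta), np.sin(theta)])

for depth in (0.2, 0.6):                      # shallow vs. deep source
    for angle in (0.0, np.pi / 2):            # tangential vs. radial orientation
        dip = np.array([0.0, 1.0 - depth])    # dipole below the "vertex" electrode
        mom = np.array([np.cos(angle), np.sin(angle)]) * 1e-8
        v = dipole_potential(electrodes, dip, mom)
        spread = (np.abs(v) > 0.5 * np.abs(v).max()).sum()
        print(f"depth={depth:.1f} angle={np.degrees(angle):3.0f}deg "
              f"peak={np.abs(v).max():.2e} spread={spread} electrodes")
```

Moving the source deeper lowers the peak and raises the spread count (more electrodes above half-maximum), which is the smearing the post refers to.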
- Reposted by Rachel Ryskin: New paper with @inbalarnon.bsky.social and @simonkirby.bsky.social! Learnability pressures drive the emergence of core statistical properties of language–e.g. Zipf's laws–in an iterated sequence learning experiment, with learners' RTs indicating sensitivity to the emerging sequence information.
- Does our "semantic space" get stuck in the past as we age? New work by @ellscain.bsky.social uses historical embeddings + behavioral data to show we are truly lifelong learners. Older adults don't rely on historical meanings—they update them to match current language! 🧠✨ doi.org/10.1162/OPMI...
- New paper w/ @ryskin.bsky.social in Open Mind! Words change: “broadcast” once meant scattering seeds; “tweet” was just a bird sound. Do older adults keep earlier meanings, or update as language evolves? Our new paper investigates how semantic representations differ across age groups. 🧵👇
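For readers curious what "historical embeddings + behavioral data" looks like in practice, here is a minimal sketch of the comparison logic. The words, vectors, and epochs below are hand-made placeholders, not the paper's materials; real analyses would use decade-specific embedding models.

```python
# Minimal sketch of tracking semantic change with diachronic embeddings.
# The tiny hand-made vectors are placeholders for real historical embeddings
# (e.g., one model per decade); they only illustrate the comparison logic.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 4-d vectors for two "epochs"
epoch_1950 = {
    "broadcast": np.array([0.9, 0.1, 0.2, 0.0]),
    "seed":      np.array([0.8, 0.0, 0.3, 0.1]),
    "radio":     np.array([0.1, 0.9, 0.1, 0.2]),
}
epoch_2000 = {
    "broadcast": np.array([0.1, 0.8, 0.2, 0.3]),
    "seed":      np.array([0.8, 0.0, 0.3, 0.1]),
    "radio":     np.array([0.2, 0.9, 0.1, 0.2]),
}

for label, epoch in [("1950s", epoch_1950), ("2000s", epoch_2000)]:
    sim_seed = cosine(epoch["broadcast"], epoch["seed"])
    sim_radio = cosine(epoch["broadcast"], epoch["radio"])
    print(f"{label}: broadcast~seed {sim_seed:.2f}, broadcast~radio {sim_radio:.2f}")

# Comparing epoch-specific similarities like these with behavioral similarity
# judgments from different age groups is one way to ask whose "epoch" a
# person's semantic representations most resemble.
```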
- Reposted by Rachel Ryskin: A quick read to start off 2026…
- Language Evolution by Morten H. Christiansen: doi.org/10.21428/e2759450.3…
- Reposted by Rachel Ryskin: I may be a *little* biased but this 📘 is GREAT! If you ever found language structure interesting, but were turned off by implausible and overly complicated accounts, this book is 4U: a simple and empirically grounded account of the syntax of natural lgs. A must-read for lang researchers+aficionados!
- New book! I have written a book, called Syntax: A cognitive approach, published by MIT Press. This is open access; MIT Press will post a link soon, but until then, the book is available on my website: tedlab.mit.edu/tedlab_websi...
- Reposted by Rachel Ryskin: New preprint on prosody in the brain! tinyurl.com/2ndswjwu Hee So Kim, Niharika Jhingan, Sara Swords, @hopekean.bsky.social @coltoncasto.bsky.social Jennifer Cole @evfedorenko.bsky.social Prosody areas are distinct from pitch, speech, and multiple-demand areas, and partly overlap with lang+social areas→🧵
- Reposted by Rachel Ryskin: The Press and @openmindjournal.bsky.social are pleased to announce a partnership with Lyrasis through the Open Access Community Investment Program (OACIP). Learn how your institution can support this initiative to continue providing the latest #cogsci research—free of charge—here: bit.ly/452nMma
- Reposted by Rachel Ryskin: The last chapter of my PhD (expanded) is finally out as a preprint! “Semantic reasoning takes place largely outside the language network” 🧠🧐 www.biorxiv.org/content/10.6... What is semantic reasoning? Read on! 🧵👇
- Reposted by Rachel Ryskin: Using a large-scale individual differences investigation (with ~800 participants each performing an ~8-hour battery of non-literal comprehension tasks), we found that pragmatic language use fractionates into 3 components: social conventions, intonation, and world knowledge–based causal reasoning.
- Reposted by Rachel Ryskin: A couple years (!) in the making: we’re releasing a new corpus of embodied, collaborative problem solving dialogues. We paid 36 people to play Portal 2’s co-op mode and collected their speech + game recordings. Paper: arxiv.org/abs/2512.03381 Website: berkeley-nlp.github.io/portal-dialo... 1/n
- Reposted by Rachel Ryskin: 📣 Very happy to announce a new BBS target article with Nick Chater in which we propose a new theory of cultural evolution, highlighting the importance of bottom-up social interaction in explaining the emergence of cultural complexity 🧵 1/8 www.cambridge.org/core/journal...
- Reposted by Rachel Ryskin: For all of you using the ALIGN library (to measure lexical, syntactic and semantic alignment in conversations), Nick Duran has put together a great refactoring: ALIGN 2.0 (github.com/nickduran/al...), now integrated with spaCy and BERT
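For a sense of what "lexical alignment" is getting at, here is a rough, self-contained illustration: word overlap between adjacent conversational turns. This is not ALIGN's own computation or API (use the package itself for real analyses, which also covers syntactic and semantic alignment), and the mini-dialogue is invented.

```python
# Rough illustration of what lexical alignment between speakers measures:
# overlap in word use across adjacent conversational turns. This is NOT the
# ALIGN package's computation; the tiny dialogue below is invented.
def lexical_overlap(turn_a: str, turn_b: str) -> float:
    """Jaccard overlap between the word sets of two turns."""
    a, b = set(turn_a.lower().split()), set(turn_b.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 0.0

dialogue = [
    ("A", "can you pass the red mug"),
    ("B", "the red mug on the shelf"),
    ("A", "yes that one thanks"),
]

for (spk1, t1), (spk2, t2) in zip(dialogue, dialogue[1:]):
    print(f"{spk1}->{spk2}: overlap = {lexical_overlap(t1, t2):.2f}")
```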
- Reposted by Rachel Ryskin: Origins of language, one of humanity’s most distinctive traits, may be best explained as a unique convergence of multiple capacities, each with its own evolutionary history, involving intertwined roles of biology & culture. This framing can expand research horizons. A 🧵 on our @science.org paper. 🧪 1/n
- Reposted by Rachel Ryskin: Our paper with @sarabogels.bsky.social, covering our pre-registered multi-year research, is now finally out in Cognition. We show that in conversations people reduce their multimodal signals non-linearly; the steeper this non-linear drop-off, the more communicative success. www.wimpouw.com/files/Bogels...
- Reposted by Rachel Ryskin: New work to appear @ TACL! Language models (LMs) are remarkably good at generating novel well-formed sentences, leading to claims that they have mastered grammar. Yet they often assign higher probability to ungrammatical strings than to grammatical strings. How can both things be true? 🧵👇
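The kind of comparison behind that observation is easy to reproduce. Below is a minimal sketch of scoring a grammatical/ungrammatical pair with a small causal LM via HuggingFace transformers; the model choice (GPT-2) and the example sentences are illustrative, not the paper's materials.

```python
# Minimal sketch of scoring strings with a causal language model, the kind of
# comparison behind "LMs sometimes prefer ungrammatical strings". Uses GPT-2
# via HuggingFace transformers; the sentence pair is just an illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_prob(sentence: str) -> float:
    """Total log-probability of a sentence under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token,
    # so multiply by the number of predicted tokens and negate.
    return -out.loss.item() * (ids.shape[1] - 1)

grammatical = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."
print(log_prob(grammatical), log_prob(ungrammatical))
```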
- Reposted by Rachel Ryskin: New pre-print from our lab, by Lakshmi Govindarajan with help from Sagarika Alavilli, introducing a new type of model for studying sensory uncertainty. www.biorxiv.org/content/10.1... Here is a summary. (1/n)
- Reposted by Rachel Ryskin: I will be recruiting PhD students via Georgetown Linguistics this application cycle! Come join us in the PICoL (pronounced “pickle”) lab. We focus on psycholinguistics and cognitive modeling using LLMs. See the linked flyer for more details: bit.ly/3L3vcyA
- Reposted by Rachel Ryskin: The first publication of the #ERC project ‘LaDy’ is out, and it’s an important one, I think: we show that word processing and meaning prediction are fundamentally different during social interaction compared to using language individually! 👀 short 🧵 /1 psycnet.apa.org/fulltext/202... #OpenAccess
- Reposted by Rachel Ryskin: As our lab started to build encoding 🧠 models, we were trying to figure out best practices in the field. So @neurotaha.bsky.social built a library to easily compare design choices & model features across datasets! We hope it will be useful to the community & plan to keep expanding it! 1/
- 🚨 Paper alert: To appear in the DBM NeurIPS Workshop. LITcoder: A General-Purpose Library for Building and Comparing Encoding Models 📄 arxiv: arxiv.org/abs/2509.091... 🔗 project: litcoder-brain.github.io
- Reposted by Rachel Ryskin: New paper: We argue that linearization in language production is a foraging process, with speakers navigating semantic and spatial clusters. Lead author: Karina Tachihara, former UC Davis postdoc, now faculty at UIUC! www.sciencedirect.com/science/arti...
- 🚨 Postdoc Opportunity PSA! 🚨 🗓️ UC President’s Postdoctoral Fellowship Program applications are due Nov. 1 (ppfp.ucop.edu/info/) Open to anyone interested in a postdoc & academic career at a UC campus. I'm happy to sponsor an applicant if there’s a good fit— please reach out!
- Reposted by Rachel Ryskin: New paper with @rjantonello.bsky.social @csinva.bsky.social, Suna Guo, Gavin Mischler, Jianfeng Gao, & Nima Mesgarani: We use LLMs to generate VERY interpretable embeddings where each dimension corresponds to a scientific theory, & then use these embeddings to predict fMRI and ECoG. It WORKS!
- Evaluating scientific theories as predictive models in language neuroscience biorxiv.org/content/10.1101/202…
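Whatever the feature space, the downstream encoding-model step is typically a regularized regression from stimulus features to brain responses. Here is a generic sketch of that step, with random placeholder matrices standing in for the interpretable, theory-derived features and the fMRI data; it is not the paper's pipeline.

```python
# Generic sketch of the encoding-model step: ridge regression from
# interpretable stimulus features (one column per "theory-derived" dimension)
# to voxel responses. The feature and fMRI matrices are random placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 500, 30, 100
X = rng.standard_normal((n_timepoints, n_features))              # interpretable features
true_w = rng.standard_normal((n_features, n_voxels))
Y = X @ true_w + rng.standard_normal((n_timepoints, n_voxels))   # "voxel" responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)

# Per-voxel prediction performance: correlation between held-out and predicted
pred = model.predict(X_te)
r = [np.corrcoef(Y_te[:, v], pred[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out r = {np.median(r):.2f}")
```

With interpretable features, each fitted weight column can be read as how much a given "theory" dimension contributes to predicting a voxel, which is what makes this style of encoding model attractive.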
- Reposted by Rachel Ryskin: The LLM finds it FAR easier to distinguish between double-object (DO) & prepositional-object (PO) constructions when the lexical & info structure of instances conform more closely w/ the respective constructions (left 👇). Where's pure syntax? The LLM seems to say "🤷♀️" (right) @SRakshit adele.scholar.princeton.edu/sites/g/file...
- Reposted by Rachel Ryskin: If you missed us at #cogsci2025, my lab presented 3 new studies showing how efficient (lossy) compression shapes individual learners, bilinguals, and action abstractions in language, further demonstrating the extraordinary applicability of this principle to human cognition! 🧵 1/n
- Looking forward to seeing everyone at #CogSci2025 this week! Come check out what we’ve been working on in the LInC Lab, along with our fantastic collaborators! Paper 🔗 in 🧵👇
- Reposted by Rachel Ryskin: Some happy science news (a small light in times of darkness). New paper out with @luciewolters.bsky.social and Mits Ota: Skewed distributions facilitate infants’ word segmentation. sciencedirect.com/science/arti...
- Thrilled to see this work published — and even more thrilled to have been part of such a great collaborative team! One key takeaway for me: Webcam eye-tracking w/ jsPsych is awesome for 4-quadrant visual world paradigm studies -- less so for displays w/ smaller ROIs.
- Want to know what kinds of studies webcam-based eye tracking can be used for? Here's our take on the current tech. This certainly isn't the first paper on this topic, but it provides some converging evidence about the viability of eye tracking with online methods. online.ucpress.edu/collabra/art...
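To make the ROI-size point concrete, here is a toy simulation (not the published data) of noisy webcam gaze samples assigned to the four screen quadrants: with noise on the order of ~100 px, most samples still land in the intended quadrant, which is why quadrant-scale visual world designs hold up while small-ROI displays do not.

```python
# Toy illustration of why four large quadrant ROIs tolerate webcam noise:
# assign noisy gaze samples to screen quadrants and compute looking
# proportions. The gaze samples are simulated, not data from the paper.
import numpy as np

rng = np.random.default_rng(1)
SCREEN_W, SCREEN_H = 1280, 720

# Simulate gaze truly aimed at the top-right quadrant, plus webcam-level noise
true_target = np.array([0.75 * SCREEN_W, 0.25 * SCREEN_H])
noise_sd = 120  # pixels; webcam trackers are far noisier than lab systems
gaze = true_target + rng.normal(0, noise_sd, size=(200, 2))

def quadrant(x, y):
    """Label a gaze sample by the screen quadrant it falls in."""
    return ("top" if y < SCREEN_H / 2 else "bottom") + "-" + \
           ("left" if x < SCREEN_W / 2 else "right")

labels, counts = np.unique([quadrant(x, y) for x, y in gaze], return_counts=True)
for lab, c in zip(labels, counts):
    print(f"{lab}: {c / len(gaze):.2f}")

# Most samples still land top-right despite the noise; shrink the ROIs and
# the same noise scatters samples across regions.
```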
- Reposted by Rachel Ryskin: What are the organizing dimensions of language processing? We show that voxel responses during comprehension are organized along 2 main axes: processing difficulty & meaning abstractness—revealing an interpretable, topographic representational basis for language processing shared across individuals
- Reposted by Rachel Ryskin: 🤖🧠 Paper out in Nature Communications! 🧠🤖 Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths? Our answer: Use meta-learning to distill Bayesian priors into a neural network! www.nature.com/articles/s41... 1/n
- Reposted by Rachel Ryskin: Unfortunately, the NSF grant that supports our work has been terminated. This is a setback, but our mission has not changed. We will continue to work hard on making cognitive science a more inclusive field. Stay tuned for upcoming events.
- Reposted by Rachel Ryskin: AI models are fascinating, impressive, and sometimes problematic. But what can they tell us about the human mind? In a new review paper, @noahdgoodman.bsky.social and I discuss how modern AI can be used for cognitive modeling: osf.io/preprints/ps...
- Reposted by Rachel Ryskin: What is human #StatisticalLearning for? The standard assumption is that the goal of SL is to learn the regularities in the environment to guide behavior. In our new Psych Review paper, we argue that SL instead provides the basis for novelty detection within an information foraging system 1/2