Ryan Panela
🇨🇦🇵🇭 || Graduate Student || Cognitive & Computational Neuroscience || UofT & Rotman Research
- Reposted by Ryan Panela: With some trepidation, I'm putting this out into the world: gershmanlab.com/textbook.html It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class. My hope is that this will be a living document, continuously improved as I get feedback.
- Reposted by Ryan Panela: Finally out: www.eneuro.org/content/earl... fMRI during naturalistic story listening in noise, looking at event-segmentation and ISC (inter-subject correlation) signatures. Listeners stay engaged and comprehend the gist even in moderate noise. with @ayshamota.bsky.social @ryanaperry.bsky.social @ingridjohnsrude.bsky.social
- New Preprint 🚨 This research with @alexbarnett.bsky.social, Yulia Lamekina, @barense.bsky.social, and @bjherrmann.bsky.social examines how background noise shapes event segmentation during continuous speech listening and its consequences for memory. osf.io/e67qr_v1 @auditoryaging.bsky.social
- Building Bridges in Brain Data. The event will focus on open science practices, innovative methods, and community in the neurosciences, with opportunities to engage in collaborative projects or explore new tools. No prior expertise is required. Registration for BrainHack 2026 is still open!
- Don't miss out on this year's BrainHack Global Toronto, happening at @sickkidsto.bsky.social Jan 19-21. Register here: brainhackto.github.io/brainhack-to... & check out our video to learn more: brainhackto.github.io/brainhack-to... @ontariobrain.bsky.social @kcnhub.bsky.social @uhn.ca @utoronto.ca
- Reposted by Ryan Panela: New work from the lab: www.biorxiv.org/content/10.1... Mobile eye-tracking glasses assess listening effort through pupil size and eye movements as well as a stationary eye tracker does. But mobile glasses also show that people reduce their head movements when listening becomes more effortful.
- Excited to share the publication of our work exploring the application of LLMs to event segmentation and memory research. For researchers interested in applying these validated methods, an open-source module is available on GitHub (github.com/ryanapanela/EventRecall); a simplified sketch of the embedding approach appears after this list.
- Large language models automate event segmentation & recall scoring with human-level accuracy. LLMs identify event boundaries more consistently than humans, while semantic embeddings enable scalable memory assessments. @ryanapanela.bsky.social @bjherrmann.bsky.social www.nature.com/articles/s44...
- Reposted by Ryan Panela: 🚨 New preprint 🚨 Prior work has mapped how the brain encodes concepts: If you see fire and smoke, your brain will represent the fire (hot, bright) and smoke (gray, airy). But how do you encode features of the fire-smoke relation? We analyzed fMRI with embeddings extracted from LLMs to find out 🧵
- Reposted by Ryan Panela: Short speech utterances can be looped, and after a few repetitions it sounds like the speaker is singing; once the switch from speech to song happens, it never seems to go back. In this paper we show evidence that music knowledge is activated after the switch. www.sciencedirect.com/science/arti...
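To make the event-segmentation and recall-scoring idea above concrete, here is a minimal, illustrative sketch: it segments a transcript wherever adjacent sentences dip in embedding similarity, and scores recalled clauses by their best cosine match to the transcript. This is not the EventRecall module's actual interface; the model choice (all-MiniLM-L6-v2), the boundary threshold, and all function names are assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the EventRecall module's real API.
# Model name, threshold, and scoring rule are assumptions for clarity.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def segment(sentences, threshold=0.35):
    """Mark an event boundary wherever adjacent sentences are semantically dissimilar."""
    embs = model.encode(sentences)
    boundaries = [0]
    for i in range(1, len(sentences)):
        if cosine(embs[i - 1], embs[i]) < threshold:
            boundaries.append(i)
    return boundaries

def score_recall(recall_clauses, transcript_sentences):
    """Score each recalled clause by its best semantic match in the transcript."""
    r = model.encode(recall_clauses)
    t = model.encode(transcript_sentences)
    return [max(cosine(rc, ts) for ts in t) for rc in r]

story = [
    "The hikers set out at dawn along the river trail.",
    "By noon they reached the ridge and stopped for lunch.",
    "A sudden storm forced them to shelter under an overhang.",
]
print(segment(story))
print(score_recall(["They hid from a storm under a rock."], story))
```

Note that in the published pipeline the LLMs themselves identify event boundaries; the similarity-threshold segmentation here is only a stand-in to convey why semantic embeddings make recall scoring scalable.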