Cogan Lab
The Cogan Lab at Duke University: Investigating speech, language, and cognition using invasive human neural electrophysiology
http://coganlab.org
- Come by tomorrow morning to see Baishen's work on verbal working memory!
- Come by this morning to see Areti's poster!
- At #Sfn2025? Come see some of the lab's posters this afternoon!
- Coming to San Diego for SfN and/or APAN? Come check out the intracranial work from the lab (7 posters)! There's a bit of everything this year, so come say hello! #Sfn2025 #Neuroscience #neuroskyence @dukebrain.bsky.social @dukeneurosurgery.bsky.social @dukeengineering.bsky.social
- First up: Sun. Nov 16 1pm-5pm: 126.20 / T11 Automated speech annotation achieves manual-level accuracy for neural speech decoding @dukeengineering.bsky.social PhD Student Zac Spalding and Duke Kunshan undergrad Ahmed Hadwan will present work on validating automated speech alignment for BCI
- Next: Sun. Nov 16 1pm-5pm: 137.10 / HH2 Intracranial EEG Correlates of Concurrent Demands on Cognitive Stability and Flexibility Undergraduate Erin Burns and CNAP PhD Student Jim Zhang will present work from our lab and @tobiasegner.bsky.social Lab on cognitive control
- Next: Sun. Nov 16 1pm-5pm: 142.06 / LL12 Hierarchical Speech Encoding in Non-Primary Auditory Regions* Postdoc Nanlin Shi will be presenting his work on speech encoding in non-canonical areas *Also presenting at APAN
- Then: Sun. Nov 16 1pm-5pm: 142.05 / LL11 Verbal working memory is subserved by distributed network activity between temporal and frontal lobes Former Neurosurgery Resident Daniel Sexton (now at @stanfordnsurg.bsky.social) will be presenting his work on network decoding of verbal WM
- Then: Sun. Nov 16 1pm-5pm: 142.11 / LL17 Computational hierarchies of intrinsic neural timescales for speech perception and production Former CRS @nicoleliddle.bsky.social (now at UCSD Cog Sci) will be presenting her work on intrinsic timescales and speech perception/production
- Next: Mon. Nov 17 8am-12pm: 173.10 / S11 Multimodal sensory-motor transformations for speech @dukeengineering.bsky.social PhD Student Areti Majumdar will be presenting her work on multimodal sensory-motor transformations for speech
- Lastly (not least): Wed. Nov 19 8am-12pm: 411.11 / MM10 Sensory-motor mechanisms for verbal working memory* Postdoc Baishen Liang will be presenting his work on sensory-motor transformations for vWM @gregoryhickok.bsky.social *Also presenting at APAN
- Come by tomorrow morning to hear about verbal working memory!
- Stop by this afternoon to see some intracranial speech decoding in the hippocampus and to say hello!
- Coming to DC for SNL later this week? Come check out our posters on speech decoding and verbal working memory using intracranial recordings! @snlmtg.bsky.social #SNL2025
- Friday Sept. 12 4:30pm-6:00pm, Poster Session B, B70: Yuchao Wang (Rotation CNAP PhD Student) will be presenting his work on auditory pseudoword decoding in the hippocampus.
- Saturday Sept. 13 11am-12:30pm, Poster Session C, C54: Baishen Liang (Postdoctoral Associate) will be presenting his work on sensory-motor mechanisms for verbal working memory. Hope to see you all there!
- Last week, Zac Spalding (@zspald.bsky.social, 4th year PhD student, @dukeubme.bsky.social) presented Adam Gosztolai, Robert Peach, and colleagues’ 2025 paper on MARBLE, a method for finding interpretable latent representations of neural dynamics.
- They find that MARBLE successfully decomposes complex dynamical activity from spike trains into informative and easily decodable latent representations. This 🧵 explores our thoughts (🤍 & ❔). www.nature.com/articles/s41...
- 🤍1️⃣: The initial proximity graph is a clever way to define distance and neighborhoods between inputs that can be used for downstream training. 🤍2️⃣: The rotation invariance is important and likely useful for extracting shared latent representations from systems with minor differences.
- 🤍3️⃣: The comparisons to state-of-the-art latent dynamical systems models are great for properly contextualizing the performance of MARBLE.
- ❔1️⃣: It is stated that non-neighbors (both within and across manifolds) are negative samples (mapped far apart) during the contrastive learning step (a toy sketch of this setup follows the thread). Does treating non-neighbors within and across manifolds as similarly “distant” make larger distances in latent space less interpretable?
- ❔2️⃣: It seems that a linear transformation between MARBLE representations of different animals was necessary because the same information is present in the latent space, but not necessarily with the same ordering. If separate animals were treated as separate manifolds with an embedding-agnostic MARBLE, would you still expect an informative latent space to be learned without any need for post-hoc alignment?
- ❔3️⃣: In Figs. 4 and 5, do you obtain similar results if you operate directly on the spike trains instead of on the PCA-reduced spike trains? Why is PCA necessary first? Thank you to the authors for your work! cc: Alexis Arnaudon, Mauricio Barahona, Pierre Vandergheynst
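For readers unfamiliar with the contrastive step that ❔1️⃣ asks about, here is a minimal, self-contained sketch of the general idea: proximity-graph neighbors are pulled together while every non-neighbor, within or across manifolds, is pushed apart with the same margin. This is an illustrative toy, not MARBLE's implementation, and all names and values are hypothetical.

```python
# Toy contrastive objective (not MARBLE's code): neighbors from a proximity
# graph are pulled together; all non-neighbors, same-manifold or not, are
# pushed apart identically -- the behavior questioned in the thread above.
import numpy as np

rng = np.random.default_rng(0)

def contrastive_loss(z, neighbors, margin=1.0):
    """z: (n, d) latent embeddings; neighbors: (n, n) boolean adjacency
    from a proximity graph. Non-neighbors are negatives regardless of
    which manifold they belong to."""
    n = len(z)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = np.linalg.norm(z[i] - z[j])
            if neighbors[i, j]:
                loss += d ** 2                      # pull neighbors together
            else:
                loss += max(0.0, margin - d) ** 2   # push non-neighbors apart
    return loss / (n * (n - 1))

# Two small random point clouds standing in for two manifolds.
z = rng.normal(size=(12, 3))
neighbors = rng.random((12, 12)) < 0.2
print(contrastive_loss(z, neighbors))
```

One property worth noting: because all non-neighbor pairs share a single margin, any pair already separated by more than the margin contributes zero loss, which is one way distances beyond the margin could become uninformative.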
- From Cogan Lab Journal Club with @zspald.bsky.social: these decomposition acronyms are getting out of hand!
- We’re happy to present @zspald.bsky.social’s work on shared neural representations of speech production across individuals! We find that patient-specific data can be aligned to a shared space that preserves speech information, enabling cross-patient speech BCIs. www.biorxiv.org/content/10.1...
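As context for what aligning patient-specific data to a shared space can look like, here is a toy sketch of post-hoc linear alignment between two latent spaces, fit by ordinary least squares. It is an illustration on simulated data, not the method in the preprint; every variable name here is hypothetical.

```python
# Toy post-hoc alignment (illustrative only, not the preprint's method):
# fit a linear map W between two simulated patient-specific latent spaces.
import numpy as np

rng = np.random.default_rng(1)

# Fake latents: 200 matched time points in two 10-D patient-specific spaces.
z_patient_a = rng.normal(size=(200, 10))
true_map = rng.normal(size=(10, 10))
z_patient_b = z_patient_a @ true_map + 0.05 * rng.normal(size=(200, 10))

# Least squares: find W minimizing ||z_a @ W - z_b||^2.
W, *_ = np.linalg.lstsq(z_patient_a, z_patient_b, rcond=None)
aligned = z_patient_a @ W

rel_err = np.linalg.norm(aligned - z_patient_b) / np.linalg.norm(z_patient_b)
print(f"relative alignment error: {rel_err:.3f}")
```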
- Last week, Nanlin Shi (Postdoctoral Associate) presented Jessie Liu and colleagues’ latest work on speech encoding in the middle Precentral Gyrus (mPrCG). This 🧵 explores our thoughts (🤍 & ❔). www.nature.com/articles/s41...
- 🤍1️⃣: I appreciate that they were able to establish a causal role for the mPrCG through electrical stimulation.
- 🤍2️⃣: I also like the clear narrative structure: identifying a network → defining its function → linking it to behavior → providing causal evidence.
- 🤍3️⃣: The paper does an excellent job of connecting a basic science finding to a real-world clinical disorder. The demonstration that mPrCG stimulation produces errors “consistent with those made in AOS” offers significant insights for both researchers and clinicians.
- ❔1️⃣: Why do you think the sustained activity in the vPrCG is modulated by articulatory complexity, yet it is unable to predict reaction time (RT)?
- ❔2️⃣: Given that the specific content of a sequence cannot be reliably decoded from pre-speech activity, how do you interpret the nature of the motor plan in the mPrCG? (A toy decoding sketch follows the thread.)
- ❔3️⃣: Have the authors performed unique syllable-type decoding in other regions? Which regions might be responsible for more detailed/specific motor sequence planning? Thank you to the authors @changlabucsf.bsky.social for your great work, and we look forward to following more of it!
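To make ❔2️⃣ concrete, here is a toy version of the kind of cross-validated decoding analysis that underlies a "cannot be reliably decoded" claim. The simulated features carry no class information, so accuracy should land near chance (0.25 for four classes); this is our illustration, not the paper's pipeline.

```python
# Toy decoding analysis (simulated data, not the paper's): a cross-validated
# linear classifier attempts to read out sequence identity from features
# that contain no class information, so accuracy should sit near chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_features, n_classes = 120, 40, 4

X = rng.normal(size=(n_trials, n_features))    # simulated pre-speech features
y = rng.integers(0, n_classes, size=n_trials)  # random sequence labels

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean decoding accuracy: {acc.mean():.2f} (chance = {1 / n_classes:.2f})")
```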
- This week, Jim Zhang (3rd year PhD student, @dukebrain.bsky.social) presented Brooke Staveland’s paper on circuit dynamics of approach-avoidance conflict in humans. This 🧵 explores our thoughts (🤍 & ❔) www.biorxiv.org/content/10.1...
- 🤍1️⃣: I absolutely love this task! It’s obviously engaging, but it’s also well designed. In particular, the comparison between ghost strike and ghost chase trials is a brilliant manipulation that lets the authors probe dynamic threat levels.
- 🤍2️⃣: The anatomical summaries were quite helpful for understanding the complex analyses, and they highlight the various subnetworks involved in the circuit.
- 🤍3️⃣: This experiment is well suited to the recording locations that they had access to, and it nicely contrasts the roles of limbic regions and MFG under anxiety and threat.
- ❔1️⃣: The ACC (and MFC more broadly) is also construed as a conflict-control region. I’m surprised it didn’t play a larger role in regulating the other regions during ghost attack trials. Perhaps, if ghost attacks occurred more randomly instead of being based on distance, the high-frequency activity in ACC would show patterns similar to those in MFG? What exactly is the role of ACC during imminent threat? Might it serve a predictive role, as it does during other types of tasks?
- ❔2️⃣: The study didn’t probe the role of subcortical beta during approach-avoidance conflict, perhaps due to limitations on recording sites. I wonder whether STN beta activity plays a distinct role in this circuit as well, particularly as a signal to halt and turn around. I’m curious whether there is a sharp rise in beta coherence between the STN and limbic regions right before the decision to turn around, and whether Granger causality could show that the STN drives activity in limbic regions during this brief window before the decision. (A toy Granger-causality sketch follows the thread.)
- ❔3️⃣: In Fig. 2C, it looks like MFG has early low-frequency activity during the approach period that dissipates about 500 ms before the decision to turn around. Could this be preparatory activity that reflects participants’ plans for the trial, or could it predict how many dots they decide to eat?
- Thank you to the authors at UC Berkeley Psychology and Neuroscience for your work, and we look forward to following more of it! cc: @ucberkeleyofficial.bsky.social
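To sketch the analysis proposed in ❔2️⃣, here is a toy Granger-causality test on simulated signals, where a hypothetical "stn" series drives a "limbic" series at a two-sample lag. It only illustrates the statistical machinery (via statsmodels); it does not reflect the study's data or recording sites.

```python
# Toy Granger-causality check (simulated signals, not the study's data):
# does the "stn" series help predict the "limbic" series beyond its own past?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 500
stn = rng.normal(size=n)
limbic = np.zeros(n)
for t in range(2, n):
    # limbic activity driven by STN activity two samples earlier, plus noise
    limbic[t] = 0.6 * stn[t - 2] + 0.2 * limbic[t - 1] + rng.normal(scale=0.5)

# Column order matters: the test asks whether the second column (stn)
# Granger-causes the first (limbic), at lags 1 through 4.
grangercausalitytests(np.column_stack([limbic, stn]), maxlag=4)
```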
- [Not loaded yet]
- 🤍3️⃣: The results from the encoding models were straightforward, and the figures were clear and interpretable!
- ❔1️⃣: Since word surprisal reflects the combined influence of lexical and syntactic information during sentence processing, what specific aspects are driving responses in the left hemisphere frontal network? (A toy surprisal sketch follows the thread.)
- ❔2️⃣: How do the frontal networks for semantic retrieval and articulation interact during natural conversation, where speech is more continuous and less constrained than in structured naming tasks?
- ❔3️⃣: The paper highlights distinct pathways for auditory and visual naming, but both tasks result in the same articulated output. At what point do the auditory and visual naming networks converge onto a common semantic retrieval network before engaging the task-invariant pre-articulatory network and producing the response?
- Thank you to the authors @nyutandon.bsky.social for your work, and we look forward to following its progress! cc: Adeen Flinker, Daniel Friedman, Orrin Devinsky, Werner Doyle, Patricia Dugan
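For readers new to the surprisal regressor in ❔1️⃣: a word's surprisal is -log2 P(word | context), so improbable continuations score high. Here is a toy sketch with a made-up bigram model standing in for the language model (all probabilities are hypothetical):

```python
# Toy surprisal computation: surprisal(w) = -log2 P(w | context).
# The bigram probabilities below are invented for illustration only.
import math

bigram = {  # hypothetical P(next word | previous word)
    ("the", "doctor"): 0.10,
    ("doctor", "examined"): 0.05,
    ("examined", "the"): 0.30,
    ("the", "stethoscope"): 0.002,
}

sentence = ["the", "doctor", "examined", "the", "stethoscope"]
for prev, word in zip(sentence, sentence[1:]):
    s = -math.log2(bigram[(prev, word)])
    print(f"surprisal({word!r} | {prev!r}) = {s:.2f} bits")
```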