Women in AI Research - WiAIR
WiAIR is dedicated to celebrating the remarkable contributions of female AI researchers from around the globe. Our goal is to empower early-career researchers, especially women, to pursue their passion for AI and make an impact in this exciting field.
- If you love @aicoffeebreak.bsky.social, this one's for you — Letitia Parcalabescu is our next guest on the #WiAIR_podcast! Stay tuned for our conversation: 🎬 YouTube: www.youtube.com/@WomeninAIRe...
- AI is becoming deeply embedded in how we work, decide, and create. But this question often gets overlooked: How do we teach people when to rely on AI — and when not to? Full conversation: 🎬 YouTube: youtu.be/rSC7L5WikcE #wiair
- ✨ How can language models assist suicide prevention research, without replacing experts? In our latest #WiAIR episode, we host Dr. Swabha Swayamdipta (@swabhs.bsky.social) to discuss “Uncovering Intervention Opportunities for Suicide Prevention with Language Model Assistants”. (1/6 🧵)
- 🎙️ Swabha Swayamdipta on language model inversion — when model outputs can reveal what was prompted. This is just an excerpt from our full interview now on YouTube 👇 youtu.be/rSC7L5WikcE
- What if “personalization” wasn’t just predicting your preference — but understanding the reasoning behind it? 🤔🧠 (1/8🧵)
- AI shouldn't make users babysit it. Swabha Swayamdipta explains why reliability must be built into models — not pushed onto users who shouldn't need expert awareness to use AI safely. This is just an excerpt from our full interview now on YouTube 👇 youtu.be/rSC7L5WikcE
- 🔐 How vulnerable are hidden prompts? In our latest #WiAIR episode, Dr. Swabha Swayamdipta (@swabhs.bsky.social) discusses her NeurIPS 2025 paper on recovering prompts using next-token probability sequences. (1/6 🧵)
- 🎙️ Season 2 of Women in AI Research just launched — and the first episode of the new season, with @swabhs.bsky.social, is now out!
- We're kicking off Season 2 of the #WiAIRpodcast with @swabhs.bsky.social (USC), discussing hidden system prompts, LLM safety, and alignment. 🎧 Full episode coming soon — subscribe on YouTube: youtu.be/DDjBG_AhUjQ
- 🎙️ First WiAIR episode of 2026! Our guest is Swabha Swayamdipta @swabhs.bsky.social, Asst. Prof. at USC. Her research advances how we evaluate and understand generative language models—and how humans and AI can collaborate safely and effectively. Stay tuned! YouTube: lnkd.in/gFX-5jiu
- Do we need to understand AI, or just justify it? Different communities ask different questions: • Neuroscientists & mechanistic interpretability researchers — how models reason • Lawyers & economists — whether decisions are justifiable Which matters more to you — understanding or justification?
- How do language models align with conceptual meaning in the human brain? In our latest #WiAIR episode, we discuss COLM 2025 paper “Language models align with brain regions that represent concepts across modalities” with Dr. Maria Ryskina (@mryskina.bsky.social). 👇 (1/5 🧵)
- 2025 was the year the #WiAIR_podcast became real. From a simple idea to a global community — made possible by volunteers, guests, and listeners who believe visibility in AI research matters. Grateful for every contribution. Happy New Year 💜 🎧 youtu.be/qnsEUhUQmwU
- 🤔 Do LLMs really understand the world — or are they mostly predicting what sounds likely in text? In our latest #WiAIR podcast we explore this exact question with Maria Ryskina. (1/8🧵)
- What does neuroscience say about how language models represent meaning, and why isn't scale enough? In this #WiAIRpodcast episode, we speak with @mryskina.bsky.social on neuroscience × AI, evaluation limits, interpretability, and why community shapes better research. 🎬 youtu.be/PQx4IvJR8Bg
- Do LLMs really understand or are we mistaking language for thought? In the next #WiAIRpodcast episode, @mryskina.bsky.social explores language vs. thought in LLMs, what AI can learn from cognitive science, and why model internals matter. Full conversation coming soon. youtu.be/1N-Cdts6Y7k
- We're excited to welcome @mryskina.bsky.social, a CIFAR AI Safety postdoc at the Vector Institute @vectorinstitute.ai, as our next guest on Women in AI Research. #wiair #wiairpodcast