Ann Huang
Comp Neuro, ML, Dynamical Systems 🧠🤖 PhD student at Harvard & Kempner Institute. Prev at McGill, Mila, EPFL.
💻: https://ann-huang-0.github.io
- Reposted by Ann Huang: 🤖📊 NEW in the Deeper Learning blog: @annhuang42.bsky.social & @kanakarajanphd.bsky.social break down their recent work examining how #RNNs solve the same task in different ways, and why that matters. Joint work with @satpreetsingh.bsky.social & @flavioh.bsky.social bit.ly/4kj4fVd #NeuroAI
- Reposted by Ann Huang: 1/X Excited to present this preprint on multi-tasking, with @david-g-clark.bsky.social and Ashok Litwin-Kumar! Timely too, as “low-D manifold” has been trending again. (If you read thru the end, we escape Flatland and return to the glorious high-D world we deserve.) www.biorxiv.org/content/10.6...
- 🧠🧵 Presenting TODAY (4:30–7:30), poster #2001! Come by and say hi!
- 📍Excited to share that our paper was selected as a Spotlight at #NeurIPS2025! arxiv.org/pdf/2410.03972 It started from a question I kept running into: When do RNNs trained on the same task converge/diverge in their solutions? 🧵⬇️
- RNNs trained from different seeds on the same task can show strikingly different internal solutions, even when they perform equally well. We call this solution degeneracy.
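
A minimal sketch of how one might probe this, assuming a toy evidence-integration task and linear CKA as the similarity measure; neither is specified in the thread, and this is not the paper's actual setup:

```python
# Train two RNNs that differ ONLY in random seed, then compare their
# hidden-state representations. Low similarity despite matched accuracy
# is one operational signature of solution degeneracy.
import torch
import torch.nn as nn

def make_batch(batch=64, T=50):
    """Noisy 1-D evidence stream; target is the sign of the summed evidence."""
    drift = 0.02 * torch.sign(torch.randn(batch, 1, 1))
    x = 0.1 * torch.randn(batch, T, 1) + drift
    y = torch.sign(x.sum(dim=1))               # (batch, 1), in {-1, +1}
    return x, y

class Net(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.RNN(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        h, _ = self.rnn(x)                      # (batch, T, hidden)
        return torch.tanh(self.out(h[:, -1])), h

def train(seed, steps=500):
    torch.manual_seed(seed)                     # the only thing that differs
    net = Net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        x, y = make_batch()
        pred, _ = net(x)
        loss = ((pred - y) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return net

def linear_cka(A, B):
    """Linear CKA between two (samples, units) activation matrices."""
    A = A - A.mean(0); B = B - B.mean(0)
    num = (A.T @ B).norm() ** 2
    return (num / ((A.T @ A).norm() * (B.T @ B).norm())).item()

net_a, net_b = train(seed=0), train(seed=1)
x, _ = make_batch(batch=256)
with torch.no_grad():
    _, ha = net_a(x)
    _, hb = net_b(x)
print("hidden-state CKA:", linear_cka(ha.reshape(-1, 64), hb.reshape(-1, 64)))
```

Both networks can reach near-perfect accuracy while the CKA score stays well below 1, which is the degeneracy the post describes; the paper itself studies when and why trained RNNs converge to or diverge from one another.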