- Last week, Zac Spalding (@zspald.bsky.social, 4th-year PhD student, @dukeubme.bsky.social) presented the 2025 paper by Adam Gosztolai, Robert Peach, and colleagues on MARBLE, a method for finding interpretable latent representations of neural dynamics.
- They find that MARBLE successfully decomposes complex dynamical activity from spike trains into informative and easily decodable latent representations. This 🧵 explores our thoughts (🤍 & ❔). www.nature.com/articles/s41...
- 🤍1️⃣: The initial proximity graph is a clever way to define distances and neighborhoods between inputs for use in downstream training.
- 🤍2️⃣: The rotation invariance is important and likely useful for extracting shared latent representations from systems with minor differences.
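For intuition on what such a proximity graph provides, here is a minimal sketch (an assumed workflow, not the authors' code) of building a k-nearest-neighbor graph over points sampled from reduced trajectories; the array shapes and the choice of k are placeholders.

```python
# Minimal sketch: a k-NN proximity graph over sampled phase-space points,
# in the spirit of MARBLE's preprocessing step (not the paper's implementation).
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
states = rng.normal(size=(500, 3))  # placeholder for sampled trajectory points

# Sparse adjacency: each node connects to its k nearest neighbors, so
# "neighborhood" and "distance" are defined locally on the data manifold.
k = 10
proximity_graph = kneighbors_graph(states, n_neighbors=k, mode="distance")
print(proximity_graph.shape, proximity_graph.nnz)
```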
- 🤍3️⃣: The comparisons to state-of-the-art latent dynamical systems models are great for properly contextualizing the performance of MARBLE.
- ❔1️⃣: The paper states that non-neighbors (both within and across manifolds) are treated as negative samples (mapped far apart) during the contrastive-learning step. Does treating non-neighbors within and across manifolds as similarly "distant" make larger distances in the latent space less interpretable?
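To make the concern concrete, here is a hedged sketch of a generic margin-based contrastive loss (not MARBLE's actual objective): neighbors are pulled together, while every non-neighbor, same manifold or not, is only pushed past a fixed margin, so distances beyond the margin receive no gradient and are not calibrated.

```python
# Sketch of a generic contrastive loss where graph neighbors are positives and
# all non-neighbors (within or across manifolds) are negatives. Illustrative only.
import torch
import torch.nn.functional as F

def contrastive_loss(z, pos_pairs, neg_pairs, margin=1.0):
    """z: (n, d) embeddings; pos_pairs / neg_pairs: (m, 2) index tensors."""
    d_pos = F.pairwise_distance(z[pos_pairs[:, 0]], z[pos_pairs[:, 1]])
    d_neg = F.pairwise_distance(z[neg_pairs[:, 0]], z[neg_pairs[:, 1]])
    # Positives are pulled together; negatives are only pushed past the margin,
    # so any distance larger than the margin is unconstrained by the loss.
    return d_pos.pow(2).mean() + F.relu(margin - d_neg).pow(2).mean()

z = torch.randn(100, 8, requires_grad=True)
pos = torch.randint(0, 100, (64, 2))
neg = torch.randint(0, 100, (64, 2))
loss = contrastive_loss(z, pos, neg)
loss.backward()
```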
- ❔2️⃣: It seems that a linear transformation between MARBLE representations of different animals was necessary because the same information is present in each latent space but not necessarily in the same ordering... (cont'd)
- If separate animals were instead treated as separate manifolds with an embedding-agnostic MARBLE, would you still expect an informative latent space to be learned without any post-hoc alignment?
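As a concrete illustration of the kind of post-hoc alignment being asked about, a least-squares linear map between two animals' latents can be fit as below; the variable names, shapes, and noise level are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: fit a linear map from animal A's latents onto animal B's
# (matched conditions) by least squares. Shapes and noise are illustrative.
import numpy as np

rng = np.random.default_rng(1)
latents_a = rng.normal(size=(1000, 5))                                  # animal A
true_map = rng.normal(size=(5, 5))
latents_b = latents_a @ true_map + 0.05 * rng.normal(size=(1000, 5))    # animal B

# Solve for W such that latents_a @ W ≈ latents_b.
W, *_ = np.linalg.lstsq(latents_a, latents_b, rcond=None)
residual = np.linalg.norm(latents_a @ W - latents_b) / np.linalg.norm(latents_b)
print(f"relative alignment error: {residual:.3f}")
```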
- ❔3️⃣: In Figs. 4 and 5, do you obtain similar results if you operate directly on the spike trains instead of on the PCA-reduced spike trains? Why is PCA necessary first? Thank you to the authors for your work! cc: Alexis Arnaudon, Mauricio Barahona, Pierre Vandergheynst
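For reference, a typical version of the preprocessing being questioned, smoothing binned spike counts into rates and reducing them with PCA, might look like the sketch below; the bin count, smoothing width, and number of components are assumed values, not the paper's settings.

```python
# Hedged sketch: Gaussian-smooth binned spike counts into firing rates, then
# project onto a few principal components before fitting a latent model.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
spike_counts = rng.poisson(lam=0.5, size=(2000, 120))   # time bins x neurons

# Smooth each neuron's spike counts over time to estimate firing rates.
rates = gaussian_filter1d(spike_counts.astype(float), sigma=3, axis=0)

# Reduce the high-dimensional rates to a handful of principal components.
pca = PCA(n_components=5)
reduced = pca.fit_transform(rates)
print(reduced.shape, pca.explained_variance_ratio_.round(2))
```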