Wanna compare dynamics across neural data, RNNs, or dynamical systems? We got a fast and furious method🏎️
The 1st preprint of my PhD 🥳 fast dynamical similarity analysis (fastDSA):
📜: arxiv.org/abs/2511.22828
💻: github.com/CMC-lab/fast...
I’ll be at @cosynemeeting.bsky.social - happy to chat 😉
The original dynamical similarity analysis (DSA), developed by @neurostrow.bsky.social and Ila Fiete, is a powerful method for comparing trajectories of (nonlinear) neural dynamics between different datasets and models:
arxiv.org/abs/2306.10168
We made DSA up to 150 times faster 🤯 by introducing 3 new optimization objectives and solvers to speed up the DSA alignment step. Instead of enforcing exact orthogonality at every iteration, we use faster formulations that approximate or penalize the constraint.
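To make the "penalize instead of enforce orthogonality" idea concrete, here is a minimal sketch of aligning two linear dynamics operators with a soft orthogonality penalty. The loss, penalty weight, and plain Adam loop are illustrative assumptions, not the actual fastDSA objectives or solvers (see the paper/repo for those).

```python
# Minimal sketch: align operators A1, A2 by a matrix C that is encouraged
# (not forced) to be orthogonal via a soft penalty, instead of projecting
# back onto the orthogonal group at every iteration. Loss, lambda, and
# optimizer are illustrative assumptions, not the fastDSA solvers.
import torch

def soft_orthogonal_alignment(A1, A2, n_steps=2000, lam=10.0, lr=1e-2):
    d = A1.shape[0]
    # start near the identity with a small random perturbation
    C = (torch.eye(d) + 0.01 * torch.randn(d, d)).requires_grad_()
    I = torch.eye(d)
    opt = torch.optim.Adam([C], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        fit = torch.linalg.norm(A1 - C @ A2 @ C.T) ** 2   # alignment residual
        pen = lam * torch.linalg.norm(C.T @ C - I) ** 2   # soft orthogonality
        (fit + pen).backward()
        opt.step()
    with torch.no_grad():
        return torch.linalg.norm(A1 - C @ A2 @ C.T)

# toy check: two operators related by an exact rotation
d = 4
Q, _ = torch.linalg.qr(torch.randn(d, d))
A2 = 0.3 * torch.randn(d, d)
A1 = Q @ A2 @ Q.T
print(torch.linalg.norm(A1 - A2))            # unaligned difference
print(soft_orthogonal_alignment(A1, A2))     # typically much smaller
```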
Our method efficiently estimates the rank of delay embeddings of a dynamical system. For example, on Lorenz trajectories projected to higher dimensions, the estimated order matches the true latent rank and aligns with AIC/BIC baselines.
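For readers unfamiliar with the setup, here is a sketch of the object whose rank is being estimated: a delay (Hankel) embedding of Lorenz trajectories observed through a random higher-dimensional projection. The integrator settings, embedding length, and projection are illustrative assumptions; this is not the preprint's estimator (which also compares against AIC/BIC).

```python
# Build a delay embedding of Lorenz trajectories projected to 20 dimensions
# and inspect its singular-value spectrum; the effective order is read off
# from where the spectrum collapses.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

rng = np.random.default_rng(0)

# simulate the 3-D Lorenz system and project it into 20 observed dimensions
sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0.0, 50.0, 5000))
latent = sol.y.T                                  # (T, 3)
obs = latent @ rng.standard_normal((3, 20))       # (T, 20)

# delay embedding: stack n_delays time-shifted copies of the observations
n_delays = 10
T = obs.shape[0] - n_delays + 1
H = np.hstack([obs[i:i + T] for i in range(n_delays)])   # (T, 20 * n_delays)

# normalized singular-value spectrum of the (centered) embedding
s = np.linalg.svd(H - H.mean(axis=0), compute_uv=False)
print(np.round(s[:10] / s[0], 4))
```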
Under different forms of noise, we showed how well the rank estimate supports DMD reconstruction. Across noise levels, the method detects the rank at the knee point automatically, with no tuning.
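A toy sketch of the "rank at the knee, no tuning" idea: delay-embed a low-dimensional linear latent system observed in high dimensions under noise, then pick the rank where the singular-value spectrum has its knee. The latent system, noise level, and simple "farthest from the chord" knee rule below are illustrative assumptions, not the procedure used in the preprint.

```python
# Knee-based rank selection on a controlled example with known latent rank.
import numpy as np

rng = np.random.default_rng(0)

# 3-D latent rotation observed through a random 20-D projection + noise
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))    # orthogonal: no decay
L = rng.standard_normal((3, 20))
x, latent = rng.standard_normal(3), []
for _ in range(2000):
    x = A @ x
    latent.append(x)
latent = np.array(latent)                           # (T, 3), true rank = 3
obs = latent @ L + 0.05 * rng.standard_normal((2000, 20))

# delay embedding: stack n_delays time-shifted copies of the observations
n_delays = 5
T = obs.shape[0] - n_delays + 1
H = np.hstack([obs[i:i + T] for i in range(n_delays)])

# knee = spectrum point farthest below the chord joining its two endpoints
log_s = np.log(np.linalg.svd(H, compute_uv=False))
chord = np.linspace(log_s[0], log_s[-1], len(log_s))
rank = int(np.argmax(chord - log_s))
print(rank)   # expected to recover the true latent rank (3) in this toy setup
```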
We first tested whether fastDSA is invariant to purely geometric deformations—changes that preserve the same underlying dynamics and attractor topology. All 3 fastDSA variants are faithful to dynamics and remain stable across geometric deformations, while being computationally more efficient.
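One simple way to see why dynamics-level comparisons shrug off (linear) geometric deformations: applying an invertible linear map M to the data turns the fitted linear operator A into M A M⁻¹, which has the same eigenvalues. The toy check below (a linear system and a DMD-style least-squares fit) is my own illustration of this property, not the deformation battery used in the preprint.

```python
# Eigenvalues of a least-squares (DMD-style) operator fit are unchanged
# under an invertible linear deformation of the data.
import numpy as np

rng = np.random.default_rng(1)

# simulate a stable 4-D linear system x_{t+1} = A_true x_t + small noise
A_true = 0.9 * np.linalg.qr(rng.standard_normal((4, 4)))[0]
X = [rng.standard_normal(4)]
for _ in range(500):
    X.append(A_true @ X[-1] + 0.01 * rng.standard_normal(4))
X = np.array(X)                      # (T+1, 4)

def fit_operator(data):
    """Least-squares fit of data[t+1] ~= A @ data[t]."""
    past, future = data[:-1], data[1:]
    return np.linalg.lstsq(past, future, rcond=None)[0].T

# deform the trajectory with a random invertible linear map M
M = rng.standard_normal((4, 4))
A_hat = fit_operator(X)
A_def = fit_operator(X @ M.T)        # deformed data y_t = M x_t

print(np.sort_complex(np.linalg.eigvals(A_hat)))
print(np.sort_complex(np.linalg.eigvals(A_def)))   # same spectrum, up to numerical error
```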
Next, we tested sensitivity to dynamical change by morphing a ring attractor into a line attractor (same model). fastDSA distances jump at the ring↔line transition, capturing the topology change (unlike Procrustes), while being way faster than DSA.
Under strong noise, we repeated the transformation tests, now also including kernelDMD + Wasserstein distance (kwDSA). kwDSA highlights a key pitfall: relying mainly on eigenvalues (and ignoring eigenvectors) can miss fine dynamical differences. The fastDSA variants remain sensitive and perform well even at high noise.
We also built 2 simple nonlinear systems (A, B) with identical eigenvalues but different eigenvectors. As expected, the Wasserstein-based kwDSA struggles to separate them, while all 3 fastDSA variants reliably distinguish A from B (visualized with MDS).
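For intuition, here is a linear toy version of this failure mode (my own construction, not the systems A/B from the preprint): two operators share a spectrum, so any eigenvalue-only distance is zero, yet no orthogonal change of basis can map one onto the other, because orthogonal alignment preserves normality.

```python
# Same eigenvalues, different eigenvectors: invisible to spectrum-only
# comparisons, but not to alignment-based ones.
import numpy as np

rng = np.random.default_rng(2)
eigvals = np.array([0.9, 0.7, 0.5])            # shared spectrum

# system A: orthogonal eigenvectors (a normal operator)
V_a, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = V_a @ np.diag(eigvals) @ V_a.T

# system B: skewed (non-orthogonal) eigenvectors, same eigenvalues
V_b = np.eye(3) + 0.8 * np.triu(rng.standard_normal((3, 3)), k=1)
B = V_b @ np.diag(eigvals) @ np.linalg.inv(V_b)

# identical spectra -> any eigenvalue-only distance is zero
print(np.sort(np.linalg.eigvals(A).real), np.sort(np.linalg.eigvals(B).real))

# orthogonal changes of basis preserve normality (A A^T - A^T A = 0),
# so the non-normal B cannot be an orthogonally aligned copy of A
print(np.linalg.norm(A @ A.T - A.T @ A))        # ~0: A is normal
print(np.linalg.norm(B @ B.T - B.T @ B))        # > 0: B is not
```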
Related: @neurostrow.bsky.social's thread with @wtredman.bsky.social & Igor Mezic on extending dynamical-similarity ideas, an exciting direction for future DSA-style methods:
bsky.app/profile/neur...
There are also other fantastic tools around for comparing circuits/brains/models, like RSA developed by Nikolaus Kriegeskorte and many others (e.g., see the great work www.biorxiv.org/content/10.1... by @jbarbosa.org and @itsneuronal.bsky.social).
This work couldn’t have happened without my wonderful collaborators: @neurostrow.bsky.social and Ila Fiete (masterminds behind the original DSA), @mmdtaha.bsky.social, Christian Beste, and @neuroprinciplist.bsky.social; and support from the @cmc-lab.bsky.social.
Jan 8, 2026 16:08