Our first preprint has been accepted for publication: www.science.org/doi/10.1126/... !!
TL;DR: @ezeyulu00.bsky.social, @amartyapradhan.bsky.social, @dkoveal.bsky.social and I developed a method using injectable nanoparticles to turn mice into…constellations in motion. 🧵⤵️

High-resolution in vivo kinematic tracking with customized injectable fluorescent nanoparticles
Injectable fluorescent nanoparticles were used to track positions on and inside of freely moving animals at high resolution.
If we want to understand movement, we have to track it. In people, we can look like Andy Serkis and wear a Styrofoam-ball-bedazzled swimsuit.

Unlike Mr. Serkis, mice don’t like the suit, and they don’t like anything being attached to their skin. And even if you did manage to get something onto/into the skin, they have lots of fatty tissue, which makes their gelatinous bodies hard to track from the “outside-in”.
Oct 2, 2025 19:45
Markerless trackers like DLC, SLEAP, DANNCE, Lightning Pose, and others have totally revolutionized tracking movement, since they don't require the mice to wear anything. But their precision is generally not as good as marker-based methods.
With the first person in the lab, @ezeyulu00.bsky.social, we came up with the idea of injecting near-infrared-fluorescent nanoparticles to make little orbs of light inside the body that we could see from the outside.
We hit our first hurdle: standard particle mixtures diffused away within 48 hours. So we formulated alternatives, and now we can watch the insides of mice glow for months as they move freely! We call the end result Quantum Dot-based Pose estimation in vivo, or…QD-Pi (pun intended).
We successfully targeted the fatty tissue beneath the skin and even the knee joint! Both were readily resolvable with off-the-shelf cameras. If you're handy, you can pluck the IR filter out of your camera and try it yourself.
We figured out another cool trick. Markerless trackers require hand-labeling hundreds of frames (click, click, rinse & repeat 500 times). We worked out a machine-learning pipeline that takes data in which 10 body parts are fluorescently tagged and labels every frame automatically, with no human intervention.
We built a dataset with roughly 100,000 automatically labeled frames. Since the labels are localized through fluorescence, there's no labeling ambiguity! It turns out that with this many frames, you can train tools like SLEAP and DLC to label body parts with close to sub-mm precision.
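The core idea behind fluorescence-based auto-labeling can be sketched in a few lines (a minimal illustration, not the paper's actual pipeline — the function name, thresholds, and blob-size cutoff here are all assumptions): threshold each near-IR frame to isolate the bright fluorescent spots, then use the blob centroids as keypoint labels.

```python
import numpy as np
from scipy import ndimage

def auto_label_frame(nir_frame, threshold=0.8, min_area=5):
    """Detect fluorescent spots in a near-IR frame and return their
    centroids as (row, col) keypoint labels.

    Hypothetical sketch: a real pipeline must also handle occlusion,
    overlapping spots, and keypoint identity across frames.
    """
    # Keep only pixels above a fraction of the frame's peak brightness.
    mask = nir_frame > threshold * nir_frame.max()
    # Group bright pixels into connected blobs.
    labels, n = ndimage.label(mask)
    # Discard blobs too small to be real particles (likely noise).
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
    # Each surviving blob's centroid becomes one keypoint label.
    return ndimage.center_of_mass(mask, labels, keep)

# Toy example: one bright 3x3 spot on a dark background.
frame = np.zeros((32, 32))
frame[10:13, 20:23] = 1.0
print(auto_label_frame(frame))  # centroid near (11.0, 21.0)
```

Because the label comes from the physics of the fluorescent marker rather than a human clicking on a pixel, every frame in the training set is localized the same way — which is what removes the labeling ambiguity mentioned above.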
We’re pumped about the future of this technique and are excited to share it with the community. There’s still a lot to optimize, but we’d like to optimize it with you! Seriously, reach out to us, we're nice!
This work was driven by the singular vision and dedication of Zeynep Ulutas, @ezeyulu00.bsky.social. We could not have finished this without @amartyapradhan.bsky.social. @dkoveal.bsky.social provided key guidance on chemistry.