Christina Sartzetaki
PhD candidate @ UvA 🇳🇱, ELLIS 🇪🇺 | {video, neuro, cognitive}-AI
Neural networks 🤖 and brains 🧠 watching videos
🔗 sites.google.com/view/csartzetaki/
- 🚨🚨📄 Check out our new preprint! Our results reveal novel insights into how continuous visual input is integrated in the human brain💡, beyond the standard temporal processing hierarchy from low- to high-level representations
- 📢 New preprint, together with @sargechris.bsky.social! Building on @sargechris.bsky.social's previous work, we benchmark 100+ image and video models 🤖 on brain representational alignment, this time to EEG data of humans 🧠 watching videos! 🧵⬇️ www.biorxiv.org/content/10.1...
- Excited to be presenting this paper at #ICLR2025 this week! Come to the poster if you want to know more about how human brains and DNNs process video 🧠🤖 📆 Sat 26 Apr, 10:00-12:30 - Poster session 5 (#64) 📄 openreview.net/pdf?id=LM4PY... 🌐 sergeantchris.github.io/hundred_mode...
- Reposted by Christina Sartzetaki: New preprint (#neuroscience #deeplearning doi.org/10.1101/2025...)! We trained 20 DCNNs on 941,235 images with varying scene segmentation (original, object-only, silhouette, background-only). Despite object recognition accuracy varying (27-53%), all networks showed similar EEG prediction performance.
- Reposted by Christina Sartzetaki: ✨ The VIS Lab at the #University of #Amsterdam is proud and excited to announce it has #TWELVE papers 🚀 accepted for the leading #AI-#makers conference on representation learning (#ICLR2025) in Singapore 🇸🇬. 1/n 👇👇👇 @ellisamsterdam.bsky.social
- Excited to announce that this has been accepted at ICLR 2025!
- Reposted by Christina Sartzetaki(1/4) The Algonauts Project 2025 challenge is now live! Participate and build computational models that best predict how the human brain responds to multimodal movies! Submission deadline: 13th of July. #algonauts2025 #NeuroAI #CompNeuro #neuroscience #AI algonautsproject.com
- 📢 New preprint! We benchmark 99 image and video models 🤖 on brain representational alignment to fMRI data of 10 humans 🧠 watching videos! Here’s a quick breakdown:🧵⬇️ www.biorxiv.org/content/10.1...
- 1/ Humans are very efficient at processing continuous visual input, while neural networks trained to process videos are still not up to that standard. What can we learn from comparing the internal representations of the two systems (biological and artificial)?
- 2/ We take a step in this direction by performing a large-scale benchmarking of models on their representational alignment to the recently released BOLD Moments Dataset of fMRI recordings from humans watching videos. (A sketch of how such alignment is typically computed follows after the thread.)
- 9/ This is our first research output in this interesting new direction and I'm actively working on this - so stay tuned for updates and follow-up works! Feel free to discuss your ideas and opinions with me ⬇️
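For context on the thread above, here is a minimal sketch of one common way representational alignment between a model and fMRI data is measured: representational similarity analysis (RSA) with a Spearman rank correlation between dissimilarity matrices. The array names and shapes (`model_features`, `fmri_responses`) are made up for illustration; this is a generic example, not the exact pipeline used in the preprint, which may instead rely on encoding models or other alignment metrics.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses: np.ndarray) -> np.ndarray:
    """Condensed representational dissimilarity matrix:
    pairwise (1 - Pearson r) between rows (one row per video)."""
    return pdist(responses, metric="correlation")

# Hypothetical data: 100 videos, 512 model features, 2000 voxels in one ROI.
rng = np.random.default_rng(0)
model_features = rng.standard_normal((100, 512))   # e.g. activations from one model layer
fmri_responses = rng.standard_normal((100, 2000))  # e.g. voxel responses to the same videos

# Alignment score = rank correlation between the two dissimilarity structures.
alignment, _ = spearmanr(rdm(model_features), rdm(fmri_responses))
print(f"representational alignment (Spearman rho): {alignment:.3f}")
```

In a real benchmark, `model_features` would be extracted per layer for each of the evaluated models and `fmri_responses` taken per subject and brain region, with noise-ceiling normalization applied before comparing models; those details are omitted here.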