Iris Groen
Associate Professor @UvA_Amsterdam | Cognitive neuroscience, Scene perception, Computational vision | Chair of CCN2025 | www.irisgroen.com
- I have a PhD opening for my #VIDI BrainShorts project 📽️🧠🤖! Are you, or do you know, an ambitious recent (or soon-to-be) MSc graduate with a background in NeuroAI and an interest in large-scale data collection and video perception? Check out our vacancy (deadline Feb 15)! werkenbij.uva.nl/en/vacancies...
- Check this out - yesterday I was on Dutch national television to discuss the intersection between #AI and the #brain, how we are addressing the fundamental questions there @uva.nl, and how we plan to tackle video understanding in my #NWO #VIDI project next! 🧠🤖📽️ npo.nl/start/afspel...
- Starting fall 2026, I'll be an assistant professor at @upenn.edu 🥳 My lab will develop scalable models/theories of human behavior, focused on memory and perception. Currently recruiting PhD students in psychology, neuroscience, & computer science! Reach out if you're interested 😊
- Great news, congratulations and good luck!!
- On the train to Brussels for the biannual NeuroCog meeting, whose theme this edition is “AI and the brain” (neurocog.be). Prepping my talk and looking forward to learning about exciting new research and immersing myself in some good discussions over the next two days!
- 📣 New preprint by a stellar team 🤩 I’m most excited by “phase III” in the alignment time course, which is best captured by mid-layers of temporally integrating video models! While we do not directly compare with image-EEG (yet - will do so in the #VIDI) I suspect this is unique to video vision 🎥🔥
- 📢 New preprint, together with @sargechris.bsky.social! Building on @sargechris.bsky.social's previous work, we benchmark 100+ image and video models 🤖 on brain representational alignment, this time to EEG data of humans 🧠 watching videos! 🧵⬇️ www.biorxiv.org/content/10.1...
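Aside for readers new to these benchmarks: "representational alignment" is often quantified with representational similarity analysis (RSA), comparing the pairwise-distance structure of model features and brain responses across stimuli. A minimal sketch with synthetic data, assuming an RSA-style metric rather than the preprint's exact estimator:

```python
# Hedged sketch of one common representational-alignment metric (RSA);
# the preprint may use a different estimator. All data here are synthetic,
# so the resulting alignment will hover near zero.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_videos, n_units, n_sensors = 60, 256, 64

model_feats = rng.standard_normal((n_videos, n_units))     # DNN activations per video
eeg_patterns = rng.standard_normal((n_videos, n_sensors))  # EEG pattern at one time point

# Representational dissimilarity matrices: pairwise distances across stimuli
rdm_model = pdist(model_feats, metric="correlation")
rdm_brain = pdist(eeg_patterns, metric="correlation")

# Alignment = rank correlation between the two RDMs' upper triangles
rho, _ = spearmanr(rdm_model, rdm_brain)
print(f"model-brain RSA alignment: rho = {rho:.3f}")
```

In a real benchmark one would compute the brain RDM per time point and repeat this comparison across all models, but the core operation looks like this.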
- Honoured to receive an NWO #VIDI grant to study 'the TikTok Brain': representational alignment of brains and deep nets for #video! I'm hugely thankful to the anonymous reviewers for their enthusiasm and for the opportunity to grow and sustain my lab. Job ads coming soon! www.nwo.nl/en/news/149-...
- Our exciting new preprint emphasizing the role of texture/local statistics in brain-DNN alignment (for EEG), online now!
- 🧠 New preprint: Why do deep neural networks predict brain responses so well? We find a striking dissociation: it’s not shared object recognition. Alignment is driven by sensitivity to texture-like local statistics. 📊 Study: n=57, 624k trials, 5 models doi.org/10.1101/2025...
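How such brain-response prediction typically works: fit a regularized linear encoding model from DNN features to measured responses, then score predictions on held-out stimuli. A minimal sketch on synthetic data (ridge regression; not the paper's actual pipeline):

```python
# Minimal encoding-model sketch: predict neural responses from DNN features
# with ridge regression, the standard way DNN-brain predictivity is scored.
# Synthetic data throughout; a real analysis would cross-validate the penalty.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_feat, n_chan = 400, 100, 512, 64

# Hypothetical DNN activations per image and responses per channel
X = rng.standard_normal((n_train + n_test, n_feat))
W_true = rng.standard_normal((n_feat, n_chan)) * 0.1
Y = X @ W_true + rng.standard_normal((n_train + n_test, n_chan))

X_tr, X_te, Y_tr, Y_te = X[:n_train], X[n_train:], Y[:n_train], Y[n_train:]

# Ridge regression, closed form: W = (X'X + lam*I)^-1 X'Y
lam = 10.0
W = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(n_feat), X_tr.T @ Y_tr)
Y_hat = X_te @ W

# Alignment score: mean Pearson r between predicted and observed responses
r = [np.corrcoef(Y_hat[:, c], Y_te[:, c])[0, 1] for c in range(n_chan)]
print(f"mean encoding accuracy r = {np.mean(r):.3f}")
```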
- Honored & looking forward to giving this lecture tomorrow @ucl-neuroai.bsky.social!
- 🧠✨ How closely do AI models mirror the human brain? Join us for @irisgroen.bsky.social’s talk: “Alignment of visual representations in AI and human brains: beyond object recognition” 📅 8th September @unireps.bsky.social x @ucl-neuroai.bsky.social meetup 🔗 Zoom link: ethz.zoom.us/j/66426188160 1/4
- Somewhat exhausted but very happily and proudly looking back at #CCN2025 & ready to pass the torch to NEW Amsterdam for #CCN2026 🥳 Please fill out the survey 👇 to give input to the organizing team led by @neurograce.bsky.social and @toddgureckis.bsky.social @NYU!
- The rumors are true! #CCN2026 will be held at NYU. @toddgureckis.bsky.social and I will be executive-chairing. Get in touch if you want to be involved!
- I’ve had 40 birthdays so far, but this is the first one chairing a major international conference 👩‍💻! Thanks to the nearly 1000 attendees of #CCN2025 for coming to my birthday this year 😉
- You’re welcome to do a surprise act Sush 😄
- #CCN2025, we’re ready for you!
- After preparing for a full year together with @neurosteven.bsky.social and all other amazing organizers of @cogcompneuro.bsky.social, #CCN2025 is finally here! While I'm proud of the entire program we put together, I'd now like to highlight my own lab's contributions, 6 posters total:
- Finally, Otto Márton, an MSc student in the lab, shows small but consistent benefits of DNNs with hyperbolic geometry for capturing human representations of objects in behavior (using THINGS) and the brain (THINGS-EEG and NSD) 2025.ccneuro.org/poster/?id=b... Poster C109, 14:00-17:00.
- Altogether a diverse set of studies, grouped around the central question of how human-aligned deep neural networks are, and how we can use them to learn more about the brain! Looking forward to discussing, to hearing your thoughts, and to an exciting and immersive week full of science ahead 😀🧠🤖🧑‍💻📈📊🥳
- Next, on Wednesday, another Proceedings paper by Amber Brands, showing that PredNet, a well-known predictive coding DNN, does not exhibit signatures of short-term adaptation that are ubiquitous in the brain, such as repetition suppression 2025.ccneuro.org/poster/?id=y... Poster B131, 13:00-16:00
- On Friday, @niklasmuller.bsky.social shows that estimating population receptive fields (pRFs) using DNN feature maps, but without assuming a Gaussian pRF shape, yields better predictions of THINGS ephys data, uncovering surprising pRF geometries! 2025.ccneuro.org/poster/?id=1... Poster C105, 14:00-17:00
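The general idea, sketched on toy data (an illustration of non-parametric pRF estimation, not the poster's code): regress a unit's responses onto every feature-map location and read the resulting weight map off as the pRF, with no Gaussian shape imposed:

```python
# Hedged sketch of non-parametric pRF estimation from a DNN feature map.
# Everything here is synthetic and hypothetical, including the grid size.
import numpy as np

rng = np.random.default_rng(1)
n_imgs, H, W = 500, 14, 14             # images x feature-map grid

# Hypothetical spatially resolved DNN activations per image (one channel)
feat = rng.standard_normal((n_imgs, H * W))

# Simulate a neuron whose true pRF is ring-shaped, i.e. clearly non-Gaussian
yy, xx = np.mgrid[0:H, 0:W]
ring = np.exp(-((np.hypot(yy - 7, xx - 4) - 3) ** 2))
resp = feat @ ring.ravel() + 0.5 * rng.standard_normal(n_imgs)

# Ridge-regress responses onto feature-map locations: the weight vector,
# reshaped to HxW, is the estimated pRF with no shape assumption at all
lam = 1.0
w = np.linalg.solve(feat.T @ feat + lam * np.eye(H * W), feat.T @ resp)
prf = w.reshape(H, W)
print("peak of recovered pRF:", np.unravel_index(prf.argmax(), prf.shape))
```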
- And in a third Tuesday poster, Clemens Bartnik presents very neat EEG results complementing his recent PNAS paper that used fMRI to demonstrate unique representations of locomotive affordance perception in the human brain www.pnas.org/doi/suppl/10...
- In the EEG study, accepted as Proceedings, we replicate these findings in the temporal domain, showing unique processing of locomotive affordances around 200 ms, which is independent of object or GIST features, and not well captured by DNNs 2025.ccneuro.org/poster/?id=6... Poster A69, 13:30-16:30
- On her CCN poster, Christina zooms in on a specific set of multi-pathway video-DNNs that separately compute motion features and image features, to explore the alignment of static vs. dynamic representations with cortical processing streams 2025.ccneuro.org/poster/?id=s... Poster A154, 13:30-16:30
- Also on Tuesday, @annewzonneveld.bsky.social reports whether video-DNNs exhibit temporal straightening, a computational motif found in brains that is thought to aid future-state prediction. Spoiler: some CNNs straighten, Transformers do not! 2025.ccneuro.org/poster/?id=E... Poster A152, 13:30-16:30
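Temporal straightening is typically measured as the average curvature of the trajectory that a video's frame representations trace out, i.e. the mean angle between successive difference vectors (following Hénaff and colleagues' definition). A toy sketch, not the poster's code:

```python
# Sketch of the curvature metric behind "temporal straightening": the mean
# angle between successive difference vectors of per-frame representations.
# Toy trajectories only; lower curvature = straighter.
import numpy as np

def mean_curvature_deg(traj):
    """traj: (n_frames, dim) array of per-frame representations."""
    diffs = np.diff(traj, axis=0)
    diffs /= np.linalg.norm(diffs, axis=1, keepdims=True)
    cosines = np.clip(np.sum(diffs[:-1] * diffs[1:], axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosines)).mean()

rng = np.random.default_rng(2)
wiggly = rng.standard_normal((11, 128))                # pixel-like trajectory
straight = np.outer(np.linspace(0, 1, 11), rng.standard_normal(128))

print(f"random trajectory:   {mean_curvature_deg(wiggly):.1f} deg")
print(f"straight trajectory: {mean_curvature_deg(straight):.1f} deg (~0)")
```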
- On Tuesday, @sargechris.bsky.social will present a follow-up on her earlier ICLR paper (openreview.net/pdf?id=LM4PY...), where we performed large-scale benchmarking of video-DNNs against the BOLD Moments fMRI dataset, to see how well such models are representationally aligned with the human brain;
- Less than 2 weeks until CCN 2025 in Amsterdam! Here's everything you need to know to prepare for the 8th Cognitive Computational Neuroscience conference, August 12-15 at University of Amsterdam 🧠
- Or, blend in with the Amsterdam locals and rent a bicycle for the duration of the conference 🚲😀
- I got to talk on Oog op Morgen about a new study on the influence of ChatGPT use on our brain. The study is still very preliminary, but it is important that attention is being paid to how AI affects learning and thinking ability. www.nporadio1.nl/fragmenten/n...
- Good advice from @laurensvhg.bsky.social www.volkskrant.nl/beter-leven/...
- In these tumultuous times, still happy to report a scientific achievement: our paper on affordance perception was just published in PNAS! www.pnas.org/doi/10.1073/... Using behavior, fMRI, and deep network analyses, we report two key findings. To recapitulate (the original preprint 🧵 got lost on the other place):
- thanks!
- No, but I will now :) He's not on Bluesky though, I think? (I left X a while back.) But I will go and read some of his papers!
- In our view, affordances or functions form an interesting alternative characterisation of scenes relative to the more classic object-based or global-property-based ones, as we also outline here: doi.org/10.1093/acre... and we just wanted to see if we could find this in the brain!
- Looking at the list of researchers I think the Simons Foundation project may be a bit more animal-research oriented? In general, I'm all for more ecologically-motivated/affordance-oriented research across the board, without any gatekeeping or overly fundamentalist views 😉 (yes I did see that thread)
- Thanks, and no, it is not! It follows a line of cogneuro research by me and others, started a while back, on what factors influence scene categorization and drive patterns of neural activity in scene-selective cortex, e.g. elifesciences.org/articles/32962
- 🚨Paper alert!🚨 TL;DR first: We used a pre-trained deep neural network to model fMRI data and to generate images predicted to elicit a large response for each of many different parts of the brain. We aggregate these into an awesome interactive brain viewer: piecesofmind.psyc.unr.edu/activation_m...
- Nice work Mark, and cool interactive viewer! We recently did something similar, using image diffusion and CLIP. We predicted activations for generated images with separate encoders, but didn't validate with new fMRI recordings yet - trying to get that funded now! openreview.net/pdf?id=CGON8...
- Preprint here: www.biorxiv.org/content/10.1...
- And some concise writeups from UvA’s Press Office: Eng: www.uva.nl/en/content/n... Dutch: www.uva.nl/content/nieu...
- Can we enhance DNN-human alignment for affordances? We tried three things: direct supervision with affordance labels, linguistic representations via captions, and probing a multi-modal LLM (ChatGPT!). While we saw improvements, none of these perfectly captured locomotive affordance representations.
- So, our paper shows that 1) there are neural correlates of locomotive affordance perception in visual cortex and 2) these are not trivially explained by other scene properties or deep network features! Open data link here in case you’d like to try your own favourite model: osf.io/v3rcq/
- We tested a whole bunch of models, and it turns out that all models showed lower alignment with affordances than objects. This was true for models on various tasks – not just classic object or scene recognition, but also contrastive learning with text, self-supervised tasks, video, etc.
- Moreover, the models showed quite poor alignment with the fMRI patterns we had measured for these scenes. And the unique affordance-related variance was not ‘explained away’ by the best-aligned DNN. Our second key finding!
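For anyone trying their own model on the open data: the "unique variance" logic is standard variance partitioning, i.e. the R² the full model gains over a model with every predictor except affordances. A minimal sketch with synthetic stand-ins for the feature sets (not the actual analysis code):

```python
# Variance-partitioning sketch: unique affordance variance = R^2(full model)
# minus R^2(model without the affordance predictors). All matrices below are
# synthetic, hypothetical stand-ins for the real feature sets.
import numpy as np

def r2(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(3)
n = 300
objects = rng.standard_normal((n, 20))   # e.g., object-label features
gist = rng.standard_normal((n, 10))      # global scene properties
afford = rng.standard_normal((n, 6))     # locomotive affordance ratings

# Simulated brain pattern that genuinely carries affordance information
y = objects[:, 0] + gist[:, 0] + 2 * afford[:, 0] + rng.standard_normal(n)

full = np.column_stack([objects, gist, afford])
reduced = np.column_stack([objects, gist])
unique_affordance = r2(full, y) - r2(reduced, y)
print(f"unique affordance R^2 = {unique_affordance:.3f}")
```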
- Then, we measured fMRI and found that brain activity patterns in scene-selective visual regions also contained ‘unique’ variance reflecting locomotive affordance information only, demonstrating a ‘neural reality of affordances’ (the PNAS Editor’s words!) – our first key finding!
- Second, we asked about deep nets – we all know they are good at labeling objects and scenes and also predict brain activity in visual cortex better than any model did before: oxfordre.com/neuroscience... But do they represent locomotive action affordances in scenes?
- This is a very easy task for humans: they respond in a split second, giving highly consistent answers. And these answers form a highly structured representational space, clearly separating different actions along meaningful dimensions, such as water-based vs. road-based activities.
- You might think: but this task is easy because you can just say ‘swimming’ if you see ‘water’, and ‘driving’ when you see ‘road’. But the answers were not trivially explained by such labels - almost 80% of the variation in the locomotive action ratings was unexplained by other scene properties!
- We studied affordances, a term introduced by Gibson (1979) to describe the idea that vision entails perceiving the action possibilities of environments. We wanted to know: can we find evidence that the human brain represents perceived affordances of scenes? www.taylorfrancis.com/books/mono/1...
- We showed participants images from real-world indoor and outdoor environments while they did a simple task: mark 6 ways (walking, driving, biking, swimming, boating, climbing) you could realistically move in this environment – i.e. indicate its locomotive action affordances.
- Interesting analogy!
- "As appealing as they can be, Large Language Models are as useful to scientific research as microwaves are to fine cuisine." doi.org/10.1177/0301...
- So excited that our excellence cluster 'The Adaptive Mind' got funded! 🥳 Looking forward to lots of great science and projects! #TheAdaptiveMind #ExcellenceInitiative 👁️🧠🤖
- Outstanding success in the #Exzellenzstrategie: no fewer than three #Exzellenzcluster for #JLUGiessen, with research on #heart and #lungs, #batteries, and #perception prevailing in the competition @cpi-exstra.bsky.social @heroldlab.bsky.social @dfg.de @wissrat.bsky.social www.uni-giessen.de/de/ueber-uns...
- Woohoo, the hard work paid off! Congratulations to all! 🥳
- Just a few months until Cognitive Computational Neuroscience comes to Amsterdam! Check out our now-complete schedule for #CCN2025, with descriptions of each of the Generative Adversarial Collaborations (GACs), Keynotes-and-Tutorials (K&Ts), Community Events, Keynote Speakers, and social activities!
- Good morning, Bsky'ers! Some people have asked for a life update, here it is: I finished my last chemotherapy yesterday! 🥳 It’s been a long ride, but I’m feeling happy, relieved, and incredibly grateful. I sort of always knew, but now more than ever: #cancersucks #sciencematters #sciencesaveslives
- Dear Katha, I'm inspired by your strength and optimism! Wishing you all the best!!! 🙏
- Having FOMO!!
- Nice paper! Enjoyed your talk at the Re^2 align workshop.
- Thank you! I’m a big fan of your blog!