Ben van Buren
Assistant Professor of Psychology at The New School in NYC studying visual perception and cognition: scholar.google.com/citations?hl=en&use…
- Reposted by Ben van Buren: When you look around, you see a world full of objects. But can you fully identify multiple objects at once? In this new study, we revived and updated a classic experimental test: the redundant target paradigm. link.growkudos.com/1elj6rh9o8w doi.org/10.1037/xhp0...
- Reposted by Ben van Buren: Congratulations to @lillianbehm.bsky.social, Nick Turk-Browne, and a huge team for putting together this paper (out today) on lessons from a decade of attempts to study awake infants with fMRI: onlinelibrary.wiley.com/doi/10.1111/...
- Reposted by Ben van Buren: I've got a new #visionscience video for you! Today I'm showing off how to see the hidden object in Magic Eye stereograms without worrying about getting your eyes lined up the right way. If you're interested in reading why this works... <1/2>
- Reposted by Ben van Buren: ❗New paper published in Cognition❗ "A common signal-strength factor limits awareness and precise knowledge of multiple moving objects across the adult lifespan" from Iris Wiegand, Igor S. Utochkin, Ava Mitra, Chia-Chien Wu, and Jeremy M. Wolfe authors.elsevier.com/sd/article/S...
- Reposted by Ben van Buren: This past weekend, I was excited to visit @museumofscience.bsky.social to start my work as a Science Communication fellow. Now that I'm back home, the #scicomm begins, starting with a new YouTube channel! Just published my 1st video & stay tuned for #visionscience content coming soon.
- Ran across this interesting research today (I forget what the original search was for). Wishing everyone many joyful screams in 2026. www.news.uzh.ch/en/articles/...
- Reposted by Ben van Buren: Many things in the world move, and can even move behind other things. When will the cat reappear? To predict this, remembering the cat’s speed will likely help. But... how do people remember something like speed, which is defined by displacement over both (🤯) space and time? TWEEPRINT ALERT! 🚨🧵1/n

- Reposted by Ben van Buren: What makes visual stimuli memorable? Wilma Bainbridge, @keisukefukuda.bsky.social, Lore Goetschalckx, and I investigate the role of processing fluency for memorability in a new review paper in Nature Reviews Psychology. Check it out! rdcu.be/eSyjz
- Reposted by Ben van Buren: The new Art & Perception issue with all VSAC2025 abstracts is out! Hosted in the stunning MRE Wiesbaden, VSAC brought 227 contributors, 20+ artists, and immersive art–science events together. Dive into the talks, posters & performances that made VSAC2025 unforgettable: brill.com/view/journal...
- Score 1 for Aesop's town mouse: link.springer.com/article/10.1...
- Interesting paper investigating posing biases in pope portraiture: www.tandfonline.com/doi/full/10....
- Reposted by Ben van Buren: Holy shit. This guy saved a PNG to a bird. (he drew a bird into a spectrogram, played that sound to a starling, and the starling reproduced it back to him with enough accuracy he got his bird drawing back in their call's spectrogram) www.youtube.com/watch?v=hCQC...
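A rough sketch of the trick described above, for the curious: an image can be "drawn" into a spectrogram by treating image rows as frequencies and image columns as short slices of time. The snippet below is a hypothetical NumPy illustration of that general idea, not the code from the video; the function name and parameter values are made up.

```python
# Hypothetical sketch: turn a grayscale image into audio whose spectrogram
# resembles the image (image rows -> frequencies, image columns -> time frames).
import numpy as np

def image_to_audio(image, sample_rate=44100, frame_dur=0.05,
                   f_min=500.0, f_max=8000.0):
    """image: 2D array with values in [0, 1]; row 0 is the top of the picture."""
    n_rows, n_cols = image.shape
    # Map the top row to the highest frequency so the picture appears upright
    # in a standard spectrogram (low frequencies plotted at the bottom).
    freqs = np.linspace(f_max, f_min, n_rows)
    n_samples = int(sample_rate * frame_dur)
    t = np.arange(n_samples) / sample_rate
    frames = []
    for col in range(n_cols):
        # Each image column becomes one time frame: a sum of sinusoids whose
        # amplitudes are the pixel brightnesses in that column.
        sines = np.sin(2 * np.pi * freqs[:, None] * t)        # (n_rows, n_samples)
        frames.append((image[:, col, None] * sines).sum(axis=0))
    audio = np.concatenate(frames)
    return audio / (np.abs(audio).max() + 1e-9)               # normalize to [-1, 1]
```

Writing the result to a WAV file (for instance with scipy.io.wavfile.write) and opening it in any spectrogram viewer should reveal a rough version of the original picture; whether a starling will sing it back is another matter.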
- Reposted by Ben van Buren: Another #VisionScience demo for your Halloween over at my teaching website. This time, a very basic version of the Gelb Staircase. This one has a reputation for being tricky, but I've found it to be pretty successful.
- Reposted by Ben van Buren: A quick webpage update to showcase my textbook PRACTICAL VISION SCIENCE! If you're thinking about options for your Spring course in #visionscience, check it out! sites.google.com/view/hands-o...
- Reposted by Ben van Buren: I'm sorry, worldwide, irrevocable, non-exclusive, transferable permission to my voice and likeness? For what now? In any manner for any purpose??? This is in Academia.edu's new ToS, which you're prompted to agree to on login. Anyway I'll be jumping ship. You can find my stuff at hcommons.org.
- Reposted by Ben van Buren: Advocating for vision research on Capitol Hill! With the National Alliance for Eye & Vision Research + Research to Prevent Blindness + fellow vision scientists. Thanks to Congress for currently planning to maintain NIH & NEI funding! #seewhatmatters @corticalcavanaugh.bsky.social www.rpbusa.org
- Tomorrow (Thurs) at 9am in HS19, Marina Pace will present a series of studies that she ran with Tessa Bury, to test whether observers perceive *implied motion* in accessibility icons. #ECVP2025
- Marina finds that these icons — especially the increasingly popular Accessible Icon Project icon — are strong directional cues. For example, they automatically orient spatial attention in the direction that they are facing.
- This 'sense of motion' may be desirable sometimes, but it could cause trouble in situations where the icon is meant to indicate that the *present* location is accessible. Come to Marina's talk to hear more reflections about her collaboration w/ Tessa linking vision science, design and public policy!
- Reposted by Ben van Buren: #ECVP2025 In addition to yesterday's Illusion Night, we would like to mention the contribution of @elinevg.bsky.social and @lisa-kossmann.bsky.social about Open-Source Toolboxes for aesthetics and perception.
- At 9:15 in HS19, @hongbnguyen.bsky.social will present her work demonstrating an advantage in predicting the behavior of simple shapes which look alive and goal-directed! #ECVP
- Maya Theresia Laughton presenting a beautiful film weaving together ideas from science and Hindu spirituality, called ‘Noumenon’. She made the film in collaboration with Janna Kyllästinen, as part of the Science New Wave Festival. #VSAC2025
- Maarten Leemans using image aesthetic mapping to explore the impact of details on the visual inspection of art photographs and paintings. #VSAC2025
- Dirk Walther presenting his lab’s work demonstrating an inverse relationship between the metabolic cost of visual processing and aesthetic judgments. #VSAC2025
- Branka Spehar discussing individual differences in how image features drive aesthetic preferences. #VSAC2025
- Anna Miscenà discussing the use of connoisseurs to determine the provenance of a painting. #VSAC2025
- VSAC is a teen! Claus-Christian Carbon getting ready to give a presentation surveying VSAC’s first 13 years. #VSAC2025
- Johan Wagemans gives the Visual Science of Art Conference a taste of the latest results from his lab’s impressive Gestalts Relate Aesthetic Preferences to Perceptual Analysis (GRAPPA) project. #VSAC2025 @gestaltrevision.bsky.social
- Hong Nguyen @hongbnguyen.bsky.social speaking about perceptual animacy and predictive processing at the Visual Science of Art Conference. #VSAC2025
- @dididu.bsky.social presenting her Celebrity EYE-Q card game at the Visual Science of Art Conference poster & demo session: #VSAC2025
- Ronald Hübner on Gustav Fechner’s origins: #VSAC2025
- Peggy Gerardin discussing the perception of line drawings: #VSAC2025
- A roundtable discussion on the provocative question, ‘What Makes a Good Artwork?’ - with Claus-Christian Carbon, Ida-Marie Corell, Aaron Hertzmann and Angela Kohlrusch. #VSAC2025
- Reposted by Ben van Buren: Also on Friday, Elisabeth Van der Hulst is part of the symposium "Temporal dynamics of cognitive processes underlying aesthetic experience" organized by @bvb373.bsky.social and @hongbnguyen.bsky.social with her talk: "In pursuit of movement: Choreographic complexity and viewing behavior in dance"⬇️
- Zsofia Pilz discusses her study of how information in labels influences visitors’ viewing patterns and experiences of artworks at the Rijksmuseum. #VSAC2025
- Christopher Linden comparing viewers’ experiences of the same art exhibit in real life, as a virtual tour, and as static images: #VSAC2025 @vsac-social.bsky.social @gestaltrevision.bsky.social
- Aaron Hertzmann giving an inspiring keynote address to kick off the Visual Science of Art Conference in Wiesbaden: #VSAC @vsac-social.bsky.social
- Reposted by Ben van Buren: Eye movements are cheap, right? Not necessarily! 💰 In our review just out in @natrevpsychol.nature.com, Alex Schütz and I discuss the different costs associated with making an eye movement, how these costs affect behaviour, and the challenges of measuring this… rdcu.be/eAm69 #visionscience #vision
- Reposted by Ben van Buren: "The brain is NOT a computer!!!!!!!!" well sure maybe yours isn't
- Check out Didi’s interesting new paper on memory biases for facial age! #CogSci
- Reposted by Ben van Buren: The OPAM 2025 Abstract deadline has been EXTENDED! Due to demand, we will now accept submissions until the end of this Friday. But if you are planning to submit, we still encourage you not to wait until the last minute! Hope to see you all in Denver!🏔️⛷️
- Reposted by Ben van Buren: New Balas Lab preprint! This paper describes our recent work examining face pareidolia in school-age children and adults. What makes pareidolia more or less frequent in natural images? We looked at the impact of reversing contrast on face pareidolia rates with an unconstrained task. <1/n>
- Thanks to all at #vss2025 who stopped by tonight to play @dididu.bsky.social’s tabletop card game, Celebrity EYE-Q! Didi developed this game as a fun way for people of all ages and backgrounds to learn about holistic face processing. You can follow updates about the game on instagram @celebeyeq!
- In the game, players have to identify celebrities while viewing pictures of just their eyes. This is easy when you see the eyes alone, but much harder when the eyes are held up to another player’s face, causing disruptive integration with surrounding facial features. #visionscience #sciart #scicomm
- For more scientific background, check out research studying holistic face processing using the composite face effect. And for many other hands-on vision science demos (including a fun DIY demo of the composite face effect), check out @bjbalas.bsky.social’s great book, Practical Vision Science!
- We've already seen several inspiring presentations at #VSS2025! Two New School perception folks will present work in the Pavilion tomorrow (Sat): 1. In the morning, Didi Dunin @dididu.bsky.social will present her research on memory distortions for facial age (in collaboration with Joan Ongchoco).
- 2. Then, in the afternoon, Hong Nguyen @hongbnguyen.bsky.social will present a series of experiments which show that observers make more efficient use of visual information when predicting the behavior of a shape which looks goal-directed.
- Less exposure time is needed to predict the future orientation of a moving dart shape that faces toward another shape, compared to various inanimate-looking control displays. Hong's work provides especially direct evidence for the computational savings which come from taking the intentional stance.
- Calling all those working at the intersection of visual art and vision science — Submit an abstract to the Visual Science of Art Conference, and join us in Wiesbaden, Germany, Aug 21-23!
- Reposted by Ben van Buren: Head on over to our brand-new website opamconference.com to create a free account and register for our first free online workshop taking place later this month!
- Reposted by Ben van Buren: Excited to share a new paper accepted to @cogscisociety.bsky.social 2025 on the cognitive representation of polysemy and its use in reasoning! Pre-print here: doi.org/10.31234/osf... Thread below! (🧵1/9)
- Reposted by Ben van Buren: Feeling shaken by the attacks on science + education? You’re not alone—and you don’t have to face it alone. Join us for a Scientists in Solidarity Action Hour 🗓️ Mon, April 17 ⏰ 7–8PM ET / 4–5PM PT 🔗 Register here: bit.ly/4iXU2vN Make meaning. Build power. Take action—together.
- Reposted by Ben van Buren: 📣 #ECVP DEADLINE EXTENSION!! The deadline for Abstract Submission and Early Bird Registration has been extended by two weeks! 🗓️ New deadline: April 20th at 11:59 PM (latest time zone on earth)
- Reposted by Ben van Buren: Very pleased that the Special Issue of Visual Cognition on Teaching Sensation & Perception that @ankosov.bsky.social, @juliafstrand.bsky.social and I edited is now published in full! If you're looking for some exciting ideas about teaching #VisionScience, start here!
- Reposted by Ben van Buren: We are excited to welcome three new organizers joining our team this year: Hong Nguyen, Giovanna del Sordo, and Sisi Wang! They will be joining Dock Duncan, Noah Britt and William Narhi-Martinez to lead OPAM 2025. Stay tuned for more announcements regarding our new website and workshops!
- Reposted by Ben van Buren: The dynamic version is even more disturbing...
- Reposted by Ben van Buren: Public protests are important for solidarity & signaling. But the real work having impact right now is in the courts. So if you've attended protests and are thinking 'what's next?' here are some legal funds to donate to: statedemocracydefenders.org/fund/ democracyforward.org ldad.org/letters-briefs
- Reposted by Ben van Buren: Thinking a lot about how to promote good #scicomm locally. If any of my #VisionScience friends have ideas for things to do or want to work together on making stuff to this end, let me know. I'd love to find more outlets for just describing what our lab is doing to non-scientists.
- Reposted by Ben van Buren: VSAC 2025 is honoured to welcome our keynote speakers! One of them will be Aaron Hertzmann. @aaronhertzmann.bsky.social Find more information on 2025.vsac.eu #VSAC2025 #Conference #VisualScienceofArt
- In this thread, I introduce the ‘Blindfold Test’ — a new tool for deciding whether an experimental effect reflects perception, or higher-level judgment. Paper here: www.nssrperception.com/docs/van%20B...
- A rich tradition in vision science emphasizes that we see the world not only in terms of its physical structure, but also in terms of its social and causal structure. For example, even simple shapes look vividly alive if they move in particular ways (Heider & Simmel, 1944):
- Most work on the perception of animacy involves asking participants for subjective reports about what they see (e.g. “How ‘alive’ did that shape look, from 1-7?”). For example, in one study, moving dots which changed speed or heading more were rated as more animate (Tremoulet & Feldman, 2000):
- Many thanks to my co-author Brian Scholl, and to our colleagues in the Yale Perception & Cognition Lab and New School Perception Lab. We are particularly indebted to Chaz Firestone (@chazfirestone.bsky.social) for the term ‘Blindfold Test’!
- Reposted by Ben van Buren: Our study on size illusions & different tasks for measuring them is now officially published in Vision Research: Shows different illusions likely involve separable processes & so do different tasks (adjustment vs 2AFC): doi.org/10.1016/j.vi... #visionscience #neuroskyence #psychscisky
- Reposted by Ben van Buren: In this review article, I summarize some of our recent work on the neural basis of visual search in scenes, showing how attention and expectation interactively drive preparatory activity in visual cortex and jointly modulate the visual processing of potential target objects. doi.org/10.1177/0963...
- Reposted by Ben van Buren: We've joined #bluesky on our 50th anniversary! Follow us for research, commentary, and helpful tips for authors. To kick things off, we're excited to share these short editorials by Michael Posner, Isabel Gauthier (our editor), and David Rosenbaum: psycnet.apa.org/PsycARTICLES...
- Reposted by Ben van Buren: Attentional Capture and Control | Annual Reviews - www.annualreviews.org/content/jour...
- Reposted by Ben van Buren: VSAC 2025: Call for abstracts and symposia! Submit your proposal at 2025.vsac.eu/submission We look forward to hearing about your work! #art #science #visualartofscience #conference #vsac #visualart #artscience #illusionart
- Reposted by Ben van Buren: Come work as a postdoctoral researcher in our research group at KU Leuven in Belgium and help us unravel the visual factors underlying aesthetic image preferences! www.kuleuven.be/personeel/jo...
- Reposted by Ben van Buren: so excited about this one, out now in American Psychologist, "The Detection of Automatic Behavior in Other People" (with Ilona Bass & me) link: psycnet.apa.org/record/2025-... pre-print: osf.io/preprints/ps...
- Reposted by Ben van Buren: Excited to be co-organizing this workshop with Mariel Goddu (Stanford) and colleagues at the MPI EVA! Join us in Leipzig, Dec 17-18th -> Free registration: bit.ly/intuitive-ph... #intuitivephysics #cognition #development #comparativepsychology #Leipzig
- Reposted by Ben van Buren: The journals Perception and i-Perception are now on Bluesky! We’ll post here updates about articles published in the journals, calls for special issues, and any other exciting news related to perception research.
- Reposted by Ben van Buren: 🧵Join us for a short tour of how a large-scale MEG dataset collected during active vision made us question it all. New work with Carmen Amme, Philip Sulewski, Eelke Spaak, Martin Hebart, and Peter König. www.biorxiv.org/content/10.1...
- link.springer.com/article/10.1... A possible homolog to human ‘doing the robot’ dance behavior: youtu.be/fIosmA6aMmQ?...