Jenelle Feather
Flatiron Research Fellow #FlatironCCN. PhD from #mitbrainandcog. Incoming Asst Prof #CarnegieMellon in Fall 2025. I study how humans and computers hear and see.
- This remains my personal fave: www.youtube.com/watch?v=3ADu...
- So excited for CCN2026!!! 🧠🤔🤖🗽
- The rumors are true! #CCN2026 will be held at NYU. @toddgureckis.bsky.social and I will be executive-chairing. Get in touch if you want to be involved!
- Join us at #NeurIPS2025 for our Data on the Brain & Mind workshop! We aim to connect machine learning researchers and neuroscientists/cognitive scientists, with a focus on emerging datasets. More info: data-brain-mind.github.io
- 🚨 Excited to announce our #NeurIPS2025 Workshop: Data on the Brain & Mind 📣 Call for: Findings (4- or 8-page) + Tutorials tracks 🎙️ Speakers include @dyamins.bsky.social @lauragwilliams.bsky.social @cpehlevan.bsky.social 🌐 Learn more: data-brain-mind.github.io
- Consider submitting your recent work on stimulus synthesis and selection to our special issue at JOV!
- Submissions are now accepted for a special issue of the Journal of Vision, "Choose your stimuli wisely: Advances in stimulus synthesis and selection." Submission deadline: Dec 12, 2025. Further details: jov.arvojournals.org/ss/synthetic...
- Topics include but are not limited to: • Optimal and adaptive stimulus selection for fitting, developing, testing, or validating models • Stimulus ensembles for model comparison • Methods to generate stimuli with “naturalistic” properties • Experimental paradigms and results using model-optimized stimuli
- Super excited for our #VSS2025 symposium tomorrow, "Model-optimized stimuli: more than just pretty pictures". Join us to talk about designing and using synthetic stimuli for testing properties of visual perception! May 16th @ 1-3PM in Talk Room #2 More info: www.visionsciences.org/symposia/?sy...
- The symposium also serves to kick off a special issue of JOV! "Choose your stimuli wisely: Advances in stimulus synthesis and selection" jov.arvojournals.org/ss/synthetic... Paper Deadline: Dec 12th For those not able to attend tomorrow, I will strive to post some of the highlights here 👀 👀 👀
- We are presenting our work “Discriminating image representations with principal distortions” at #ICLR2025 today (4/24) at 3pm! If you are interested in comparing model representations with other models or human perception, stop by poster #63. Highlights in 🧵 openreview.net/forum?id=ugX...
- These examples demonstrate how our framework can be used to probe for informative differences in local sensitivities between complex models, and suggest how it could be used to compare model representations with human perception.
- This is joint work with fantastic co-authors from @flatironinstitute.org Center for Computational Neuroscience: @lipshutz.bsky.social (co-first) @sarah-harvey.bsky.social @itsneuronal.bsky.social @eerosim.bsky.social
- As an example, we use this framework to compare a set of simple models of the early visual system, identifying a novel set of image distortions that allow immediate comparison of the models by visual inspection.
- In a second example, we apply our method to a set of deep neural network models and reveal differences in the local geometry that arise due to architecture and training types, illustrating the method's potential for revealing interpretable differences between computational models.
- We then extend this work to show that the metric may be used to optimally differentiate a set of *many* models, by finding a pair of “principal distortions” that maximize the variance of the models under this metric.
- This provides an efficient method to generate stimulus distortions that discriminate image representations. These distortions can be used to test which model is closest to human perception.
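As an illustration of the pairwise core of this idea, here is a minimal sketch (mine, not the released implementation): given the Fisher information matrices of two models at the same base image (see the FIM sketch further down this thread), the distortion directions on which the models disagree most are generalized eigenvectors of the pair. The full principal-distortions method in the paper optimizes a variance criterion over many models at once; the reduction to two models and the function name below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def pairwise_discriminating_distortions(fim_a, fim_b, ridge=1e-6):
    """Two-model sketch: find distortion directions that model A's FIM treats as
    large relative to model B's, and vice versa.

    fim_a, fim_b : symmetric Fisher information matrices at the same base image.
    The extremes of the generalized Rayleigh quotient e^T A e / e^T B e are the
    generalized eigenvectors of (A, B); a small ridge keeps B positive definite.
    """
    n = fim_b.shape[0]
    eigvals, eigvecs = eigh(fim_a, fim_b + ridge * np.eye(n))  # ascending eigenvalues
    most_a_sensitive = eigvecs[:, -1]   # direction where A is far more sensitive than B
    most_b_sensitive = eigvecs[:, 0]    # direction where B is far more sensitive than A
    return most_a_sensitive, most_b_sensitive
```

Adding these directions (scaled to be small) to the base image yields a pair of distorted images that the two models should judge very differently, which is what makes them useful probes for comparison with human perception.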
- We propose a framework for comparing a set of image representations in terms of their local geometries. We quantify the local geometry of a representation using the Fisher information matrix (FIM), a standard statistical tool for characterizing the sensitivity to local stimulus distortions.
- We use the FIM to define a metric on the local geometry of an image representation near a base image. This metric can be related to previous work investigating the sensitivities of one or two models.
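For concreteness, a minimal sketch of computing such an FIM, under the common simplifying assumption that the representation is deterministic and observed with additive white Gaussian noise, in which case the FIM at a base image reduces to JᵀJ with J the Jacobian of the representation with respect to the pixels. The helper name `representation_fim` and the use of an explicit Jacobian are illustrative choices, not the paper's released code.

```python
import torch

def representation_fim(model_fn, base_image):
    """Approximate the Fisher information matrix of a representation at a base image.

    model_fn   : callable mapping an image tensor to a representation tensor.
    base_image : the image at which the local geometry is evaluated.
    Assumes deterministic responses with additive white Gaussian noise, so the
    FIM reduces to J^T J for the Jacobian J of the flattened representation
    with respect to the flattened pixels.
    """
    def flat_rep(x_flat):
        return model_fn(x_flat.reshape(base_image.shape)).reshape(-1)

    jac = torch.autograd.functional.jacobian(flat_rep, base_image.reshape(-1))
    return jac.T @ jac  # (n_pixels, n_pixels), symmetric positive semi-definite
```

For realistically sized images the explicit FIM is far too large to store, and one would work with Jacobian-vector products instead; the explicit form above is only meant to make the definition concrete.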
- Recent work suggests that many models are converging to representations that are similar to each other and (maybe) to human perception. However, these similarity measures often focus on stimuli that are far apart in stimulus space. Even if global geometry is similar, the local geometry can be quite different.
- Applications close TODAY (April 14) for the 2025 Flatiron Institute Junior Theoretical Neuroscience Workshop. All you need to apply is a CV and a 1 page abstract. 🧠🗽
- Applications are open for the 2025 Flatiron Institute Junior Theoretical Neuroscience Workshop! A two-day workshop 7/10-7/11 in NYC for PhD students and postdocs. All travel paid. Apply by April 14th. 🧠🗽🧑‍🔬 http://jtnworkshop2025.flatironinstitute.org/ @flatironinstitute.org @simonsfoundation.org
- Already feeling #cosyne2025 withdrawal? Apply to the Flatiron Institute Junior Theoretical Neuroscience Workshop! Applications due April 14th jtnworkshop2025.flatironinstitute.org
- Teddy's project is one of the only clear examples we know of (so far) where modifying the *objective function* used for training a deep neural network systematically improves neural prediction of IT responses. Come chat at his poster this afternoon at #cosyne2025! (Poster 2-036)
- If you are at #cosyne2025 come check out my poster this afternoon! We demonstrate a case where objective function design can systematically improve neural predictivity in deep networks. [2-036] Contrastive-Equivariant Self-Supervised Learning Improves Alignment with Primate Visual Area IT
- Thanks for this chance to clarify and the links! I tried to be specific, but what I mean here is "in commonly used macaque IT electrophysiology datasets". I totally agree that there are cases of model changes resulting in increased prediction performance for other brain regions/datasets (1/2).
- I would put the linked papers into this pile. Do you know if these changes transfer to the common BrainScore IT datasets? If so, that would be a result I just wasn't aware of but would love to hear about, as we are actively thinking about how our network changes might transfer to other datasets.
- We are presenting two exciting projects tonight at #Cosyne2025! 🧠🧑🔬 [1-032] Changes in tuning curves, not neural population covariance, improve category separability in the primate ventral visual pathway [1-112] Comparing image representations in terms of sensitivities to local distortions
- Both projects are with amazing collaborators [1-032] w/ Long Sha, @gouki-okazawa.bsky.social, Isaac Moran, Nga Yu Lo, @sueyeonchung.bsky.social, and @roozbehkiani.bsky.social [1-112] w/ @lipshutz.bsky.social @sarah-harvey.bsky.social @itsneuronal.bsky.social @eerosim.bsky.social
- At #NeurIPS2023? Interested in brains, neural networks, and geometry? Come by our **Spotlight Poster** Tuesday @ 5:15PM (#1914) on A Spectral Theory of Neural Prediction and Alignment. paper: openreview.net/forum?id=5B1... w/ Abdul Canatar, Albert Wakhloo & SueYeon Chung @sueyeonchung.bsky.social
- Lots more can be found in the paper, including experiments with brain predictions, regularization, and "classic" models. We also released code to generate metamers from your favorite PyTorch model and to run the human recognition experiments: github.com/jenellefeath... 🧵24/N
- Finally, a very big *thank you* to our reviewers for this article! Their feedback improved our paper. It also would not have been possible without the support during my PhD from MIT Brain and Cog and the DOE CSGF. Thanks for reading! Link again: www.nature.com/articles/s41... 🧵25/N
- If idiosyncratic invariances were also present in humans, the phenomenon we describe might not represent a human–model discrepancy and could instead be a common property of recognition systems. 🧵22/N
- The main argument against this interpretation is that several model modifications (adversarial training & architectural tweaks to reduce aliasing) reduced the idiosyncratic invariances present in the models, suggesting that they are not unavoidable in a recognition system. 🧵23/N
- In my favorite result of the paper, we found that human recognizability was well correlated with other-model recognizability. Thus, the discrepant metamers are due to the models having *idiosyncratic invariances* that are not shared with other models or human observers! 🧵20/N
- Might humans analogously have invariances that are specific to an individual? This is hard to test definitively given that we do not have analogous access to human perceptual systems, and cannot generate human metamers at will. 🧵21/N
- Metamer recognizability also dissociated from other forms of robustness, such as robustness to class-preserving image corruptions. 🧵18/N
- So what is happening with these model representations to cause them to be misaligned with humans? To get at this, we tested how well a model’s metamers were recognized by other models. 🧵19/N
- This result suggests that something about the adversarial training procedure aligns model invariances with those of humans, but robustness itself does not drive the effect. 🧵16/N
- We also examined other sources of adversarial robustness: architectural changes to reduce aliasing (the “Lowpass” model) and a V1-inspired front-end (“VOne” model). Although these yielded similar robustness (f), the lowpass architecture had more recognizable metamers (g). 🧵17/N
- We trained audio models with adversarial training and found the same result! These models also had more human-recognizable model metamers compared to their standard-trained counterparts. 🧵14/N
- Is this just another test to assess adversarial vulnerability? NO! Even though adversarial training improved human-recognizability of model metamers, within adversarially trained models, metamer recognizability was not predicted by adversarial robustness. 🧵15/N
- Can we fix this human-model discrepancy? We found that humans were better able to recognize metamers from models trained with *adversarial training*. Adversarial examples were generated online and models were trained to associate them with the correct label. 🧵12/N
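For readers unfamiliar with the recipe, here is a hedged sketch of adversarial training in this general style, using untargeted L-infinity projected gradient descent; the helper names, step sizes, and number of attack steps are illustrative, not the settings used in the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Generate untargeted L-inf adversarial examples by projected gradient ascent."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()   # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)    # project back into the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)             # stay in the valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One training step: perturb the batch online, then fit the correct labels."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```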
- Metamers from adversarially trained models appeared more natural, and were more recognizable to humans. But note that at late stages the metamers are still less than fully recognizable - the training does not fully mitigate the discrepancy with humans. 🧵13/N
- Could these misaligned invariances be due to the supervised task? To get at this, we tested visual self-supervised models. Although some models had slightly more recognizable metamers at intermediate stages, human recognition was still low in absolute terms. 🧵10/N
- Another discrepancy between current models and humans is the tendency for models to base their judgments on texture rather than shape. However, we found that models trained to reduce this texture bias had metamers that are also comparably unrecognizable to humans. 🧵11/N
- This general method has previously been used for model visualization in computer science papers, but the significance for models of human perception has gone mostly unnoticed. 🧵8/N
- We quantified these observations with human behavioral experiments. By the final stages of the tested models, humans were nearly at chance on the task, even though the model represented these stimuli the same as the natural stimulus (and recognized them as such). 🧵9/N
- Successive stages of a model may build up invariance, producing successively larger sets of model metamers. Do these metamers remain recognizable to humans for commonly used computational models, as they would in a “correct” model? 🧵6/N
- We tested various supervised neural network models, including convolutional architectures, transformers, and models trained on large datasets. In all cases, model metamers generated from the final stages appeared unnatural and were generally unrecognizable to humans. 🧵7/N
- Invariances can be described in terms of sets in the stimulus space. For a given reference stimulus, a set of stimuli will evoke the same classification judgment as the reference. A subset of these stimuli (metamers) produce the same activations as the reference. 🧵4/N
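In symbols (notation introduced here for illustration, not taken from the paper): writing c(·) for the model's classification decision and f_k(·) for its activations at stage k, the two sets described above are

```latex
\begin{align*}
  \mathcal{D}(x)   &= \{\, x' : c(x') = c(x) \,\}
      && \text{stimuli judged to be in the same class as the reference } x \\
  \mathcal{M}_k(x) &= \{\, x' : f_k(x') = f_k(x) \,\} \subseteq \mathcal{D}(x)
      && \text{model metamers of } x \text{ at stage } k
\end{align*}
```

The inclusion holds whenever the decision depends on the stimulus only through the stage-k activations.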
- Humans performed a classification task on metamers generated from different stages of a model. We investigated both *audio* and *visual* models. If model invariances are shared by humans, humans should be able to classify model metamers as the reference stimulus class. 🧵5/N
- Artificial neural networks are popular models of sensory systems and are often proposed to learn representational transformations with invariances like those in the brain. But are their invariances consistent with human invariances? We set out to explicitly test this. 🧵2/N
- We generated stimuli whose activations within an artificial neural network match those of a natural stimulus. Inspired by previous work in human color perception and visual crowding, we call these stimuli “Model Metamers.” 🧵3/N
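A generic sketch of this synthesis procedure (a sketch under stated assumptions, not the released codebase linked earlier in the feed): starting from a noise image, optimize the input so that its activations at a chosen model stage match those of a natural reference. `layer_activations` is a hypothetical callable standing in for a forward pass truncated at the stage of interest; the optimizer, learning rate, and step count are illustrative.

```python
import torch

def synthesize_metamer(layer_activations, reference, n_steps=2000, lr=0.01):
    """Optimize a noise input so its activations match those of a reference stimulus.

    layer_activations : hypothetical callable returning the activations at the
                        chosen model stage for a given input tensor.
    reference         : the natural reference stimulus (image or audio tensor).
    """
    target = layer_activations(reference).detach()            # activations to match
    metamer = torch.rand_like(reference, requires_grad=True)  # noise initialization
    optimizer = torch.optim.Adam([metamer], lr=lr)
    for _ in range(n_steps):
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(layer_activations(metamer), target)
        loss.backward()
        optimizer.step()
    return metamer.detach()
```

By construction the model treats the result (nearly) the same as the reference at that stage; the behavioral question in the thread above is whether humans do too.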