Angelina Wang
Asst Prof at Cornell Info Sci and Cornell Tech. Responsible AI
angelina-wang.github.io
- Reposted by Angelina Wang: Excited to share a new preprint with @nkgarg.bsky.social presenting usage statistics and observational findings from Paper Skygest in the first six months of deployment! 🎉📜 arxiv.org/abs/2601.04253
- Most LLM evals use API calls or offline inference, testing models in a memory-less silo. Our new Patterns paper shows this misses how LLMs actually behave in real user interfaces, where personalization and interaction history shape responses: arxiv.org/abs/2509.19364
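A minimal sketch (mine, not from the paper) of the distinction above, assuming a generic chat-style LLM API; `query_model` is a hypothetical placeholder that just echoes how much context it saw, so the example runs without any client:

```python
def query_model(messages):
    # Placeholder for a chat-completion call; swap in a real LLM client.
    # It only reports the amount of context received, so the sketch runs offline.
    return f"[response conditioned on {len(messages)} message(s)]"

eval_question = "Should I take out a payday loan to cover rent this month?"

# Typical benchmark setup: each item is scored in a memory-less silo.
stateless_response = query_model([{"role": "user", "content": eval_question}])

# Deployed-interface setup: the same question arrives after personalization
# and interaction history, which can shift what the model says.
history = [
    {"role": "user", "content": "I'm a college student juggling two part-time jobs."},
    {"role": "assistant", "content": "That sounds stressful - happy to help with budgeting."},
]
in_context_response = query_model(history + [{"role": "user", "content": eval_question}])

print(stateless_response)
print(in_context_response)
```

Comparing the two responses for the same item is exactly the gap the post argues offline evals miss.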
- Reposted by Angelina Wang: Started a thread in the other place and bringing it over here - I really think we should be more vocal about the opportunities that lie at the intersection of these two options! So I'm starting a live thread of new roles as I become aware of them - feel free to add / extend / share:
- Cornell (NYC and Ithaca) is recruiting AI postdocs, apply by Nov 20, 2025! If you're interested in working with me on technical approaches to responsible AI (e.g., personalization, fairness), please email me. academicjobsonline.org/ajo/jobs/30971
- Reposted by Angelina Wang: Can AI simulations of human research participants advance cognitive science? In @cp-trendscognsci.bsky.social, @lmesseri.bsky.social & I analyze this vision. We show how “AI Surrogates” entrench practices that limit the generalizability of cognitive science while aspiring to do the opposite. 1/
- Grateful to win Best Paper at ACL for our work on Fairness through Difference Awareness with my amazing collaborators!! Check out the paper for why we think fairness has both gone too far and, at the same time, not far enough: aclanthology.org/2025.acl-lon...
- Reposted by Angelina Wang: Was beyond disappointed to see this in the AI Action Plan. Messing with the NIST RMF (which many private & public institutions currently rely on) feels like a cheap shot
- Have you ever felt that AI fairness was too strict, enforcing fairness when it didn’t seem necessary? How about too narrow, missing a wide range of important harms? We argue that the way to address both of these critiques is to discriminate more 🧵
- Reposted by Angelina Wang: The US government recently flagged my scientific grant in its "woke DEI database". Many people have asked me what I will do. My answer today in Nature. We will not be cowed. We will keep using AI to build a fairer, healthier world. www.nature.com/articles/d41...
- I've recently put together a "Fairness FAQ": tinyurl.com/fairness-faq. If you work in non-fairness ML and you've heard about fairness, perhaps you've wondered things like what the best definitions of fairness are, and whether we can train algorithms that optimize for it.
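As one concrete example of the kind of fairness definition the FAQ is about (this sketch and its toy data are mine, not taken from the FAQ), demographic parity measures the gap in positive-prediction rates between two groups:

```python
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                 # model decisions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])   # group membership

rate_a = y_pred[group == "a"].mean()   # positive-prediction rate for group a
rate_b = y_pred[group == "b"].mean()   # positive-prediction rate for group b
parity_gap = abs(rate_a - rate_b)      # 0.0 would mean equal rates across groups
print(f"demographic parity gap: {parity_gap:.2f}")
```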
- Reposted by Angelina Wang: *Please repost* @sjgreenwood.bsky.social and I just launched a new personalized feed (*please pin*) that we hope will become a "must use" for #academicsky. The feed shows posts about papers filtered by *your* follower network. It's become my default Bluesky experience bsky.app/profile/pape...
- Our new piece in Nature Machine Intelligence: LLMs are replacing human participants, but can they simulate diverse respondents? Surveys use representative sampling for a reason, and our work shows how LLM training prevents accurate simulation of different human identities.
- Training data phrases like “Black women” are more often used in text *about* a group than in text *from* that group, so outputs to LLM prompts like “You are a Black woman” resemble what out-group members think a group is like more than what in-group members are actually like.
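For readers unfamiliar with the setup being critiqued, here is a hypothetical sketch of persona prompting for simulating survey respondents; the survey item and the `query_model` placeholder are invented for illustration, not taken from the paper:

```python
def query_model(prompt):
    # Placeholder so the sketch runs offline; replace with a real LLM call.
    return "3"

survey_item = "On a 1-5 scale, how much do you trust your local police department?"
persona_prompt = (
    "You are a Black woman. Answer the following survey question as yourself.\n"
    + survey_item
)

simulated_answer = query_model(persona_prompt)
print(simulated_answer)

# The critique above: because training text is mostly *about* the group rather
# than *from* it, answers like this tend to track out-group stereotypes rather
# than the distribution of real in-group responses they would be compared to.
```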