Petter Törnberg
Assistant Professor in Computational Social Science at University of Amsterdam
Studying the intersection of AI, social media, and politics.
Polarization, misinformation, radicalization, digital platforms, social complexity.
- Reposted by Petter Törnberg: The bots populating Moltbook apparently all come to sound like a familiar kind of internet denizen with delusions of philosophical grandeur. Manifestos and declarations all over the place.
- Somebody created a Reddit clone exclusively populated by personal digital agents and let them interact with each other. It’s bonkers, fascinating and terrifying in equal measure. www.forbes.com/sites/amirhu...
- Reposted by Petter Törnberg: 📢 We’re hiring @vuamsterdam.bsky.social: PhD position in my ERC project, "Welfare State Transformation in the Age of Artificial Intelligence." Interested in how AI is reshaping labor markets, social protection, and the politics of redistribution? Apply here: lnkd.in/eCcAXsaS Please share widely!
- The impact of AI is not just a matter of what AI can do. It is shaped by the economy in which AI is forged. Today, AI is emerging inside technofeudalism — built on lock-in, dependency, and rent extraction. Our new preprint explores what this means for the coming AI society. osf.io/preprints/so...
- 🚨 We are entering the AI society. After two decades of platforms reshaping markets, communication, and governance, a new shift is underway: AI is transforming society. What should we expect from the 'AI society'? 🧵 📄 Our new paper tackles this question: osf.io/preprints/so...
- AI doesn't arrive from nowhere. It grows out of platform logics, infrastructures, and power relations. 🧱➡️🤖 To understand the AI society, we must start from the platform society. In our paper, we map this transition across three entangled domains: 💰 economic 🧠 epistemic 🏛️ political
- 💰 Economic shift: platform capitalism to AI capitalism Platforms do not make profits through free market competition, but by controlling chokepoints and extracting monopoly rents—often described as techno-feudalism. Those same feudal logics are now shaping AI.
- But honestly, this is one of those "papers that should have been a book" - hard to summarize in a thread! :) I recommend reading the preprint! Also, follow my coauthor: @uitermark.bsky.social Open Access preprint: osf.io/preprints/so...
- 📕 Review of “Seeing like a Platform” “What makes this book essential reading is its recognition that digital technology’s democratic potential and authoritarian dangers are not separate phenomena but two faces of the same coin.” albertoblumenscheincruz.substack.com/p/review-see...
- Perfect holiday gift! 🎁 Worth every penny! (It’s open access 😉)
- Reposted by Petter Törnberg: I’m about a quarter of the way through this open access* book and it’s SO GOOD, so far at least lol, I really want to talk about it with someone * www.taylorfrancis.com/reader/downl...
- 📘 Book launch: Seeing Like a Platform (with Justus Uitermark) 🗓 10 Dec | Amsterdam A conversation on platforms, politics, power, and digital modernity - followed by drinks. Hope to see you there! 👉 Details & RSVP: globaldigitalcultures.uva.nl/content/even...
- This paper is now out in Artificial Intelligence Review Bottom line: using LLMs to "simulate humans" sits in a no-man’s-land between theory and empirics—too opaque to function as a model, too ungrounded to count as evidence. Validation remains the core challenge. link.springer.com/article/10.1...
- Large Language Model-based social simulation has emerged as an exciting new research method. But do LLMs actually resolve the problems that have historically limited use of Agent-Based Models? What do they bring? We review the literature to find out! with Maik Larooij arxiv.org/abs/2504.03274
- Reposted by Petter Törnberg: Featured article by Anton Törnberg and @pettertornberg.com: www.tandfonline.com/doi/full/10....
- Pretty wild: I just learned that, in terms of Altmetric, this paper became the most impactful Political Science article of the past five years, and the 4th most impactful EVER. 😳 Altmetric is... well, Altmetric. But still, kind of surreal.
- Misinformation isn't random - it's strategic. 🧵 In the first cross-national comparative study, we examine 32M tweets from politicians. We find that misinformation is not a general condition: it is driven by populist radical right parties. with @julianachueri.bsky.social doi.org/10.1177/1940...
- Reposted by Petter Törnberg: .@pettertornberg.com's keynote in Oxford was fantastic. What comes after the traditional model of social media ends? 1) Algorithmic broadcasting platforms (everything turning into TikTok and Instagram reels) 2) Private and semi-private spheres (like group chats) 3) Chatbots and LLMs as new media
- LLMs are now widely used in social science as stand-ins for humans—assuming they can produce realistic, human-like text. But... can they? We don’t actually know. In our new study, we develop a Computational Turing Test. And our findings are striking: LLMs may be far less human-like than we think.🧵
- Most prior work validated "human-likeness" with human judges. Basically, do people think it looks human? But humans are actually really bad at this task: we are subjective, scale poorly, and very easy to fool. We need something more rigorous.
- We introduce a Computational Turing Test — a validation framework that compares human and LLM text using: 🕵️♂️ Detectability — can an ML classifier tell AI from human? 🧠 Semantic fidelity — does it mean the same thing? ✍️ Interpretable linguistic features — style, tone, topics.
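The detectability criterion above can be sketched in a few lines. This is an illustrative toy only, not the paper's actual pipeline: the idea is that if any classifier can reliably separate human from LLM text, the LLM text has a detectable tell. Here a nearest-centroid bag-of-words classifier stands in for a real ML model, and all texts and "AI tells" are invented for the example.

```python
# Toy "detectability" test: accuracy near 0.5 (chance) = indistinguishable,
# accuracy near 1.0 = reliably detectable, i.e. not human-like.
import math
from collections import Counter

def bow(text):
    """Bag-of-words feature vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def detectability(human_train, ai_train, test_pairs):
    """Fraction of held-out texts classified correctly by nearest centroid."""
    h_cent, a_cent = Counter(), Counter()
    for t in human_train:
        h_cent.update(bow(t))
    for t in ai_train:
        a_cent.update(bow(t))
    correct = 0
    for text, label in test_pairs:
        pred = "human" if cosine(bow(text), h_cent) >= cosine(bow(text), a_cent) else "ai"
        correct += pred == label
    return correct / len(test_pairs)

# Invented corpora: the "AI" texts share stylistic tells on purpose.
human_train = ["honestly this is wild lol", "gonna grab coffee brb", "cant believe the game last night"]
ai_train = [
    "moreover it is important to delve deeper",
    "furthermore this underscores a multifaceted landscape",
    "it is worth noting the nuanced tapestry",
]
test_pairs = [
    ("lol that game was wild", "human"),
    ("it is important to note the multifaceted tapestry", "ai"),
]
acc = detectability(human_train, ai_train, test_pairs)
print(acc)
```

A real version would swap in a trained classifier (e.g. logistic regression over embeddings) and large matched corpora; the semantic-fidelity and linguistic-feature comparisons are separate checks on top of this.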
- Find my co-authors on Bluesky: @chrisbail.bsky.social @cbarrie.bsky.social Colleagues who do excellent work in this field, and might find these results interesting: @mbernst.bsky.social @robbwiller.bsky.social @joon-s-pk.bsky.social @janalasser.bsky.social @dgarcia.eu @aaronshaw.bsky.social
- Most people study what misinformation says. We decided to study how it looks. Using novel multi-modal AI methods, we study 17,848 posts by top climate denial accounts - and uncovered a new front in the misinformation war. Here's what it means 🧵 www.tandfonline.com/doi/full/10....
- On social media, content is no longer just text - it’s text wrapped in images and motion. Visuals travel faster, trigger emotion more easily, and slip past critical thought. That’s what makes them such fertile ground for misinformation - and yet, we’ve barely studied them.
- When we examined the visual language of climate misinformation, the results were striking. We found what we call "scientific mimicry": much of it borrows the look and feel of science, with clean graphs, neutral tones, and technical diagrams that perform objectivity. It looks like science - but it’s not.
- The battlefield of misinformation isn’t just about facts. It’s about form. Design and aesthetics have become powerful weapons - shaping what feels rational, what seems credible, and who gets to speak for science.
- Is social media dying? How much has Twitter changed as it became X? Which party now dominates the conversation? Using nationally representative ANES data from 2020 & 2024, I map how the U.S. social media landscape has transformed. Here are the key take-aways 🧵 arxiv.org/abs/2510.25417
- In short, the ANES data shows: 📉 Social media use is shrinking 💥 Twitter/X posting has moved ~50 points to the right 🧩 Platforms are splintering 🔊 Fewer people are talking — but those still talking are more politically extreme
- Overall social media use is declining. Between 2020 and 2024, more Americans — especially the youngest (18–24) and oldest (65+) — report using no social media at all. A small group of heavy users remains, but the middle is thinning out.
- Here's the full preprint. Feel free to write me if you want any additional analyses in the final version! arxiv.org/abs/2510.25417