Petter Törnberg
Assistant Professor in Computational Social Science at University of Amsterdam
Studying the intersection of AI, social media, and politics.
Polarization, misinformation, radicalization, digital platforms, social complexity.
- The impact of AI is not just a matter of what AI can do. It is shaped by the economy in which AI is forged. Today, AI is emerging inside technofeudalism — built on lock-in, dependency, and rent extraction. Our new preprint explores what this means for the coming AI society. osf.io/preprints/so...
- 🚨 We are entering the AI society. After two decades of platforms reshaping markets, communication, and governance, a new shift is underway: AI is transforming society. What should we expect from the 'AI society'? 🧵 📄 Our new paper faces this question: osf.io/preprints/so...
- To meet this moment, we need Critical AI Studies 📚🤖 – Historically grounded: platform roots matter – Power-centered: AI as cognitive infrastructure – Reflexive: AI reshapes inquiry itself
- But honestly, this is one of those "papers that should have been a book" - hard to summarize in a thread! :) I recommend reading the preprint! Also, follow my coauthor: @uitermark.bsky.social Open Access preprint: osf.io/preprints/so...
- 🏛️ Political shift: from data politics to alignment politics AI is quietly coming to produce a growing share of human language and culture. Control the models, and you shape meaning, politics, and culture. This is infrastructural empire.
- You might think the current moment of AI is bad. But we're still in the honeymoon phase of generative AI - mirroring the "millennial lifestyle subsidy" era of platforms, when venture capital underwrote cheap urban consumption. Enshittification has yet to come.
- But where platforms locked users in through network effects, these are weaker for AI. Instead, AI creates cognitive and social dependencies—which may be even harder to escape. 🧠🔗 AI represents the infrastructuralization of cognition itself, concentrating unprecedented levels of private power.
- 🧠 Epistemic shift: from prediction to generation Just as platform data reshaped the social sciences, generative AI is reshaping how we understand the world. LLMs don’t just run analyses—they participate in interpretation and theorizing, creating deep dependence on private, opaque infrastructures.
- AI doesn't arrive from nowhere. It grows out of platform logics, infrastructures, and power relations. 🧱➡️🤖 To understand the AI society, we must start from the platform society. In our paper, we map this transition across three entangled domains: 💰 economic 🧠 epistemic 🏛️ political
- 💰 Economic shift: platform capitalism to AI capitalism Platforms do not make profits through free market competition, but by controlling chokepoints and extracting monopoly rents—often described as techno-feudalism. Those same feudal logics are now shaping AI.
- If you are feeling unsafe, I would encourage you to contact either 988 or 911. They will be able to help you.
- 📕 Review of “Seeing like a Platform” “What makes this book essential reading is its recognition that digital technology’s democratic potential and authoritarian dangers are not separate phenomena but two faces of the same coin.” albertoblumenscheincruz.substack.com/p/review-see...
- Perfect Holiday gift! 🎁 Worth every penny! (It’s open access 😉)
- 📘 Book launch: Seeing Like a Platform (with Justus Uitermark) 🗓 10 Dec | Amsterdam A conversation on platforms, politics, power, and digital modernity - followed by drinks. Hope to see you there! 👉 Details & RSVP: globaldigitalcultures.uva.nl/content/even...
- No, sorry - in person only! Really happy you liked the book! :)
- This paper is now out in Artificial Intelligence Review Bottom line: using LLMs to "simulate humans" sits in a no-man’s-land between theory and empirics—too opaque to function as a model, too ungrounded to count as evidence. Validation remains the core challenge. link.springer.com/article/10.1...
- Large Language Model-based social simulation has emerged as an exciting new research method. But do LLMs actually resolve the problems that have historically limited the use of Agent-Based Models? What do they bring? We review the literature to find out! with Maik Larooij arxiv.org/abs/2504.03274
- Pretty wild: I just learned that, in terms of Altmetric, this paper became the most impactful Political Science article of the past five years, and the 4th most impactful EVER. 😳 Altmetric is... well, Altmetric. But still, kind of surreal.
- Misinformation isn't random - it's strategic. 🧵 In the first cross-national comparative study, we examine 32M tweets from politicians. We find that misinformation is not a general condition: it is driven by populist radical right parties. with @julianachueri.bsky.social doi.org/10.1177/1940...
- To be fair, the fact that my paper has a higher Altmetric than Marx's Das Kapital might be taken to imply that the limitations in the methodology are to my paper's benefit... ;)
- Thanks Jonathan. Many Democrats still have accounts but rarely visit the site. I would point you to the preprint I put up for more details and better versions of the figures: www.arxiv.org/abs/2510.25417
- LLMs are now widely used in social science as stand-ins for humans—assuming they can produce realistic, human-like text But... can they? We don’t actually know. In our new study, we develop a Computational Turing Test. And our findings are striking: LLMs may be far less human-like than we think.🧵
- This work was carried out by the amazing Nicolò Pagan, with Chris Bail, Chris Barrie, and Anikó Hannák. Paper (preprint): arxiv.org/abs/2511.04195 Happy to share prompts, configs, and analysis scripts.
- Find my co-authors on Bluesky: @chrisbail.bsky.social @cbarrie.bsky.social Colleagues who do excellent work in this field, and might find these results interesting: @mbernst.bsky.social @robbwiller.bsky.social @joon-s-pk.bsky.social @janalasser.bsky.social @dgarcia.eu @aaronshaw.bsky.social
- We also found some surprising trade-offs: 🎭 When models sound more human, they drift from what people actually say. 🧠 When they match meaning better, they sound less human. Style or meaning — you have to pick one.
- Takeaways for researchers: • LLMs are worse stand-ins for humans than they may appear. • Don’t rely on human judges. • Measure detectability and meaning. • Expect a style–meaning trade-off. • Use examples + context, not personas. • Affect is still the biggest giveaway.
- Some findings surprised us: ⚙️ Instruction-tuned models — the ones fine-tuned to follow prompts — are easier to detect than their base counterparts. 📏 Model size doesn’t help: even 70B models don’t sound more human.
- So what actually helps? Not personas. And fine-tuning? Not always. The real improvements came from: ✅ Providing stylistic examples of the user ✅ Adding context retrieval from past posts Together, these reduced detectability by 4-16 percentage points.
- The results were clear — and surprising. Even short social media posts written by LLMs are readily distinguishable. Our BERT-based classifier spots AI with 70–80% accuracy across X, Bluesky, and Reddit. LLMs are much less human-like than they may seem.
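For readers curious about the mechanics: a minimal sketch of what a detector along these lines could look like, using Hugging Face Transformers. This is our illustration, not the paper's pipeline; `texts` (a list of posts) and `labels` (0 = human, 1 = LLM) are assumed inputs.

```python
# Toy sketch of a BERT-based human-vs-LLM post classifier.
# Assumed inputs: texts (list[str]) and labels (list[int], 0 = human, 1 = LLM).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Tokenize the posts into one (toy-sized) batch.
enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
targets = torch.tensor(labels)

# A few gradient steps; a real pipeline would use a DataLoader,
# train/test splits, and report held-out accuracy.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):
    loss = model(**enc, labels=targets).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```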
- Where do LLMs give themselves away? ❤️ Affective tone and emotion — the clearest tell. ✍️ Stylistic markers — average word length, toxicity, hashtags, emojis. 🧠 Topic profiles — especially on Reddit, where conversations are more diverse and nuanced.
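As an illustration of how such interpretable surface markers can be computed (our sketch, not the paper's feature set; affect and toxicity would each need their own trained classifier):

```python
# Rough surface-level style markers for a single post.
import re
import statistics

def style_features(post: str) -> dict:
    words = post.split()
    return {
        "avg_word_length": statistics.mean(len(w) for w in words) if words else 0.0,
        "n_hashtags": len(re.findall(r"#\w+", post)),
        # Crude emoji heuristic: count characters in the main emoji codepoint range.
        "n_emojis": sum(0x1F300 <= ord(ch) <= 0x1FAFF for ch in post),
    }

print(style_features("Climate data doesn't lie 📉 #science"))
```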
- We use our Computational Turing Test to see whether LLMs can produce realistic social media conversations. We use data from X (Twitter), Bluesky, and Reddit. This task is arguably what LLMs should do best: they are literally trained on this data!
- We test the state-of-the-art methods for calibrating LLMs — and then push further, using advanced fine-tuning. We benchmark 9 open-weight LLMs across 5 calibration strategies: 👤 Persona ✍️ Stylistic examples 🧩 Context retrieval ⚙️ Fine-tuning 🎯 Post-generation selection
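To make the strategies concrete, here is a toy sketch of how three of them might enter prompt construction (our illustration; the field names and the naive string-similarity retrieval are assumptions, not the paper's implementation):

```python
# Toy prompt builders for three calibration strategies.
from difflib import SequenceMatcher

def retrieve_similar(past_posts, query, k=3):
    # Naive "context retrieval": rank the user's past posts by
    # string similarity to the post being replied to.
    ranked = sorted(past_posts,
                    key=lambda p: SequenceMatcher(None, p, query).ratio(),
                    reverse=True)
    return ranked[:k]

def build_prompt(user, last_post, strategy):
    task = f"Write a reply to this post:\n{last_post}"
    if strategy == "persona":
        return f"You are {user['persona']}.\n\n{task}"
    if strategy == "stylistic_examples":
        shots = "\n".join(user["recent_posts"][:5])
        return f"Match the style of these posts:\n{shots}\n\n{task}"
    if strategy == "context_retrieval":
        ctx = "\n".join(retrieve_similar(user["recent_posts"], last_post))
        return f"Relevant past posts by this user:\n{ctx}\n\n{task}"
    return task
```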
- Most prior work validated "human-likeness" with human judges. Basically: do people think it looks human? But humans are actually really bad at this task: we are subjective, we scale poorly, and we are easy to fool. We need something more rigorous.
- We introduce a Computational Turing Test — a validation framework that compares human and LLM text using: 🕵️♂️ Detectability — can an ML classifier tell AI from human? 🧠 Semantic fidelity — does it mean the same thing? ✍️ Interpretable linguistic features — style, tone, topics.
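A hedged sketch of how the first two axes could be operationalized (illustrative only; the model choices and the paired-posts setup are our assumptions, not the paper's exact pipeline):

```python
# Detectability and semantic fidelity for paired human/LLM replies.
# Assumed inputs: human_posts and llm_posts, aligned lists of replies
# to the same prompts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sentence_transformers import SentenceTransformer, util

# 1) Detectability: can a simple classifier tell AI from human?
texts = human_posts + llm_posts
labels = [0] * len(human_posts) + [1] * len(llm_posts)
X = TfidfVectorizer(min_df=2).fit_transform(texts)
auc = cross_val_score(LogisticRegression(max_iter=1000),
                      X, labels, cv=5, scoring="roc_auc").mean()

# 2) Semantic fidelity: does each LLM reply mean the same thing
#    as the human reply to the same prompt?
embedder = SentenceTransformer("all-MiniLM-L6-v2")
h = embedder.encode(human_posts, convert_to_tensor=True)
l = embedder.encode(llm_posts, convert_to_tensor=True)
fidelity = util.cos_sim(h, l).diagonal().mean().item()

print(f"detectability (ROC-AUC): {auc:.2f}, semantic fidelity: {fidelity:.2f}")
```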
- Most people study what misinformation says. We decided to study how it looks. Using novel multi-modal AI methods, we study 17,848 posts by top climate denial accounts - and uncovered a new front in the misinformation war. Here's what it means 🧵 www.tandfonline.com/doi/full/10....
- This aesthetic strategy expands denialism’s reach. It appeals to audiences who’d never click on conspiracies - because it looks like reason, not ideology. By mimicking science, denialists perform neutrality while undermining it. This isn’t just denial. It’s strategic depoliticization.
- The battlefield of misinformation isn’t just about facts. It’s about form. Design and aesthetics have become powerful weapons - shaping what feels rational, what seems credible, and who gets to speak for science.
- These posts could pass for pages from a scientific report - except they twist or cherry-pick data to cast doubt on climate science. They give misinformation the aesthetics of rationality: white men in white lab coats pointing at complicated graphs.
- Meanwhile, climate researchers and activists are portrayed as emotional and irrational: 😢 Crying protesters ⚠️ Angry crowds 🚫 “Ideological fanatics” The contrast is deliberate: Climate denial looks calm and factual. Climate action looks hysterical and extreme.
- On social media, content is no longer just text - it’s text wrapped in images and motion. Visuals travel faster, trigger emotion more easily, and slip past critical thought. That’s what makes them such fertile ground for misinformation - and yet, we’ve barely studied them.
- When we examined the visual language of climate misinformation, the results were striking. We found what we call "scientific mimicry": much of it borrows the look and feel of science - clean graphs, neutral tones, and technical diagrams that perform objectivity. It looks like science - but it’s not.
- Is social media dying? How much has Twitter changed as it became X? Which party now dominates the conversation? Using nationally representative ANES data from 2020 & 2024, I map how the U.S. social media landscape has transformed. Here are the key take-aways 🧵 arxiv.org/abs/2510.25417
- Yeah, it should be noted that the ANES data only includes 18+ US citizens. But this does track with my BSc students. They seem to be much less online than I am.
- Posting is correlated with affective polarization: 😡 The most partisan users — those who love their party and despise the other — are more likely to post about politics 🥊 The result? A loud angry minority dominates online politics, which itself can drive polarization (see doi.org/10.1073/pnas...)
- Here's the full preprint. Feel free to write me if you want any additional analyses in the final version! arxiv.org/abs/2510.25417
- Politically, the landscape is shifting too: 🔴 Nearly all platforms have become more Republican 🔵 But they remain Democratic-leaning overall 🏃♂️ Democrats are fleeing to smaller platforms (Bluesky, Threads, Mastodon)
- Twitter/X is a story on its own: 🔴 While users have become more Republican 💥 POSTING has completely transformed: it has moved nearly ❗50 percentage points❗ from Democrat-dominated to slightly Republican-leaning.
- Overall social media use is declining. Between 2020 and 2024, more Americans — especially the youngest (18–24) and oldest (65+) — report using no social media at all. A small group of heavy users remains, but the middle is thinning out.
- Legacy platforms are losing ground: ⬇️ Facebook ⬇️ YouTube ⬇️ Twitter/X But ⬆️ TikTok and Reddit "Other" platforms - including Bluesky - have not seen significant growth as a whole. (But huge compositional changes)
- In short, the ANES data shows: 📉 Social media use is shrinking 💥 Twitter/X posting has moved ~50 points to the right 🧩 Platforms are splintering 🔊 Fewer people are talking — but those still talking are more politically extreme
- These analyses are now available in preprint form arxiv.org/abs/2510.25417
- Was faster than I thought! Here it is: arxiv.org/abs/2510.25417