Mattias Rost
HCI, HCAI, AI. PhD, Associate Professor, Gothenburg University, Head of Division
- We often describe LLMs as “next-token predictors.” That description is correct, and deeply insufficient. In a new AI Letters paper, I argue for proto-interpretation: understanding inference as a temporally structured interpretive process. dl.acm.org/doi/10.1145/...
- One motivation for this paper: treating LLM outputs as static artifacts hides the dynamics that produce them — wait, no em-dash — hides the dynamics that produce them (commitments, stabilizations, and path dependencies during generation). Proto-interpretation is an attempt to name and study that middle ground between token-by-token prediction and the finished text. A toy sketch of what tracing that process could look like is below.
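- The sketch below is not from the paper; it is just a minimal illustration of the underlying idea: instead of reading only the final string, log per-step quantities during autoregressive decoding (the committed token, the entropy of the next-token distribution, the top alternatives) so the temporal structure of generation becomes visible. It assumes Hugging Face transformers and GPT-2 purely as a stand-in; any causal LM would do.

```python
# Toy sketch: trace the dynamics of generation rather than only its output.
# Assumes: pip install torch transformers; "gpt2" is an arbitrary stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The meaning of a sentence is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

trace = []  # one record per generation step
with torch.no_grad():
    for step in range(20):
        logits = model(input_ids).logits[0, -1]            # next-token logits
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * torch.log(probs + 1e-12)).sum().item()
        top_probs, top_ids = probs.topk(5)

        next_id = top_ids[0]                               # greedy commitment
        trace.append({
            "step": step,
            "committed": tokenizer.decode(next_id.item()),
            "entropy": entropy,                            # how "open" this step was
            "alternatives": [
                (tokenizer.decode(i.item()), round(p.item(), 3))
                for i, p in zip(top_ids, top_probs)
            ],
        })
        # The commitment becomes context for every later step: path dependence.
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

for r in trace:
    print(r["step"], repr(r["committed"]), f"H={r['entropy']:.2f}", r["alternatives"])
```

- Reading the trace, some steps are near-deterministic (low entropy, one dominant candidate) and others are open, and an early commitment reshapes everything downstream. That step-by-step record, rather than the final text alone, is the kind of object the paper argues we should be interpreting.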