We often describe LLMs as “next-token predictors.”
That description is correct, and deeply insufficient.
In a new AI Letters paper, I argue for proto-interpretation: understanding inference as a temporally structured interpretive process.
dl.acm.org/doi/10.1145/...
Proto-Interpretation: The Temporality of Large Language Model Inference | ACM AI Letters
We show that autoregressive generation in large language models exhibits a temporal structure: each token is not only conditioned on the past but also reshapes the future continuation space. We call this process proto-interpretation: the probabilistic ...
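The idea that each sampled token reshapes the space of possible continuations can be illustrated with a toy sketch (this is not the paper's code; the bigram table and function names below are invented for illustration):

```python
# Toy sketch: a tiny bigram "language model" whose next-token distribution
# is conditioned on the last token, so each sampled token reshapes the
# space of possible futures. All names and probabilities are hypothetical.
import random

random.seed(0)

# Hypothetical bigram transition table: P(next | current).
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.2, "ran": 0.8},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def continuation_space(token):
    """Distribution over next tokens, conditioned on the last token."""
    return BIGRAMS.get(token, {})

def generate(start, steps):
    seq = [start]
    for _ in range(steps):
        dist = continuation_space(seq[-1])
        if not dist:
            break
        # Sampling here commits the sequence to one branch and thereby
        # changes which continuations are reachable at the next step.
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        seq.append(nxt)
    return seq

print(generate("the", 3))
```

Even in this degenerate model, choosing "cat" over "dog" at step one changes the probabilities of everything that can follow, which is the temporal structure the paper studies at the scale of full transformer inference.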