- It pulls more relevant papers now, but often invents interpretations that aren't in them. When pressed for quotes, it admits it made them up. That feels more dangerous than the old nonsense, because it sounds right when it's not 😨
- OpenAI’s GPT-5 hallucinates less than previous models, but eliminating hallucination completely might prove impossible: go.nature.com/4gkzr4A