- 🚀 New Paper Alert! 🚀 We introduce Q-Filters, a training-free method for efficient KV Cache compression! It is compatible with FlashAttention and can compress the cache during generation, which is particularly useful for reasoning models ⚡ TLDR: we make StreamingLLM smarter using the geometry of attention

  Mar 6, 2025 16:02
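
  For intuition only, here is a minimal sketch of the kind of geometry-based KV cache compression the post describes, not the authors' released code: estimate a dominant query direction per attention head, score cached keys by their projection onto it, and keep only the top-scoring entries. The function names (`estimate_q_filter`, `compress_kv`), the `keep_ratio` parameter, and the sign convention of the scores are all illustrative assumptions.

  ```python
  # Hedged sketch of query-direction-based KV cache pruning (assumptions noted above).
  import torch


  def estimate_q_filter(queries: torch.Tensor) -> torch.Tensor:
      """Estimate a dominant query direction per head via SVD.

      queries: (num_heads, num_samples, head_dim) query vectors collected offline.
      Returns: (num_heads, head_dim) unit direction vectors.
      """
      # Top right-singular vector of each head's query matrix.
      _, _, vh = torch.linalg.svd(queries, full_matrices=False)
      return vh[:, 0, :]


  def compress_kv(keys: torch.Tensor, values: torch.Tensor,
                  q_filter: torch.Tensor, keep_ratio: float = 0.5):
      """Keep the cached KV pairs whose keys align best with the query direction.

      keys/values: (num_heads, seq_len, head_dim); q_filter: (num_heads, head_dim).
      """
      seq_len = keys.shape[1]
      keep = max(1, int(seq_len * keep_ratio))
      # Score each cached key by its projection onto the per-head filter.
      scores = torch.einsum("hsd,hd->hs", keys, q_filter)
      # Keep the top-scoring positions, restoring their temporal order.
      idx = scores.topk(keep, dim=-1).indices.sort(dim=-1).values
      idx = idx.unsqueeze(-1).expand(-1, -1, keys.shape[-1])
      return keys.gather(1, idx), values.gather(1, idx)
  ```

  Because the scores depend only on the keys and a fixed per-head direction (no attention matrix is materialized), a scheme like this stays compatible with FlashAttention and can be applied repeatedly as the cache grows during generation.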