When your conversation approaches the model’s context window limit, Cline automatically summarizes it to free up space and keep working.
Auto Compact condensing conversation context

How It Works

Cline monitors token usage during your conversation. When you're getting close to the limit, it:
  1. Creates a comprehensive summary of everything that’s happened
  2. Preserves all technical details, code changes, and decisions
  3. Replaces the conversation history with the summary
  4. Continues exactly where it left off
You’ll see a summarization tool call when this happens, showing the cost like any other API call.
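The steps above can be sketched in a few lines. This is an illustrative sketch only, not Cline's actual implementation: the names (`Message`, `summarize`, `CONTEXT_LIMIT`, `COMPACT_THRESHOLD`) and the 4-characters-per-token estimate are all assumptions made for the example.

```python
# Hypothetical sketch of the auto-compact flow; names and thresholds are
# illustrative, not Cline's real API.
from dataclasses import dataclass

CONTEXT_LIMIT = 200_000   # assumed model context window, in tokens
COMPACT_THRESHOLD = 0.9   # summarize once ~90% of the window is used


@dataclass
class Message:
    role: str
    text: str

    def tokens(self) -> int:
        # Crude estimate: roughly 4 characters per token.
        return max(1, len(self.text) // 4)


def summarize(history: list[Message]) -> Message:
    # Stand-in for the LLM call that writes a comprehensive summary
    # preserving technical details, code changes, and decisions.
    joined = " ".join(m.text for m in history)
    return Message("assistant", f"[summary of {len(history)} messages] {joined[:200]}")


def maybe_compact(history: list[Message]) -> list[Message]:
    used = sum(m.tokens() for m in history)
    if used < COMPACT_THRESHOLD * CONTEXT_LIMIT:
        return history               # still plenty of room, keep history as-is
    summary = summarize(history)     # surfaces as a summarization tool call
    return [summary]                 # the summary replaces the conversation history
```

The key design point is that compaction is a replacement, not a deletion: after the swap the agent keeps going from the summary, so nothing downstream has to special-case the shortened history.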

Why This Matters

Previously, Cline would truncate older messages when hitting context limits, losing important context. Now with summarization:
  • All technical decisions and code patterns are preserved
  • File changes and project context remain intact
  • Cline retains a record of everything it has done
  • You can work on much larger projects without interruption
Auto Compact works beautifully with Focus Chain. When Focus Chain is enabled, todo lists persist across summarizations. Cline can work on long-horizon tasks spanning multiple context windows while staying on track.

Cost Considerations

Summarization leverages your existing prompt cache from the conversation, so it costs about the same as any other tool call. Since most input tokens are already cached, you’re primarily paying for summary generation (output tokens), making it cost-effective.
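To make the cost claim concrete, here is a back-of-the-envelope calculation. The prices below are hypothetical placeholders, not any provider's real rates, and the cached/uncached split is an assumption for illustration.

```python
# Illustrative cost model for one summarization call; all prices are
# hypothetical placeholders, not real provider rates.
def summarization_cost(input_tokens: int, cached_fraction: float,
                       output_tokens: int,
                       price_in: float = 3.00,       # $/1M uncached input tokens
                       price_cached: float = 0.30,   # $/1M cached input tokens
                       price_out: float = 15.00) -> float:  # $/1M output tokens
    cached = input_tokens * cached_fraction
    fresh = input_tokens - cached
    return (fresh * price_in + cached * price_cached
            + output_tokens * price_out) / 1_000_000


# With 150k input tokens that are 95% cached and a 2k-token summary,
# the output tokens dominate the bill, not re-reading the conversation.
```

Under these assumed rates, the mostly-cached call works out severalfold cheaper than the same call with a cold cache, which is the intuition behind "you're primarily paying for summary generation."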

Supported Models

Auto Compact uses advanced LLM-based summarization for these models:
  • Claude 4 series
  • Gemini 2.5 series
  • GPT-5
  • Grok 4
With other models, Cline falls back to standard rule-based context truncation, even if Auto Compact is enabled.
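The fallback behavior can be sketched as a simple dispatch. The model-id prefixes, the `condense` function, and the specific truncation rule (keep the original task message plus the most recent half) are all illustrative assumptions, not Cline's documented internals.

```python
# Hypothetical dispatch between LLM summarization and rule-based truncation;
# model ids and the truncation rule are illustrative assumptions.
LLM_SUMMARY_MODELS = ("claude-4", "gemini-2.5", "gpt-5", "grok-4")


def condense(model_id: str, history: list[str]) -> list[str]:
    if any(model_id.startswith(prefix) for prefix in LLM_SUMMARY_MODELS):
        # LLM-based path: replace the whole history with one summary message.
        return [f"[summary of {len(history)} messages]"]
    # Rule-based fallback: keep the first message (the original task) and
    # the most recent half of the conversation, dropping the middle.
    keep_recent = history[len(history) // 2:]
    return history[:1] + keep_recent
```

The practical difference is what gets lost: the summarization path compresses the middle of the conversation, while the truncation path simply discards it.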

Restoring Context

You can use checkpoints to restore your task state from before a summarization occurred. Since you can always roll back, you never truly lose context. Editing a message that precedes a summarization tool call works similarly, restoring the conversation to that point.