95.7% Token Savings
Verified & Reproducible
Current benchmark with 1,650 decisions proves Continuity is 23.3× more efficient than CLAUDE.md. See the math, run the tests yourself.
Token Comparison
CLAUDE.md: 336,704 tokens. Continuity: 14,472.
The Scaling Problem
CLAUDE.md hits context limits. Continuity scales infinitely.
Token Growth: O(n) vs O(1)
CLAUDE.md grows linearly with decisions. Continuity stays constant.
CLAUDE.md hits the 200K context limit at about 968 decisions and becomes unusable.
At 1,650 decisions CLAUDE.md would require 336,704 tokens - about 168.4% of the context window. Continuity stays at 14,472 tokens for the current benchmark.
Calculate Your Savings
See how much you'll save based on your project size
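The projections on this page follow a simple linear model: a base of 5,277 CLAUDE.md tokens, about 200.9 tokens per decision, and a constant 14,472-token Continuity cost. A minimal sketch of the calculator using those measured values (the constants come from the benchmark tables below):

```python
# Savings model sketch using the benchmark constants from this page.
BASE_TOKENS = 5_277          # b: static CLAUDE.md instructions
TOKENS_PER_DECISION = 200.9  # d: average tokens per logged decision
CONTINUITY_TOKENS = 14_472   # constant, regardless of decision count
CONTEXT_LIMIT = 200_000      # C: context window

def claude_md_tokens(n: int) -> float:
    """O(n): CLAUDE.md loads every decision plus the base instructions."""
    return BASE_TOKENS + n * TOKENS_PER_DECISION

def savings_pct(n: int) -> float:
    """Token savings of Continuity versus CLAUDE.md at n decisions."""
    return 100 * (1 - CONTINUITY_TOKENS / claude_md_tokens(n))

for n in (100, 500, 1_650):
    over = " (exceeds context limit)" if claude_md_tokens(n) > CONTEXT_LIMIT else ""
    print(f"{n:>5} decisions: {savings_pct(n):.1f}% saved{over}")
```

Plug in your own decision count to estimate where your project lands.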
CLAUDE.md vs Continuity
They serve different purposes and work best together
CLAUDE.md
Static documentation file
Grows linearly with decisions
• Loads all decisions every time
• Manual editing required
• Breaks around 968 decisions
• No search capability
Best for: Project instructions, coding standards, static context (~5,277 base tokens)
Continuity
Intelligent memory system
Constant regardless of decisions
• Loads only what you need
• Automatic logging
• Unlimited decisions
• Semantic search
Best for: Decision memory, searchable history, dynamic queries (14,472 tokens at the current benchmark)
| Feature | CLAUDE.md | Continuity |
|---|---|---|
| Static project instructions | ✓ | ✗ |
| Store 100 decisions | ✓ | ✓ |
| Store 500 decisions | ✓ | ✓ |
| Store 968+ decisions | ✗ | ✓ |
| Search specific decisions | ✗ | ✓ |
| Semantic similarity search | ✗ | ✓ |
| Log decisions automatically | ✗ | ✓ |
| Session handoff | ✗ | ✓ |
| Decision relationships | ✗ | ✓ |
| Link decisions to code | ✗ | ✓ |
| Token efficient | ✗ O(n) | ✓ O(1) |
| Team collaboration | ✗ | ✓ |
Real-World Scenario: "What did we decide about authentication?"
With CLAUDE.md Only
1. Claude loads all 1,650 decisions (336,704 tokens — 168.4% of the 200K context window)
2. Claude scans through everything manually
3. You've already used 168.4% of context before even asking
4. Doesn't work: exceeds the context limit
With Continuity
1. Claude calls `search_decisions("authentication")`
2. Returns only the relevant decisions (~14,472 tokens at the benchmark)
3. You've used just 7.2% of context
4. Works, with room for long conversations
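The percentages in this scenario are straightforward arithmetic against the 200K window. A quick check with the benchmark figures above:

```python
# Context-window arithmetic for the scenario above (benchmark figures).
CONTEXT_LIMIT = 200_000
claude_md_load = 336_704   # all 1,650 decisions, loaded up front
continuity_load = 14_472   # relevant results only, via search_decisions

claude_pct = 100 * claude_md_load / CONTEXT_LIMIT       # over budget
continuity_pct = 100 * continuity_load / CONTEXT_LIMIT  # plenty of room

print(f"CLAUDE.md:  {claude_pct:.1f}% of context")
print(f"Continuity: {continuity_pct:.1f}% of context")
```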
The Verdict: They Work TOGETHER
Use CLAUDE.md for:
What to do (instructions)
Use Continuity for:
What we decided (memory)
CLAUDE.md and Continuity serve different purposes. They're not replacements - they're complements.
Scaling Projections
Token savings increase as your project grows
| Decisions | CLAUDE.md Tokens | Continuity Tokens | Savings | Status |
|---|---|---|---|---|
| 50 | 15,320 | 14,472 | 5.5% | |
| 100 | 25,363 | 14,472 | 42.9% | |
| 250 | 55,493 | 14,472 | 73.9% | |
| 500 | 105,709 | 14,472 | 86.3% | |
| 968 | 199,714 | 14,472 | 92.8% | Limit |
| 1,000 | 206,142* | 14,472 | 93.0% | |
| 1,650 | 336,704* | 14,472 | 95.7% | Actual |
| 5,000 | 1,009,601* | 14,472 | 98.6% | |
| 10,000 | 2,013,925* | 14,472 | 99.3% | |

*Projected; exceeds the 200K context window.
Key Insight: Savings increase as your project grows.
The more decisions you log, the more efficient Continuity becomes. At 10,000 decisions, you save 99.3% of tokens.
Monthly Cost Comparison
At the benchmark model's pricing, the token savings work out to about $580.02/month ($6,960.21/year).
Mathematical Proof
Formal proof that Continuity provides asymptotically superior token efficiency
| Symbol | Definition | Measured Value |
|---|---|---|
| n | Total number of decisions | 1,650 |
| D | Total decision tokens | 331,427 |
| d | Average tokens per decision | 200.9 |
| b | Base CLAUDE.md instructions | 5,277 |
| q | Search queries per session | 3 |
| r | Results returned per query | 15 |
| C | Context window limit | 200,000 |
The Fundamental Equation

CLAUDE.md cost: T_claude(n) = b + n·d = 5,277 + 200.9·n → O(n)
Continuity cost: T_continuity(n) = 14,472 (constant in n) → O(1)

lim (n → ∞) T_continuity(n) / T_claude(n) = lim (n → ∞) 14,472 / (5,277 + 200.9·n) = 0

As decisions grow, Continuity's relative cost approaches zero.
Q.E.D.
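Numerically, the ratio of Continuity's cost to CLAUDE.md's cost shrinks toward zero as n grows. A quick check using the measured constants and the linear CLAUDE.md model from this page:

```python
# Numeric check of the limit: Continuity cost / CLAUDE.md cost -> 0 as n grows.
b, d = 5_277, 200.9   # base tokens, average tokens per decision
continuity = 14_472   # constant Continuity cost

def ratio(n: int) -> float:
    """Relative cost of Continuity versus CLAUDE.md at n decisions."""
    return continuity / (b + n * d)

for n in (1_650, 10_000, 1_000_000):
    print(f"n = {n:>9,}: ratio = {ratio(n):.6f}")
```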
Getting Started
What is Continuity?
A RAG-based synthetic memory system for AI coding assistants. Captures decisions, tracks changes, and provides persistent context.
O(1) Token Efficiency
Semantic search retrieves only relevant decisions — constant token usage regardless of project size.
Installation
- Open VS Code Marketplace
- Click Install
- Reload VS Code
First Launch
- Zero-click setup — auto-detects AI tools
- Initial scan — creates draft decisions from codebase
- Git seeding — pre-populates from commit history
- Creates `.continuity/` folder with plain JSON
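The exact on-disk schema isn't documented here, but since decisions are logged with a question, answer, and tags (see `log_decision` below) and stored as plain JSON, a hypothetical record could be written and read back like this. The filename and field names are illustrative assumptions, not the real format:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical .continuity/ record; field names mirror the log_decision
# tool's inputs (question, answer, tags), but the real schema may differ.
decision = {
    "question": "Which auth strategy do we use?",
    "answer": "JWT with refresh tokens",
    "tags": ["authentication", "security"],
}

with tempfile.TemporaryDirectory() as tmp:
    store = Path(tmp) / ".continuity"
    store.mkdir()
    record = store / "decision-0001.json"  # illustrative filename
    record.write_text(json.dumps(decision, indent=2))

    loaded = json.loads(record.read_text())
    print(loaded["answer"])
```

Plain JSON on disk means your decision history stays inspectable and versionable with ordinary tools.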
14-Day Free Trial
Full Pro access, no credit card required. After trial, unlimited logging stays free.
Features
Trial + Free Tier
Unlimited decisions, semantic search, knowledge graph
Auto-detects and configures AI tools
19 detection points auto-capture decisions
Pro Features
O(1) efficiency
1,500 nodes
4-phase consolidation
Auto-outdating
Refactoring safety
No storage limits
License Management
Activation
- Press `Cmd/Ctrl+Shift+P`
- Type "Continuity: Activate License"
- Enter your license key and email
License Types
- Monthly: cancel anytime, 14-day free trial
- Annual: save 18%, 14-day free trial
- Lifetime: never expires, all future updates
MCP Integration
What is MCP?
Continuity uses the Model Context Protocol to provide AI assistants with persistent memory and decision logging.
Memory Tools
- `log_decision`: Log decisions with question, answer, and tags
- `search_decisions`: Search by keyword or semantic query (O(1))
- `get_quick_context`: Load project history at conversation start
- `session-handoff`: Complete project context for handoffs
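Under the Model Context Protocol, tools are invoked as JSON-RPC `tools/call` requests. A sketch of what a `search_decisions` call might look like on the wire; the `query` argument name is an assumption based on the tool description, not a documented parameter:

```python
import json

# JSON-RPC 2.0 "tools/call" request, as used by the Model Context Protocol.
# The "query" argument name is an assumption based on the tool description.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_decisions",
        "arguments": {"query": "authentication"},
    },
}

wire = json.dumps(request)
print(wire)
```

In practice your AI assistant builds and sends these requests for you; this is only what the transport carries.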
API Reference
- `POST /api/licenses/validate`: Validate a license key with device info
- `POST /api/devices/activate`: Activate a new device
- `DELETE /api/devices/:device_id`: Deactivate a device
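A sketch of calling the validation endpoint with Python's standard library. The base URL and the body field names (`license_key`, `email`, `device_id`) are assumptions; only the route itself comes from the reference above:

```python
import json
import urllib.request

# Sketch of POST /api/licenses/validate. The host and body field names
# (license_key, email, device_id) are assumptions; only the route comes
# from the API reference above.
BASE_URL = "https://example.com"  # placeholder host

def build_validate_request(license_key: str, email: str, device_id: str):
    """Build (but do not send) the validation request."""
    body = json.dumps({
        "license_key": license_key,
        "email": email,
        "device_id": device_id,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/api/licenses/validate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_validate_request("XXXX-XXXX", "you@example.com", "device-123")
print(req.full_url, req.method)
# To actually send: urllib.request.urlopen(req)  (requires a live server)
```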
Troubleshooting
License activation fails
- Verify license key (check for typos)
- Use the email from your purchase
- Check internet connection
MCP not working
- Verify Continuity is installed and activated
- Check MCP server logs in VS Code Output
- Restart your AI assistant
FAQ
Why is Continuity more efficient than CLAUDE.md?
CLAUDE.md loads ALL decisions (O(n)). Continuity retrieves only relevant ones (O(1)). At 1,650 decisions: 95.7% savings.
What happens past 968 decisions?
CLAUDE.md exceeds the 200K context window. Continuity keeps working.
How much money will I save?
$580.02/month or $6,960.21/year in benchmark-model savings.
Is my data secure?
All decisions stored locally in .continuity/ — never on our servers.
What AI assistants work?
Claude Code, Cursor, GitHub Copilot, Google Gemini, and more via zero-click setup.
Still have questions?
Our team is here to help.