Who Uses Continuity

Real professionals, real problems solved. See how Continuity fits your workflow.

🚀

Getting Started

Zero-Click Setup — It Just Works

Install the extension and Continuity auto-detects your AI tools — Claude Desktop, Claude Code, Cursor, GitHub Copilot, Google Gemini, and more. It configures synthetic memory access automatically. No manual config files, no JSON editing.

From install to working synthetic memory in under a minute

Your First Project, Pre-Populated

Open any existing codebase and the Initial Project Scanner reads your files and creates draft decisions automatically. First-run git seeding pre-populates decisions from your recent commit history. You start with a foundation — not a blank slate.

Instant context on day one — no manual logging required

14-Day Free Trial — Full Pro, No Limits

Your trial gives you unlimited decisions and every Pro feature: semantic search, Dream Engine, Knowledge Graph, code intelligence, and the full synthetic memory tool set available to your profile. Test the entire workflow before committing. After the trial, unlimited decision logging stays free; premium features such as the Knowledge Graph and semantic search require Pro, starting at $9/month.

Experience the full Pro workflow before paying anything

👤

Solo Developers

Decisions Captured Without You Thinking About It

Continuity runs 5 automated detection layers with 19 detection points — file system monitoring, git hooks, memory middleware interception, conversation analysis, and enhanced prompts. Important architectural choices are captured even when you forget to log them manually.

Your project's memory grows automatically as you work

Come Back After a Month and Pick Up Instantly

Synthetic memory persists across every session. O(1) lookups mean your AI has full context in seconds, even with hundreds of decisions accumulated over months.

Skip the re-orientation phase and start coding immediately

Refactor Without Fear

Code intelligence links decisions to specific files and functions. When you change something significant, your AI checks refactoring safety automatically — it knows what assumptions exist in the code and warns you what might break.

Fewer regressions after major refactors

Understand Any Codebase on Day One

Clone a repo and the Project Scanner reads the structure and git history to generate draft decisions automatically. Your AI understands the codebase's architecture before you've written a single line.

Get productive on day one of any new project

👥

Teams

Team Knowledge Lives in the Repo

Decisions are stored as plain JSON in `.continuity/` — fully git-committable. Everyone on the team gets the same context. A new hire clones the repo and their AI already knows every architectural choice the team has made.

No more "Can someone explain why we use X?" Slack messages
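As an illustration, a decision record in `.continuity/` might look something like this (the field names here are hypothetical, not Continuity's actual schema):

```json
{
  "id": "dec-0142",
  "title": "Use PostgreSQL instead of MongoDB for the orders service",
  "status": "active",
  "reasoning": "Order state needs multi-row transactions and strong consistency.",
  "files": ["services/orders/db.ts"],
  "supersedes": null,
  "created": "2025-03-14T10:22:00Z"
}
```

Because it is plain JSON under version control, a record like this diffs, merges, and reviews like any other file in the repo.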

Visualize Your Architecture With the Knowledge Graph

The Knowledge Graph holds up to 1,500 nodes and renders in force-directed, radial, tree, or DAG layouts. See how decisions relate to each other, which ones were superseded, and how your architecture evolved — all in one interactive view.

Understand the "why" behind the current state of your codebase

Faster Developer Onboarding

New hires skip weeks of reading stale wiki docs. The team's full decision history — complete with reasoning and lifecycle status (active, draft, outdated, superseded) — loads into the AI automatically. Teams also cut token costs substantially compared with traditional documentation approaches.

Get new developers productive in days, not weeks

Auto-Generate Documentation

Export Architecture Decision Records (ADRs) directly from your logged decisions. Compliance-ready docs from your actual architectural history — not manually maintained wikis that go stale.

Stay audit-ready without the manual documentation burden

🤖

AI Power Users

94.3% Fewer Tokens — 17.6x More Efficient

Current benchmark: 1,677 logged decisions. Embedding that history in CLAUDE.md would cost 243,607 tokens, while a Continuity search session uses 13,852 tokens across three queries: a 94.3% saving and 17.6x efficiency, with the embedded approach hitting the 200K context wall at around 1,376 decisions. At scale (5,000 decisions), savings reach 98.1% with a 51x efficiency ratio.

Skip the warmup phase — search keeps the context window usable as the repo grows
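The headline figures follow directly from the quoted token counts; a quick sanity check using only the numbers stated above:

```python
embedded_tokens = 243_607  # full decision history embedded in CLAUDE.md
search_tokens = 13_852     # one Continuity search session, three queries

savings = 1 - search_tokens / embedded_tokens
efficiency = embedded_tokens / search_tokens

print(f"savings:    {savings:.1%}")      # 94.3%
print(f"efficiency: {efficiency:.1f}x")  # 17.6x
```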

Profile-Based Memory Tooling

Continuity groups its memory tools into profiles rather than exposing one brittle, fixed tool list. The current tool set is split into core, standard, and full profiles, supported across Claude Desktop, Claude Code, Cursor, GitHub Copilot, Google Gemini, and more.

Use the best AI tool for each task without losing context

Dream Engine: Memory Consolidation While You Sleep

The Dream Engine runs a 4-phase memory consolidation cycle — like sleep for your project's memory. It identifies redundant decisions, resolves conflicts, promotes draft decisions, marks outdated ones, and strengthens connections in the Knowledge Graph.

Your project's memory stays clean and coherent over time

Semantic Search Finds What You Actually Mean

Pro's hybrid search combines keyword matching with embedding-based semantic search. Ask "why did we avoid using a monorepo?" and get relevant decisions even if they never use those exact words.

Find the right decision fast — even when you don't remember the exact phrasing
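The general idea behind hybrid search can be sketched in a few lines. This is a minimal illustration of blending a keyword score with embedding similarity, not Continuity's actual implementation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    """Fraction of query words that appear verbatim in the text."""
    words = query.lower().split()
    text_words = set(text.lower().split())
    return sum(w in text_words for w in words) / len(words) if words else 0.0

def hybrid_score(query, text, query_vec, doc_vec, alpha=0.5):
    """Blend exact-match and semantic signals; alpha weights the keyword side."""
    return alpha * keyword_score(query, text) + (1 - alpha) * cosine(query_vec, doc_vec)
```

Even when a decision never uses the word "monorepo", a nearby embedding vector keeps its hybrid score high, which is why semantic search surfaces it anyway.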

🌐

Beyond Software

Domain Profiles for Any Field

Continuity includes 6 domain profiles — Software Engineering, Writing, Research, Medical, Legal, and General. Each profile adapts vocabulary, templates, and capture patterns to your field. Writers track editorial decisions, researchers log methodology choices, legal teams document case strategy.

One tool for any knowledge-intensive profession

Anti-Sycophancy for Honest AI

Echo Chamber Detection and Epistemic Rigor Scoring surface when your AI is reinforcing assumptions without evidence. Get honest feedback loops, not just agreement.

Prevent groupthink in AI-assisted decision making