Memory that belongs to your code, not your model
Long-term synthetic memory for developers who don't want to be locked in.
Store architectural decisions in your project. Your connected AI tools remember them—across models, IDEs, and teams.
After installing, ask your AI which Continuity tools it has available.
Your AI should confirm it has log_decision(), search_decisions(), and SESSION_HANDOFF.md loaded with your project context.
What to Expect Next
- Automatic synthetic memory: Every new session, your AI automatically loads SESSION_HANDOFF.md with your latest project context, recent commits, and past decisions (see the sketch below)
- No manual loading: You don't need to tell your AI to "read the notes"; Continuity injects context into every conversation automatically
- Persistent synthetic memory: Decisions you log today remain available to your AI tomorrow, next week, and next month
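The exact format of SESSION_HANDOFF.md isn't shown here, but based on the description above (project context, recent commits, past decisions), a handoff file might be organized along these lines. The headings and entries are illustrative assumptions, not the real layout:

```markdown
# SESSION_HANDOFF (illustrative sketch; the real format may differ)

## Project context
- Stack: <your framework, database, and auth approach>

## Recent commits
- <short hash> <commit message>

## Past decisions
- <date>: Use Postgres for better transaction support
```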
Share Email, Get 7-Day Pro Trial
Experience all Pro features free for 7 days. Plus, get 10 bonus decisions (15 total) when the trial ends.
An honest value exchange: your email for a Pro trial plus 10 bonus decisions. You decide.
Common Problems Continuity Solves
If you use AI coding assistants, you've probably experienced these frustrations
Why does Cursor keep forgetting my project structure?
Every time you start a new chat, Cursor loses context about your architecture, design patterns, and past decisions. You waste 15-30 minutes re-explaining how your codebase works.
How Continuity fixes this:
Continuity automatically captures architectural decisions from git commits and file saves. Cursor (and other supported AI tools) can then query this synthetic memory via the MCP protocol.
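For illustration, MCP-capable clients such as Claude Desktop and Cursor are usually pointed at a server through a small JSON config. The entry below is a hypothetical sketch; the actual command or package name for Continuity's MCP server isn't specified here:

```json
{
  "mcpServers": {
    "continuity": {
      "command": "npx",
      "args": ["-y", "continuity-mcp-server"]
    }
  }
}
```

Here "continuity-mcp-server" is a placeholder for whatever command the Continuity install docs provide; the surrounding mcpServers shape is the standard one Claude Desktop and Cursor use for MCP servers.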
Claude/Cline loses context between sessions
AI coding assistants have temporary context windows that reset. Your project's history, conventions, and rationale disappear every time.
How Continuity fixes this:
Embedding-based retrieval keeps architectural decisions persistently accessible. 768-dimensional embeddings enable semantic search so relevant context is surfaced automatically.
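A minimal sketch of what embedding-based retrieval over logged decisions can look like, assuming each decision is stored with a precomputed 768-dimensional vector and an embed() function is supplied by your embedding model (both are assumptions for illustration, not Continuity's actual code):

```typescript
// Minimal sketch of semantic search over logged decisions.
interface Decision {
  text: string;        // e.g. "Use Postgres for better transaction support"
  embedding: number[]; // 768-dimensional vector
}

// Cosine similarity between two vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k decisions most relevant to a natural-language query.
async function searchDecisions(
  query: string,
  decisions: Decision[],
  embed: (text: string) => Promise<number[]>, // supplied by your embedding model
  k = 5
): Promise<Decision[]> {
  const queryVec = await embed(query);
  return decisions
    .map(d => ({ d, score: cosineSimilarity(queryVec, d.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(x => x.d);
}
```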
I'm tired of re-explaining my codebase architecture
Onboarding new AI tools or starting fresh conversations means repeating yourself constantly about project structure, naming conventions, and design choices.
How Continuity fixes this:
A triple-detection system (git commits + file saves + AI conversations) builds a living knowledge graph of your architecture. All connected AI tools access the same synthetic memory.
My AI coding assistant doesn't remember past conversations
Each coding session starts from zero. The AI doesn't learn from previous interactions or remember solutions you've already discussed.
How Continuity fixes this:
Continuity stores every architectural decision with relationships and timestamps, so AI tools see the evolution of your codebase, not just its current state.
Many developers lose hours every week to context re-entry.
What is Synthetic Memory?
Context windows are temporary buffers. Synthetic memory is permanent storage for AI.
The Problem: Context Windows Reset
"We're using PostgreSQL, not MongoDB."
"Remember, we chose React over Vue for this."
"Like I said yesterday, our auth uses JWT tokens."
Every new chat starts from zero. You re-explain the same architectural decisions repeatedly.
The Solution: Synthetic Memory
Synthetic memory is permanent storage that lives outside the context window. When you log a decision ("Use Postgres for better transaction support"), it's stored in your project folder. Any supported AI tool you connect via Continuity can access it persistently.
- Stored in .continuity/ as plain JSON (example record below)
- Works across Claude Desktop, Claude Code, Cursor, Cline, and Roo Code
- Never sent to the cloud: you own the data
- Commit to git, share with your team
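As an illustration, a record in .continuity/ might look something like the following. The exact schema is an assumption; only the fact that decisions are stored as plain JSON in your project, with timestamps and relationships, comes from the descriptions above:

```json
{
  "id": "dec-001",
  "decision": "Use Postgres for better transaction support",
  "timestamp": "2025-01-15T10:32:00Z",
  "source": "git-commit",
  "related": ["dec-000"]
}
```

Because it's plain JSON inside your repository, you can commit it alongside your code and review changes to it like any other file.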
How It Changes Your Workflow
Open any AI chat and it already knows why you picked Postgres, how your auth works, and which patterns you use. No more context-setting. No more re-explaining. Just start coding.
Start with 5 free decisions. Share your email for 10 more + a 7-day Pro trial. Go Pro at $15/month, $139/year, or $399 lifetime.
Context windows are temporary RAM. Synthetic memory is permanent storage.
Why context windows aren't enough
Context windows are temporary buffers. You need permanent storage—synthetic memory.
Automated Decision Capture
Multiple detection methods work together to capture architectural decisions automatically. From file monitoring to AI conversation analysis, these layers ensure high capture rates without manual logging.
5 Detection Layers Working Together
File monitoring, git hooks, AI conversation analysis, and more—capturing decisions automatically so you don't have to.
- File monitoring: Monitors 7 architectural file patterns (package.json, tsconfig, Docker, CI/CD) with git-aware diff calculation
- Git hooks: Pre-commit hooks detect architectural changes and prompt for decision logging or add them to the debt tracker (see the sketch after this list)
- AI tool-call interception: Intercepts AI tool calls to detect 5 decision patterns: research-based, continuity-informed, iterative, config, dependency
- Conversation analysis: AI-powered extraction using Claude 3.5 Sonnet analyzes conversation logs to find missed decisions
- MCP reminders: Contextual reminders and accountability metrics shown to AI tools via the MCP protocol
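As a rough illustration of the file-pattern and git-hook layers, the sketch below checks staged files against a few architectural patterns from a pre-commit hook. The pattern list and the printed reminder are assumptions for illustration, not Continuity's actual implementation:

```typescript
// Illustrative pre-commit check: flag staged changes to architectural files.
// Could be run from a git pre-commit hook, e.g. `npx ts-node check-arch-changes.ts`.
import { execSync } from "node:child_process";

// Example patterns only; the real tool reportedly watches 7 such patterns.
const ARCHITECTURAL_PATTERNS = [
  /package\.json$/,
  /tsconfig.*\.json$/,
  /Dockerfile$/,
  /docker-compose\.ya?ml$/,
  /\.github\/workflows\/.*\.ya?ml$/,
];

// List the files staged for this commit.
const staged = execSync("git diff --cached --name-only", { encoding: "utf8" })
  .split("\n")
  .filter(Boolean);

const architectural = staged.filter(file =>
  ARCHITECTURAL_PATTERNS.some(pattern => pattern.test(file))
);

if (architectural.length > 0) {
  console.log("Architectural files changed in this commit:");
  for (const file of architectural) console.log(`  - ${file}`);
  console.log("Consider logging a decision (or adding it to the debt tracker).");
}
```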
How Synthetic Memory Works
Permanent storage that works across all your AI tools. Log once, remember forever.
Make a decision once, and every AI tool remembers it forever. Context windows reset. Synthetic memory doesn't.