Memory Engine

Long-term developer memory: your decisions, patterns, and preferences, surfaced when they're relevant.
You've explained your testing philosophy to an AI agent fifty times this year. You've corrected the same import style. You've rejected the same lint suggestion. The agent doesn't remember any of it, because the agent doesn't actually remember anything between sessions.
The Memory Engine extracts durable signal from your chat history and surfaces it as structured memory. It identifies recurring patterns — decisions you've made repeatedly, preferences you've stated explicitly or implicitly, conventions you've enforced across multiple sessions — and stores them as discrete, editable memory entries.
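To make "discrete, editable memory entries" concrete, here is a minimal sketch of what one extracted entry might carry. The field names and session IDs are illustrative assumptions, not the engine's actual schema:

```python
# Hypothetical shape of a single extracted memory entry.
# Field names and values are illustrative, not the real schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MemoryEntry:
    text: str              # the durable signal, stated as one editable fact
    scope: str             # "project", "personal", or "team"
    source_sessions: list  # chat sessions the pattern recurred in
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

entry = MemoryEntry(
    text="This codebase uses pytest, not unittest.",
    scope="project",
    source_sessions=["sess-041", "sess-112", "sess-198"],
)
```

Because each entry is a plain record rather than an opaque embedding, editing or deleting one is an ordinary data operation.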
Memory is layered. Project-level memory captures decisions specific to one repo ("this codebase uses pytest, not unittest"). Personal memory captures decisions that hold across projects ("Jon prefers explicit type hints in function signatures, even when the linter doesn't require them"). Team memory, when you enable it, captures shared conventions.
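One way the layering could resolve, sketched under the assumption that the most specific layer wins (project over personal over team); the layer ordering and key names here are illustrative:

```python
# Layered lookup sketch: project memory shadows personal memory,
# which shadows team memory. Precedence order is an assumption.
PRECEDENCE = ["project", "personal", "team"]

def resolve(key, layers):
    """Return the most specific memory for `key`, or None if absent."""
    for layer in PRECEDENCE:
        if key in layers.get(layer, {}):
            return layers[layer][key]
    return None

layers = {
    "team":     {"test-framework": "unittest"},
    "personal": {"type-hints": "explicit in function signatures"},
    "project":  {"test-framework": "pytest"},
}

resolve("test-framework", layers)  # project layer shadows team: "pytest"
resolve("type-hints", layers)      # falls through to the personal layer
```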
Every memory entry is visible, editable, and deletable. Nothing is implicit. If the engine extracts a memory you disagree with, you delete it and it stops resurfacing. If you want to add a memory directly, you write it. The engine doesn't pretend to be smarter than you are.
Memories feed into the RAG Index and get injected into agent context when relevant. The relevance scoring is conservative by default — better to omit a memory than to surface a wrong one — and you can tune the sensitivity.
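Conservative gating can be as simple as a score threshold: anything below it is omitted rather than risked. This is a sketch of that idea; the scoring values, default threshold, and function name are assumptions, not the engine's API:

```python
# Conservative relevance gating, sketched as a threshold filter.
# The 0.8 default and the scores below are illustrative assumptions.
def select_memories(memories, scores, threshold=0.8):
    """Keep memories scoring at or above the threshold; omit the rest.

    A high default means a borderline memory is dropped rather than
    risk injecting a wrong one into agent context.
    """
    return [m for m, s in zip(memories, scores) if s >= threshold]

memories = ["uses pytest", "prefers type hints", "rejected that lint rule"]
scores = [0.91, 0.62, 0.85]

select_memories(memories, scores)                 # only confident matches
select_memories(memories, scores, threshold=0.5)  # tuned up: all three
```

Lowering the threshold is the "tune the sensitivity" knob: more memories surface, at the cost of more false positives.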
This is also where the Planning Engine gets a lot of its competence. A plan that respects your memory is a plan you'll actually approve. A plan that ignores it is a plan you'll reject six times before giving up.
The Memory Engine is what turns repeated explanation into accumulated capital.