Local-First Runtime

A Dockerized control plane that mounts your filesystem, scans your repos, and runs agents on your machine.
The default mode for AI development tools is: your code goes to their servers, their model processes it, and their logs retain it for a window defined in terms you didn't read. For a lot of work, that's fine. For proprietary code, client work under NDA, or anything you'd rather not hand to a third party, it isn't.
The Local-First Runtime is a $30 one-time download that runs the entire control plane on your machine. Docker handles isolation. The runtime mounts the directories you specify — your home directory, individual repos, whatever scope you want — and runs scanning, ingestion, planning, and agent execution locally.
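In practice, a launch could look like the sketch below. The image name, port, and paths are illustrative assumptions, not the product's documented CLI; the point is that scope is whatever volumes you choose to mount.

```shell
# Hypothetical invocation -- image name, flags, and paths are illustrative.
#
# -p binds the control-plane UI to localhost only, so nothing is
# exposed beyond your machine. The first -v mounts a single repo
# read-only for scanning; the second keeps runtime state (scans,
# plans, memory entries) on your own disk, outside the container.
docker run -d --name local-runtime \
  -p 127.0.0.1:8080:8080 \
  -v "$HOME/work/client-repo:/mnt/repos/client-repo:ro" \
  -v "$HOME/.local-runtime:/data" \
  local-runtime:latest
```

Mounting read-only (`:ro`) is a reasonable default for scanning: the runtime can index the repo without being able to write to it.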
What stays local: your code, your filesystem, your chat history, your instruction files, your task graphs, your memory entries. Nothing leaves the machine unless you explicitly enable a cloud module (ChatSync, RAG Index, Memory Engine) and authorize what gets uploaded.
What goes to the cloud, when you opt in: encrypted embeddings and chat content for the modules you've enabled, scoped to your tenant, never aggregated, never used for training. You can purge cloud state at any time and the local runtime continues to work standalone.
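Opt-in is naturally expressed as a config toggle. The file name and keys below are assumptions for illustration, not the shipped schema; they show the shape of the contract: every cloud module is off until you flip it on, and upload scope is declared explicitly.

```yaml
# Hypothetical ~/.local-runtime/config.yaml -- keys are illustrative.
cloud_modules:
  chat_sync:
    enabled: false        # nothing uploads until this is true
  rag_index:
    enabled: true
    scope:                # authorize exactly which paths get embedded and uploaded
      - /mnt/repos/client-repo
  memory_engine:
    enabled: false
```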
The control plane UI runs in your browser, served from the local container. It shows your ingested chat history, your scanned repos, your active plans, the agent queue, and the memory entries — all queryable, all editable, all yours.
Installation is one command. Mount setup is a configuration step, not a multi-day project. The runtime is designed for developers who want infrastructure they own, not another SaaS dashboard.
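As a sketch of what that configuration step could mean (the section name and fields are hypothetical, not the shipped format), mount scope might be declared once and reused across restarts:

```yaml
# Hypothetical mounts section of ~/.local-runtime/config.yaml.
mounts:
  - path: ~/projects        # scan everything under projects
    mode: ro
  - path: ~/clients/acme    # NDA work: scanned and indexed locally only
    mode: ro
```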
One-time payment. No subscription on the runtime itself. The cloud modules are opt-in and priced separately.