Local-First · Self-Healing · AI Developer OS

The AI Operating System
for Developers

Olympus isn't a CLI tool — it's an operating system you develop within. It orchestrates multiple AI models, governs code quality transparently, survives crashes, and remembers everything across sessions. You code. Olympus protects you, remembers for you, and resolves problems for you.

$ olympus
# ⚡(Zeus): Resuming — was resolving conflict in auth.go
# 🛡️(Aegis): Security scan passed, no findings
# 🦉(Athena): Intent validated — 2 specs generated
olympus> fix the failing tests in payment_service
60–90% · Cloud token reduction
<200ms · Local response latency
10 · Pantheon Modules working for you
Sessions survived across crashes

An AI system that works with you, not just for you

Most AI coding tools are stateless — they answer a question and forget you exist. Olympus is different. It's a persistent, living system that remembers your project, understands your intent, governs code quality behind the scenes, and picks up exactly where it left off — even after a crash.

🛡
Transparent

Built-in quality governance

Behind the scenes, Olympus reviews your code for security, architecture, and correctness — like having a senior team watching every PR. You see a green status indicator, not the internals — like the lock icon on HTTPS.

Always alive

Survives everything

Close your laptop. Kill the process. Run out of context. Olympus remembers what it was doing and picks up seamlessly. "Yesterday you were working on the auth refactor. Status: PR created, awaiting CI."

The problems Olympus solves

🚫

AI agents close issues without finishing the work

Olympus validates that completed work actually matches your original intent. Stub implementations, dead code, and half-done PRs are caught and rejected automatically.

💸

Cloud AI costs spiral out of control

Olympus routes to local models first — free and under 200ms. Cloud providers are only used when necessary, and you're always asked before spending money.

🔄

Every session starts from scratch

Olympus remembers everything — your preferences, past decisions, what you were working on last week. No more re-explaining context to your AI tools.

🤷

No visibility into what the AI is doing

Olympus governs transparently. Security reviews, architecture checks, and intent validation happen automatically — and you can inspect every detail on demand.

Two loops running simultaneously

Olympus runs two concurrent loops that work together to give you a seamless experience.

🛡
Loop 1 — Governance

Quality enforcement behind the scenes

While you code, Olympus runs security scans, architecture reviews, and intent validation in the background. When everything passes, you see a green indicator. When something needs attention, you get a plain-language explanation — not a stack trace.

Loop 2 — Persistence

Crash recovery and memory

Every action is checkpointed. If the process dies — whether from Ctrl-C, a crash, or context exhaustion — the next session resumes exactly where you left off. Olympus remembers your project history, preferences, and in-progress work across sessions.
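The checkpoint-and-resume cycle can be sketched in a few lines. This is an illustrative toy, not Olympus's real persistence layer; the file name and state fields are hypothetical. The key detail is the atomic write: a crash mid-save must never corrupt the last good checkpoint.

```python
import json
import os
import tempfile

CHECKPOINT = "olympus_checkpoint.json"  # hypothetical path, for illustration only

def save_checkpoint(state, path=CHECKPOINT):
    # Write to a temp file first, then atomically swap it into place,
    # so a crash during the write leaves the previous checkpoint intact.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX and Windows

def resume(path=CHECKPOINT):
    # On startup, reload the last checkpoint if one exists.
    if not os.path.exists(path):
        return {"task": None, "status": "fresh session"}
    with open(path) as f:
        return json.load(f)

save_checkpoint({"task": "auth refactor", "status": "PR created, awaiting CI"})
print(resume()["status"])  # PR created, awaiting CI
```

Because every action writes a checkpoint like this, "resume" after Ctrl-C, a crash, or context exhaustion is just a read of the last state.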

Learn more about the architecture →

Specialized Pantheon Modules, each with one job

Olympus is built from specialized Pantheon Modules — each named after a figure from Greek mythology. Every Pantheon Module has a single responsibility and communicates through a secure internal bus. Think of them as a team of specialists, each handling what they're best at.

| Pantheon Module | Role | What it does for you |
| --- | --- | --- |
| ⚡ Zeus | Coordinator | Manages your session, delegates tasks to the right modules, and keeps everything in sync |
| 🦉 Athena | Intent & Validation | Understands what you mean, creates concrete specs, and verifies the work actually matches your request |
| 🌈 Iris | AI Model Routing | Picks the best AI model for each task — local models first, cloud only when needed |
| 🔗 Hermes | Internal Communication | Securely connects all modules so they can work together without conflicts |
| 🔨 Hephaestus | Code & Files | Handles code generation, branch management, conflict resolution, and file operations |
| 👁️ Argus | Monitoring | Tracks costs, watches CI pipelines, and detects flaky tests — reports everything, never interferes |
| 🛡️ Aegis | Security & Governance | Scans for vulnerabilities, detects secrets in code, runs governance reviews, and maintains audit trails |
| 🐕 Cerberus | Cost Protection | Guards against unexpected spending — pauses and asks you before using paid cloud models |
| 🧠 Mnemosyne | Memory | Remembers everything across sessions — your preferences, decisions, and work-in-progress |
| 🧩 Epimetheus | Learning | Analyzes past outcomes so Olympus gets smarter over time and avoids repeating mistakes |

Explore all Pantheon Modules in detail →

🦉 Athena — making sure the work is actually done

The biggest problem with AI coding agents is that they close issues without truly completing the work. Athena solves this with a four-step intent loop that ensures every piece of work matches what you actually asked for.

flowchart LR
    subgraph Translate["1. Understand"]
        HI["Your request"]
        CS["Concrete spec"]
    end
    subgraph Gate["2. Verify spec"]
        GC["Has test\ncriteria?"]
        GR["Pass / Block"]
    end
    subgraph Execute["3. Do the work"]
        EX["Code changes"]
        EV["Evidence"]
    end
    subgraph Validate["4. Validate"]
        VC["Matches intent?"]
        VR["MATCHED / PARTIAL\n/ NOT MATCHED"]
    end
    HI --> CS
    CS --> GC
    GC --> GR
    GR -->|passed| EX
    EX --> EV
    EV --> VC
    VC --> VR
    style Translate fill:#1a2535,color:#D8DEE9,stroke:#5E81AC
    style Gate fill:#1a2e20,color:#D8DEE9,stroke:#A3BE8C
    style Execute fill:#1e2430,color:#D8DEE9,stroke:#4C566A
    style Validate fill:#252d3a,color:#D8DEE9,stroke:#EBCB8B

Only work that fully matches your original intent gets merged. Partial implementations, stub code, and TODO-as-implementation are automatically caught and rejected.
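The four-step loop can be sketched as a gate plus a validator. This is a hypothetical illustration, not Athena's real implementation; the spec fields and criterion-matching logic are invented for the sketch. The point is the two hard rules: a spec without testable criteria never starts, and work is graded against every criterion, not just some.

```python
# Step 2 of the loop: a spec with no acceptance criteria is blocked
# before any work begins.
def gate_spec(spec):
    if not spec.get("acceptance_criteria"):
        return "BLOCKED: spec has no test criteria"
    return "PASSED"

# Step 4: compare the delivered evidence against every criterion.
def validate(spec, evidence):
    met = [c for c in spec["acceptance_criteria"] if c in evidence]
    if len(met) == len(spec["acceptance_criteria"]):
        return "INTENT_MATCHED"
    return "PARTIAL" if met else "NOT_MATCHED"

spec = {
    "intent": "fix the failing tests in payment_service",
    "acceptance_criteria": ["tests pass", "no new stubs"],
}
print(gate_spec(spec))                                  # PASSED
print(validate(spec, ["tests pass"]))                   # PARTIAL
print(validate(spec, ["tests pass", "no new stubs"]))   # INTENT_MATCHED
```

A PARTIAL verdict is what catches stub implementations: the code changed, but the evidence does not cover every criterion.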

🌈 Local first. Cloud only when needed.

Every request flows through an intelligent routing waterfall. Olympus uses local models whenever possible — they're free and fast. Cloud providers only activate when the task genuinely requires more capability.

flowchart TD
    Q([Your request]) --> O
    O["1 · Local Model\n(free · <200ms)"]
    O -->|available| OR([Response · free])
    O -->|unavailable| CP
    CP["2 · Subscription Provider\n(no per-token cost)"]
    CP -->|available| CPR([Response · subscription])
    CP -->|unavailable| GH
    GH["3 · Subscription Fallback\n(no per-token cost)"]
    GH -->|available| GHR([Response · subscription])
    GH -->|unavailable| CA
    CA["4 · Pay-per-token API\n⚠ last resort"]
    CA -->|available| CAR([Response · cost warning])
    CA -->|unavailable| ERR([Error + diagnosis])
    style O fill:#2E3440,color:#A3BE8C,stroke:#A3BE8C
    style OR fill:#1a2e20,color:#A3BE8C,stroke:#A3BE8C
    style CP fill:#2E3440,color:#88C0D0,stroke:#88C0D0
    style CPR fill:#1a2535,color:#88C0D0,stroke:#88C0D0
    style GH fill:#2E3440,color:#81A1C1,stroke:#81A1C1
    style GHR fill:#1a2535,color:#81A1C1,stroke:#81A1C1
    style CA fill:#3a2020,color:#BF616A,stroke:#BF616A
    style CAR fill:#3a2020,color:#EBCB8B,stroke:#EBCB8B
    style ERR fill:#3a2020,color:#BF616A,stroke:#BF616A
    style Q fill:#252d3a,color:#D8DEE9,stroke:#4C566A
10 turns · ~70% savings: Context compression reduces cloud token usage dramatically
20 turns · ~85% savings: Longer sessions see even greater cost reduction
40 turns · ~91% savings: Extended work sessions with minimal cloud cost
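Why do savings grow with session length? A rough model: without compression, every turn resends the full conversation history, so total tokens grow quadratically; with compression, each turn sends a roughly fixed-size summary plus the new message, so totals grow linearly. The numbers below are a toy calculation under assumed sizes (1,000 tokens per turn, a 600-token summary), not Olympus's actual compressor, but they land near the quoted figures.

```python
def uncompressed_tokens(turns, per_turn=1000):
    # Each turn resends the whole history: 1 + 2 + ... + n chunks.
    return per_turn * turns * (turns + 1) // 2

def compressed_tokens(turns, per_turn=1000, summary=600):
    # Each turn sends a fixed-size summary plus the new message.
    return turns * (summary + per_turn)

for turns in (10, 20, 40):
    u, c = uncompressed_tokens(turns), compressed_tokens(turns)
    print(turns, f"{1 - c / u:.0%}")
# 10 71%
# 20 85%
# 40 92%
```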

See how routing decisions work →

Fixes problems before you even notice them

When Olympus hits a blocker — a merge conflict, a failing test, a missing dependency — it tries to fix it automatically. You're only asked for help as a last resort.

| Step | What happens | Example |
| --- | --- | --- |
| 1. Self-resolve | Fixes it directly | Rebases a branch, resolves a merge conflict, installs a missing dependency |
| 2. Retry with context | Retries with error details | Re-runs with the failure output as additional context |
| 3. Reduce scope | Delivers what it can | Skips a conflicting file, merges the clean parts |
| 4. Ask you | Escalates with full details | Explains exactly what was tried and why it failed |
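The escalation ladder is a simple ordered loop: try each strategy, record what happened, and only fall through to the user once everything else has failed. This sketch is hypothetical (the strategy names and return shapes are invented for illustration); note that the full attempt log is kept, which is what makes step 4's "explains exactly what was tried" possible.

```python
def self_heal(problem, strategies):
    # Try each recovery strategy in order; keep a log of every attempt.
    attempts = []
    for name, strategy in strategies:
        ok, detail = strategy(problem)
        attempts.append((name, detail))
        if ok:
            return f"resolved by {name}", attempts
    # Step 4: escalate to the user with the full attempt history.
    return "escalated to user", attempts

strategies = [
    ("self-resolve",       lambda p: (False, "rebase failed")),
    ("retry with context", lambda p: (False, "same test still red")),
    ("reduce scope",       lambda p: (True,  "merged clean files, skipped 1 conflict")),
]

outcome, attempts = self_heal("merge conflict in auth.go", strategies)
print(outcome)  # resolved by reduce scope
```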

🐕 Cost-aware self-healing

If local models are unavailable and fixing a problem would require a paid cloud model, Olympus pauses and gives you options:

🐕(Cerberus): Local models not available. Self-healing requires a cloud model.
1. Start local models — free
2. Wait — Olympus retries when local models are available
3. Use cloud — proceed with cloud model (costs money)
4. Skip — resolve manually

🧠 Olympus remembers everything

Unlike other AI tools that start fresh every session, Olympus has persistent memory. It remembers your project context, decisions, and preferences — and uses them to give you better results over time.

| Scenario | What Olympus does |
| --- | --- |
| Context window fills up | Auto-checkpoints and continues seamlessly in a new session |
| You hit Ctrl-C | Next startup: "You had work in progress on X. Continue?" |
| You close your laptop and open it tomorrow | "Yesterday you were working on X. Status: PR created, awaiting CI." |
| You switch to a different machine | Full context restored from encrypted cloud backup (Pro plan) |
| You ask "what happened last week?" | Full recall of intents, decisions, and outcomes |

🛡️ Transparent quality assurance

Olympus governs code quality like HTTPS protects your browser — you know it's there, but you don't see the handshake. Six review panels run automatically on every PR, covering security, architecture, documentation, cost, and more.

| What you see | What's happening |
| --- | --- |
| 🟢 Green indicator | All governance panels passed — your code is clean and secure |
| 🟡 Yellow indicator | Minor findings — non-blocking suggestions for improvement |
| 🔴 Red indicator | Something needs attention — plain-language explanation provided |
| /governance command | Full details on demand — every panel verdict, finding, and recommendation |
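A plausible way to collapse six panels into one indicator is "worst finding wins": the highest severity across all panels decides the color. This mapping is an assumption for illustration, not Olympus's documented logic, but it is consistent with the verdict labels described later on this page (CRITICAL/HIGH block, MEDIUM/LOW are non-blocking, INFO needs no action).

```python
# Severity ranks, matching the governance verdict labels.
SEVERITY_RANK = {"INFO": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def indicator(findings):
    # The single worst finding across every panel decides the color.
    worst = max((SEVERITY_RANK[f] for f in findings), default=0)
    if worst >= SEVERITY_RANK["HIGH"]:
        return "red"     # blocks merge, must fix
    if worst >= SEVERITY_RANK["LOW"]:
        return "yellow"  # non-blocking suggestions
    return "green"       # all clear

print(indicator([]))                       # green
print(indicator(["LOW", "INFO"]))          # yellow
print(indicator(["MEDIUM", "CRITICAL"]))   # red
```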

Learn about the governance pipeline →

Everything you need, nothing you don't

🌈
Free · <200ms

Local-first AI routing

Local models handle everything by default. Cloud providers only activate when the task genuinely requires more capability.

🗜
85% token savings

Smart context compression

When cloud models are needed, Olympus compresses context automatically — cutting token costs by up to 91%.

🦉
Intent validation

Work verification

Every piece of work is verified against your original request. No more stub implementations or issues closed without real completion.

🛡
Automated

Security & quality reviews

Six governance panels review every PR — security, architecture, documentation, cost, threat modeling, and data governance.

🧠
Persistent

Cross-session memory

Olympus remembers everything — your intents, decisions, preferences, and file changes. Survives crashes, context exhaustion, and restarts.

🔨
Automatic

Self-healing

Git conflicts, test failures, missing dependencies — Olympus fixes them before you notice. You're only asked as a last resort.

Works with the models you already use

Olympus supports multiple AI providers out of the box. Configure them with olympus configure, or add any compatible API as a plugin.

| Priority | Type | Cost | Best for |
| --- | --- | --- | --- |
| 1 · Primary | Local model (on your machine) | Free | Everything. Default for all queries. |
| 2 · Cloud fallback | Subscription provider | Subscription | Reasoning, long context, complex code |
| 3 · Cloud fallback | Secondary subscription provider | Subscription | Code generation, diffs |
| 4 · Last resort | Pay-per-token API | Per token | ⚠ Fallback only — cost warning shown |
| 5 · Plugin | Any compatible API | Per token | Any API you want to add |
# Add any compatible API as a plugin provider
olympus providers add my-provider \
  --key sk_... \
  --model my-preferred-model \
  --base-url https://api.example.com/v1

Get started in a few minutes

# Install via Homebrew
brew install convergent-systems-co/tap/olympus

# Pull a local model for free routing (optional but recommended)
ollama pull llama3

# Configure cloud providers (optional — local models work without any)
olympus configure

# Start Olympus
olympus

What you can do

# Just tell Olympus what you need in plain language
olympus> fix the null pointer in auth_service.go
olympus> explain the token bucket algorithm
olympus> review the payment processing module
olympus> refactor the database connection pool
olympus> write tests for UserService.CreateAccount

# Or use slash commands for specific actions
/diff       # review staged git changes
/security   # security-focused code review
/governance # full governance details

What the labels and indicators mean

Olympus uses color-coded labels and module icons throughout its interface to keep you informed without being intrusive.

Status indicators

| Indicator | Meaning |
| --- | --- |
| 🟢 Healthy | Everything is running smoothly — all governance checks passing, no blockers |
| 🟡 Attention | Non-blocking findings — suggestions for improvement, advisory notes |
| 🔴 Blocked | Something needs your input — a security issue, a failed validation, or a cost decision |

Pantheon Module icons in output

When Olympus shows you a message, the icon and name tell you which Pantheon Module is talking:

⚡(Zeus): Session resumed — picking up where you left off
🦉(Athena): Your request has been translated into 3 concrete specs
🛡️(Aegis): Security scan complete — no issues found
🌈(Iris): Routed to local model — free, <200ms
🐕(Cerberus): Local models unavailable — waiting for you to decide
🧠(Mnemosyne): Checkpoint saved — safe to close

Governance verdict labels

| Label | Meaning | Impact |
| --- | --- | --- |
| [CRITICAL] | Security or correctness issue | Blocks merge — must fix |
| [HIGH] | Significant production risk | Blocks merge — must fix |
| [MEDIUM] | Notable gap | Non-blocking — should fix |
| [LOW] | Minor improvement opportunity | Advisory — fix if convenient |
| [INFO] | Informational observation | No action needed |

Intent validation verdicts

| Verdict | Meaning |
| --- | --- |
| INTENT_MATCHED | The work fully matches your original request — ready to merge |
| PARTIAL | Some of the work is done, but there are gaps — Olympus tells you exactly what's missing |
| NOT_MATCHED | The work doesn't match what you asked for — blocked with an explanation |

Deep dives

Architecture

How the two-loop system works, how Pantheon Modules communicate, and how Olympus boots up.

The Pantheon

Every Pantheon Module explained — what it does, why it exists, and how it helps you.

Governance

How automated review panels work — six perspectives checking every PR.

APIS Standard

The issue format that prevents AI agents from closing work prematurely.

Smart Routing

How Olympus picks the right AI model for each task — local first, cloud as fallback.

Pantheon Module Domains

How responsibilities are divided so every capability has exactly one owner.

Free to start. Pay only for what you need.

The full Olympus experience — all Pantheon Modules, local-first routing, governance, memory, and self-healing — runs on your machine at no cost. Paid plans add cloud sync, analytics, and team features.

Available now

Free

$0

  • All Pantheon Modules included
  • Local-first AI model routing
  • Full governance pipeline
  • Intent validation (Athena)
  • Self-healing
  • Cross-session memory (local)
  • APIS enforcement
Coming soon

Pro

$12–19/mo

  • Everything in Free
  • Encrypted cloud sync
  • Advanced analytics
  • Priority model routing
  • Extended memory history
  • Multi-machine support
Future

Teams

$29–49/seat/mo

  • Everything in Pro
  • Shared governance policies
  • Team-wide memory
  • Per-developer cost budgets
  • Team usage analytics
  • SSO / SAML

Enterprise

For organizations that need centralized AI governance, compliance reporting, audit trails, data loss prevention, and dedicated support. Custom pricing — contact us.

Unified AI Billing Coming soon

One invoice for all your AI providers. Olympus routes your requests to the cheapest capable model — you save money even after our margin. Stop managing multiple API keys and billing accounts.

What's next

Next

Encrypted cloud sync

Sync your Olympus memory across machines. Encrypted before upload. Multi-machine support for ~$0.50/mo.

Next

Advanced analytics

Deep insights into your AI usage — cost breakdowns, model performance, and optimization recommendations.

Future

Team collaboration

Real-time sync across team members. Shared governance policies. Per-developer budgets and analytics.