The AI Operating System for Developers
Olympus isn't a CLI tool — it's an operating system you develop within. It orchestrates multiple AI models, governs code quality transparently, survives crashes, and remembers everything across sessions. You code. Olympus protects you, remembers for you, and resolves problems for you.
# ⚡(Zeus): Resuming — was resolving conflict in auth.go
# 🛡️(Aegis): Security scan passed, no findings
# 🦉(Athena): Intent validated — 2 specs generated
olympus> fix the failing tests in payment_service
An AI system that works with you, not just for you
Most AI coding tools are stateless — they answer a question and forget you exist. Olympus is different. It's a persistent, living system that remembers your project, understands your intent, governs code quality behind the scenes, and picks up exactly where it left off — even after a crash.
Built-in quality governance
Behind the scenes, Olympus reviews your code for security, architecture, and correctness — like having a senior team watching every PR. You see a green status indicator, not the internals, much as a browser's padlock icon signals HTTPS without exposing the handshake.
Survives everything
Close your laptop. Kill the process. Run out of context. Olympus remembers what it was doing and picks up seamlessly. "Yesterday you were working on the auth refactor. Status: PR created, awaiting CI."
The problems Olympus solves
AI agents close issues without finishing the work
Olympus validates that completed work actually matches your original intent. Stub implementations, dead code, and half-done PRs are caught and rejected automatically.
Cloud AI costs spiral out of control
Olympus routes to local models first — free and under 200ms. Cloud providers are only used when necessary, and you're always asked before spending money.
Every session starts from scratch
Olympus remembers everything — your preferences, past decisions, what you were working on last week. No more re-explaining context to your AI tools.
No visibility into what the AI is doing
Olympus governs transparently. Security reviews, architecture checks, and intent validation happen automatically — and you can inspect every detail on demand.
Two loops running simultaneously
Olympus runs two concurrent loops that work together to give you a seamless experience.
Quality enforcement behind the scenes
While you code, Olympus runs security scans, architecture reviews, and intent validation in the background. When everything passes, you see a green indicator. When something needs attention, you get a plain-language explanation — not a stack trace.
Crash recovery and memory
Every action is checkpointed. If the process dies — whether from Ctrl-C, a crash, or context exhaustion — the next session resumes exactly where you left off. Olympus remembers your project history, preferences, and in-progress work across sessions.
Specialized Pantheon Modules, each with one job
Olympus is built from specialized Pantheon Modules — each named after a figure from Greek mythology. Every Pantheon Module has a single responsibility and communicates through a secure internal bus. Think of them as a team of specialists, each handling what they're best at.
| Pantheon Module | Role | What it does for you |
|---|---|---|
| ⚡ Zeus | Coordinator | Manages your session, delegates tasks to the right modules, and keeps everything in sync |
| 🦉 Athena | Intent & Validation | Understands what you mean, creates concrete specs, and verifies the work actually matches your request |
| 🌈 Iris | AI Model Routing | Picks the best AI model for each task — local models first, cloud only when needed |
| 🔗 Hermes | Internal Communication | Securely connects all modules so they can work together without conflicts |
| 🔨 Hephaestus | Code & Files | Handles code generation, branch management, conflict resolution, and file operations |
| 👁️ Argus | Monitoring | Tracks costs, watches CI pipelines, and detects flaky tests — reports everything, never interferes |
| 🛡️ Aegis | Security & Governance | Scans for vulnerabilities, detects secrets in code, runs governance reviews, and maintains audit trails |
| 🐕 Cerberus | Cost Protection | Guards against unexpected spending — pauses and asks you before using paid cloud models |
| 🧠 Mnemosyne | Memory | Remembers everything across sessions — your preferences, decisions, and work-in-progress |
| 🧩 Epimetheus | Learning | Analyzes past outcomes so Olympus gets smarter over time and avoids repeating mistakes |
🦉 Athena — making sure the work is actually done
The biggest problem with AI coding agents is that they close issues without truly completing the work. Athena solves this with a four-step intent loop that ensures every piece of work matches what you actually asked for.
Only work that fully matches your original intent gets merged. Partial implementations, stub code, and TODO-as-implementation are automatically caught and rejected.
🌈 Local first. Cloud only when needed.
Every request flows through an intelligent routing waterfall. Olympus uses local models whenever possible — they're free and fast. Cloud providers only activate when the task genuinely requires more capability.
- ~70% savings: context compression reduces cloud token usage dramatically
- ~85% savings: longer sessions see even greater cost reduction
- ~91% savings: extended work sessions with minimal cloud cost
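The local-first waterfall amounts to trying providers in priority order and stopping at the first one that is both available and capable. A minimal sketch, with invented provider names and availability/capability checks:

```go
package main

import "fmt"

// Provider is a hypothetical entry in the routing waterfall.
type Provider struct {
	Name      string
	Free      bool
	Available bool // e.g. a local model is running
	Capable   bool // can handle this task's complexity
}

// route walks the waterfall in priority order and returns the first
// provider that is both available and capable, so a free local model
// wins whenever it can do the job.
func route(waterfall []Provider) (Provider, bool) {
	for _, p := range waterfall {
		if p.Available && p.Capable {
			return p, true
		}
	}
	return Provider{}, false
}

func main() {
	waterfall := []Provider{
		{Name: "local", Free: true, Available: true, Capable: true},
		{Name: "cloud-subscription", Available: true, Capable: true},
		{Name: "pay-per-token", Available: true, Capable: true},
	}
	if p, ok := route(waterfall); ok {
		fmt.Println("routed to:", p.Name)
	}
}
```

Cloud entries only come into play when an earlier rung is down or the task exceeds what the local model can handle.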
Fixes problems before you even notice them
When Olympus hits a blocker — a merge conflict, a failing test, a missing dependency — it tries to fix it automatically. You're only asked for help as a last resort.
| Step | What happens | Example |
|---|---|---|
| 1. Self-resolve | Fixes it directly | Rebases a branch, resolves a merge conflict, installs a missing dependency |
| 2. Retry with context | Retries with error details | Re-runs with the failure output as additional context |
| 3. Reduce scope | Delivers what it can | Skips a conflicting file, merges the clean parts |
| 4. Ask you | Escalates with full details | Explains exactly what was tried and why it failed |
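The four-step ladder can be sketched as a loop over ordered strategies that stops at the first one that succeeds. The strategy names mirror the table; the code itself is an illustration, not the Olympus implementation:

```go
package main

import "fmt"

// Strategy is one rung of the hypothetical escalation ladder.
type Strategy struct {
	Name string
	Run  func() bool // true means the blocker was resolved
}

// heal tries each strategy in order and reports which rung resolved
// the blocker; if none do, the blocker is surfaced as unresolved.
func heal(ladder []Strategy) string {
	for _, s := range ladder {
		if s.Run() {
			return s.Name
		}
	}
	return "unresolved"
}

func main() {
	ladder := []Strategy{
		{Name: "self-resolve", Run: func() bool { return false }},      // e.g. rebase failed
		{Name: "retry with context", Run: func() bool { return true }}, // retry with error output works
		{Name: "reduce scope", Run: func() bool { return true }},
		{Name: "ask you", Run: func() bool { return true }},
	}
	fmt.Println("resolved at:", heal(ladder))
}
```

The final rung, asking you, only runs after every automatic strategy has failed, which is why escalations arrive with a full record of what was already tried.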
🐕 Cost-aware self-healing
If local models are unavailable and fixing a problem would require a paid cloud model, Olympus pauses and gives you options:
1. Start local models — free
2. Wait — Olympus retries when local models are available
3. Use cloud — proceed with cloud model (costs money)
4. Skip — resolve manually
🧠 Olympus remembers everything
Unlike other AI tools that start fresh every session, Olympus has persistent memory. It remembers your project context, decisions, and preferences — and uses them to give you better results over time.
| Scenario | What Olympus does |
|---|---|
| Context window fills up | Auto-checkpoints and continues seamlessly in a new session |
| You hit Ctrl-C | Next startup: "You had work in progress on X. Continue?" |
| You close your laptop and open it tomorrow | "Yesterday you were working on X. Status: PR created, awaiting CI." |
| You switch to a different machine | Full context restored from encrypted cloud backup (Pro plan) |
| You ask "what happened last week?" | Full recall of intents, decisions, and outcomes |
🛡️ Transparent quality assurance
Olympus governs code quality like HTTPS protects your browser — you know it's there, but you don't see the handshake. Six review panels run automatically on every PR: security, architecture, documentation, cost, threat modeling, and data governance.
| What you see | What's happening |
|---|---|
| 🟢 Green indicator | All governance panels passed — your code is clean and secure |
| 🟡 Yellow indicator | Minor findings — non-blocking suggestions for improvement |
| 🔴 Red indicator | Something needs attention — plain-language explanation provided |
| /governance command | Full details on demand — every panel verdict, finding, and recommendation |
Everything you need, nothing you don't
Local-first AI routing
Local models handle everything by default. Cloud providers only activate when the task genuinely requires more capability.
Smart context compression
When cloud models are needed, Olympus compresses context automatically — cutting token costs by up to 91%.
Work verification
Every piece of work is verified against your original request. No more stub implementations or issues closed without real completion.
Security & quality reviews
Six governance panels review every PR — security, architecture, documentation, cost, threat modeling, and data governance.
Cross-session memory
Olympus remembers everything — your intents, decisions, preferences, and file changes. Survives crashes, context exhaustion, and restarts.
Self-healing
Git conflicts, test failures, missing dependencies — Olympus fixes them before you notice. You're only asked as a last resort.
Works with the models you already use
Olympus supports multiple AI providers out of the box. Configure them with olympus configure, or add any compatible API as a plugin.
| Priority | Type | Cost | Best for |
|---|---|---|---|
| 1 · Primary | Local model (on your machine) | Free | Everything. Default for all queries. |
| 2 · Cloud fallback | Subscription provider | Subscription | Reasoning, long context, complex code |
| 3 · Cloud fallback | Secondary subscription provider | Subscription | Code generation, diffs |
| 4 · Last resort | Pay-per-token API | Per token ⚠ | Fallback only — cost warning shown |
| 5 · Plugin | Any compatible API | Per token | Any API you want to add |
# Add any compatible API as a plugin provider
olympus providers add my-provider \
--key sk_... \
--model my-preferred-model \
--base-url https://api.example.com/v1
Get started in a few minutes
# Install via Homebrew
brew install convergent-systems-co/tap/olympus
# Pull a local model for free routing (optional but recommended)
ollama pull llama3
# Configure cloud providers (optional — local models work without any)
olympus configure
# Start Olympus
olympus
What you can do
# Just tell Olympus what you need in plain language
olympus> fix the null pointer in auth_service.go
olympus> explain the token bucket algorithm
olympus> review the payment processing module
olympus> refactor the database connection pool
olympus> write tests for UserService.CreateAccount
# Or use slash commands for specific actions
/diff # review staged git changes
/security # security-focused code review
/governance # full governance details
What the labels and indicators mean
Olympus uses color-coded labels and module icons throughout its interface to keep you informed without being intrusive.
Status indicators
| Indicator | Meaning |
|---|---|
| 🟢 Healthy | Everything is running smoothly — all governance checks passing, no blockers |
| 🟡 Attention | Non-blocking findings — suggestions for improvement, advisory notes |
| 🔴 Blocked | Something needs your input — a security issue, a failed validation, or a cost decision |
Pantheon Module icons in output
When Olympus shows you a message, the icon and name tell you which Pantheon Module is talking:
🦉(Athena): Your request has been translated into 3 concrete specs
🛡️(Aegis): Security scan complete — no issues found
🌈(Iris): Routed to local model — free, <200ms
🐕(Cerberus): Local models unavailable — waiting for you to decide
🧠(Mnemosyne): Checkpoint saved — safe to close
Governance verdict labels
| Label | Meaning | Impact |
|---|---|---|
| [CRITICAL] | Security or correctness issue | Blocks merge — must fix |
| [HIGH] | Significant production risk | Blocks merge — must fix |
| [MEDIUM] | Notable gap | Non-blocking — should fix |
| [LOW] | Minor improvement opportunity | Advisory — fix if convenient |
| [INFO] | Informational observation | No action needed |
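The label-to-impact column reduces to one rule: CRITICAL and HIGH block the merge, everything else is advisory. A sketch, with an invented function name:

```go
package main

import "fmt"

// blocksMerge encodes the verdict-label table: CRITICAL and HIGH
// findings block the merge; MEDIUM, LOW, and INFO do not.
func blocksMerge(label string) bool {
	switch label {
	case "CRITICAL", "HIGH":
		return true
	default:
		return false
	}
}

func main() {
	for _, l := range []string{"CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"} {
		fmt.Printf("[%s] blocks merge: %v\n", l, blocksMerge(l))
	}
}
```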
Intent validation verdicts
| Verdict | Meaning |
|---|---|
| INTENT_MATCHED | The work fully matches your original request — ready to merge |
| PARTIAL | Some of the work is done, but there are gaps — Olympus tells you exactly what's missing |
| NOT_MATCHED | The work doesn't match what you asked for — blocked with an explanation |
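Conceptually, the three verdicts follow from comparing the specs generated from your request against what the delivered work actually satisfies. A sketch that models this as a simple count (the function and its signature are invented for illustration):

```go
package main

import "fmt"

// verdict compares how many of the generated specs the delivered work
// satisfies: all of them yields INTENT_MATCHED, some yields PARTIAL,
// and none (including stub implementations) yields NOT_MATCHED.
func verdict(satisfied, total int) string {
	switch {
	case total > 0 && satisfied == total:
		return "INTENT_MATCHED"
	case satisfied > 0:
		return "PARTIAL"
	default:
		return "NOT_MATCHED"
	}
}

func main() {
	fmt.Println(verdict(3, 3)) // every spec satisfied: ready to merge
	fmt.Println(verdict(2, 3)) // gaps remain
	fmt.Println(verdict(0, 3)) // nothing real was delivered
}
```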
Deep dives
Architecture
How the two-loop system works, how Pantheon Modules communicate, and how Olympus boots up.
The Pantheon
Every Pantheon Module explained — what it does, why it exists, and how it helps you.
Governance
How automated review panels work — six perspectives checking every PR.
APIS Standard
The issue format that prevents AI agents from closing work prematurely.
Smart Routing
How Olympus picks the right AI model for each task — local first, cloud as fallback.
Pantheon Module Domains
How responsibilities are divided so every capability has exactly one owner.
Free to start. Pay only for what you need.
The full Olympus experience — all Pantheon Modules, local-first routing, governance, memory, and self-healing — runs on your machine at no cost. Paid plans add cloud sync, analytics, and team features.
Free
$0
- All Pantheon Modules included
- Local-first AI model routing
- Full governance pipeline
- Intent validation (Athena)
- Self-healing
- Cross-session memory (local)
- APIS enforcement
Pro
$12–19/mo
- Everything in Free
- Encrypted cloud sync
- Advanced analytics
- Priority model routing
- Extended memory history
- Multi-machine support
Teams
$29–49/seat/mo
- Everything in Pro
- Shared governance policies
- Team-wide memory
- Per-developer cost budgets
- Team usage analytics
- SSO / SAML
Enterprise
For organizations that need centralized AI governance, compliance reporting, audit trails, data loss prevention, and dedicated support. Custom pricing — contact us.
Unified AI Billing (coming soon)
One invoice for all your AI providers. Olympus routes your requests to the cheapest capable model, so you save money even after its margin. Stop managing multiple API keys and billing accounts.
What's next
Encrypted cloud sync
Sync your Olympus memory across machines. Encrypted before upload. Multi-machine support for ~$0.50/mo.
Advanced analytics
Deep insights into your AI usage — cost breakdowns, model performance, and optimization recommendations.
Team collaboration
Real-time sync across team members. Shared governance policies. Per-developer budgets and analytics.