How Apogee Works¶
Apogee is three layers that build on each other. Today the governance layer and the execution engine ship and are production-ready; the domain pipeline layer is designed but not yet implemented. This page explains how the layers connect and where each stands.
Layer 1: Governance (Shipping)¶
The governance layer is what you install today. It provides the rules, structure, and analysis that make AI assistants disciplined:
Skills — 102 reusable lifecycle workflows covering design, planning, implementation, testing, review, and release. Skills are Markdown files compiled into versioned Python modules. They compose into chains for multi-step execution with checkpoint and resume. Five inheritance layers (framework, org, team, repo, user) let teams customize without forking.
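The five inheritance layers amount to a last-wins lookup: a higher layer overrides a lower one without forking it. A minimal Python sketch of that resolution, with hypothetical catalog shapes (Apogee's real resolution logic may differ):

```python
# Hypothetical sketch of five-layer skill resolution; later layers win.
LAYER_ORDER = ["framework", "org", "team", "repo", "user"]

def resolve_skill(name, catalogs):
    """Return the highest-precedence definition of a skill.

    `catalogs` maps layer name -> {skill name -> definition}.
    """
    resolved = None
    for layer in LAYER_ORDER:  # framework first, user last
        definition = catalogs.get(layer, {}).get(name)
        if definition is not None:
            resolved = definition  # higher layers override without forking
    return resolved

catalogs = {
    "framework": {"code-review": {"steps": ["lint", "test"]}},
    "team": {"code-review": {"steps": ["lint", "test", "security-scan"]}},
}
```

Here a team-level definition shadows the framework default, while any skill the team does not redefine still falls through to the framework catalog.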
CPG engine — a Rust Code Property Graph library (apogee-tree) that parses 8 languages via tree-sitter and emits structural data into DuckDB. The MCP server exposes 13 CPG query tools: call graphs, impact analysis, security scanning, branch divergence, and backport checking. The Rust manifest generator (apogee-manifest) orchestrates multi-branch indexing with incremental history.
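Impact analysis over a call graph is, at its core, a transitive-caller walk: find everything that may break when a function changes. A toy Python sketch, using an in-memory graph in place of the DuckDB tables apogee-tree actually emits:

```python
from collections import deque

# Toy call graph; real data would come from the CPG tables in DuckDB.
# Edges point from caller to callee.
CALLS = {
    "main": ["parse", "render"],
    "parse": ["tokenize"],
    "render": ["tokenize"],
    "tokenize": [],
}

def impacted_by(changed, calls):
    """Transitive callers of `changed`: code that may break if it changes."""
    callers = {}
    for caller, callees in calls.items():
        for callee in callees:
            callers.setdefault(callee, []).append(caller)
    seen, queue = set(), deque([changed])
    while queue:
        fn = queue.popleft()
        for caller in callers.get(fn, []):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

print(sorted(impacted_by("tokenize", CALLS)))  # ['main', 'parse', 'render']
```

Changing `tokenize` flags both direct callers (`parse`, `render`) and the transitive caller `main`.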
Constraint routing — every user message is classified as read or write intent. In read mode, only discovery tools are available. In write mode, the full skill catalog unlocks. This prevents the AI from making changes when you’re exploring.
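The routing idea can be shown in a few lines of Python; the marker keywords and tool names below are illustrative stand-ins, not Apogee's actual classifier or catalog:

```python
# Illustrative tool sets; the real catalog is much larger.
READ_TOOLS = {"find_skill", "query_cpg", "get_manifest"}
WRITE_TOOLS = READ_TOOLS | {"run_skill", "define_chain", "execute_chain"}

# Naive keyword markers; a real classifier would be far more robust.
WRITE_MARKERS = ("fix", "refactor", "implement", "delete", "rename")

def classify_intent(message: str) -> str:
    """Classify a user message as read or write intent."""
    lowered = message.lower()
    return "write" if any(m in lowered for m in WRITE_MARKERS) else "read"

def available_tools(message: str) -> set:
    """Read intent exposes discovery tools only; write unlocks the catalog."""
    return WRITE_TOOLS if classify_intent(message) == "write" else READ_TOOLS
```

The point of the pattern is the gating itself: a message classified as read-only never sees `run_skill` at all, so the AI cannot mutate anything while you explore.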
Governance gates — policy-driven approval gates at lifecycle boundaries. Gate conditions (test pass, manifest score, proof report) are evaluated against apogee.policy.json. Immutable gates can’t be overridden. The gate system has four severity levels (blocker, warning, acceptable, informational) with lifecycle-stage-aware defaults.
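A toy evaluation loop in Python makes the gate semantics concrete; the policy shape here is a guess for illustration, not the real apogee.policy.json schema:

```python
import json

# Hypothetical policy shape; the real apogee.policy.json schema may differ.
policy = json.loads("""
{
  "gates": [
    {"id": "tests-pass", "severity": "blocker", "immutable": true},
    {"id": "manifest-score", "severity": "warning"}
  ]
}
""")

def evaluate_gates(results, policy, overrides=()):
    """Return the ids of gates that block progress.

    Only blocker-severity failures block; immutable gates ignore overrides.
    """
    blocking = []
    for gate in policy["gates"]:
        passed = results.get(gate["id"], False)
        overridden = gate["id"] in overrides and not gate.get("immutable", False)
        if not passed and not overridden and gate["severity"] == "blocker":
            blocking.append(gate["id"])
    return blocking
```

With this shape, a failed warning-level gate reports but does not block, and attempting to override the immutable `tests-pass` gate has no effect.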
MCP server — a stdio MCP server (apogee-mcp) that exposes the entire governance layer to IDE-integrated AI assistants: 27 tools, 8 resources, and 2 prompts. Works with Claude Code, Cursor, OpenCode, and any MCP-compatible host.
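Registering a stdio MCP server in a host is typically a short config entry. A minimal sketch in the common `mcpServers` JSON shape, assuming the server binary is invoked as `apogee-mcp` (check your host's documentation for the exact file and location):

```json
{
  "mcpServers": {
    "apogee": {
      "command": "apogee-mcp",
      "args": []
    }
  }
}
```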
How the governance layer flows¶
User message
→ constraint routing (read/write classification)
→ skill discovery (find_skill)
→ skill execution (run_skill)
→ chain management (define_chain → execute_chain)
→ governance gates (gate evaluation at step boundaries)
→ manifest scoring (codebase quality context)
→ CPG analysis (structural evidence for decisions)
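The flow above can be sketched as a single dispatch function. Every name and behavior here is a toy stand-in for illustration, not the real Apogee API:

```python
class Context:
    """Toy stand-in for the governance layer; all behavior is hypothetical."""
    def classify(self, msg):
        return "write" if "fix" in msg else "read"
    def find_skill(self, msg):
        return "repair-tests"
    def define_chain(self, skills):
        return list(skills)
    def check_gates(self, step):
        pass  # would evaluate policy gates at the step boundary
    def run_skill(self, step):
        return f"ran {step}"
    def score_manifest(self):
        return 0.9

def handle_message(message, ctx):
    mode = ctx.classify(message)          # constraint routing
    skill = ctx.find_skill(message)       # skill discovery
    if mode == "read":
        return f"describe {skill}"        # discovery only in read mode
    for step in ctx.define_chain([skill]):
        ctx.check_gates(step)             # gate evaluation at step boundaries
        ctx.run_skill(step)               # skill execution
    return ctx.score_manifest()           # manifest scoring for context
```

The shape to notice: read-mode requests return before any execution path is reached, and gates run before every step, not once per chain.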
Layer 2: Execution Engine (Shipping)¶
Component maturity: Shipping
Tested, stress-tested, and integrated as the default chain executor.
Accessed via MCP chain tools (define_chain, execute_chain).
The execution engine is Perigee — it runs chain execution as supervised processes with crash recovery, concurrent session management, and checkpoint persistence. If one chain session crashes, others are unaffected. Sessions can be paused, resumed, transferred between agents, and abandoned cleanly.
What it provides:
Crash isolation — one failed session doesn’t take down others
Concurrent sessions — dozens of chains running in parallel
Checkpoint persistence — chain state survives process crashes
Project-scoped sessions — sessions are tracked per-project with automatic cleanup
Inter-agent coordination — event subscriptions for multi-agent workflows
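Checkpoint persistence reduces to writing progress durably after every step, so a restarted process can skip completed work. A minimal Python sketch of the idea, not Perigee's actual checkpoint format:

```python
import json
import os

def run_chain(steps, checkpoint_path):
    """Run `steps` in order, persisting progress after each one.

    If a checkpoint exists, resume from it instead of starting over.
    Minimal illustration only; Perigee's real format differs.
    """
    done = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = json.load(f)["completed"]
    for i in range(done, len(steps)):
        steps[i]()  # execute the step
        with open(checkpoint_path, "w") as f:
            json.dump({"completed": i + 1}, f)  # state survives a crash here
```

If the process dies between two steps, the checkpoint on disk still records the last completed index, so a resumed run re-executes nothing that already finished.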
How it connects to Layer 1: the MCP server communicates with Perigee over a Unix socket. Set APOGEE_EXECUTOR=socket (the default) to route chain execution through Perigee. The governance layer works independently — Layer 1 is fully functional without Layer 2 for single-session use.
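In socket mode the exchange between the MCP server and Perigee is a request over a Unix domain socket. The wire format and socket path below are purely illustrative, since the real Perigee protocol is not documented on this page:

```python
import json
import socket

def send_chain_request(socket_path, chain_id):
    """Send a hypothetical execute-chain request over a Unix socket.

    Newline-delimited JSON is an assumption for illustration only.
    """
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(socket_path)
        request = {"op": "execute_chain", "chain": chain_id}
        sock.sendall(json.dumps(request).encode() + b"\n")
        return json.loads(sock.makefile().readline())
```

The design point is the process boundary itself: because execution happens on the far side of the socket, a crash in a chain session cannot take the MCP server down with it.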
Planned backends — three autonomous execution backends that Perigee will orchestrate:
OpenHands — agent loops in Docker sandboxes for deep analysis and code generation
opencode — headless code modification in local or sandboxed mode
Continue CLI — single-shot execution for CI checks and PR review
Layer 3: Domain Pipelines (Designed)¶
Domain pipelines are specific applications built on Layers 1 and 2. They are the reason the execution engine exists — each pipeline is a multi-step, multi-model workflow that runs autonomously with human approval at configured gates.
Sashiko (kernel patch review) — a planned 9-stage pipeline that reviews patches from LKML. Each stage runs a different model (screening with a fast model, deep analysis with a reasoning model, security review with a specialized model). Stages run in Docker sandboxes via OpenHands. Findings are compiled into structured reports. Fixes are optionally implemented and submitted upstream via git-send-email.
Synthmerge (conflict resolution) — a planned single-step pipeline that fans out to N models in parallel, each producing a conflict resolution proposal. Consensus selects the best resolution. Uses OpenHands for the agent loops.
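The fan-out-and-vote pattern is straightforward to sketch. Here `models` are plain callables standing in for real model backends, and a majority vote stands in for whatever consensus rule Synthmerge will actually use:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def fan_out_resolve(conflict, models):
    """Ask N models for a resolution in parallel, then pick by majority vote.

    Illustrative only: `models` are callables returning a proposed
    resolution string for the given conflict.
    """
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        proposals = list(pool.map(lambda m: m(conflict), models))
    winner, _ = Counter(proposals).most_common(1)[0]
    return winner
```

Identical proposals from independent models are the consensus signal: if two of three backends agree on a resolution, that resolution wins.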
CI-driven review — automated quality gates using Continue CLI as a GitHub Actions status check. Webhook-triggered, stateless, single-shot. Optimized to fail fast with actionable output.
These pipelines are defined as TOML pipeline files consumed by Perigee. Each step specifies its model, backend, sandbox policy, and approval mode. The pipeline format is designed but not yet implemented.
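Since the format is designed but unimplemented, any concrete file is speculative. A purely illustrative guess at what a step definition might contain, using only the fields the text names (model, backend, sandbox policy, approval mode):

```toml
# Illustrative only: the pipeline format is not yet implemented,
# and these key names are guesses based on the fields described above.
[[step]]
name = "screening"
model = "fast-screening-model"
backend = "openhands"
sandbox = "docker"
approval = "auto"

[[step]]
name = "security-review"
model = "security-specialist-model"
backend = "openhands"
sandbox = "docker"
approval = "gate"
```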
How the layers connect¶
Layer 3: Domain Pipelines
+--------------------------------------------------+
| Patch review, conflict resolution, CI review, |
| custom consumer pipelines |
+-------------------------+------------------------+
|
Layer 2: Execution |
+-------------------------+------------------------+
| Perigee orchestrator |
| Backends: OpenHands | opencode | Continue CLI |
+-------------------------+------------------------+
|
Layer 1: Governance |
+-------------------------+------------------------+
| Skills | CPG | Manifests | Gates | MCP Server |
| Constraint routing | Chain engine |
+--------------------------------------------------+
Each layer depends downward only. Domain pipelines use the execution engine. The execution engine uses governance components (skills, CPG, gates). Nothing depends upward — Layer 1 works without Layers 2 or 3.
See also¶
Why Apogee? — problem, audience, differentiation
Architecture — component map and design
Quickstart — install and first use