# Architecture Overview
How Colony is built — Gleam + OTP for orchestration, Rust for the TUI, SolidJS for the web dashboard, and Linux namespaces for isolation.
Colony is composed of three main components, each built with a technology optimized for its purpose.
## Components
### Mycelium (Orchestration Layer)
Built with Gleam + OTP on the Erlang VM. Mycelium manages colony lifecycles, handles IPC, and coordinates agent environments.
- Actor-per-colony architecture using OTP supervisors
- HTTP API via Wisp for external integrations
- WebSocket for real-time updates to Bloom
- Per-colony RingLogger with ETS ring buffer
- PTY session management for terminal access
**Why Gleam + OTP:** The Erlang VM excels at managing thousands of concurrent processes with fault tolerance. Each colony is an independent OTP actor — if one fails, it doesn't affect others.
### Agent Backend (hld Daemon)
Colony delegates agent execution to HumanLayer’s hld daemon — a local process that manages Claude Code sessions. Communication is via JSON-RPC over Unix socket (mycelium_hld_ffi.erl).
- Session launch, continue, and interrupt
- Real-time event streaming via subscription
- Cost tracking and session metadata
- Colony runs its own isolated hld instance at ~/.humanlayer/colony/
### Persistence
State survives restarts via Erlang term storage:
- Colony state → ~/.cache/colony/colonies.dat
- Project state → ~/.cache/colony/projects.dat
- Brood state → ~/.cache/colony/broods.dat
Writes are atomic (tmp file + rename) to prevent corruption.
### Bloom (Web Dashboard)
Built with SolidJS + SolidStart. Bloom is the primary interface for monitoring and managing colonies.
- Real-time updates via WebSocket
- Plugin-based preview system (web preview, terminal)
- Resizable panel layout with @corvu/resizable
### Stem (Terminal Interface)
Built with Rust + Ratatui. Stem is an optional terminal UI for power users and headless/SSH environments.
- Async TUI with IPC bridge to Mycelium
- Lazygit-inspired keyboard-driven UX
- Communicates over Unix Domain Sockets using Protobuf
## Isolation Model
Colony uses Linux network namespaces — the same kernel primitive Docker uses internally, but without the container overhead.
Each colony gets:
- Isolated network stack (own IP, routing table, firewall)
- Dedicated Jujutsu VCS workspace
- Independent SQLite database
- Shared filesystem at native speed
## Communication
```
Stem (Rust) ←── UDS + Protobuf ──→ Mycelium (Gleam/OTP)
                                      ↕              ↕
                                  WebSocket     Unix Socket
                                   (JSON)       (JSON-RPC)
                                      ↕              ↕
                               Bloom (SolidJS)   hld Daemon
```
Stem ↔ Mycelium uses Protocol Buffers (shared .proto definitions compiled by prost for Rust and gpb for Erlang). Bloom ↔ Mycelium uses JSON over WebSocket. Mycelium ↔ hld uses JSON-RPC over Unix socket.
## Infrastructure
| Concern | Solution |
|---|---|
| Reverse proxy | Caddy with dynamic route registration |
| DNS | dnsmasq for *.colony.local resolution |
| VCS | Jujutsu with per-agent workspaces |
| Logging | Per-colony RingLogger → SSE streaming |
| Agent backend | HumanLayer hld daemon (JSON-RPC over Unix socket) |
| Persistence | Erlang term storage with atomic writes |
| Dev tools | Nix Flakes for reproducible environments |
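As one concrete point from the table above, wildcard resolution for *.colony.local can be expressed as a single dnsmasq directive. A sketch, assuming names resolve to the local host (the target address is an assumption; Colony may point colony hostnames elsewhere):

```
# Resolve every *.colony.local hostname to the local machine (illustrative)
address=/colony.local/127.0.0.1
```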