# Networking Deep Dive

Colony uses Linux network namespaces, a Caddy reverse proxy, and dnsmasq DNS to give each colony isolated networking.
## Linux Network Namespaces
A network namespace isolates the network stack. Each namespace gets:
- Network interfaces (lo, eth0, etc.)
- IP addresses and routing tables
- Firewall rules (iptables)
- Socket bindings
Docker uses this same primitive. We use it directly without container overhead.
### What Namespaces Isolate
| Resource | Isolated | Shared |
|---|---|---|
| Network interfaces | ✓ | — |
| IP addresses | ✓ | — |
| Port bindings | ✓ | — |
| Routing table | ✓ | — |
| Firewall rules | ✓ | — |
| Filesystem | — | ✓ |
| Processes | — | ✓ |
| User IDs | — | ✓ |
Filesystem is shared at native speed. No bind mounts. No FUSE. No overlay layers. Namespaces only isolate networking.
### Creating a Namespace

Mycelium creates namespaces using the `ip netns` command:

```bash
# Create namespace for colony "my-app"
ip netns add colony-my-app

# Verify it exists
ip netns list
# colony-my-app
```

Each colony gets a namespace named `colony-{name}`.
### Running Commands in a Namespace

To run a process inside a namespace:

```bash
ip netns exec colony-my-app <command>
```

Example:

```bash
# Start web server in colony namespace
ip netns exec colony-my-app npm run dev
```

The process sees its own isolated network stack. Port 3000 inside the namespace doesn’t conflict with port 3000 on the host or in other namespaces.
### Port Forwarding to Host

Isolated colonies still need to be accessible from the host. Mycelium uses iptables NAT rules for port forwarding:

```bash
# Forward host port 3000 to namespace port 3000
iptables -t nat -A PREROUTING -p tcp --dport 3000 \
  -j DNAT --to-destination 10.200.1.2:3000
```

Now the service is accessible at `localhost:3000`.

Phase 1 uses basic port forwarding. Phase 2+ will implement veth pairs (virtual Ethernet) for full network connectivity without iptables rules.
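For context, a veth pair is two linked virtual interfaces, one on the host and one inside the namespace. A minimal sketch of what the Phase 2+ setup might look like follows; the interface names and addresses here are assumptions, and the commands require root:

```
# Hypothetical veth setup for colony "my-app" (names/addresses assumed).
# Create the pair: veth-host stays on the host, veth-colony moves
# into the colony namespace.
ip link add veth-host type veth peer name veth-colony
ip link set veth-colony netns colony-my-app

# Address and bring up each end of the link
ip addr add 10.200.1.1/24 dev veth-host
ip link set veth-host up
ip netns exec colony-my-app ip addr add 10.200.1.2/24 dev veth-colony
ip netns exec colony-my-app ip link set veth-colony up
ip netns exec colony-my-app ip link set lo up

# Route traffic from the namespace back through the host end
ip netns exec colony-my-app ip route add default via 10.200.1.1
```

With a link like this, the host reaches `10.200.1.2:3000` directly, the same address the DNAT rule above targets.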
## Caddy Reverse Proxy
Caddy is the unified entry point for all colony services. You access services by URL, not port.
### Dynamic Route Registration
Caddy’s JSON API (port 2019) handles runtime route updates. No config files. No reloads.
When a colony spawns:

1. Mycelium reads the services defined in `colony.toml`
2. Constructs a Caddy route JSON for each service
3. POSTs it to `http://localhost:2019/config/apps/http/servers/colony/routes`
Example route:

```json
{
  "match": [
    { "host": ["my-app-3000.colony.local"] }
  ],
  "handle": [
    {
      "handler": "reverse_proxy",
      "upstreams": [
        { "dial": "localhost:3000" }
      ]
    }
  ]
}
```

Requests to `my-app-3000.colony.local` proxy to `localhost:3000`.
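As a sketch of the registration step, the route JSON can be built and POSTed with a small helper. The `caddy_route` function here is illustrative, not part of Mycelium:

```bash
# Build the Caddy route JSON for a host/upstream pair.
# caddy_route is a hypothetical helper, not a Mycelium command.
caddy_route() {
  local host="$1" upstream="$2"
  printf '{"match":[{"host":["%s"]}],"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"%s"}]}]}' \
    "$host" "$upstream"
}

route_json="$(caddy_route my-app-3000.colony.local localhost:3000)"
echo "$route_json"

# With Caddy running, the payload would be appended to the route list:
#   curl -X POST http://localhost:2019/config/apps/http/servers/colony/routes \
#        -H "Content-Type: application/json" -d "$route_json"
```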
### Why Caddy Over nginx?
| Feature | Caddy | nginx |
|---|---|---|
| Dynamic config | JSON API | Reload required |
| HTTPS | Automatic | Manual certs |
| HTTP/2 | Default | Extra config |
| API-first | ✓ | — |
Caddy’s JSON API fits Colony’s dynamic routing. nginx needs config file edits plus `nginx -s reload`, which is slower and more error-prone.
### Inspecting Routes

View all registered routes:

```bash
curl http://localhost:2019/config/ | jq '.apps.http.servers.colony.routes'
```

This shows every `{name}-{port}.colony.local` route currently active.
### TLS Support (Planned)

Caddy can auto-provision TLS certificates. For local domains like `.colony.local`, this uses Caddy’s internal CA, since public CAs such as Let’s Encrypt cannot issue certificates for non-public names. Phase 2 enables HTTPS for all colonies:

```
https://my-app-3000.colony.local
```

Opt-in for production-like TLS testing.
## DNS Resolution (dnsmasq)

dnsmasq resolves `*.colony.local` to localhost. No `/etc/hosts` editing.

### How It Works
- dnsmasq listens on `127.0.0.1:5354`
- A wildcard rule maps the domain: `address=/colony.local/127.0.0.1`
- The system resolver forwards `.local` queries to dnsmasq
Any subdomain under `.colony.local` resolves to `127.0.0.1`:

```bash
dig @127.0.0.1 -p 5354 anything.colony.local

# ANSWER SECTION:
# anything.colony.local. 0 IN A 127.0.0.1
```
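The behavior above corresponds to a dnsmasq configuration roughly like this. This is a sketch; the file Colony actually installs may differ:

```
# dnsmasq.conf (sketch)
# Listen only on loopback, on 5354 so it doesn't clash with any
# resolver already bound to port 53
listen-address=127.0.0.1
port=5354

# Wildcard: answer every *.colony.local query with 127.0.0.1
address=/colony.local/127.0.0.1
```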
### macOS /etc/resolver Workaround

macOS uses mDNSResponder, which bypasses `/etc/resolv.conf` for `.local` domains. Colony fixes this with `/etc/resolver/local`:

```
# /etc/resolver/local
nameserver 127.0.0.1
port 5354
```

This tells macOS to use `127.0.0.1:5354` for `.local` domains.
The setup script handles this:

```bash
./scripts/setup-dns.sh
```
### Linux Configuration

On Linux, add dnsmasq to `/etc/resolv.conf`:

```bash
# /etc/resolv.conf
nameserver 127.0.0.1
nameserver 8.8.8.8  # Fallback
```

Or run dnsmasq as a systemd service (most modern distros):

```bash
sudo systemctl enable dnsmasq
sudo systemctl start dnsmasq
```
### Verifying DNS Works

Test DNS resolution:

```bash
# Direct query to dnsmasq
dig @127.0.0.1 -p 5354 test.colony.local

# System resolution (should use dnsmasq)
dig test.colony.local
```

Both should return `127.0.0.1`.

If you’re on a corporate network with custom DNS, the setup script may fail. Contact your IT team to whitelist `*.colony.local`, or use `localhost:PORT` instead of DNS.
## URL Pattern

Colony services follow this URL pattern:

```
http://{colony-name}-{port}.colony.local
```
Examples:

| Colony | Service | Port | URL |
|---|---|---|---|
| my-app | web | 3000 | http://my-app-3000.colony.local |
| my-app | api | 8080 | http://my-app-8080.colony.local |
| payment-svc | server | 4000 | http://payment-svc-4000.colony.local |
This pattern ensures unique URLs per service without port collision.
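The mapping is mechanical, as a quick sketch shows. The `colony_url` helper is illustrative, not a Mycelium command:

```bash
# Build the service URL from a colony name and port.
# colony_url is a hypothetical helper for illustration.
colony_url() {
  local colony="$1" port="$2"
  printf 'http://%s-%s.colony.local\n' "$colony" "$port"
}

colony_url my-app 3000
# http://my-app-3000.colony.local
```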
## Web Preview Connection Stack

When you open a web preview in Bloom, here’s the full path:

```
Browser
  ↓ HTTP GET http://my-app-3000.colony.local
DNS (dnsmasq)
  ↓ Resolves the name to 127.0.0.1
Caddy (reverse proxy, port 80)
  ↓ Routes based on Host header
Localhost (port forwarding)
  ↓ Forwards to namespace port 3000
Colony Namespace
  ↓ Process listening on :3000
Service (e.g., npm run dev)
```
### Step-by-Step

1. Browser issues a GET to `http://my-app-3000.colony.local`
2. The OS resolver queries dnsmasq at `127.0.0.1:5354`
3. dnsmasq returns `127.0.0.1` (wildcard rule)
4. Browser connects to `127.0.0.1:80` (Caddy’s HTTP port)
5. Caddy reads the `Host: my-app-3000.colony.local` header
6. Caddy matches the route and proxies to `localhost:3000`
7. The kernel forwards to namespace port 3000 (iptables rule)
8. The service inside the namespace responds
9. The response flows back through Caddy to the browser
All of this happens transparently. From the user’s perspective, it’s just a normal web URL.
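Steps 5 and 6, where Caddy turns the Host header into an upstream address, can be sketched as a parsing helper. This is illustrative only; Caddy actually uses its registered route table rather than parsing hostnames:

```bash
# Map a colony host header to its upstream address.
# host_to_upstream is a hypothetical helper for illustration.
host_to_upstream() {
  local host="$1"
  local svc="${host%.colony.local}"  # strip suffix -> my-app-3000
  local port="${svc##*-}"            # take trailing port -> 3000
  printf 'localhost:%s\n' "$port"
}

host_to_upstream my-app-3000.colony.local
# localhost:3000
```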
## Network Isolation Guarantees

### What IS Isolated

- Port bindings — Colony A and Colony B can both bind to `:3000` without conflict
- Network interfaces — each colony has its own `lo` (loopback)
- Firewall rules — different iptables rules per namespace (Phase 2+)
### What IS NOT Isolated

- Filesystem — shared at native speed (intentional for performance)
- Processes — visible via `ps` (use cgroups for process isolation, Phase 2+)
- Users — same UID/GID (use user namespaces for full isolation, Phase 2+)
Network namespaces provide networking isolation only. For full containerization, you’d add user namespaces, mount namespaces, and cgroups. Colony intentionally omits these for simplicity and performance.
## Troubleshooting

### Service Not Accessible

Check Caddy routes:

```bash
curl http://localhost:2019/config/ | jq '.apps.http.servers.colony.routes'
```

Ensure a route exists for `{name}-{port}.colony.local`.

Check DNS:

```bash
dig my-app-3000.colony.local
```

This should return `127.0.0.1`. If not, re-run `./scripts/setup-dns.sh`.

Check the service is running:

```bash
curl http://localhost:3000
```

If this works but `my-app-3000.colony.local` doesn’t, it’s a routing issue.
### DNS Not Resolving

Verify dnsmasq is running:

```bash
# Linux
systemctl status dnsmasq

# macOS
brew services list | grep dnsmasq
```

Check `/etc/resolver/local` (macOS):

```bash
cat /etc/resolver/local
# Should output:
# nameserver 127.0.0.1
# port 5354
```

Restart dnsmasq:

```bash
# Linux
sudo systemctl restart dnsmasq

# macOS
brew services restart dnsmasq
```
### Port Already in Use

This usually means two colonies declared the same port. Mycelium should reject this, but if you manually started services, find the conflict with:

```bash
lsof -i :3000
```

Kill the conflicting process or change the port in `colony.toml`.
## Next Steps

- Configuration Reference — Defining services and ports in `colony.toml`
- Security Model — Understanding isolation boundaries
- Stem TUI — Managing colonies from the terminal