Agent coordination environment. Run Claude Code locally (ant CLI), route between agents (gateway), or deploy as a multi-tenant service (platform). Independent components that compose.
Go · SQLite · Docker · MCP
Technical support bot: Mount repos read-only. User asks about a feature, agent greps codebase, reads implementation, cites specific lines. Facts about APIs, patterns. Answers with evidence.
Multi-agent team: Main agent routes to specialists. @researcher greps and reads, @writer drafts docs, @reviewer validates. Each has different permissions, mounts, skills. Delegate between them, assemble final answer.
Scheduled tasks: Daily summary of news/calendar delivered to chat. Agent runs in container on cron, writes to diary/, memory accumulates. Skills persist across runs.
Multi-tenant SaaS: One gateway, many customers. Each group gets an isolated filesystem, permissions, memory. Tier system prevents cross-tenant access. Provision users via approval workflow, monitor via dashboard.
User: "How does feature X work?"
Agent checks facts/ → finds feature-x.md
Agent greps codebase → implementation.ts line 234
Reply: "Feature X works like this... [evidence]
see: repo/implementation.ts#L234"
ant: Local CLI. Run Claude Code in terminal. Container per run, no server. Good for dev.
gateway: Multi-tenant router. Messages from Telegram/Discord/WhatsApp/web → containerized agents. Groups delegate, schedule tasks, memory persists. Add channels (8 platforms), provisioning (user approval), monitoring (dashboard), auth (JWT/OAuth). Pick what you need.
ant works standalone. Gateway is composable: start with routing, add components later.
Message arrives from Telegram. Gateway writes to SQLite. Routing rules determine which group handles it. Fresh Docker container spawns with persistent directory bind-mounted as home. Agent reads files (diary/, CLAUDE.md, facts/), calls MCP tools, writes response. Container exits, files survive. Next message spawns new container with same directory. Skills and memory accumulate across runs.
Multi-tenant: each group isolated, different grants. Multi-user: routing by chat ID. Channels (Telegram, Discord, WhatsApp, email) are external processes speaking HTTP. Scheduler writes to messages table on cron. SQLite is the queue, router, and audit log.
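The container-per-run flow above can be sketched as the argument list handed to Docker. The flag values, limits, and the /home/agent mount point below are illustrative assumptions, not arizuko's actual configuration:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// buildRunArgs sketches the container-per-run invocation: a fresh,
// ephemeral container with the group's persistent directory
// bind-mounted as the agent's home.
func buildRunArgs(groupDir, image string) []string {
	home, _ := filepath.Abs(groupDir)
	return []string{
		"run", "--rm", // ephemeral: container exits, files survive
		"-v", home + ":/home/agent", // persistent group dir as $HOME
		"--memory", "1g", "--cpus", "1", // per-run resource limits
		image,
	}
}

func main() {
	// The next message spawns a new container with the same directory.
	fmt.Println(buildRunArgs("/srv/data/arizuko_foo", "arizuko/agent"))
}
```

Because the container is discarded after every run, everything the agent should remember must live in the bind-mounted directory.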
Gateway: Polls SQLite, matches routes, spawns containers. Delegation, escalation, topic routing, sticky sessions.
Channels: Telegram, Discord, WhatsApp, Mastodon, Bluesky, Reddit, Email, Web. Standalone daemons, 3-endpoint HTTP. Add platforms without gateway changes.
Access control: Grant rules (action-level), JWT, OAuth (Google/GitHub/Discord), tier restrictions. Filters which MCP tools agents see.
Dashboard (dashd): Read-only HTMX portal. 6 views: status, tasks, activity, groups, memory, portal. No JS framework.
Provisioning (onbod): Onboarding state machine. Unrouted user → leaves message → admin approves → group created.
Scheduler (timed): Cron daemon. Polls scheduled_tasks, inserts prompts. Cron expressions, intervals, one-shot.
Monitoring: Container logs, session transcripts, audit trail. SQLite is queue + router + log. Dashboard shows errors, queue, routes.
Coordination is harder than execution. Running one agent is easy. Routing between many, controlling permissions, persisting memory, scheduling work, managing users: that's the problem arizuko solves.
Container-per-run. Any agent runtime works: Claude Code, OpenAI Assistants, custom Python/Rust/Go. Containers are ephemeral, files persist. Memory survives context resets.
SQLite is the contract. Schema-first. Any process can read/write tables. Gateway, scheduler, dashboard, channels: all talk through SQLite. No special queue, no message bus.
Pluggable. Channels speak HTTP (3 endpoints). Add platforms without gateway code changes. MCP sidecars for domain tools. Routing via match expressions.
Independent components. Use ant alone. Add gateway for routing. Add platform for provisioning/monitoring. Pick what you need.
Design: container-per-run (nanoclaw), MCP-over-IPC (kanipi), schema-first (distributed systems). See agent research.
arizuko create foo
vim /srv/data/arizuko_foo/.env # set tokens
arizuko group foo add tg:-123456789
arizuko run foo
Schema is contract: The SQLite table structure IS the API. No separate specification. The messages table columns define how components talk. Any process that can read/write those tables participates in the system.
Groups are directories: $HOME for an agent is the group folder. Backup is cp -r. Diff is git diff. Memory, diary, facts are plain files.
Permissions by depth: How deep a group is in the directory tree determines its permissions. groups/main/ has full access. groups/main/research/ (deeper) has less. No permission table, just folder depth.
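As a sketch of what schema-as-contract means, a hypothetical messages table might look like the DDL below. The column names and status values are assumptions for illustration, not arizuko's actual schema:

```sql
-- Hypothetical sketch of the messages table (columns are assumptions).
CREATE TABLE IF NOT EXISTS messages (
  id         INTEGER PRIMARY KEY,
  platform   TEXT NOT NULL,                   -- e.g. 'telegram', 'discord'
  room       TEXT NOT NULL,                   -- chat/channel identifier
  sender     TEXT NOT NULL,
  body       TEXT NOT NULL,
  status     TEXT NOT NULL DEFAULT 'pending', -- pending, claimed, done
  created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
```

Any process that can issue an INSERT or SELECT against such a table is a full participant: channels insert, the gateway claims, the dashboard reads.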
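The depth rule above can be sketched as a pure function from a group's path to a permission level. The level names here are illustrative assumptions, not arizuko's actual tiers:

```go
package main

import (
	"fmt"
	"strings"
)

// permLevel sketches permissions-by-depth: the deeper a group sits
// under groups/, the fewer permissions it gets. No table lookup,
// just the path itself.
func permLevel(groupPath string) string {
	rel := strings.TrimPrefix(strings.Trim(groupPath, "/"), "groups/")
	depth := len(strings.Split(rel, "/"))
	switch depth {
	case 1:
		return "full" // e.g. groups/main/
	case 2:
		return "restricted" // e.g. groups/main/research/
	default:
		return "minimal"
	}
}

func main() {
	fmt.Println(permLevel("groups/main"), permLevel("groups/main/research"))
}
```

The design choice worth noting: deleting a subgroup's directory also deletes its permissions, so filesystem state and access state cannot drift apart.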
Agent-agnostic containers: Run any containerized workload: Claude Code CLI today, OpenAI Assistants API, custom Python/Rust agents, batch jobs. Container isolation is the primitive. Agent choice is configuration.
Extensible by design: Pluggable channels (add new platforms via HTTP adapter protocol). Pluggable routing (match expressions over platform/room/sender/verb). Pluggable sidecars (custom MCP servers for embeddings, RAG, domain tools). Architecture brings best-of-breed together.
MCP grants and control: Grant rules filter which MCP tools agents can call. Secret injection for API keys. Action-level permissions (not just collection-level). Custom MCP servers per group.
Boring stack: Go, SQLite WAL, Docker, one file per component. No distributed coordination. Known failure modes, not novel ones.
Reasonable speed: Chose Go to scale without hampering dev speed. Fast enough for production, simple enough to ship.
Gateway polls messages, routes via match expressions (three-layer pipeline: sticky → command → prefix → routing), spawns containers with resource limits, bridges MCP over unix socket, filters grants.
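The routing order above can be sketched as a small function that tries each layer in turn. The message fields, command syntax, and rule shapes are illustrative assumptions about the pipeline, not arizuko's actual match-expression language:

```go
package main

import (
	"fmt"
	"strings"
)

// Msg is a minimal stand-in for a routed message.
type Msg struct {
	ChatID, Body string
}

// route tries sticky sessions first, then command, prefix, and
// fallback routing, returning the name of the group that handles
// the message.
func route(m Msg, sticky map[string]string, prefixes map[string]string) string {
	// 1. Sticky session: a chat already bound to a group stays there.
	if g, ok := sticky[m.ChatID]; ok {
		return g
	}
	// 2. Explicit command, e.g. "/ask research how does X work".
	if strings.HasPrefix(m.Body, "/ask ") {
		rest := strings.TrimPrefix(m.Body, "/ask ")
		return strings.SplitN(rest, " ", 2)[0]
	}
	// 3. Prefix match, e.g. "@researcher ..." addressed to a specialist.
	for p, g := range prefixes {
		if strings.HasPrefix(m.Body, p) {
			return g
		}
	}
	// 4. Fallback routing rule.
	return "main"
}

func main() {
	prefixes := map[string]string{"@researcher": "research"}
	fmt.Println(route(Msg{"c1", "@researcher grep the repo"}, nil, prefixes))
}
```

Ordering matters: sticky sessions short-circuit everything else, so a multi-turn conversation keeps hitting the same group even if later messages would match a different rule.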
Store is SQLite in WAL mode. Tables: messages, routes, registered_groups, scheduled_tasks, sessions. Schema is the contract. Single writer, multiple readers. No distributed coordination.
Channels are independent processes adapting external platforms (Telegram, WhatsApp, Discord, email). Each speaks a 3-endpoint HTTP protocol (register, inbound, outbound). Gateway never imports channel code. Add platforms without touching gateway.
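A channel adapter can be sketched as a tiny HTTP daemon exposing the three endpoints. The paths, JSON shape, and port below are assumptions about the protocol for illustration, not arizuko's actual wire format:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Inbound is a minimal stand-in for a platform message handed to
// the gateway.
type Inbound struct {
	Room, Sender, Body string
}

// ack is the pure core of the inbound handler.
func ack(in Inbound) string {
	return fmt.Sprintf("queued message from %s in %s", in.Sender, in.Room)
}

// newMux wires the assumed 3-endpoint protocol: register, inbound,
// outbound. The gateway never imports this code; it only speaks HTTP.
func newMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/register", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, `{"platform":"example"}`) // announce this channel
	})
	mux.HandleFunc("/inbound", func(w http.ResponseWriter, r *http.Request) {
		var in Inbound
		json.NewDecoder(r.Body).Decode(&in) // best-effort decode
		fmt.Fprint(w, ack(in))              // platform → gateway direction
	})
	mux.HandleFunc("/outbound", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusNoContent) // gateway → platform delivery
	})
	return mux
}

func main() {
	// A real adapter would serve: http.ListenAndServe(":8081", newMux())
	fmt.Println(ack(Inbound{Room: "tg:-123456789", Sender: "alice", Body: "hello"}))
}
```

Because the contract is three HTTP endpoints rather than a Go interface, an adapter can be written in any language and deployed, restarted, or replaced without redeploying the gateway.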
Containers are the workload primitive. Agent choice, runtime, language โ all configuration. Same infrastructure runs Claude Code (TypeScript), custom Python agents, Rust batch jobs, whatever. arizuko-hive extends this to general-purpose distributed workloads.
Scheduler polls scheduled_tasks, inserts into messages when cron/interval triggers. Stateless. Tasks are messages.
Memory survives compaction. Pre-compact hook writes diary, archives episodes. On next invocation, injected back. Long-running agents accumulate institutional memory.
Deep dive into gateway, store, IPC, auth, channels, scheduler. Design decisions and tradeoffs.
Worked examples: multi-agent setups, custom adapters, grant configurations, memory patterns.