Package: ipc/. MCP server per group, unix socket transport, tier-based tool gating.
The IPC package implements an MCP server that runs on the host for the duration of a container invocation. It gives the agent access to gateway capabilities (routing, scheduling, group management) via standard MCP tool calls. There is no other IPC mechanism between the gateway and running agents.
The server has no owned tables. All state reads and writes go through the store package functions passed in at construction.
Before docker run: the gateway calls ipc.Start(folder), which creates a unix socket at data/ipc/<folder>/router.sock and begins accepting connections.
After the container exits: the gateway calls ipc.Stop(), which closes the socket and removes the socket file.
seedSettings writes the following MCP server entry into the agent's settings.json:
```json
"arizuko": {
  "command": "socat",
  "args": ["STDIO", "UNIX-CONNECT:/var/run/pub/arizuko/router.sock"]
}
```
The socket directory is mounted at /var/run/pub/arizuko/ inside the container. Claude Code connects to it as a standard MCP server via socat.
The IPC server resolves caller identity from the socket path: the folder name extracted from the path maps to a group, and the group's depth in the directory tree determines its tier.
| Tier | Depth | Label | Default capabilities |
|---|---|---|---|
| 0 | root group (no parent) | root | All tools (*) |
| 1 | one level deep | world | Management tools + send |
| 2 | two levels deep | agent | send only |
| 3+ | three or more levels | worker | send_reply only |
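The depth-to-tier mapping above can be sketched as a small helper. The slash-separated folder path convention and the function names are assumptions; only the depth rules (root = 0, each segment adds one, 3+ capped as "worker") come from the table.

```go
package main

import (
	"fmt"
	"strings"
)

var tierLabels = [4]string{"root", "world", "agent", "worker"}

// tierForFolder maps a group folder path (slash-separated, relative to the
// groups root) to its tier: the root group is tier 0, each additional path
// segment adds one, capped at tier 3 ("worker").
func tierForFolder(folder string) int {
	folder = strings.Trim(folder, "/")
	depth := strings.Count(folder, "/") // segments beyond the root group
	if depth > 3 {
		depth = 3
	}
	return depth
}

func main() {
	for _, f := range []string{"root", "root/world", "root/world/agent", "root/a/b/c/d"} {
		fmt.Printf("%-18s tier %d (%s)\n", f, tierForFolder(f), tierLabels[tierForFolder(f)])
	}
}
```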
Tool availability is intersected with the group's grant rules at server startup. The agent sees only the tools it is permitted to call; disallowed tools are not listed in the MCP manifest returned to the client.
| Tool | Domain | Minimum tier |
|---|---|---|
| send_message | Messaging | 2 |
| send_reply | Messaging | 3 |
| send_document | Messaging | 2 |
| clear_session | Session | 1 |
| inject_message | Session | 0 |
| register_group | Group management | 1 |
| get_groups | Group management | 1 |
| refresh_groups | Group management | 1 |
| delegate_to_child | Delegation | 1 |
| delegate_to_parent | Delegation | 2 |
| spawn_group | Group management | 0 |
| schedule_task | Scheduler | 1 |
| get_task | Scheduler | 1 |
| update_task | Scheduler | 1 |
| cancel_task | Scheduler | 1 |
| list_tasks | Scheduler | 1 |
| list_routes | Routing | 1 |
| set_routes | Routing | 1 |
| add_route | Routing | 1 |
| delete_route | Routing | 1 |
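The gating rule can be sketched as follows. Interpreting the table's "minimum tier" as "a group may call a tool when its tier number is <= the listed value" is an inference from the tier-capability defaults (it reproduces them exactly: tier 3 gets only send_reply, tier 0 gets everything); the function names and the `"*"` grant handling are illustrative.

```go
package main

import (
	"fmt"
	"sort"
)

// minTier mirrors the tool table: the deepest tier number allowed to
// call each tool (assumption: group tier <= value means permitted).
var minTier = map[string]int{
	"send_message": 2, "send_reply": 3, "send_document": 2,
	"clear_session": 1, "inject_message": 0,
	"register_group": 1, "get_groups": 1, "refresh_groups": 1,
	"delegate_to_child": 1, "delegate_to_parent": 2, "spawn_group": 0,
	"schedule_task": 1, "get_task": 1, "update_task": 1,
	"cancel_task": 1, "list_tasks": 1,
	"list_routes": 1, "set_routes": 1, "add_route": 1, "delete_route": 1,
}

// visibleTools intersects the tier defaults with the group's grant rules;
// a "*" grant allows everything the tier permits. Only tools returned
// here would appear in the MCP manifest.
func visibleTools(tier int, grants map[string]bool) []string {
	var out []string
	for tool, mt := range minTier {
		if tier <= mt && (grants["*"] || grants[tool]) {
			out = append(out, tool)
		}
	}
	sort.Strings(out)
	return out
}

func main() {
	fmt.Println(visibleTools(3, map[string]bool{"*": true})) // [send_reply]
	fmt.Println(visibleTools(2, map[string]bool{"*": true}))
}
```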
Route tools no longer operate on a per-JID basis. The routes table uses match expressions instead:
- list_routes: returns all routes visible to this group (tier 0 sees all routes; others see only routes targeting their folder or descendants)
- add_route: adds a route with seq, match (key=glob pairs), and target
- set_routes: replaces all routes targeting this folder or descendants (tier 1+)
- delete_route: deletes a route by ID

Match expression keys: platform, room, chat_jid, sender, verb. Examples: room=-5075870332 or platform=telegram verb=mention. See the gateway docs for the full match expression language.
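A sketch of how a match expression might be evaluated against a message. AND semantics across key=glob pairs and `path.Match`-style globbing are assumptions; the gateway docs define the authoritative match language.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// matchesRoute reports whether message attributes satisfy every key=glob
// pair in a route's match expression (all pairs must match; assumption).
func matchesRoute(expr string, attrs map[string]string) bool {
	for _, pair := range strings.Fields(expr) {
		key, pattern, ok := strings.Cut(pair, "=")
		if !ok {
			return false // malformed pair: no '='
		}
		matched, err := path.Match(pattern, attrs[key])
		if err != nil || !matched {
			return false
		}
	}
	return true
}

func main() {
	msg := map[string]string{"platform": "telegram", "verb": "mention", "room": "-5075870332"}
	fmt.Println(matchesRoute("room=-5075870332", msg))               // true
	fmt.Println(matchesRoute("platform=telegram verb=mention", msg)) // true
	fmt.Println(matchesRoute("platform=whatsapp", msg))              // false
}
```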
auth.Authorize checks the requested action against the group's grant rules.

The IPC server owns no tables and holds no long-lived state beyond the socket file descriptor. It is a thin authorization and dispatch layer over the existing store and gateway functions. Restarting the server between invocations is safe.