IPC

Package: ipc/. MCP server per group, unix socket transport, tier-based tool gating.

Role

The IPC package implements an MCP server that runs on the host for the duration of a container invocation. It gives the agent access to gateway capabilities (routing, scheduling, group management) via standard MCP tool calls. There is no other IPC mechanism between the gateway and running agents.

The server has no owned tables. All state reads and writes go through the store package functions passed in at construction.

Socket lifecycle

Before docker run: the gateway calls ipc.Start(folder), which creates a unix socket at data/ipc/<folder>/router.sock and begins accepting connections.

After the container exits: the gateway calls ipc.Stop(), which closes the socket and removes the socket file.

Transport into the container

seedSettings writes the following MCP server entry into the agent's settings.json:

"arizuko": {
  "command": "socat",
  "args": ["STDIO", "UNIX-CONNECT:/var/run/pub/arizuko/router.sock"]
}

The socket directory is mounted at /var/run/pub/arizuko/ inside the container. Claude Code connects to it as a standard MCP server via socat.

Identity and tier

The IPC server resolves caller identity from the socket path: the folder name extracted from the path maps to a group, and the group's depth in the directory tree determines its tier.

Tier  Depth                   Label   Default capabilities
0     root group (no parent)  root    All tools (*)
1     one level deep          world   Management tools + send
2     two levels deep         agent   send only
3+    three or more levels    worker  send_reply only

Tool availability is intersected with the group's grant rules at server startup. The agent only sees tools it is permitted to call; unknown tools are not listed in the MCP manifest returned to the client.

Tool list

Tool                Domain            Minimum tier
send_message        Messaging         2
send_reply          Messaging         3
send_document       Messaging         2
clear_session       Session           1
inject_message      Session           0
register_group      Group management  1
get_groups          Group management  1
refresh_groups      Group management  1
delegate_to_child   Delegation        1
delegate_to_parent  Delegation        2
spawn_group         Group management  0
schedule_task       Scheduler         1
get_task            Scheduler         1
update_task         Scheduler         1
cancel_task         Scheduler         1
list_tasks          Scheduler         1
list_routes         Routing           1
set_routes          Routing           1
add_route           Routing           1
delete_route        Routing           1

Routing tools (v0.25.0)

Route tools no longer operate on a per-JID basis. The routes table uses match expressions instead:

Match expression keys: platform, room, chat_jid, sender, verb. Examples: room=-5075870332, or platform=telegram verb=mention. See the gateway docs for the full match expression language.

Single tool call flow

  1. Agent calls MCP tool via socat transport
  2. IPC server receives JSON-RPC request over the unix socket
  3. Handler looks up caller identity from socket path
  4. auth.Authorize checks the action against the group's grant rules
  5. If authorized: execute the tool (read/write via store, queue, or chanreg)
  6. Return JSON-RPC response

Stateless design

The IPC server owns no tables and holds no long-lived state beyond the socket file descriptor. It is a thin authorization and dispatch layer over the existing store and gateway functions. Restarting the server between invocations is safe.