CLI chat mode

Planned — not yet shipped. This cookbook describes a feature in design. The architecture is settled; the implementation is not.
Strip everything away and the core is: one process, one container, stdin/stdout. container.Run() is the unit — not an artifact of the gateway.

The gateway is one consumer of container.Run(). It is not the only possible consumer. CLI chat mode is a second consumer — no SQLite, no channel adapters, no HTTP API. One process reads from stdin, runs a container, writes to stdout. The agent inside is identical: same image, same MCP tools, same skills.

Planned interface

arizuko chat [group-folder]

With no group folder: creates a temporary session directory, uses stub grants (full access), exits cleanly on Ctrl-C.

With a group folder: reuses the group's existing memory, diary, facts, and session state — picks up exactly where the last gateway invocation left off.

How it works

Reuses container.Run() unchanged

The container lifecycle is identical to the gateway's invocation. The container starts, reads start.json on stdin (backed by a volume-mounted file), and produces output between the delimiter lines. The only difference is what happens with that output.

In the gateway: OnOutput parses the output and calls HTTPChannel.Send to route to a channel adapter.

In CLI mode: OnOutput writes to stdout.
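The seam can be sketched as a callback the consumer supplies. This is a minimal sketch, not the real API: `RunOpts` and `run` are hypothetical stand-ins for the actual `container.Run()` signature, which lives in the container package.

```go
package main

import (
	"fmt"
	"os"
)

// RunOpts is a hypothetical shape for container.Run's options.
// The real signature is defined by the container package.
type RunOpts struct {
	OnOutput func(line string) // called for each line between the delimiters
}

// run simulates the container delivering its output to the consumer's
// callback. The gateway and the CLI differ only in what they pass here.
func run(opts RunOpts, lines []string) {
	for _, l := range lines {
		opts.OnOutput(l)
	}
}

func main() {
	// CLI mode: no channel routing, just write to stdout.
	cli := RunOpts{OnOutput: func(line string) {
		fmt.Fprintln(os.Stdout, line)
	}}
	run(cli, []string{"hello from the agent"})
}
```

The gateway would instead pass an `OnOutput` that parses the line and forwards it to its channel adapter; the container code is untouched either way.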

Multi-turn via IPC file-drop

Claude Code CLI is one-shot by design — it receives a prompt and exits. Multi-turn in CLI mode works via a file-drop IPC protocol:

  1. Host reads a line from stdin
  2. Host writes {"content": "user message"} to /workspace/ipc/input/<timestamp>.json
  3. Container polls that directory every 500ms
  4. When a file appears: inject it as a new user turn, continue
  5. Container writes response between delimiter lines; host reads and prints

The container does not need to know it is talking to a CLI. It sees user messages the same way it does in gateway mode.

No DB dependency

A flat cli-session.json file in the session directory replaces the SQLite session record. It holds the session ID, start time, and message count. The container's working directory and memory files are identical to a gateway session.

MCP still works

The MCP unix socket path is volume-mounted into the container exactly as in gateway mode. CLI mode starts the same MCP server with a stub GatedFns: gateway-specific tools like send_message and spawn_group return a "not available in CLI mode" error; everything else works normally.

// stub for CLI mode
fns := ipc.GatedFns{
    SendMessage: func(jid, content string) error {
        return fmt.Errorf("send_message not available in CLI mode")
    },
    // ... other gateway-specific tools stubbed ...
    // LookupUser, schedule_task, etc. work normally
}

How it fits the system

The container is a unit. The gateway is one consumer of that unit. CLI mode is another. Because container.Run() is the seam — not the gateway's message loop — you get full agent capability (MCP tools, subagents, pre-compact hooks, skill files) without running the gateway stack.

This matters for development: iterate on skills and tools locally with a single process, then deploy the same group folder to a gateway instance. The agent's behavior is identical in both contexts.