Traditional AI agents are ephemeral. Each conversation starts from zero. ChatGPT forgets. Claude API resets. Even autonomous frameworks like AutoGPT lose context between runs.
HOME23 inverts this model: the agent is the permanent resident, conversations are temporary events. The system:
- Thinks continuously via cognitive loops (not just on-demand)
- Remembers everything through persistent neural brain with semantic embeddings
- Consolidates knowledge through sleep-dream cycles that synthesize experience
- Grows over time as documents and research are ingested and incorporated
- Stays reachable via Telegram, web dashboard, and integrated IDE
Installation takes three commands: `node cli/home23.js init`, then `node cli/home23.js agent create <name>`, then `node cli/home23.js start <name>`. No cloud, no subscription; everything runs locally.
HOME23 is intentionally designed as four integrated layers. Each is critical—the system collapses if any is removed:
```
┌────────────────────────────────────────┐
│ Front Door (Dashboard, Chat, Settings) │
├────────────────────────────────────────┤
│ House Runtime (Agent, Tools, Channels) │
├────────────────────────────────────────┤
│ Brain (Memory, Embeddings, Continuity) │
├────────────────────────────────────────┤
│ COSMO Engine (Loops, Persistence, Ops) │
└────────────────────────────────────────┘
```
Layer 1: COSMO Engine — JavaScript cognitive engine (291 files, ~20k lines). Continuous think-consolidate-dream cycles. Multi-agent specialist system with 15+ specialized agents (ResearchAgent, SynthesisAgent, QualityAssuranceAgent). Quantum reasoner, thermodynamic controller, dynamic roles. Memory persistence in JSONL + gzip. Dashboard API on port 5001 + WebSocket for realtime events.
Layer 2: Brain — Persistent memory with semantic embeddings (local Ollama, OpenAI, Ollama Cloud), vector search, knowledge graph. Documents are synthesized by LLM before entering brain (understanding, not raw chunks). BRAIN_INDEX.md maintained by agent to track what it knows.
Layer 3: House Runtime — TypeScript agent harness (40 .ts files, ~10k lines). LLM tool-use loop (supports Anthropic, OpenAI, Ollama Cloud, xAI, Codex). 30+ tools: brain search, file I/O, web browsing, shell execution, research operations. Channel adapters: Telegram, webhooks, sibling agents, bridge chat. Conversation history with compaction.
Layer 4: Front Door — Vanilla HTML/CSS/JS dashboard (no React, no build step). OS home screen with real-time thoughts, chat, intelligence synthesis. 7 tabs: Home, Chat, Intelligence, Brain Map, Settings, COSMO, Evobrew. ReginaCosmo design language (glass-morphism, space gradient).
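Layer 1's memory persistence format (JSONL + gzip) can be sketched in a few lines. This is an illustrative roundtrip, not HOME23's actual implementation; the function names are assumptions:

```javascript
// Hypothetical sketch of JSONL + gzip memory persistence: one JSON object
// per line, the whole log compressed with gzip (Node's built-in zlib).
import { gzipSync, gunzipSync } from "node:zlib";

function saveMemories(memories) {
  // Serialize each memory record onto its own line, then gzip the log.
  const jsonl = memories.map((m) => JSON.stringify(m)).join("\n");
  return gzipSync(Buffer.from(jsonl, "utf8"));
}

function loadMemories(compressed) {
  // Decompress and parse line by line; blank lines are skipped.
  return gunzipSync(compressed)
    .toString("utf8")
    .split("\n")
    .filter(Boolean)
    .map((line) => JSON.parse(line));
}
```

JSONL makes the log append-friendly (each memory is one line), and gzip keeps long-lived histories small.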
COSMO Engine
Cognitive loops run continuously. Think-consolidate-dream cycles during idle periods. Multi-agent specialist system with 15+ roles. Not passive sleep—active cognitive work synthesizing insights across brain.
Document Ingestion Pipeline
File watcher (chokidar) monitors workspace. Binary converter (MarkItDown) handles PDF, DOCX, images → markdown. LLM Compiler synthesizes documents into structured knowledge before brain entry. No raw text chunks.
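The synthesis step can be sketched as follows. The watcher (chokidar) and the real LLM call are stubbed out; `synthesizeDocument` and the brain-node shape are illustrative assumptions, not HOME23's API. The key idea is that the compiler is shown what the brain already knows (BRAIN_INDEX.md) before it emits a node:

```javascript
// Hypothetical sketch: synthesize a converted markdown document into a
// structured brain node, given the current brain index and an LLM function.
function synthesizeDocument(markdown, brainIndex, llm) {
  // The compiler sees existing knowledge, so the node it emits is
  // understanding relative to the brain, not a raw text chunk.
  const summary = llm(`Known: ${brainIndex}\nNew document:\n${markdown}`);
  return { kind: "brain-node", summary, ingestedAt: new Date().toISOString() };
}

// Stub LLM for illustration; a real deployment calls the configured provider.
const stubLlm = (prompt) => `synthesis of ${prompt.split("\n").length} lines`;
```

In the real pipeline the `llm` argument would be the configured provider, and the returned node would be embedded and linked into the knowledge graph.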
TypeScript Agent Harness
AgentLoop handles LLM interactions. 30+ tools for brain search, file I/O, web browsing, shell exec. Multi-model support (switch between Anthropic, OpenAI, Ollama Cloud, xAI at runtime). OAuth sign-in for Claude Max + ChatGPT Plus.
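The shape of such a tool-use loop can be sketched as below. The `model` and `tools` interfaces here are stand-ins, not the harness's real types; a real loop would call Anthropic/OpenAI/etc. and stream results:

```javascript
// Minimal sketch of an LLM tool-use loop: call the model, dispatch any
// requested tool, feed the result back, and stop when the model answers.
function runAgentLoop(model, tools, userMessage, maxTurns = 5) {
  const history = [{ role: "user", content: userMessage }];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = model(history);
    history.push({ role: "assistant", content: reply });
    if (reply.toolCall) {
      // Dispatch to the named tool and append its result for the next turn.
      const result = tools[reply.toolCall.name](reply.toolCall.args);
      history.push({ role: "tool", content: result });
      continue;
    }
    return reply.text; // No tool requested: the loop is done.
  }
  throw new Error("max turns exceeded");
}
```

The `maxTurns` guard matters in practice: without it, a model that keeps requesting tools would loop forever.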
Dashboard / Front Door
Real-time thought feed, native chat with thinking/tool visibility, integrated research (COSMO iframe), integrated IDE (Evobrew). Settings are truthful—changes affect real runtime. No decorative UI over fake state.
Process Management (PM2)
Per-agent: 3 processes (engine, dashboard, harness). Shared: Evobrew IDE (port 3415), COSMO research engine (port 43210). Auto-restart on crash, exponential backoff. Auto port assignment (5001-5004 for agent 1, 5011-5014 for agent 2).
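The sequential port-block scheme above reduces to simple arithmetic: agent N owns four contiguous ports starting at 5001 + (N − 1) × 10. A sketch, with illustrative field names:

```javascript
// Sketch of auto port assignment: agent 1 gets 5001-5004,
// agent 2 gets 5011-5014, and so on in blocks of ten.
function portBlock(agentIndex) {
  const base = 5001 + (agentIndex - 1) * 10;
  return {
    dashboardApi: base,  // dashboard API + WebSocket
    engine: base + 1,
    harness: base + 2,
    reserve: base + 3,   // spare slot in the block
  };
}
```

Because each block is allocated deterministically from the agent's index, agents never need manual port configuration and cannot collide.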
- Persistent Memory Brain: Semantic embeddings, vector search, knowledge graph. Memory survives context resets.
- Multi-Model Support: Switch between Anthropic, OpenAI, Ollama Cloud, xAI, OpenAI Codex at runtime. OAuth auto-refresh.
- Document Ingestion: Watch folders, drag-drop zone, binary conversion, LLM synthesis → brain nodes. Compiler understands what brain already knows via BRAIN_INDEX.md.
- Cron Scheduler: Timed agent turns, shell execution, brain queries. Agents can schedule their own work.
- Telegram Integration: Persistent channel with rich formatting. Agent always reachable.
- Sleep/Wake Cycles: ~90s naps with dream consolidation during idle. Active cognitive work, not passive rest.
- Evobrew IDE Integration: Code editing, brain exploration, multi-provider access. Shared across all agents in home.
- COSMO Research Toolkit: 11 atomic tools for launching guided research runs, querying brains, compiling findings.
- Multi-Agent Support: Multiple independent agents in one home instance, each with own brain. Federated awareness via sibling protocol.
- Identity Rooms: Agent personality in workspace files (SOUL.md, MISSION.md, HEARTBEAT.md). Loaded every turn, can evolve.
- Brain Synthesis: Intelligence tab runs scheduled synthesis on brain (every 4 hours). Consolidates insights.
- Knowledge Graph Visualization: 3D force-directed graph of brain structure. Explore connections.
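The BRAIN_INDEX.md pattern from the ingestion feature above can be illustrated with a small helper. Both the function and the markdown layout are assumptions for illustration, not HOME23's actual format:

```javascript
// Hypothetical helper: after a document is synthesized, append a one-line
// entry to BRAIN_INDEX.md so the agent can see what its brain contains.
function appendIndexEntry(indexMarkdown, { title, category }) {
  const stamp = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
  const entry = `- [${category}] ${title} (ingested ${stamp})`;
  return indexMarkdown.trimEnd() + "\n" + entry + "\n";
}
```

Because the index is plain markdown loaded into the agent's context each turn, the agent always knows what it knows without querying the whole brain.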
| System | Approach | HOME23 Difference |
|---|---|---|
| ChatGPT / Claude API | Stateless chatbots, forget after each conversation | Persistent, learns, grows brain over time. Always-on with autonomous thinking during idle. |
| AutoGPT / ReAct Agents | Single-turn reasoning with tools | Multi-turn loops with continuity, identity, channels. Dedicated cognitive engine, not just LLM loop. |
| Langchain / LlamaIndex | Agent frameworks (you build on top) | Installable OS (complete end-to-end). Brain persistence, ingestion pipeline, channels out of box. No code required. |
| n8n / Zapier | Workflow/automation platforms | Agentic—system thinks and decides, not just executes rules. Runs autonomously during sleep. |
| COSMO 2.3 | Research-focused engine + one user conversation | Multi-agent, channels (Telegram), persistent installation, CLI-driven. COSMO bundled inside HOME23. |
| Claude Code CLI | IDE-focused, one-off tasks | Always-on agent, persistent brain, background thinking. Can use Claude Code's OAuth for API access. |
- 1. Living Brain with LLM Synthesis: Raw documents don't enter brain. Every document synthesized by LLM that understands what brain already knows (via BRAIN_INDEX.md). Result: brain contains understanding, not noise. Inspired by Karpathy's "LLM Wiki" but in living knowledge graph.
- 2. Sleep-Dream Consolidation Cycles: During idle, agents don't just sleep—they actively consolidate. Dream synthesis connecting insights across brain, knowledge reorganization, goal consolidation, temporal rhythm management. Not passive rest; active cognitive work.
- 3. Atomic COSMO Research Toolkit: Split into tools (11 atomic HTTP wrappers) and skill (COSMO_RESEARCH.md defines when/why to use them). Policy-without-code approach—update behavior by editing markdown, no rebuilds.
- 4. Multi-Layer Config System: Three-layer YAML merge: `config/home.yaml` ← `instances/<agent>/config.yaml` ← `config/secrets.yaml`. Single source of truth. Switching models/providers doesn't require code changes.
- 5. Process Architecture with Auto Port Assignment: Sequential port blocks prevent collisions. Agent 1: 5001-5004, Agent 2: 5011-5014. Multiple agents run simultaneously without manual port management.
- 6. OAuth Broker Inside System: COSMO 2.3 doubles as OAuth provider for Anthropic + OpenAI Codex. Encrypted token storage (AES-256-GCM), automatic refresh. Settings UI proxies to cosmo23 OAuth endpoints. 30-minute poller detects rotations, re-syncs secrets.yaml, restarts processes. OAuth credentials as first-class runtime state.
- 7. Identity as Persistent Files: Agent personality in markdown: SOUL.md (who), MISSION.md (what), HEARTBEAT.md (rhythm), COSMO_RESEARCH.md (research policy), BRAIN_INDEX.md (knowledge catalog). Loaded every turn—personality can evolve through conversation.
- 8. Federated Multi-Agent Inside One Home: Single HOME23 instance runs multiple agents with separate brains (different ports, state dirs), shared Evobrew IDE, shared COSMO research engine, independent Telegram channels, cross-agent awareness (sibling protocol). Not one agent per install.
- 9. Dashboard as OS Home Screen: Operator's lived environment, not control panel. Real-time thought feed, native chat with thinking/tool visibility, integrated research/IDE, truthful settings. Vanilla HTML/CSS/JS—no React, no build step. ReginaCosmo inspired.
- 10. LLM-Aware Ingestion Index: BRAIN_INDEX.md maintained by LLM, updated after every document synthesis. Tracks what brain knows (decisions, entities, technical knowledge, conversations). Available to agent during turns for context. Shows knowledge growth over time.
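The three-layer config merge can be sketched as a recursive deep merge where later layers override earlier ones. YAML parsing of the actual files is omitted; plain objects stand in for their parsed contents, and the example keys are illustrative:

```javascript
// Sketch of a layered config merge: later layers win, key by key,
// with nested objects merged recursively rather than replaced wholesale.
function mergeConfig(...layers) {
  const isObj = (v) => v !== null && typeof v === "object" && !Array.isArray(v);
  return layers.reduce((acc, layer) => {
    for (const [key, value] of Object.entries(layer)) {
      acc[key] = isObj(acc[key]) && isObj(value) ? mergeConfig(acc[key], value) : value;
    }
    return acc;
  }, {});
}
```

A per-agent file can then override just one nested key (say, the model name) while inheriting the rest of the home-wide defaults, and secrets stay in their own layer.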
Model selection is tactical (which voice for this task?). Engine keeps system alive. Brain is what persists.
Engine + harness working before dashboard. Every step runnable from command line before next step starts.
Not just chat (missing engine/persistence). Not just dashboards (missing agent/autonomy). Not just raw engine (missing humaneness/front door). Pyramid must remain whole.
Changes persist. Changes affect runtime immediately. No cosmetic UI over fake state.
Copy 2-3 times before abstracting. Delete code when possible. ~3 innovation tokens per system—use them wisely.
- Languages: JavaScript engine, TypeScript harness (two-language, one system)
- Process manager: PM2 (battle-tested process lifecycle)
- Database: better-sqlite3 for brain state, JSONL for conversation history, gzip compression
- Embedding: Configurable (local Ollama, Ollama Cloud, OpenAI) with fallback chain
- LLM: Pluggable (Anthropic, OpenAI, Ollama Cloud, xAI, Codex)
- Node.js: 20+ required (ECMAScript modules, native crypto)
- Filewatch: chokidar for document monitoring
- Web framework: Express.js for dashboard/admin APIs
- Real-time: WebSocket for engine events (port 5001)
- Telegram: node-telegram-bot-api
- OAuth: cosmo23's PKCE implementation with encrypted storage
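The embedding fallback chain mentioned in the stack above can be sketched as a try-in-order loop. The provider interface (`name`, `embed`) is an assumption for illustration, not HOME23's actual one:

```javascript
// Sketch of a provider fallback chain: try each embedding provider in
// order and return the first success, remembering why earlier ones failed.
async function embedWithFallback(text, providers) {
  const errors = [];
  for (const provider of providers) {
    try {
      return { vector: await provider.embed(text), provider: provider.name };
    } catch (err) {
      errors.push(`${provider.name}: ${err.message}`);
    }
  }
  throw new Error(`all embedding providers failed: ${errors.join("; ")}`);
}
```

This is why a local Ollama daemon being down doesn't stop ingestion: the chain silently falls through to the next configured provider.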
HOME23 is not another agent framework. It's an operating system that answers:
- Where does the agent live? (`instances/` directory with persistent state)
- What is the agent's identity? (SOUL.md, MISSION.md files—evolving, not fixed)
- How does the agent think? (COSMO engine with specialist agents, not one LLM loop)
- What does the agent remember? (semantic embeddings + knowledge graph + dreams)
- How does the agent grow? (ingestion compiler synthesizes documents into understanding)
- Where does the agent talk? (multiple channels: Telegram, web, IDE, research engine)
This is the first system to treat persistent agents as first-class installable objects, not add-ons to chat.
```bash
git clone https://github.com/notforyou23/home23
cd home23
node cli/home23.js init
node cli/home23.js agent create myagent
node cli/home23.js start myagent
```
Agent runs on http://localhost:5001. Dashboard shows real-time thoughts, chat, brain map, settings. Telegram integration requires `TELEGRAM_BOT_TOKEN` in `config/secrets.yaml`.
- ElizaOS — Eliza framework for crypto-native agents
- Hermes — Nous Research persistent agent (similar goals, different approach)
- OpenClaw — 50+ integrations, always-on execution
- Memory Systems — Comparison of agent memory approaches