
LLM Steering Patterns (CODING)

⚡ Key Insights

  • Prompt assembly spectrum: brainpro (modular YAML + conditional filters) vs openclaw (monolithic static Markdown) vs demiurg (phase-injected) - each point trades assembly complexity for runtime flexibility (assembly sketched after this list)
  • Doom loop prevention: brainpro hashes the last 3 tool calls and returns an error to the LLM on a repeat - lets the model self-correct rather than killing execution (sketched below)
  • Workspace memory: .brainpro/ stores compacted transcripts (28KB→4KB) split at semantic boundaries - enables resume/fork without bloating context (sketched below)
  • Model routing: brainpro auto-routes to a model tier (haiku/sonnet) based on tool-complexity scoring - optimizes cost without explicit model selection (sketched below)
  • Multi-agent safety: openclaw uses prompt coordination, brainpro strips file paths, demiurg isolates via phases - different trust models
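
A minimal sketch of the modular-prompt end of the spectrum, assuming hypothetical fragment files with `content` and `when` keys; brainpro's actual schema is not documented in this note.

```python
# Sketch: assemble a system prompt from YAML fragments, keeping only the
# fragments whose conditional filters match the current run context.
# The "when"/"content" keys and directory layout are assumptions.
from pathlib import Path
import yaml  # pyyaml

def assemble_prompt(fragment_dir: str, context: dict) -> str:
    """Concatenate fragments whose 'when' filters all match the run context."""
    sections = []
    for path in sorted(Path(fragment_dir).glob("*.yaml")):
        frag = yaml.safe_load(path.read_text())
        conditions = frag.get("when", {})  # e.g. {"mode": "coding"} (assumed filter key)
        if all(context.get(k) == v for k, v in conditions.items()):
            sections.append(frag["content"])
    return "\n\n".join(sections)

# Usage: only fragments matching this run's context are included.
# system_prompt = assemble_prompt("prompts/", {"mode": "coding", "tool": "bash"})
```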
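
A minimal sketch of the doom-loop check: hash each tool call and, if the same hash appears among the last three, hand an error back to the model instead of executing. Function and variable names here are illustrative, not brainpro's API.

```python
# Sketch: detect a repeated tool call and return an error string for the LLM
# so it can self-correct, rather than aborting the run.
import hashlib
import json
from collections import deque

_recent_calls = deque(maxlen=3)  # hashes of the last 3 tool calls

def check_doom_loop(tool_name: str, args: dict) -> str | None:
    """Return an error message for the LLM if this exact call was just made."""
    digest = hashlib.sha256(
        json.dumps({"tool": tool_name, "args": args}, sort_keys=True).encode()
    ).hexdigest()
    if digest in _recent_calls:
        return (f"Error: you already called {tool_name} with identical arguments. "
                "Try a different approach instead of repeating the same call.")
    _recent_calls.append(digest)
    return None  # no loop detected; proceed with execution
```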
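
A minimal sketch of transcript compaction and workspace persistence, assuming a caller-supplied summarize() helper; the real compaction splits at semantic boundaries, which is only approximated here by keeping the most recent messages verbatim and summarizing everything older.

```python
# Sketch: compact a transcript and persist it under .brainpro/ so a later
# run can resume or fork the session without carrying the full context.
import json
from pathlib import Path

def compact_transcript(messages: list[dict], summarize, keep_last: int = 10) -> list[dict]:
    """Replace everything before the last `keep_last` messages with one summary message."""
    if len(messages) <= keep_last:
        return messages
    older, recent = messages[:-keep_last], messages[-keep_last:]
    summary = summarize(older)  # e.g. a cheap model call that respects task boundaries
    return [{"role": "system", "content": f"Earlier session summary: {summary}"}] + recent

def save_workspace(messages: list[dict], workspace: str = ".brainpro") -> None:
    """Persist the compacted transcript for resume/fork."""
    Path(workspace).mkdir(exist_ok=True)
    Path(workspace, "transcript.json").write_text(json.dumps(messages, indent=2))
```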
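
A minimal sketch of complexity-scored routing; the tool list, weights, and threshold are assumptions, not brainpro's actual heuristics.

```python
# Sketch: score the pending step by its tool mix and context size, then
# route cheap mechanical steps to a small tier and heavier ones upward.
SIMPLE_TOOLS = {"read_file", "list_dir", "grep"}  # assumed low-complexity tools

def score_complexity(tool_calls: list[str], prompt_tokens: int) -> int:
    score = sum(0 if t in SIMPLE_TOOLS else 2 for t in tool_calls)
    score += prompt_tokens // 2000  # longer context nudges the score up
    return score

def pick_model_tier(tool_calls: list[str], prompt_tokens: int) -> str:
    """Return a model tier without the caller selecting one explicitly."""
    return "haiku" if score_complexity(tool_calls, prompt_tokens) < 3 else "sonnet"

# pick_model_tier(["read_file", "grep"], 1500)  -> "haiku"
# pick_model_tier(["edit_file", "bash"], 8000)  -> "sonnet"
```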