Locus

method of loci: the memory palace

The Visual IDE for AI Conversations. Navigate your thoughts, not just your chat.

The Problem

66% of developers report losing AI productivity gains to context-management overhead.

  • 66% of developers waste time fixing "almost right" AI suggestions
  • 65% report that AI loses critical context
  • 19% slower with AI in one controlled study, while believing they were faster
  • 10% decline in trust in AI output from 2023 to 2025

πŸ“ Linear Context

Conversations are one-dimensional streams. Context windows fill and truncate. Exploration branches disappear when you try something new.

🧠 Amnesia by Design

Every new session is a "brand new hire." Memory features are opaque and platform-locked. No continuity across providers.

πŸ—ΊοΈ No Spatial Navigation

You can't see where you've been or where you could go. Studies of spatial memory techniques report recall improvements of roughly 28%.

🔒 Platform Lock-in

Conversations are trapped in silos. You can't move your thinking between ChatGPT, Claude, and local models. No portability.

Why Context Windows Aren't the Answer

Models now offer 200K-1M tokens, yet 66% still waste time on context issues. Why?

  • Larger context = higher cost per request ($$$)
  • "Lost in the middle" problem worsens with length
  • No organization = no findability

The real problem is navigation and organization, not capacity.

The Solution

Locus transforms AI chat from linear scroll into visual knowledge graphs with branching, compression, and memory.


Method of Loci

The method of loci is a mnemonic technique from ancient Greece. Orators mentally placed ideas in specific locations within an imagined building (a "memory palace"), then retrieved them by walking through the space.

Locus applies this to AI conversations: place your thoughts in a visual, navigable space. Branch, compress, return. Your memory palace for thinking with AI.

Core Primitives

@checkpoint("hypothesis-v1")     # Save conversation state
@branch("security-focus")        # Fork exploration path
@switch("hypothesis-v1")         # Return to checkpoint instantly
@compress(2)                     # Free 80% tokens, keep insights
@compare("branch-a", "branch-b") # Side-by-side comparison
@inject("topic:auth-patterns")   # Add pre-built context
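
These primitives map naturally onto a tree of conversation states. A minimal Python sketch of that idea (all names and structure here are illustrative assumptions, not the Locus implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One conversation state: its messages plus a parent pointer."""
    name: str
    messages: list = field(default_factory=list)
    parent: "Node | None" = None

class Palace:
    """Toy model of @checkpoint / @branch / @switch over a conversation tree."""
    def __init__(self):
        self.root = Node("root")
        self.current = self.root
        self.checkpoints = {}

    def checkpoint(self, name):          # @checkpoint("hypothesis-v1")
        self.checkpoints[name] = self.current

    def branch(self, name):              # @branch("security-focus")
        child = Node(name, parent=self.current)
        self.current = child
        return child

    def switch(self, name):              # @switch("hypothesis-v1")
        self.current = self.checkpoints[name]

    def context(self):
        """Assemble the prompt context by walking root -> current."""
        path, node = [], self.current
        while node:
            path.extend(reversed(node.messages))
            node = node.parent
        return list(reversed(path))
```

Because switching only moves a pointer, returning to a checkpoint is instant; context is only re-assembled from the root-to-current path when the next request is built.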

Key Capabilities

✓ Visual Conversation Graph

See your entire conversation as a zoomable, navigable DAG on an infinite canvas. Galaxy → branch → message.

✓ Progressive Compression

4-level summarization (full → key → highlights → summary). Always recoverable. Never lose context.

✓ Cross-Platform Import

Import from ChatGPT, Claude, and Gemini. Export anywhere. Your thinking isn't locked to a vendor.

✓ Cognitive Memory

Spaced repetition (FSRS algorithm) surfaces learnings before you forget them.
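
The full FSRS algorithm models per-item memory stability and difficulty; as a rough illustration of the scheduling idea only, here is a much simpler SM-2-style step (explicitly not FSRS, and not Locus code):

```python
def next_review(interval_days, ease, quality):
    """One spaced-repetition step. `quality` is a 0-5 recall grade.
    Failed recalls reset the interval; good ones stretch it, so an item
    resurfaces just before it is likely to be forgotten."""
    if quality < 3:                        # forgotten: review again tomorrow
        return 1, max(1.3, ease - 0.2)
    ease = max(1.3, ease + 0.1 * (quality - 3))
    return round(interval_days * ease), ease
```

For example, a perfect recall of a day-old item pushes its next review out to three days, while a failed recall of a ten-day item brings it back to tomorrow.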

Market Opportunity

Metric        Value       Notes
TAM           $8-10B      AI productivity tools (2025)
SAM           $2.4-6B     40-50M power AI users @ $5-10/mo
SOM (Year 3)  $1-10M ARR  15K-100K paying users (conservative)
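
The SAM range follows directly from the stated assumptions; a quick sanity check:

```python
# SAM = power users x price/month x 12 months, using the ranges above.
low  = 40_000_000 * 5 * 12    # $2.4B
high = 50_000_000 * 10 * 12   # $6.0B
print(f"${low/1e9:.1f}B - ${high/1e9:.1f}B")  # $2.4B - $6.0B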

Comparable Growth

Company  Growth                        Notes
Cursor   $0 → $1B ARR in 24 months     Fastest-growing B2B SaaS ever
Replit   $2.8M → $253M ARR in <1 year  Post-AI-Agent launch

Target Segments

Developers

Managing context across coding sessions. Highest willingness to pay. Clear pain point.

Researchers

Branching hypotheses, comparing conclusions. Academic and industry R&D.

Knowledge Workers

Building persistent knowledge bases. Writers, analysts, consultants.

Competitive Landscape

Extended analysis, updated February 2026 to include previously missing competitors.

Feature                   ChatGPT  Claude   Notion AI        LibreChat  Locus
Visual Graph              No       No       No               No         Yes
Branching                 Hidden   No       No               Yes        Yes
Cross-Platform            No       Partial  No               Yes        Yes
Controllable Compression  No       No       No               No         Yes
Cognitive Memory          No       No       Partial          No         Yes
Context Window            400K     200K-1M  50-conv history  Varies     Unlimited*

*Unlimited via hierarchical compression + checkpointing
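
One way to picture hierarchical compression: every message keeps its full text plus precomputed shorter renderings, and compressing just changes which rendering is sent to the model. A toy sketch (the summaries themselves would come from an LLM; this structure is an assumption, not the Locus schema):

```python
LEVELS = ("full", "key", "highlights", "summary")   # compression levels 0-3

class Message:
    def __init__(self, text, summaries):
        self.text = text            # level 0: the original, always retained
        self.summaries = summaries  # {"key": ..., "highlights": ..., "summary": ...}
        self.level = 0

    def compress(self, level):      # @compress(2)
        # Non-destructive: only the active view changes, so any message
        # can be expanded back to full text later.
        self.level = level

    def render(self):
        """The text actually sent to the model at the current level."""
        return self.text if self.level == 0 else self.summaries[LEVELS[self.level]]
```

Token savings come from sending `render()` instead of full history; returning to level 0 costs nothing because the original is never discarded.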

Extended Competitor Coverage

Competitor             Pricing           Key Threat                         Locus Counter
Obsidian + AI plugins  Free + $4-20/mo   Local-first, graph view for notes  AI-native, not plugin-dependent
Perplexity Pro         $20-200/mo        Research focus, citations          Branch hypotheses, compare conclusions
Poe                    $5-250/mo         Multi-model, 200+ models           Memory that persists, context you control
LibreChat              Free (self-host)  Open source, MCP, branching        Visual graph, compression, cognitive memory
LobeChat               Free (self-host)  Beautiful UI, plugins              Beyond UI: navigate, branch, compress

Architecture (Simplified)

Lesson from an over-engineering critique: stay PostgreSQL-only until it proves insufficient.

┌──────────────────────────────────────────────────┐
│ Frontend                                         │
│ React + tldraw (infinite canvas)                 │
│ Zustand (state) + TanStack Query                 │
└────────────────────────┬─────────────────────────┘
                         │
┌────────────────────────▼─────────────────────────┐
│ API Layer                                        │
│ PostGraphile (auto GraphQL) + REST for actions   │
└────────────────────────┬─────────────────────────┘
                         │
┌────────────────────────▼─────────────────────────┐
│ PostgreSQL 16+                                   │
│ pgvector │ ltree │ JSONB │ Recursive CTEs        │
└────────────────────────┬─────────────────────────┘
                         │
┌────────────────────────▼─────────────────────────┐
│ LLM Providers                                    │
│ Claude │ GPT-4 │ Gemini │ Ollama (local)         │
└──────────────────────────────────────────────────┘

Why PostgreSQL-Only

  • pgvector handles millions of embeddings with HNSW indexes
  • ltree provides efficient tree/hierarchy queries
  • Recursive CTEs handle graph traversal for conversation depth
  • OpenAI reportedly scaled ChatGPT on unsharded PostgreSQL
  • Add complexity only when hitting measured bottlenecks
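
For example, assembling a branch's full context is a single recursive query over a `messages(id, parent_id, content)` adjacency table (schema assumed for illustration). The Python below mirrors what the recursive CTE computes, with the equivalent SQL in a comment:

```python
# Equivalent SQL (assumed schema: messages(id, parent_id, content)):
#   WITH RECURSIVE thread AS (
#     SELECT id, parent_id, content FROM messages WHERE id = %s
#     UNION ALL
#     SELECT m.id, m.parent_id, m.content
#     FROM messages m JOIN thread t ON m.id = t.parent_id
#   )
#   SELECT content FROM thread;

def thread(messages, leaf_id):
    """Walk parent pointers from a leaf to the root, returning the
    conversation in chronological (root-first) order."""
    by_id = {m["id"]: m for m in messages}
    chain, node = [], by_id[leaf_id]
    while node is not None:
        chain.append(node["content"])
        node = by_id.get(node["parent_id"])
    return list(reversed(chain))
```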

Scaling Path

Timeline   If Hitting...            Add...
Month 3    Connection limits        PgBouncer
Month 6    Read throughput          Read replicas
Month 12+  10M+ vectors             Dedicated vector DB
Month 12+  Complex graph analytics  Apache AGE or Neo4j

Moat Analysis (Honest)

Learned from competitive analysis: most "moats" are fake.

Fake Moats (Don't Rely On)

"Moat"             Why It Fails                                                Time to Copy
Better prompts     Easily reverse-engineered; obsoleted by model improvements  3-6 months
Nicer UI           AI generates UIs now; competitors clone fast                3-6 months
"Smarter" context  LangChain/LlamaIndex already free; models improving         6-12 months

Real Moats (Build These)

📈 Context Accumulation

More usage β†’ better personalization β†’ harder to switch. User context profiles compound over time.

👥 Team Networks

Team switching cost is 10x individual. Shared context creates network effects.

🔓 Open Source Community

Free contributions improve product. Enterprise features justify commercial tier.

🔗 Workflow Integration

Deep integration with git, IDEs, CI/CD. Automation chains break on switch.

3-Year Defensibility

Year    Focus                                    Moat Built
Year 1  User context accumulation + open source  Data flywheel starts; community forms
Year 2  Team networks + workflow integration     Network effects; switching costs increase
Year 3  Vertical specialization + enterprise     Domain expertise + relationships = durable

8-Week MVP Roadmap

Realistic scope for a 2-person team. PostgreSQL-only architecture.

Weeks 1-2: Foundation

PostgreSQL schema + migrations. User auth (JWT). Basic API via PostGraphile.

Weeks 3-4: Core Features

Create/list conversations. Add messages with streaming. Branch creation and switching.

Weeks 5-6: Visual

tldraw integration. Conversation graph rendering. Zoom/pan navigation.

Weeks 7-8: Polish

Import from ChatGPT. Basic compression (levels 0-3). Memory (remember/recall).

NOT in MVP

✗ Complex DSL (subset only)
✗ Team collaboration
✗ Spaced repetition
✗ Branch merging
✗ Desktop app
✗ Multi-provider (Claude only at launch)

Business Milestones

Milestone  Users                Revenue
Launch     1K free              $0
Month 6    10K free, 300 paid   $2.4K MRR
Month 12   50K free, 2K paid    $16K MRR
Month 24   200K free, 10K paid  $80K MRR
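
As a sanity check, the milestone revenue figures imply a flat $8/month per paying user, inside the $5-10/mo range assumed for the SAM:

```python
# Implied monthly price per paying user at each milestone.
milestones = [(300, 2_400), (2_000, 16_000), (10_000, 80_000)]
for paid, mrr in milestones:
    print(mrr / paid)   # 8.0 at every milestone
```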