BFF

Computational Life — self-replicating programs from chaos
Last updated: 2026-02-17

TL;DR

Random programs in a "primordial soup" spontaneously evolve self-replicators ~40% of the time. No fitness function. No design. Just interaction and time. This is computational abiogenesis.


Computational Abiogenesis

In 1944, Schrödinger asked "What is Life?" and proposed that living systems resist entropy through information. In 2024, researchers at Google demonstrated that when random programs are placed in an environment and allowed to interact, self-replicating programs emerge spontaneously from chaos.

BFF (Brainfuck Family) extends the minimalist Brainfuck language with a shared tape model where two heads (read and write) operate on the same memory. Programs interact by concatenating and executing together, with the output split back into two programs. This models molecular chemistry: A + B → exec(AB) → A' + B'.

~40% of runs show emergence
~16K epochs to the phase transition
2^17 initial programs
64 bytes per program

The key insight: no explicit fitness function is needed. Self-replicators emerge because, by definition, they make more copies of themselves. Once one appears, it spreads exponentially. The soup undergoes a phase transition: entropy metrics spike, diversity collapses, and replicator patterns dominate.

About This Page
In the spirit of Distill.pub

This is a "distilled" explanation—interactive, visual, equation-rich. We minimize research debt: the accumulated cost of poor explanation in science. See Research Debt by Chris Olah.

Primordial Soup

Interactive simulation. Click START to watch self-replicators emerge from random noise.

Minimal self-replicator: [.>}]

BFF Instruction Set

>  move read head right
<  move read head left
}  move write head right
{  move write head left
+  increment byte under read head
-  decrement byte under read head
.  copy read→write
,  copy write→read
[  loop start (skip past matching ] if byte under read head is 0)
]  loop end (jump back to matching [ if that byte is nonzero)
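The instruction table above can be turned into a working interpreter. Here is a minimal sketch in TypeScript; some details are assumptions not fixed by the table (heads wrap around the tape, `+`/`-` act on the byte under the read head, loops test that same byte, and execution stops after a step budget, since random programs rarely halt):

```typescript
// Minimal BFF-style interpreter sketch. The program IS the tape: instructions
// and data share one byte array, and two heads (read, write) move over it.
// Assumptions: heads wrap around, +/- act on the read head's byte, loops test
// the read head's byte, and execution halts after a fixed step budget.
function execBFF(tape: number[], maxSteps = 10_000): number[] {
  const t = tape.slice();          // work on a copy
  const n = t.length;
  let read = 0, write = 0, pc = 0, steps = 0;
  while (pc >= 0 && pc < n && steps < maxSteps) {
    steps++;
    switch (String.fromCharCode(t[pc])) {
      case ">": read = (read + 1) % n; break;
      case "<": read = (read - 1 + n) % n; break;
      case "}": write = (write + 1) % n; break;
      case "{": write = (write - 1 + n) % n; break;
      case "+": t[read] = (t[read] + 1) % 256; break;
      case "-": t[read] = (t[read] + 255) % 256; break;
      case ".": t[write] = t[read]; break;
      case ",": t[read] = t[write]; break;
      case "[":
        if (t[read] === 0) {       // skip forward to matching ]
          let depth = 1;
          while (depth > 0 && ++pc < n) {
            if (t[pc] === 91) depth++;   // '['
            if (t[pc] === 93) depth--;   // ']'
          }
        }
        break;
      case "]":
        if (t[read] !== 0) {       // jump back to matching [
          let depth = 1;
          while (depth > 0 && --pc >= 0) {
            if (t[pc] === 93) depth++;
            if (t[pc] === 91) depth--;
          }
        }
        break;
      // any other byte is a no-op: most of the soup is inert data
    }
    pc++;
  }
  return t;
}

const toBytes = (s: string) => Array.from(s, c => c.charCodeAt(0));
```

Because code can rewrite its own loop brackets, programs here are self-modifying, which is part of what makes the soup's chemistry so rich.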

The Chemistry of Code

TL;DR

Programs interact like molecules. Concatenate, execute, split. Self-replicators are autocatalytic: S + F → 2·S. Once one appears, exponential growth causes phase transition.

The primordial soup models chemical reaction dynamics. Each epoch, random program pairs are selected, concatenated, and executed. The program reads and writes to its own tape (the concatenated program text). Output is split back into two programs that return to the soup.

Chemical Reaction Analogy
\[ A + B \xrightarrow{\text{exec}} \text{split}(\text{exec}(AB)) = A' + B' \]

Most reactions produce noise. But occasionally, a program emerges that can copy itself when it encounters "food" (any other program). This is autocatalysis: the program catalyzes its own creation.

Autocatalytic Self-Replication
\[ S + F \rightarrow 2 \cdot S \]

The critical finding: self-replicators emerge even without background mutations; in roughly half of mutation-free runs with fixed shuffling patterns, replicators still appear. Mutation is not the driving force, and too much of it is actively harmful: a 1% per-byte rate degrades replicator formation.

Initialization

2^17 ≈ 130,000 programs of 64 random bytes drawn from 128 ASCII codes

Interaction

Random pairs concatenate, execute for fixed steps, split by output length

Mutation

Default 0.024% per byte. Higher rates (1%) degrade replicator formation

Detection

High-order entropy spikes at phase transition, unique tokens collapse
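The four stages above can be sketched as one epoch of a soup loop. This is a simplification: the split rule (cut at the first program's original length) and the mutation model are assumptions, and `exec` is passed in as a parameter so any chemistry, such as a BFF interpreter, can be plugged in (an identity stand-in is used in the demo):

```typescript
// One epoch of the soup: pick a random pair, concatenate, execute,
// split the result back into two programs, then mutate.
// Assumptions: split at the first program's original length; mutation
// replaces a byte with a uniform random ASCII code.
type Program = number[];

function epoch(
  soup: Program[],
  exec: (tape: number[]) => number[],
  mutationRate = 0.00024,             // default 0.024% per byte
  rng: () => number = Math.random,
): void {
  // Interaction: random pair -> concat -> exec -> split
  const i = Math.floor(rng() * soup.length);
  let j = Math.floor(rng() * soup.length);
  if (j === i) j = (j + 1) % soup.length;
  const [a, b] = [soup[i], soup[j]];
  const out = exec(a.concat(b));
  soup[i] = out.slice(0, a.length);
  soup[j] = out.slice(a.length, a.length + b.length);
  // Mutation: each byte flips to a random ASCII code with small probability
  for (const p of [soup[i], soup[j]]) {
    for (let k = 0; k < p.length; k++) {
      if (rng() < mutationRate) p[k] = Math.floor(rng() * 128);
    }
  }
}

// Initialization: random programs of 64 bytes (a small soup for the demo;
// the real runs use 2^17 programs)
const soup: Program[] = Array.from({ length: 256 }, () =>
  Array.from({ length: 64 }, () => Math.floor(Math.random() * 128)),
);
epoch(soup, t => t);                  // identity "chemistry" stand-in
```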

Mathematical Framework

TL;DR

Shannon entropy measures diversity. High-order entropy detects structure. Replicator equations describe selection. Error threshold limits genome size.

Information Theory

Shannon entropy quantifies the information content and uncertainty in molecular sequences:

Shannon Entropy
\[ H(X) = -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x) \]

The paper introduces high-order entropy: Shannon entropy minus normalized Kolmogorov complexity (approximated via compression). This captures structure that emerges from random noise.

High-Order Entropy
\[ H_{\text{high}} = -\sum_{i=0}^{127} \hat{p}_i \log_2 \hat{p}_i - \frac{\hat{C}}{N \cdot 64} \]

where p̂ᵢ is the observed frequency of ASCII code i, Ĉ is the compressed size of the soup, and N·64 (N programs × 64 bytes) is the total character count
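Both quantities can be sketched directly from the definitions. Here the Kolmogorov term is approximated with a standard compressor (Node's zlib, a common stand-in); the constants 128 and 64 match the soup's alphabet and program length:

```typescript
import { deflateRawSync } from "node:zlib";

// Shannon entropy (bits per byte) of the soup's byte distribution.
function shannonEntropy(soup: number[][]): number {
  const counts = new Array(128).fill(0);
  let total = 0;
  for (const prog of soup) for (const b of prog) { counts[b]++; total++; }
  let h = 0;
  for (const c of counts) {
    if (c > 0) { const p = c / total; h -= p * Math.log2(p); }
  }
  return h;
}

// High-order entropy: Shannon entropy minus normalized complexity,
// with compressed size standing in for Kolmogorov complexity.
function highOrderEntropy(soup: number[][]): number {
  const flat = ([] as number[]).concat(...soup);
  const compressed = deflateRawSync(new Uint8Array(flat)).length;
  return shannonEntropy(soup) - compressed / (soup.length * 64);
}
```

A uniform random soup keeps both terms high and roughly cancelling; when replicators take over, the compressed size collapses much faster than the Shannon term, which is what the phase-transition spike measures.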

Replicator Dynamics

The replicator equation describes how frequencies of different types change through differential reproduction:

Replicator Equation
\[ \frac{dx_i}{dt} = x_i \left( f_i(\mathbf{x}) - \bar{f}(\mathbf{x}) \right) \]

xᵢ = frequency of type i, fᵢ = fitness, f̄ = average fitness
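The dynamics can be sketched with a forward-Euler step. The two-type demo below (a replicator versus inert noise, with illustrative fitness values, not ones measured in the soup) shows the exponential takeover described above:

```typescript
// Euler integration of the replicator equation for discrete types.
// x: frequencies (summing to 1); f: fitness per type. Constant fitness
// is an assumption here; in general f_i can depend on the whole state x.
function replicatorStep(x: number[], f: number[], dt = 0.01): number[] {
  const fbar = x.reduce((acc, xi, i) => acc + xi * f[i], 0);
  return x.map((xi, i) => xi + dt * xi * (f[i] - fbar));
}

// Two types: a rare replicator (fitness 2) vs inert noise (fitness 1).
let x = [0.01, 0.99];
const f = [2, 1];
for (let t = 0; t < 2000; t++) x = replicatorStep(x, f);
// the rare replicator's frequency grows toward 1
```

Note that the total frequency is conserved exactly: the sum of the updates is dt·(f̄ − f̄) = 0, which is a useful sanity check on any implementation.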

Error Threshold

The error threshold sets fundamental limits on information that primitive replicators can maintain:

Error Threshold Condition
\[ \mu_{\max} \approx \frac{\ln(A_0/A)}{L} \]

μ_max = maximum tolerable per-symbol mutation rate, L = genome length, A₀/A = fitness superiority of the replicator over the background
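Plugging the soup's parameters into this condition (with an illustrative 2× fitness advantage, an assumption) lands close to the 1% per-byte mutation rate that the experiments show degrades replicator formation:

```typescript
// Error threshold: maximum per-symbol mutation rate a replicator of
// length L can tolerate while keeping its information.
const errorThreshold = (L: number, fitnessRatio: number) =>
  Math.log(fitnessRatio) / L;

// A 64-byte BFF replicator, assumed twice as fit as the background:
const muMax = errorThreshold(64, 2);   // ln(2)/64 ≈ 0.011, i.e. ~1.1% per byte
```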

History of Artificial Life

TL;DR

1940s: von Neumann designs self-reproducing automata. 1984: Langton's loops. 1990s: Tierra/Avida. 1999: Evoloops show Darwinian evolution. 2024: BFF shows spontaneous emergence.

1948
Von Neumann's Self-Reproducing Automata — 29-state cellular automaton proving that self-reproduction is computationally possible.
1984
Langton's Loops — Simple self-replicating loops that reproduce in 151 steps. Founded Artificial Life as a field.
1990
Tierra — Thomas Ray's digital evolution system. Programs compete for CPU time. Parasites emerge spontaneously.
1993
Avida — Platform for studying digital evolution. Published in Nature on evolution of complex features.
2024
Computational Life (BFF) — Google researchers show self-replicators emerge spontaneously in ~40% of runs.

TypeScript Implementation

JavaScript Core

The BFF interpreter and Soup simulation. Fully documented, readable, explorable.

<script src="../../virtue/bff/bff.js"></script>

// Execute a BFF program
const result = BFF.exec('[.>}]', [65, 66, 67]);
console.log(result.output); // [65, 66, 67]

// Run a primordial soup simulation
const soup = new Soup(256, 32);
for (let i = 0; i < 1000; i++) soup.step();
console.log(soup.replicatorCount, soup.entropy);

View bff.js source → (well-documented, 600 lines)

TypeScript Package

Full CLI with REPL and batch simulation. Type-safe implementation.

cd ts && npm install && npm run build
node dist/cli.js run hello.bff       # run a program
node dist/cli.js repl                # interactive REPL
node dist/cli.js simulate --size 200 --epochs 1000

Browse TypeScript source →

src/bff.ts

Full BFF interpreter with configurable tape, step limits, execution tracing

src/soup.ts

Primordial soup with reactions, mutations, entropy calculation

src/cli.ts

Command line interface with run, repl, and simulate commands

Honest Critique

This Is Well-Trodden Ground

This research question has been asked for 30+ years. Most ALife systems plateau. Replication ≠ learning. We must be honest about what would actually be new.

Prior Art

1991
Tierra — Observed parasites, immunity. Plateaued. Ray: "Has not produced the open-ended evolution I hoped for."
2003
Avida — Nature paper showed complex logic functions evolve. But: evolved heuristics, not learning.
2024
Computational Life — BFF replicators emerge. Novel substrate, but same fundamental dynamics as prior work.

Fundamental Limitations

Universality ≠ Learning

A Turing-complete system can in principle compute learning algorithms, but they will not appear spontaneously: the probability of one emerging at random is negligible.

Replication vs Representation

Replicators are selected for minimality, because every extra byte must be copied. Learning, by contrast, requires representational overhead.

Why Systems Plateau

Fitness landscapes get exhausted and ecologies settle into equilibrium. No ALife system has yet shown sustained complexity growth.

What Would Be Genuinely New

Sources & Further Reading

Implementations