Defining the Question
Evolution produces complexity; intelligence requires abstraction. Are these the same phenomenon at different scales, or fundamentally different? The answer has implications for AGI, origin-of-life, and what "understanding" means.
Before asking whether evolution produces intelligence, we must define our terms precisely.
Emergence: Macro-level properties that arise from micro-level interactions but cannot be predicted from (or reduced to) knowledge of the components alone.
Self-replication in BFF is emergent: nothing in the instruction set "wants" to copy itself. Replication arises from the dynamics of program interaction.
Physical complexity: The amount of information a system stores about its environment. Not Shannon entropy (which peaks for random systems), but functional information—structure that matters for survival.
\[ C = H_{max} - H_{observed} \]

Where \(H_{max}\) is the maximum possible entropy and \(H_{observed}\) is the actual entropy. Complexity is constrained randomness—structure that deviates from maximum disorder.
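A minimal sketch in Python (the function name, alphabet parameter, and example strings are mine) of complexity as deviation from maximum disorder. Note this is only a crude proxy: it rewards any deviation from randomness, whether or not that structure is functional for survival.

```python
from collections import Counter
from math import log2

def complexity(s: str, alphabet_size: int = 2) -> float:
    """Physical-complexity proxy: C = H_max - H_observed, in bits per symbol.

    H_max = log2(alphabet_size); H_observed is the Shannon entropy
    of the symbol frequencies actually seen in s.
    """
    n = len(s)
    h_observed = -sum((c / n) * log2(c / n) for c in Counter(s).values())
    return log2(alphabet_size) - h_observed

print(complexity("01" * 50))    # uniform string: 0.0 (maximum disorder, no structure)
print(complexity("0001" * 25))  # biased string: > 0 (constrained randomness)
```

A fully uniform string scores zero; any statistical bias registers as positive complexity.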
Abstraction: The process of extracting invariant structure from specific instances. Mapping particulars to general patterns that apply across contexts.
Example: Seeing the pattern "all items move right" across different input grids requires abstracting over:
- the dimensions of each grid
- the colors and shapes of the individual objects
- the objects' specific positions
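A concrete sketch in Python (the grid encoding and helper name are mine): the rule "shift every non-background cell one column right", written so that it mentions no particular grid size, color, or position. Those specifics are exactly what gets abstracted away.

```python
def shift_right(grid):
    """Move every non-zero cell one column to the right (0 = background).

    Cells shifted past the right edge are dropped. The rule itself is
    invariant to grid dimensions, cell colors, and object positions.
    """
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if grid[r][c] != 0 and c + 1 < w:
                out[r][c + 1] = grid[r][c]
    return out

# One abstraction, many instances:
print(shift_right([[3, 0, 0]]))        # [[0, 3, 0]]
print(shift_right([[0, 5], [7, 0]]))   # [[0, 0], [0, 7]]
```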
Intelligence: The efficiency of skill-acquisition on previously unknown tasks. Not how much you know, but how quickly you learn new things.
This is why GPT-4 scoring high on bar exams doesn't prove intelligence: it was trained on essentially the entire internet. The question is: can it solve novel problems with minimal examples?
Intelligence is skill-acquisition efficiency over unknown tasks. Current AI optimizes for task performance, not for learning efficiency. This is why scaling alone fails to produce it.
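A toy sketch of that distinction in Python (the learners and all the numbers are invented for illustration): the quantity of interest is skill gained per example consumed, not final skill alone.

```python
def acquisition_efficiency(skill: float, examples_used: int) -> float:
    """Skill-acquisition efficiency: skill attained per example consumed.

    Two systems reaching similar skill are not equally intelligent
    if one needed orders of magnitude more experience.
    """
    return skill / examples_used

# Hypothetical numbers: a few-shot learner vs. a memorizer.
few_shot = acquisition_efficiency(skill=0.9, examples_used=5)
memorizer = acquisition_efficiency(skill=0.95, examples_used=1_000_000)
print(few_shot > memorizer)  # True: higher final skill did not mean higher intelligence
```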
François Chollet's "On the Measure of Intelligence" (2019) introduced ARC (the Abstraction and Reasoning Corpus) to test skill-acquisition efficiency directly: every task is novel, each provides only a handful of demonstration pairs, and solving them is meant to require nothing beyond a small set of innate priors.
ARC assumes humans are born with certain cognitive building blocks, the Core Knowledge priors:
- objectness and elementary physics (cohesion, persistence, contact)
- agentness and goal-directedness
- natural numbers and elementary counting
- basic geometry and topology
Despite trillions of parameters, LLMs perform poorly on ARC because:
- every task is novel by design, so memorized patterns from training data do not transfer
- each task provides only a few demonstration pairs, far too little for gradient-scale learning
- solutions require composing abstractions, not retrieving them
The 2025 ARC winner was a 7-million parameter recursive model that beat 671-billion parameter systems. Architecture matters more than scale.
Life is self-organized criticality at the edge of chaos. Order emerges spontaneously when systems reach sufficient complexity. Evolution exploits, not creates, this order.
Stuart Kauffman's work suggests that:
- self-organization, not just natural selection, is a source of order in biology ("order for free")
- sufficiently diverse networks of molecules spontaneously form collectively autocatalytic sets
A Reflexively Autocatalytic and Food-generated (RAF) set is a collection of reactions where:
- every reaction is catalyzed by at least one molecule produced within the set (reflexively autocatalytic), and
- every reactant can be built up from a basic "food set" of freely available molecules (food-generated)
RAF sets emerge with probability approaching 1 as chemical diversity increases. Life's origin may have been inevitable.
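The idea can be made computational. Below is a sketch in the style of Hordijk and Steel's RAF-detection algorithm (the reaction encoding and the toy chemistry are invented for illustration): repeatedly discard reactions that are uncatalyzed from within or whose reactants are unreachable from the food set, until a fixed point remains.

```python
def max_raf(reactions, food):
    """Find the maximal RAF subset of `reactions`.

    Each reaction is a tuple (reactants, products, catalysts) of sets
    of molecule names. `food` is the set of freely available molecules.
    """
    active = list(reactions)
    while True:
        # Closure: everything producible from the food set via active reactions.
        reachable = set(food)
        changed = True
        while changed:
            changed = False
            for reactants, products, _ in active:
                if reactants <= reachable and not products <= reachable:
                    reachable |= products
                    changed = True
        # Keep only reactions that are both reachable and internally catalyzed.
        kept = [r for r in active if r[0] <= reachable and r[2] & reachable]
        if len(kept) == len(active):
            return kept
        active = kept

# Toy chemistry: a mutually catalytic pair built from food {a, b}.
rxns = [
    ({"a", "b"}, {"ab"}, {"ba"}),  # a + b -> ab, catalyzed by ba
    ({"b", "a"}, {"ba"}, {"ab"}),  # b + a -> ba, catalyzed by ab
    ({"ab"}, {"c"}, {"x"}),        # dead branch: catalyst x is never produced
]
print(len(max_raf(rxns, {"a", "b"})))  # 2: the mutually catalytic pair survives
```

The surviving pair is "reflexively" autocatalytic: neither reaction is special on its own; the closure property belongs to the set.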
Ordered ←——————— Edge of Chaos ———————→ Chaotic
(frozen)            (complex)           (random)
   |                    |                   |
crystals           life, minds         gas, noise
Systems at the edge of chaos exhibit maximal computational capacity. Too ordered: can't adapt. Too chaotic: can't maintain information. Life operates at the boundary.
Here's the problem: Kauffman explains why complexity emerges. Chollet measures intelligence. Are these the same thing?
Does "edge of chaos" complexity eventually become abstraction? Or is abstraction a fundamentally different phenomenon requiring explicit structure (like brains)?
Possible answers:
- If intelligence emerges from complexity: sufficiently rich evolving systems (like BFF soups) should eventually produce abstraction on their own, given enough time and diversity.
- If intelligence requires specific structure: complexity is necessary but not sufficient, and architecture matters more than scale, as the ARC results hint.
- If abstraction is continuous with replication: there is an unbroken path from self-copying programs to minds, a difference of degree, not of kind.
- If abstraction is discontinuous: some additional ingredient, such as explicit representational machinery, separates complex dynamics from genuine understanding.
The question touches on AGI (can we engineer abstraction, or must we evolve it?), origin-of-life research (was intelligence as inevitable as replication?), and what "understanding" ultimately means.