A meditation on compute, knowledge, and the space between
Sutton wrote the bitter truth in 2019:
that all our careful knowledge, hand-designed,
will yield to scale, to search, to the machine
that learns what we could never hope to find.
Deep Blue cared nothing for the grandmaster's art—
it searched a billion branches, cold and fast.
AlphaGo learned patterns, not from human heart,
but played, and played, and left our priors in the past.
And yet—the counterexamples remain.
Integer programs bend to clever thought:
forty-three thousand times the speed we gain
from algorithms, not the chips we bought.
In protein folds and lattice QCD,
symmetries unlock what scale cannot.
Some truths are geometric, wild and free—
the structure of the world is not forgot.
The robot, still, cannot quite learn to walk
through scaling up on servers far away.
The physical resists our digital talk;
embodiment demands a different way.
And what of transformers, convolutions too?
Are these not priors, baked into the form?
Locality, attention—structured view—
the architecture shelters from the storm.
Moore's Law is dying; compute growth will slow.
The bitter lesson loses its bitter edge.
Perhaps the answer neither camp can know:
we stand upon a bittersweet bridge.
Not knowledge versus scale, but knowledge with—
let structure guide what learning comes to find.
The dichotomy dissolves into myth:
we teach the search, and searching shapes the mind.
Richard Sutton's 2019 essay "The Bitter Lesson" argued that AI methods which scale with compute consistently outperform approaches that encode human domain knowledge. Yet research keeps turning up counterexamples: algorithmic breakthroughs that outpace Moore's Law, geometric symmetries that make learning tractable where raw scale fails, and physical domains where embodiment resists purely digital scaling. The truth may lie between the two camps: using human insight to structure what machines learn, rather than to encode what we have already learned.