28 April 2026 · LinkedIn
For most of human history, new knowledge required a human mind. Not access to knowledge - not the retrieval or recombination of what already existed - but the actual derivation of something that had never been known before.
That changed in February 2026.
Physicists have understood for decades that when gluons - the particles carrying the strong nuclear force - interact in a specific configuration, the scattering amplitude should be zero. Textbooks said so. The mathematics seemed to confirm it. A class of interactions was simply treated as impossible, filed away, and left alone.
Then GPT-5.2 looked at the problem.
Working alongside researchers from Harvard, Cambridge, and the Institute for Advanced Study, the model simplified expressions that human authors had spent considerable effort deriving by hand. From those simplified base cases, it identified a pattern no one had found and conjectured a compact formula valid for any number of particles. An internal version of the model then spent twelve hours reasoning through a formal proof independently and arrived at the same result. The physicists verified it. The formula holds.
This is not AI summarising existing knowledge, or accelerating a calculation a human would eventually have reached. A class of particle interactions that physics had treated as impossible for decades has been shown to be nonzero - and the conjecture came from the model first. The distinction matters enormously.
There is a version of AI progress that is, at bottom, a redistribution problem: the same knowledge, produced faster, accessible to more people. Genuinely significant, and worth taking seriously on its own terms. But still operating within the boundaries of what humanity already understands.
February 2026 was something categorically different. The machine moved past the edge of the map.
Nima Arkani-Hamed, one of the world's leading theoretical physicists, said he had long wondered whether the search for simple formulas in this domain could be automated, and that this now seems to be starting to happen across many fields. That observation, from someone who has spent fifteen years thinking about exactly this class of problem, is worth more than most of the commentary this announcement generated.
If AI can now generate physical knowledge that does not yet exist - not retrieve it, not recombine it, but genuinely derive it - then the question of who governs these systems, and toward what ends, becomes something other than a policy discussion. Our institutions were not designed to steward a laboratory that can, in principle, run continuously across every domain of knowledge without stopping.
What changes when the rate of discovery outpaces the rate at which we can decide what to do with it?