IN 1931, THE Austrian logician Kurt Gödel pulled off arguably one of the most stunning intellectual achievements in history.

Mathematicians of the era sought a solid foundation for mathematics: a set of basic mathematical facts, or axioms, that was both consistent—never leading to contradictions—and complete, serving as the building blocks of all mathematical truths.

But Gödel’s shocking incompleteness theorems, published when he was just 25, crushed that dream. He proved that any set of axioms you could posit as a possible foundation for math will inevitably be incomplete; there will always be true facts about numbers that cannot be proved by those axioms. He also showed that no candidate set of axioms can ever prove its own consistency.

His incompleteness theorems meant there can be no mathematical theory of everything, no unification of what’s provable and what’s true. What mathematicians can prove depends on their starting assumptions, not on any fundamental ground truth from which all answers spring.

In the 89 years since Gödel’s discovery, mathematicians have stumbled upon just the kinds of unanswerable questions his theorems foretold. For example, Gödel himself helped establish that the continuum hypothesis, which concerns the sizes of infinity, is undecidable, as is the halting problem, which asks whether a computer program, given an arbitrary input, will run forever or eventually halt. Undecidable questions have even arisen in physics, suggesting that Gödelian incompleteness afflicts not just math, but—in some ill-understood way—reality.

Here’s a simplified, informal rundown of how Gödel proved his theorems.

Gödel Numbering

Gödel’s main maneuver was to map statements *about* a system of axioms onto statements *within* the system—that is, onto statements about numbers. This mapping allows a system of axioms to talk cogently about itself.

The first step in this process is to map any possible mathematical statement, or series of statements, to a unique number called a Gödel number.

The slightly modified version of Gödel’s scheme presented by Ernest Nagel and James Newman in their 1958 book, *Gödel’s Proof*, begins with 12 elementary symbols that serve as the vocabulary for expressing a set of basic axioms. For example, the statement that something exists can be expressed by the symbol ∃, while addition is expressed by +. Importantly, the symbol *s*, denoting “successor of,” gives a way of specifying numbers; *ss0*, for example, refers to 2.
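The successor notation can be made concrete with a short sketch. The function name below is my own, not part of Gödel’s or Nagel and Newman’s presentation; it just reads off how many times “successor of” has been applied to 0.

```python
def numeral_to_int(numeral: str) -> int:
    """Interpret a successor-notation numeral like 'ss0' as an ordinary integer."""
    if not numeral.endswith("0") or set(numeral[:-1]) - {"s"}:
        raise ValueError("expected a string of 's' characters followed by '0'")
    # Each 's' applies "successor of" once, starting from 0,
    # so the value is simply the count of 's' characters.
    return len(numeral) - 1

# 'ss0' denotes the successor of the successor of 0, i.e. 2.
print(numeral_to_int("ss0"))  # → 2
```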

These 12 symbols then get assigned the Gödel numbers 1 through 12.

Next, letters representing variables, starting with *x*, *y*, and *z*, map onto prime numbers greater than 12 (that is, 13, 17, 19, …).

Then any combination of these symbols and variables—that is, any arithmetical formula or sequence of formulas that can be constructed—gets its own Gödel number.
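In Nagel and Newman’s scheme, a formula gets its Gödel number by raising successive primes to the codes of its symbols: the first prime to the code of the first symbol, the second prime to the code of the second, and so on. The sketch below illustrates this; the particular code table is an assumption for illustration (the exact assignment of 1 through 12 varies by presentation), and the function names are my own.

```python
def nth_prime(n: int) -> int:
    """Return the n-th prime (1-indexed), by simple trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate**0.5) + 1)):
            count += 1
    return candidate

# One possible assignment of the 12 elementary symbols to the
# Gödel numbers 1 through 12 (illustrative, not canonical).
SYMBOL_CODES = {
    "~": 1, "∨": 2, "⊃": 3, "∃": 4, "=": 5, "0": 6,
    "s": 7, "(": 8, ")": 9, ",": 10, "+": 11, "×": 12,
}
# Variables map to primes greater than 12: x → 13, y → 17, z → 19, ...
VARIABLE_CODES = {"x": 13, "y": 17, "z": 19}

def godel_number(symbols):
    """Encode a symbol sequence as a product of prime powers:
    the k-th prime raised to the code of the k-th symbol."""
    n = 1
    for k, sym in enumerate(symbols, start=1):
        code = SYMBOL_CODES.get(sym) or VARIABLE_CODES[sym]
        n *= nth_prime(k) ** code
    return n

# The formula 0 = 0 becomes 2**6 * 3**5 * 5**6.
print(godel_number(["0", "=", "0"]))  # → 243000000
```

Because prime factorization is unique, the original formula can always be recovered from its Gödel number, which is what lets statements about formulas become statements about numbers.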