The Deep Link Equating Math Proofs and Computer Programs
Introduction
Some scientific discoveries matter because they reveal something new — the double helical structure of DNA, for example, or the existence of black holes. However, some revelations are profound because they show that two old concepts, once thought distinct, are in fact the same. Take James Clerk Maxwell’s equations showing that electricity and magnetism are two aspects of a single phenomenon, or general relativity’s linking of gravity with a curved space-time.
The Curry-Howard correspondence does the same but on a larger scale, linking not just separate concepts within one field, but entire disciplines: computer science and mathematical logic. Also known as the Curry-Howard isomorphism (“isomorphism” meaning a one-to-one correspondence that preserves structure), it establishes a link between mathematical proofs and computer programs.
Simply stated, the Curry-Howard correspondence posits that two concepts from computer science (types and programs) are equivalent, respectively, to propositions and proofs — concepts from logic.
One ramification of this correspondence is that programming — often seen as a personal craft — is elevated to the idealized level of mathematics. Writing a program is not just “coding”; it becomes an act of proving a theorem. This formalizes the act of programming and provides ways to reason mathematically about the correctness of programs.
The correspondence is named for the two researchers who independently discovered it. In 1934, the mathematician and logician Haskell Curry noticed a similarity between functions in mathematics and the implication relationship in logic, which takes the form of “if-then” statements between two propositions.
Inspired by Curry’s observation, the mathematical logician William Alvin Howard discovered a deeper link between computation and logic in 1969, showing that running a computer program is a lot like simplifying a logical proof. When a computer program runs, each expression is “evaluated” — reduced step by step until it yields a single output. Similarly, in a proof, you start with complex statements that you can simplify (by eliminating redundant steps, for example, or by substituting complex expressions with simpler ones) until you arrive at a conclusion — a more condensed and succinct statement derived from many interim statements.
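You can watch this in miniature in a language like Lean, which doubles as a proof assistant. The snippet below is an illustrative sketch (not an example from Curry or Howard): evaluating the expression means reducing the function application until only a value remains, much as eliminating a detour shortens a proof.

```lean
-- Evaluation as simplification: the application (fun n => n + 1) 4
-- reduces to 4 + 1, and then to the single output 5.
#eval (fun n : Nat => n + 1) 4   -- output: 5
```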
While this description conveys a general sense of the correspondence, to fully understand it we need to learn a bit more about what computer scientists call “type theory.”
Let’s start with a famous paradox: In a village there lives a barber who shaves all the men who do not shave themselves, and only them. Does the barber shave himself? If the answer is yes, then he must not shave himself (because he only shaves men who don’t shave themselves). If the answer is no, then he must shave himself (because he shaves all the men who don’t shave themselves). This is an informal version of a paradox Bertrand Russell discovered while trying to establish the foundations of mathematics using a concept called sets. That is, it’s impossible to define a set that contains all sets that do not contain themselves without encountering contradictions.
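Written out formally — this is the standard textbook formulation, not anything specific to this article — the offending set and its contradiction look like this:

```latex
R = \{\, x \mid x \notin x \,\}, \qquad\text{so that}\qquad R \in R \iff R \notin R.
```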
To avoid this paradox, Russell showed, we can use “types.” Roughly speaking, these are categories whose specific values are called objects. For example, if there’s a type called “Nat,” meaning natural numbers, its objects are 1, 2, 3, and so on. Researchers typically use a colon to denote the type of an object. The number 7, of integer type, can be written as “7: Integer.” You can have a function that takes an object of type A and spits out an object of type B, or one that combines a pair of objects, one of type A and one of type B, into an object of a new type, called “A × B.”
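In a typed language such as Lean, those ideas look like the following minimal sketch (all the names here are illustrative, invented for this example):

```lean
-- An object and its type: "7 : Integer" from the text, written in Lean.
def seven : Int := 7

-- A function that takes an object of type String and returns an object of type Nat.
def lengthOf : String → Nat := fun s => s.length

-- An object of the product type Nat × String, built from a pair of objects.
def pairExample : Nat × String := (3, "three")

#check lengthOf     -- lengthOf : String → Nat
#check pairExample  -- pairExample : Nat × String
```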
One way to resolve the paradox, therefore, is to put these types into a hierarchy, so they can only contain elements of a “lower level” than themselves. Then a type can’t contain itself, which avoids the self-referentiality that creates the paradox.
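Lean’s type system builds in exactly this kind of hierarchy, called universes; the small check below (a sketch, with the tool’s output shown in comments) makes the levels visible.

```lean
-- Each universe lives strictly inside the next one up, so no type can contain itself.
#check Nat       -- Nat : Type
#check Type      -- Type : Type 1
#check Type 1    -- Type 1 : Type 2
```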
In the world of type theory, proving that a statement is true can look different from what we’re used to. If we want to prove that the integer 8 is even, then it’s a matter of showing that 8 is indeed an object of a specific type called “Even,” where the rule for membership is being divisible by 2. After verifying that 8 is divisible by 2, we can conclude that 8 is indeed an “inhabitant” of the type Even.
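Here is how that might look in Lean — a minimal sketch in which the definition of Even and the theorem name are ours, introduced purely for illustration. The proof is an object inhabiting the type “Even 8”: the witness 4, together with the fact that 8 = 2 × 4.

```lean
-- An illustrative definition: a number is Even if it is twice some k.
def Even (n : Nat) : Prop := ∃ k, n = 2 * k

-- The inhabitant of "Even 8": the witness 4, plus the computation 8 = 2 * 4.
theorem eight_is_even : Even 8 := ⟨4, rfl⟩
```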
Curry and Howard showed that types are fundamentally equivalent to logical propositions. When a function “inhabits” a type — that is, when you can successfully define a function that is an object of that type — you’re effectively showing that the corresponding proposition is true. So functions that take an input of type A and give an output of type B, denoted as the type A → B, correspond to an implication: “If A, then B.” For example, take the proposition “If it’s raining, then the ground is wet.” In type theory, this proposition would be modeled by a function with the type “Raining → GroundIsWet.” The different-looking formulations are in fact mathematically the same.
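A short Lean sketch makes the reading concrete. The propositions Raining and GroundIsWet below are hypothetical placeholders, introduced only for this illustration.

```lean
-- Hypothetical propositions, standing in for the example in the text.
variable (Raining GroundIsWet : Prop)

-- An inhabitant of the type Raining → GroundIsWet is a function turning evidence
-- of rain into evidence of a wet ground — i.e., a proof of the implication.
example (rainWetsGround : Raining → GroundIsWet) (itIsRaining : Raining) : GroundIsWet :=
  rainWetsGround itIsRaining

-- For an implication we can prove outright — "if A and B, then A" — the proof is
-- literally the function that returns the first component of the pair.
example (A B : Prop) : A ∧ B → A :=
  fun h => h.left
```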
As abstract as that linkage may sound, it has not only changed how practitioners of math and computer science think about their work, but also led to several practical applications in both fields. For computer science, it provides a theoretical foundation for software verification, the process of ensuring the correctness of software. By framing desired behaviors in terms of logical propositions, programmers can mathematically prove that a program behaves as expected. It also provides a strong theoretical foundation for designing more powerful functional programming languages.
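A verification task can be as small as stating a property of a function as a proposition and then proving it. The Lean fragment below is a toy sketch — the names double and double_is_even are hypothetical, not drawn from any real-world specification.

```lean
-- A toy program.
def double (n : Nat) : Nat := 2 * n

-- Desired behavior, stated as a proposition and proved: the output is always even.
theorem double_is_even (n : Nat) : ∃ k, double n = 2 * k := ⟨n, rfl⟩
```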
And for mathematics, the correspondence has led to the birth of proof assistants, also called interactive theorem provers. These are software tools, such as Coq and Lean, that aid in constructing formal proofs. In Coq, each step of the proof is essentially a program, and the proof’s validity is checked with type-checking algorithms. Mathematicians have also been using proof assistants — notably, the Lean theorem prover — to formalize mathematics, which involves representing mathematical concepts, theorems and proofs in a rigorous, computer-verifiable format. That allows the sometimes informal language of mathematics to be checked by computers.
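For a taste of what this looks like, here is a tiny Lean fragment — an illustrative sketch, not taken from any formalization project. The statement of each theorem is a type, the expression after “:=” is a program inhabiting it, and the type checker’s acceptance of that program is what certifies the proof.

```lean
-- The statement "2 + 2 = 4" is a type; rfl is a program inhabiting it.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A proof that "A and B" implies "B and A": swap the two pieces of evidence.
theorem and_swap (A B : Prop) (h : A ∧ B) : B ∧ A := ⟨h.right, h.left⟩
```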
Researchers are still exploring the consequences of this link between math and programming. The original Curry-Howard correspondence connects programming with a kind of logic called intuitionistic logic, but it turns out that other kinds of logic are amenable to such unifications as well.
“What has happened in the century since Curry’s insight is that we keep discovering more and more instances where ‘logical system X corresponds to computational system Y,’” said Michael Clarkson, a computer scientist at Cornell University. Researchers have already connected programming to other types of logic like linear logic, which includes the concept of “resources,” and modal logic, which deals with concepts of possibility and necessity.
And while this correspondence bears Curry’s and Howard’s names, they are by no means the only ones who have discovered it. This attests to the foundational nature of the correspondence: People keep noticing it again and again. “It seems to be no accident that there’s a deep link between computation and logic,” Clarkson said.