Avi Wigderson, Complexity Theory Pioneer, Wins Turing Award
For more than 40 years, Avi Wigderson has studied problems. But as a computational complexity theorist, he doesn’t necessarily care about the answers to these problems. He often just wants to know if they’re solvable or not, and how to tell. “The situation is ridiculous,” said Wigderson, a computer scientist at the Institute for Advanced Study in Princeton, New Jersey. No matter how hard a question seems, an efficient way to answer it could be hiding just out of reach. “As far as we know, for every problem that we face and try to solve, we can’t rule out that it has an algorithm that can solve it. This is the single most interesting problem for me.”
Today Wigderson was named the winner of the A.M. Turing Award, widely considered one of the top honors in computer science, for his foundational contributions to the theory of computation. Wigderson’s work has touched nearly every area of the field. His colleagues, collaborators, and mentees say he consistently finds unexpected bridges between disparate areas. And his work on randomness and computation, starting in the 1990s, revealed deep connections between mathematics and computer science that underlie today’s investigations.
Madhu Sudan, a computer scientist at Harvard University who won the 2002 Rolf Nevanlinna Prize (now called the Abacus Prize), said Wigderson’s influence on the field is impossible to miss. “It’s very hard to work in any space in computer science without actually intersecting with Avi’s work,” Sudan said. “And everywhere, you find very deep insights.” In the late 1980s, for example, Sudan worked with Wigderson on a paper investigating connections between certain mathematical functions and polynomials. That work launched Sudan’s entire career. “This is typical for Avi,” said Sudan. “He gets into some space, he asks the right questions, and then he moves on.”
Wigderson grew up in Haifa, Israel, as one of three sons of a nurse and an electrical engineer, both survivors of the Holocaust. His father loved puzzles and was intensely interested in fundamental ideas in math, which he shared with his children. “He’s the guy from whom I was infected by this virus,” Wigderson said. When he started college in the 1970s, at the Technion in Haifa, he wanted to major in math, but his parents steered him instead to computer science. “They thought maybe it was a good idea that I would have a job when I’m done,” he said.
He found a field rich with deep, unanswered questions that were mathematical at heart. One of his earliest groundbreaking efforts focused on a seeming contradiction: whether it was possible to convince someone else that a mathematical statement had been proved without showing how.
“The person who sees the proof doesn’t learn anything about the proof itself,” said Ran Raz, a computer scientist at Princeton University. In 1985, Shafi Goldwasser, Silvio Micali, and Charles Rackoff introduced this concept of zero-knowledge interactive proofs, demonstrating its use for a few statements. Wigderson, along with Micali and Oded Goldreich, later broadened that idea, showing that, under a standard cryptographic assumption, every statement that can be proved also has a zero-knowledge proof.
“This is a key result in cryptography; it’s extremely central,” Raz said. Using a zero-knowledge proof, someone can prove they correctly encrypted or signed a message using their secret key, without revealing any information about it. “Avi has some extremely important results in cryptography, and this may be the most important of them.”
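To get a feel for the idea, consider the Schnorr identification protocol, a later construction in the same spirit (this is an illustrative sketch, not the Goldreich-Micali-Wigderson result itself, and the parameters below are toy-sized, far too small for real security): a prover convinces a verifier that she knows the secret key behind a public key without revealing anything about that key.

```python
import random

# Toy Schnorr identification protocol. Parameters are illustrative only
# and far too small for real cryptographic use.
p, q = 2039, 1019          # p = 2q + 1, both prime
g = 4                      # generator of the order-q subgroup mod p

x = random.randrange(1, q)     # prover's secret key
y = pow(g, x, p)               # public key, safe to publish

# One round of the protocol.
r = random.randrange(1, q)     # prover's fresh ephemeral secret
t = pow(g, r, p)               # commitment sent to the verifier
c = random.randrange(1, q)     # verifier's random challenge
s = (r + c * x) % q            # prover's response

# Verifier accepts iff g^s == t * y^c (mod p). The transcript (t, c, s)
# reveals nothing about x: anyone could fabricate such a triple without
# knowing x, which is the intuition behind "zero knowledge".
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verifier accepts; the secret key was never revealed")
```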
But perhaps Wigderson’s most foundational result lies in another area: linking computational hardness to randomness. By the late 1970s, computer scientists had realized that for many hard problems, algorithms that employed randomness, also called probabilistic algorithms, could vastly outcompete their deterministic alternatives. In a 1977 proof, for example, Robert Solovay and Volker Strassen introduced a randomized algorithm that could determine whether a number is prime faster than the best deterministic algorithms of that time.
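The test itself is simple enough to sketch in a few lines. Here is a minimal Python rendering of the Solovay-Strassen idea (the round count and demo inputs are our own choices): pick random bases and check Euler’s criterion, which every odd prime satisfies but which at least half of all bases expose for any odd composite.

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    """False means n is certainly composite; True means 'probably prime'
    (error probability at most 2**-rounds)."""
    if n < 2 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = random.randrange(2, n)
        j = jacobi(a, n) % n              # map -1 to n - 1
        if j == 0 or pow(a, (n - 1) // 2, n) != j:
            return False                  # a witnesses that n is composite
    return True

print(solovay_strassen(2**61 - 1))   # a Mersenne prime: True
print(solovay_strassen(2**61 + 1))   # divisible by 3: False
```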
For some problems, probabilistic algorithms can point to deterministic ones. In the early 1980s, Wigderson worked with Richard Karp of the University of California, Berkeley, to connect the idea of randomness to problems considered computationally hard, meaning no known deterministic algorithms can solve them in a reasonable amount of time. “We don’t know how to prove they’re hard,” Wigderson said. However, he and Karp found a randomized algorithm for a certain hard problem that they were later able to derandomize, effectively uncovering a deterministic algorithm for it. Around the same time, other researchers showed how computational hardness assumptions in cryptography could enable derandomization in general.
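The Karp-Wigderson algorithm itself is involved, but a textbook example conveys what derandomizing means. In the sketch below (our illustration, not their algorithm), a random two-coloring of a graph’s vertices cuts at least half the edges on average; fixing each vertex’s “coin flip” to its better value, one vertex at a time, turns that averaging argument into a deterministic procedure with the same guarantee.

```python
def derandomized_cut(n, edges):
    """Deterministically 2-color vertices 0..n-1 so that at least half
    of the edges have endpoints on different sides."""
    side = {}
    for v in range(n):
        # Sides of v's neighbors that have already been placed.
        placed = [side[u] for u, w in edges if w == v and u in side]
        placed += [side[w] for u, w in edges if u == v and w in side]
        # Fixing v's coin flip to the better value never lets the
        # conditional expected cut size drop below len(edges) / 2.
        side[v] = 0 if placed.count(1) >= placed.count(0) else 1
    cut = sum(side[u] != side[w] for u, w in edges)
    return side, cut

# A triangle has 3 edges; the procedure is guaranteed to cut at least 2.
print(derandomized_cut(3, [(0, 1), (1, 2), (0, 2)]))
```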
The unreasonable effectiveness of randomness led him to think about the nature of randomness itself. He, like other researchers at the time, questioned how necessary it was for efficient problem-solving and under what conditions it could be eliminated altogether. “Initially, it was not clear if this was only our own stupidity, that we cannot remove randomness,” he said. “But the larger question was whether randomness can always be eliminated efficiently or not.” He realized that the need for randomness was intimately tied to the computational difficulty of the problem.
For a 1994 paper, he and the computer scientist Noam Nisan illuminated that connection. They proved that if any natural hard problems exist, as most computer scientists suspect, then every efficient randomized algorithm can be replaced by an efficient deterministic one. “You can always eliminate randomness,” Wigderson said.
Importantly, they found that deterministic algorithms can make do with “pseudorandom” sequences — strings of data that appear random but aren’t. They also showed how a sufficiently hard problem can be used to build a pseudorandom generator. Feeding the generator’s pseudorandom bits (instead of truly random ones) into a probabilistic algorithm, then running it on every possible short seed and taking the majority answer, yields an efficient deterministic algorithm for the same problem.
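Here is a minimal sketch of how that substitution works, with a toy shift-register generator standing in for a real hardness-based construction (all names and parameters below are illustrative, not the Nisan-Wigderson generator itself): enumerate every short seed, stretch it into the bits the algorithm would have flipped, and take the majority answer. The randomized subroutine is the classic polynomial identity test, which errs only rarely on random evaluation points.

```python
from itertools import product

def prg(seed, out_len):
    # Hypothetical toy generator (a shift register), standing in for a
    # generator built from a genuinely hard function.
    state = list(seed)
    out = []
    for _ in range(out_len):
        out.append(state[0])
        state = state[1:] + [state[0] ^ state[-1]]
    return out

def derandomize(algo, x, seed_len, out_len):
    # Try every short seed deterministically; majority vote replaces chance.
    votes = sum(algo(x, prg(seed, out_len))
                for seed in product([0, 1], repeat=seed_len))
    return 2 * votes > 2 ** seed_len

def identity_test(polys, bits):
    # Classic randomized algorithm: two distinct low-degree polynomials
    # rarely agree at a random point modulo a large prime.
    (p_coeffs, q_coeffs), m = polys, 10007
    v = int(''.join(map(str, bits)), 2) % m
    ev = lambda cs: sum(c * pow(v, i, m) for i, c in enumerate(cs)) % m
    return ev(p_coeffs) == ev(q_coeffs)

# (x + 1)^2 equals x^2 + 2x + 1 but not x^2 + 2x + 2.
print(derandomize(identity_test, ([1, 2, 1], [1, 2, 1]), 12, 14))  # True
print(derandomize(identity_test, ([1, 2, 1], [2, 2, 1]), 12, 14))  # False
```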
Sudan said that paper helped computer scientists recognize degrees of randomness that could help reveal the intricacies of hard problems and how to solve them. “It’s not just randomness but perceptions of randomness,” he said. “That’s the key.”
Sudan points out that randomness seems to appear everywhere but is, in truth, extremely hard to find. “People tell you that the digits of pi look random, or the sequence of numbers that are prime looks random,” he said. “They are completely determined, but they appear random to us.” The perception of randomness, he said, lies at the heart of computer science today. “And that is something that Avi has promoted vastly.”
Randomness has become a powerful resource in complexity theory, but it’s hard to come by. Coin flips and dice rolls, Wigderson points out, aren’t truly random: with enough information about the physical system, the result is entirely predictable. Perfect randomness, he said, is elusive and hard to verify.
But for Wigderson, examples of computing are everywhere — not just within smartphones and laptops and encryption algorithms but in biological and physical systems as well. In recent decades, findings from the theory of computation have yielded insights into a range of unexpected problems, from swarming birds and election outcomes to biochemical reactions in the body. “Basically, any natural process is an evolution which you can view as computation, so you can study it as such,” he said. “Almost everything computes.”