New Proof Settles Decades-Old Bet About Connected Networks

Introduction
It started with a bet.
In the late 1980s, at a conference in Lausanne, the mathematicians Noga Alon and Peter Sarnak got into a friendly debate. Both were studying collections of nodes and edges called graphs. In particular, they wanted to better understand a paradoxical type of graph, called an expander, that has relatively few edges but is still highly interconnected.
At issue were the very best expanders: those that are as connected as they can be. Sarnak proposed that such graphs are rare; he and two collaborators would soon publish a paper that used complicated ideas from number theory to build examples, and he argued that any other constructions would be similarly difficult to achieve. Alon, on the other hand, was banking on the fact that random graphs often display all sorts of optimal properties. He thought that these extremely good expanders would be common — that if you randomly selected a graph from a large bucket of possibilities, it would almost certainly be an optimal expander.
Today, Alon and Sarnak are colleagues at Princeton University. The details of the bet have become hazy in the intervening 35 years. “It was not extremely serious,” Alon recalled. “We didn’t even agree on what we are betting.”
Still, the legend persisted, a subtle nudge to mathematicians to find out who was right. In December, by tapping into a crucial phenomenon in physics — and pushing it to its limits — three mathematicians finally issued a verdict. Alon and Sarnak were both wrong.
The Limits of Expansion
Since mathematicians began studying expander graphs in the 1960s, they’ve been used to model the brain, perform statistical analyses, and build error-correcting codes — ways of encoding messages so that they can still be read even if they get partially garbled in transmission. Because expanders have so few edges, they’re incredibly efficient. But because they’re also highly connected, they’re still resistant to potential network failures. This tension, Sarnak said, “makes them both counterintuitive and so useful.”
And so mathematicians want to understand them better. How far can this tension between decreasing the number of edges and increasing the graph’s connectivity be pushed? And how common are the particularly good expander graphs for which this tension is highest?
To answer these questions, researchers need to define expansion precisely. There are many ways to do so. One is that, in order to split an expander graph into two separate pieces, you would have to erase many edges. Another is that if you wander along the graph’s edges, choosing your direction randomly at each step, it won’t take long for you to explore the entire graph.
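One rough way to see the random-walk definition in action is to simulate walks on two graphs that both have few edges but differ in how connected they are. The sketch below is only an illustration, not anything from the proof; it assumes Python with the networkx library, and the particular sizes and step counts are arbitrary choices.

```python
import random
import networkx as nx  # assumes the networkx library is available

def nodes_reached(G, steps, start=0, seed=1):
    """Wander randomly along edges and count how many distinct nodes get visited."""
    rng = random.Random(seed)
    node, seen = start, {start}
    for _ in range(steps):
        node = rng.choice(list(G.neighbors(node)))
        seen.add(node)
    return len(seen)

n = 500
expander = nx.random_regular_graph(3, n, seed=0)  # few edges, but highly interconnected
cycle = nx.cycle_graph(n)                         # also few edges, but poorly connected

print(nodes_reached(expander, 2000))  # the walk quickly sees most of the graph
print(nodes_reached(cycle, 2000))     # the walk lingers near where it started
```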
In 1984, the mathematician Józef Dodziuk showed that all these measures of expansion are linked through one quantity — at least, for certain types of graphs. On these so-called regular graphs, every node has the same number of edges. This ensures that the entire graph has relatively few edges. For it to be an expander, then, you just have to show that it’s well connected. That’s where Dodziuk’s quantity comes in.
To calculate this quantity, you must first construct an array of 1s and 0s called an adjacency matrix, which records which nodes in your graph are connected by edges and which are not.
You can then use this matrix to compute a sequence of numbers, called eigenvalues, that provide useful information about the original graph. The biggest eigenvalue, for instance, gives the number of edges that are connected to each node of the graph. Dodziuk found that the second-biggest eigenvalue tells you how well connected the graph is. The smaller this number is, the more connected the graph is — making it a better expander.
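As a concrete, deliberately tiny illustration of that computation, here is a sketch in Python with numpy. The graph is my own choice: a 3-regular graph on four nodes in which every node is joined to every other.

```python
import numpy as np

# Adjacency matrix of a small 3-regular graph (four nodes, each joined to the other three).
# Entry [i][j] is 1 when nodes i and j share an edge, and 0 when they don't.
A = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 0],
])

eigenvalues = np.sort(np.linalg.eigvalsh(A))[::-1]  # sorted from biggest to smallest
print(eigenvalues)     # [ 3. -1. -1. -1.]
print(eigenvalues[0])  # the biggest eigenvalue: 3, the number of edges at each node
print(eigenvalues[1])  # the second-biggest eigenvalue, Dodziuk's measure of connectivity
```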
Shortly after Dodziuk’s finding, Alon and Ravi Boppana showed that if each node in a regular graph has d edges, the second eigenvalue can’t get much smaller than $2\sqrt{d-1}$. A regular graph whose second eigenvalue is close to this “Alon-Boppana bound” is a good expander; it’s well connected relative to other regular graphs with the same number of edges. But a regular graph whose second eigenvalue actually reaches the bound — that graph is the very best expander imaginable.
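In principle, that comparison is something you can run numerically for any particular graph. The sketch below, which again assumes the networkx library and uses arbitrary choices of degree and size, draws one random regular graph and compares its eigenvalues to $2\sqrt{d-1}$; it treats the bound as a simple cutoff, glossing over technicalities in the formal Ramanujan definition.

```python
import numpy as np
import networkx as nx  # assumes the networkx library is available

d, n = 3, 1000  # degree and number of nodes; illustrative choices
G = nx.random_regular_graph(d, n, seed=0)

eigs = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]
bound = 2 * np.sqrt(d - 1)  # the Alon-Boppana bound, about 2.83 when d = 3

print(eigs[1], bound)
# The usual Ramanujan condition asks that every eigenvalue other than d itself
# stay within the bound in absolute value (setting aside the bipartite special case).
print(np.all(np.abs(eigs[1:]) <= bound))
```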


In the late 1980s, the mathematicians Peter Sarnak (left) and Noga Alon made a bet about the prevalence of an optimal kind of graph. Neither turned out to be correct.
Courtesy of Peter Sarnak; Nurit Alon
To certain mathematicians — Sarnak among them — the Alon-Boppana bound was an entrancing challenge. Could they construct graphs, they wondered, that reached this limit?
Gambling on Randomness
In a landmark paper published in 1988, Sarnak, Alexander Lubotzky and Ralph Phillips figured out how. Using a highly technical result in number theory by the Indian math prodigy Srinivasa Ramanujan, Sarnak and his collaborators produced regular graphs that achieved the Alon-Boppana bound. As a result, they called these optimal expanders “Ramanujan graphs.” (The same year, Grigorii Margulis used different but still highly technical methods to build other examples.)
“Intuitively, it seems like you might expect” the almost prohibitive difficulty involved in constructing Ramanujan graphs, said Ramon van Handel of the Institute for Advanced Study in Princeton, New Jersey. “It seems like the best possible graph should be very hard to achieve.”
But in mathematics, objects that are difficult to construct often turn out to be surprisingly common. “It’s a general phenomenon in this business,” van Handel said. “Any example you could visualize won’t have these properties, but a random example will.”
Some researchers, including Alon, believed that the same might apply to Ramanujan graphs. The herculean effort required to find these graphs, Alon thought, said more about the limits of the human mind than about how rare the graphs actually were. This conviction led to Alon and Sarnak’s bet: Sarnak wagered that if you gathered up all regular graphs, a negligible proportion would be Ramanujan; Alon, that nearly all would be. Soon, rumors of the bet were floating around the community, even if memories of the moment differ.
“To tell you the truth, it’s more folklore,” Sarnak admitted. “I don’t actually remember the event.”
Decades later, in 2008, an analysis of large numbers of regular graphs and their eigenvalues suggested that the answer wasn’t clear-cut. Some of the graphs were Ramanujan, some were not. This only made figuring out the exact balance more daunting. When proving a property that applies to all graphs (or none), mathematicians have a large toolkit they can turn to. But to prove that some graphs are Ramanujan, while others are not — that takes precision, and graph theorists weren’t sure where this precision would come from.
It turned out that in a completely different area of mathematics, a researcher named Horng-Tzer Yau was figuring that out.
A ‘Crazy’ Vision
As graph theorists grappled with the implications of the 2008 study, Yau, a professor at Harvard University, was several years into his own obsession with eigenvalues. These eigenvalues came from a much broader class of matrices, whose entries are randomly generated — say, by flipping a coin or performing some other random process. Yau wanted to understand how a matrix’s eigenvalues might change depending on which random process you used.
The problem dated back to 1955, when the physicist Eugene Wigner used random matrices to model the behavior of nuclei in heavy atoms like uranium. By studying the eigenvalues of these matrices, he hoped to gain insight into the system’s energy levels. Wigner soon noticed something strange: The eigenvalues of different random matrix models all seemed to exhibit identical patterns. For any random matrix, each eigenvalue is also random; pick a range of values, and it has some probability of falling in that range. But it didn’t seem to matter if a random matrix consisted of only 1s and −1s, or if its entries could be any real number. In each case, the probability that its eigenvalues would fall in certain ranges of values didn’t change.

The physicist Eugene Wigner observed surprisingly universal behavior in various random systems he was studying. Mathematicians have now extended the scope of that behavior.
Oak Ridge National Laboratory, U.S. Dept. of Energy
Wigner conjectured that the eigenvalues of any random matrix should always obey the same probability distribution. His prediction became known as the universality conjecture.
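A minimal numerical illustration of the broad-brush version of that idea, assuming Python with numpy: build large symmetric matrices whose entries come from two very different random processes, and the way their eigenvalues spread out looks essentially the same.

```python
import numpy as np

def eigenvalues_of_random_symmetric(n, sampler, seed):
    """Eigenvalues of an n-by-n symmetric matrix whose entries come from `sampler`."""
    rng = np.random.default_rng(seed)
    M = sampler(rng, (n, n))
    M = np.triu(M) + np.triu(M, 1).T              # symmetrize so eigenvalues are real
    return np.linalg.eigvalsh(M) / np.sqrt(n)     # standard rescaling for comparison

n = 1000
coin_flips = eigenvalues_of_random_symmetric(n, lambda r, s: r.choice([-1.0, 1.0], s), 0)
gaussians = eigenvalues_of_random_symmetric(n, lambda r, s: r.standard_normal(s), 1)

# Despite the very different entry distributions, the eigenvalues spread out in
# nearly identical ways: compare a few quantiles of the two spectra.
print(np.quantile(coin_flips, [0.1, 0.5, 0.9]))
print(np.quantile(gaussians, [0.1, 0.5, 0.9]))
```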
The idea was “crazy,” Yau said. “Many people didn’t believe what he was saying.” But over time, he and other mathematicians proved that the universality conjecture held up for many kinds of random matrices. Over and over again, Wigner was vindicated.
Yau now wanted to see how far he could push the conjecture. “I was trying to look for problems that can sort of go beyond our understanding of a standard matrix,” he said.
So in 2013, when Sarnak proposed that Yau study the eigenvalues of the matrices associated with random regular graphs, he accepted the challenge.
If Yau could prove that these eigenvalues obeyed the universality conjecture, he’d know their probability distribution. He could then use that information to calculate how likely it was that the second eigenvalue would reach the Alon-Boppana bound. In other words, he’d be able to give a definitive answer to Sarnak and Alon’s bet about what fraction of regular graphs are Ramanujan.
“[Sarnak] just kept poking me, ‘Can you do it?’” Yau said.
So he set out to do just that.
Almost There
Many kinds of random matrices, including the ones that inspired Wigner’s conjecture, have nice properties that make it possible to compute the distribution of their eigenvalues directly. But adjacency matrices of regular graphs don’t: because every row must contain exactly the same number of 1s, their entries aren’t independent of one another, and those direct computations break down.
Around 2015, Yau, along with his graduate student Jiaoyang Huang and two other collaborators, came up with a plan. First, they’d use a random process to tweak the entries in their adjacency matrix slightly, getting a new random matrix that exhibited the properties they needed. They’d then compute the distribution of eigenvalues for this new matrix and show that it satisfied the universality conjecture. Finally, they’d prove that the tweaks they made were too small to affect the original matrix’s eigenvalues — meaning that the original matrix also satisfied the universality conjecture.

Jiaoyang Huang’s research in probability theory has led him to work on problems in math, physics and computer science.
Tong Li
In 2020, after Huang finished graduate school, the mathematicians were able to use this approach to extend the universality conjecture to regular graphs of a certain size. So long as a graph had enough edges, its second eigenvalue would have the same distribution that Wigner had studied decades earlier. But to figure out the answer to Alon and Sarnak’s bet, the mathematicians needed to prove the universality conjecture for all regular graphs, not just some.
Then, in the fall of 2022, a postdoctoral fellow named Theo McKenzie arrived at Harvard, eager to learn more about the tools that Huang, Yau and their collaborators had developed for their 2020 proof. There was a lot of catching up to do. “We had been working for such a long time,” Yau said.
But McKenzie is “quite fearless,” said Nikhil Srivastava, a mathematician at the University of California, Berkeley, and McKenzie’s former doctoral adviser. “He wasn’t afraid of attacking these very difficult kinds of problems.”
After studying Huang and Yau’s methods for months, McKenzie finally felt ready to lend a fresh set of eyes and hands. “You want people to be able to check many details and ask many different questions,” Yau said. “Sometimes you need more manpower.”
At first, the three mathematicians had to settle for a partial result. They couldn’t perform the second step of their proof strategy — calculating the eigenvalue distribution of their tweaked matrices — precisely enough to prove the universality conjecture for all regular graphs. But they were able to show that the eigenvalues still satisfied important properties. These properties strongly suggested that the conjecture would be true.

Theo McKenzie was the last to join a team of mathematicians that settled a decades-old debate on the nature of so-called Ramanujan graphs.
Jennifer Halliday
“I knew they were at the borderline of putting this problem to bed,” Sarnak said.
As it turned out, in a separate project, Huang was already toying with the final ingredient they needed.
Closing the Loop
Huang had been independently studying a set of equations, called loop equations, that describe the behavior of eigenvalues in a random matrix model. He realized that if he, McKenzie and Yau could show that their matrices satisfied these equations with a high enough level of accuracy, that would give them the missing information they needed to get their second step to work.
That’s what they did. After months of painstaking calculations, they had their proof. All regular graphs obey Wigner’s universality conjecture: Choose a regular graph at random, and its eigenvalues will exhibit that same known distribution of values.
Which also meant that the trio now knew the precise distribution of values that the second eigenvalue would take. They could compute what fraction of those eigenvalues hit the Alon-Boppana bound — that is, what fraction of random regular graphs are perfect expanders. After more than three decades, Sarnak and Alon had the answer to their bet. The fraction turned out to be approximately 69%, making the graphs neither common nor rare.
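That fraction is a statement about ever-larger graphs, but you can get a rough empirical feel for it by sampling. The sketch below assumes the networkx library; the sizes are modest, so the estimate will only loosely track the limiting value of roughly 69%.

```python
import numpy as np
import networkx as nx  # assumes the networkx library is available

def is_ramanujan(G, d):
    """Does every eigenvalue other than d stay within the Alon-Boppana bound?"""
    eigs = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]
    return bool(np.all(np.abs(eigs[1:]) <= 2 * np.sqrt(d - 1)))

d, n, trials = 3, 400, 200  # modest sizes so the experiment runs quickly
hits = sum(is_ramanujan(nx.random_regular_graph(d, n, seed=i), d) for i in range(trials))
print(hits / trials)  # fluctuates around the limiting fraction as the graphs grow
```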
Sarnak was first to get the news. “He told us, this is the best Christmas gift he has ever received,” Huang said. “So we feel everything is worth it.”
The result also demonstrates that the universality conjecture is even broader and more powerful than researchers predicted. Mathematicians hope to keep pushing at those limits, and to use the new proof’s techniques to tackle related problems.
But in the meantime, they can enjoy knowing just a bit more about the elusive universe of Ramanujan graphs.
“Both of us were somewhat wrong,” Alon said. “Still,” he added, laughing, “I was a little bit more correct because the probability is bigger than half.”