Q&A

The Computer Scientist Who Parlays Failures Into Breakthroughs

Daniel Spielman solves important problems by thinking hard — about other questions.

Daniel Spielman attended Yale University as an undergraduate and returned as a faculty member in 2005, by which time he had already produced a series of groundbreaking results in computer science.

Brandon Schulman for Quanta Magazine

Introduction

Nestled among the impressive domes and spires of Yale University is the simple office of Daniel Spielman. His shelves are lined with tall black notebooks, containing decades of handwritten notes, and against a wall sits a large, comfortable couch that looks particularly well used.

“I’m sort of built for sitting still and thinking,” he admitted.

What he thinks about, amid the gothic grandeur of the campus, is a slightly more modern topic: computer science. And over his career, Spielman has produced a slew of influential results, although as he describes it, failure has been his most common outcome. “The key point is you have to enjoy the process of working,” he said. “As long as I enjoy that process, then it’s OK — as long as there’s success once in a while.”

Spielman first came to Yale as an undergraduate before attending graduate school at the Massachusetts Institute of Technology, where he earned his doctorate in 1995. While there, he studied the methods used to protect communications from interference, which involved so-called error-correcting codes. Robert Gallager had shown in 1963 how these codes could be built from graphs — mathematical objects consisting of dots (vertices) connected by lines (edges) — but by Spielman’s time, this approach was largely forgotten. Spielman and his adviser, Michael Sipser, were among the few who revived it to create new codes built from special graphs called expander graphs. The codes they invented became the basis for much subsequent work in coding theory, including a major recent breakthrough.

While at MIT, Spielman met the researcher Shang-Hua Teng, now at the University of Southern California, whose career would become intertwined with his own. One of their most fruitful collaborations involved explaining a widely used algorithm called the simplex method, for which they were awarded the Gödel Prize, an annual award for outstanding work in theoretical computer science.

The pair went on to win a second Gödel Prize for coming up with algorithms that can quickly solve large sets of simple linear equations. The sets of equations they studied come up whenever scientists model simple physical systems, like heat flow or electrical currents, making their algorithm one of great practical importance.

For these results, in 2010 Spielman was awarded the Rolf Nevanlinna Prize, given every four years to an outstanding information scientist less than 40 years old. (The prize has since been renamed the IMU Abacus Medal.)

Most recently, Spielman has turned his attention to the mathematics behind randomized controlled trials, which underpin modern medical studies. The organizers of these trials try to randomly split study subjects between a test group, which receives an experimental treatment, and a control group, which doesn’t. However, finite-size groups always end up with an imbalance in some category, such as age or weight or blood pressure. Along with his research group, Spielman has worked to find algorithms for achieving a better balance. Despite a slow start, the project has gone better than Spielman expected. “We haven’t declared it a failure yet,” he said.


“For most of the problems that I’ve solved, I can identify the moment and tell the story about the moment where a certain solution came into my mind,” Spielman said. “But that’s just because I’ve spent an absurd amount of hours working on them.”

Brandon Schulman for Quanta Magazine

Quanta sat down with Spielman in his office to talk about the power of thinking, what makes a successful collaboration, and how research is like gambling. The interview has been condensed and edited for clarity.

Editor’s note: Since 2012, Spielman has received research funding from the Simons Foundation, which also funds this editorially independent publication.

How did you get started in computer science?

In middle school there was this book in the library about computer programming. And I remember reading it and it just made it sound like the most amazing thing ever. They were talking about how you program a robot — explaining how you’d have to program all these basic tasks and come up with some way of organizing it. From that point on, I totally wanted a computer. And at some point, my parents found an old Commodore computer that an engineer was selling. What was awesome was we got not just the computer, but all the engineer’s magazines and books. I pored through them. I just spent a huge amount of time reading through all this stuff and learning to program.

Poring through books alone as a kid sounds tough. How did you get through it?

I tend to push through everything. When I was young, I really liked to think. But until I got a computer I didn’t have something I could spend that much time thinking about. I guess you could say I tend to obsess about things. I like to work on a few things very hard. I do get frustrated, but it doesn’t really stop me.

The other thing that keeps me going is probably the same thing that keeps a gambler going. I’ll think that I have a brilliant idea, I’ll think that I solved some problem. I’ll be really excited about it. I won’t be able to sleep. And my wife’s response will be: “Just go to sleep, you’ll find the bug in the morning.” She knows that almost every time I’ll think I’ve solved something, I haven’t. But it’s a big endorphin rush when you think you’ve solved something. And even if the usual outcome is that you’re wrong, that feeling of excitement is motivating.

“I have a really lousy memory,” said Spielman. “When I need to remember things, I remember better if I read my notes.”

Brandon Schulman for Quanta Magazine

What led you to your first big project, on error-correcting codes?

My adviser had suggested trying to better understand probabilistically checkable proofs — one of the major developments of theoretical computer science of the time. I had this idea of relating them to expander graphs, and it turned out that was not actually useful — but I realized it was useful for making error-correcting codes. So we were working on one problem and we failed to solve that, but what we developed was useful for something else. Expander codes felt like an accident that we stumbled into.

That’s true with a lot of my research. A lot of the time, the way I’ve done something is not because it was the problem I was setting out to solve. I was setting out to solve something else, and it didn’t work. But I was aware enough of the intellectual landscape to figure out what I could use it for.

Is that what happened with Shang-Hua Teng and smoothed analysis?

That was a really long project; at least, it felt like it then. For some of it Shang-Hua was practically living in our apartment. And it did come out of another research project which Shang-Hua and I had worked on before and failed.

How did you get started on smoothed analysis?

There are a lot of things people do in practice that they find works for them, and we don’t have a good theoretical explanation for why. We were trying to understand one of these algorithms that was used a lot, called the simplex method. Everyone knew that there were examples where it would fail, but it was still used very successfully in practice. And we were trying to figure out how to explain this. The idea that we eventually came up with was that it works because all the cases where it fails seem very, very fragile.


In performing research, Spielman uses a variety of methods, including pencil-and-paper calculations, computer experiments and simply sitting and thinking.

Brandon Schulman for Quanta Magazine

Part of it was that we’d been coding up all these examples. And we noticed if we weren’t really careful with the numerical precision, then suddenly things that were supposed to break the simplex method didn’t. So that’s how we got the idea that if there was a little randomness in the inputs, the method would be fine. And we were able to prove that. It was an influential idea both because it helped people understand why this one algorithm worked, and because people have used the idea and concept to understand why many other algorithms work.
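The fragility Spielman describes can be sketched with a toy example. This is not the simplex method or his and Teng's actual analysis, just an illustration of the underlying idea: Gaussian elimination without pivoting fails on inputs with an exact zero pivot, but a tiny random perturbation of the same input almost surely avoids the bad case.

```python
import random

# Gaussian elimination without pivoting fails when a pivot is exactly
# zero -- a fragile, measure-zero bad case, loosely analogous to the
# inputs that were supposed to break the simplex method.
def eliminate(a):
    n = len(a)
    a = [row[:] for row in a]
    for k in range(n):
        if a[k][k] == 0:
            raise ZeroDivisionError("zero pivot")
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
    return a

bad = [[0.0, 1.0], [1.0, 1.0]]   # exact zero pivot: elimination fails
try:
    eliminate(bad)
    failed = False
except ZeroDivisionError:
    failed = True

# A little randomness in the inputs makes the bad case vanish:
# the perturbed pivot is nonzero with probability 1.
random.seed(1)
perturbed = [[x + random.gauss(0, 1e-6) for x in row] for row in bad]
eliminate(perturbed)   # succeeds
```

The bad inputs form a razor-thin set, so jiggling the data even slightly escapes them, which is the intuition behind smoothed analysis.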

Why do you think your collaborations with Teng were so successful?

When I was starting grad school at MIT, he was an instructor there. We started working on problems then and had a very compatible working style. You’ll notice I have a couch in my office. In my office at MIT I had two couches. That meant Shang-Hua and I could both work — like, literally just spend all day lying down thinking about something, and when you have an idea, get up and talk about it. He was very happy to spend a lot of time thinking about things and talking about problems. Like me, he was happy to work on absurdly hard problems that we probably wouldn’t solve. Failure was the standard result of anything that we worked on, even if we were working on it for years. But that was OK.


Graphs are essential in modern computer science, and in much of Spielman’s work.

Brandon Schulman for Quanta Magazine

At first glance, the topic of controlled trials seems more straightforward than these other problems. What’s so hard about splitting people into groups?

Think of it this way. If I give you a coin and you flip it 10 times, even though the results are generated randomly, you will see patterns in them, like maybe four heads in a row. Similarly, if you give me 100 people and only a few quantities to measure, like age and gender, a purely random split will almost always leave a disparity in one of them. Being completely random is almost never the right thing to do.
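A quick simulation, not from the article and with illustrative function names of my own, bears out both claims: short random sequences contain runs, and random halves of 100 people, 50 of whom share a trait, rarely split the trait evenly.

```python
import random

random.seed(0)

# Longest run of identical outcomes in a sequence of coin flips.
def longest_run(flips):
    best = cur = 0
    prev = None
    for f in flips:
        cur = cur + 1 if f == prev else 1
        prev = f
        best = max(best, cur)
    return best

print(longest_run("HHTHHHHTTH"))   # even short random strings contain runs

# Randomly split 100 people, 50 of whom share some trait, into two
# groups of 50, and record how often the trait splits unevenly.
def split_disparity(trials=1000):
    people = [1] * 50 + [0] * 50   # 1 = has the trait
    uneven = 0
    for _ in range(trials):
        random.shuffle(people)
        if sum(people[:50]) != 25:
            uneven += 1
    return uneven / trials

print(split_disparity())   # the large majority of splits are uneven
```

The exact 25-25 split is the single most likely outcome, yet it occurs in only about one random split in six; some disparity is the norm.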

So you want to walk some kind of tightrope of randomness?

We call it “artfully random.” We describe it as a trade-off between being completely random and balancing the quantities that you care about. If the things you’re measuring [like age or gender] have even a small impact on the outcome, it’s better to balance them a little bit. I initially thought we were going to be balancing the heck out of things, but it turns out you need only a little bit of balancing and a lot of randomness. But this is a recent development. Most of the earlier results just told you that good divisions exist; they didn’t tell you how to find them. It was a breakthrough result of Nikhil Bansal from 2010 that first started getting us efficient algorithms for doing this. In our work we’re leveraging results from theoretical computer science on efficient algorithms for making these balanced divisions. People weren’t thinking about using these before.
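One simple, well-known way to trade randomness against balance is rerandomization: draw fully random splits and keep one whose imbalance in the measured trait is small. This is a stand-in for illustration only, not the more sophisticated algorithms Spielman's group developed, and the names and the cap value below are my own.

```python
import random

def imbalance(groups, trait):
    # Absolute difference in the trait's total between group 0 and group 1.
    return abs(sum(t for g, t in zip(groups, trait) if g == 0)
               - sum(t for g, t in zip(groups, trait) if g == 1))

def artful_split(trait, cap, max_tries=10000):
    # Rerandomize: keep drawing random half-and-half splits until the
    # imbalance in the measured trait falls at or below the cap.
    n = len(trait)
    groups = [0] * (n // 2) + [1] * (n - n // 2)
    for _ in range(max_tries):
        random.shuffle(groups)
        if imbalance(groups, trait) <= cap:
            break
    return groups

random.seed(0)
ages = [random.randint(20, 80) for _ in range(100)]
split = artful_split(ages, cap=20)
print(imbalance(split, ages))   # total-age gap between groups is at most 20
```

Each accepted split is still one of many equally likely random draws, so a lot of randomness survives while the trait you care about ends up nearly balanced.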

Ultimately, why work on such hard problems in the first place, where failure seems to happen so often?

It’s a gamble. I’m most motivated to work on things that if I solve them, I’d be excited to go around and give talks. I don’t work on things otherwise, generally. And usually there’s some of those around. Other problems, I’m not as motivated to spend time on them. I also feel like all research is hard — at least for me. Even problems that might look simple I still think are hard to work on. So if I’m going to be doing something that’s hard, why not work on something that’s high-impact, or that other people think is hard too?
