What Is the True Promise of Quantum Computing?

Peter Greenwood for Quanta Magazine
Introduction
Quantum computing promises unprecedented speed, but in practice, it’s proven remarkably difficult to find important questions that quantum machines can solve faster than classical ones. One of the most notable demonstrations of this came from Ewin Tang, who rose to prominence in the field as a teenager. When quantum algorithms had in principle cracked the so-called recommendation problem, Tang designed classical algorithms that could match them.
So began the approach of “dequantizing,” in which computer scientists look at quantum algorithms and try to achieve the same speeds with classical counterparts. To understand the ongoing contest between classical and quantum computing, co-host Janna Levin spoke to Tang on The Joy of Why podcast. The wide-ranging conversation covered what it was like for Tang to challenge the prevailing wisdom at such a young age, the role of failure in scientific progress, and whether quantum computing will ultimately fulfill its grand ambitions.
Transcript
[Theme plays]
JANNA LEVIN: Hey, Steve.
STEVE STROGATZ: Hi, Janna.
LEVIN: I’m looking forward to talking to you about my interview with Ewin Tang, a computer scientist at UC Berkeley. I realized, talking to her, that I don’t know anything about computer science.
STROGATZ: Welcome to the club.
LEVIN: Do you feel like you’ve got a handle on quantum computing, classical computing?
STROGATZ: Not really. I mean, I can say the words. I’ve heard of Turing machines. I’ve heard of qubits. I’m puzzled by it. I can’t wait to hear your episode.
LEVIN: Yeah, it’s very fascinating ’cause she’s really working on understanding if classical algorithms can do everything a quantum algorithm can do.
STROGATZ: Uh-huh.
LEVIN: And, you know, there’s this kind of promise in quantum computing of this exponential speed up, this incredible power, not just in the hardware, but even in the algorithms, ’cause the algorithms have to be written specifically for the machine.
And so, there’s this kind of classic problem that you and I deal with every day involving recommendations and how we get recommendations from some of these websites we go to. And it doesn’t sound like it would be that important a problem, but it turns out it’s a very sophisticated problem.
STROGATZ: I’m just wondering if I’m getting you. So, is it like the problem that Netflix has where each user has only seen a small subset of the movies offered by Netflix?
LEVIN: Yeah.
STROGATZ: And they wanna tell you might like this other movie?
LEVIN: Yes, exactly. If you’re doing these recommendation problems, you have to build these large matrices. And in fact, it was one of the key problems that quantum algorithms had claimed to exponentially speed up. Ewin set out to prove that there could be no classical counterpart, but in her frustration at getting blocked, she went down this other path and sort of inadvertently realized she might have a classical version that was doing just as well.
STROGATZ: That’s amazing.
LEVIN: Yeah.
STROGATZ: This is a really surprising idea. That’s not supposed to be possible.
LEVIN: Yeah. It’s really surprising and I think it’s also really surprising that she did this at around 18.
STROGATZ: What?
LEVIN: Now this is somebody who I believe started college at the University of Texas at Austin at 14, having skipped three grades.
STROGATZ: Well, alright. So, there’s somebody special there.
LEVIN: Yeah. Do you remember being 14?
STROGATZ: Unfortunately, I do.
LEVIN: Mercifully, I don’t.
STROGATZ: Good for you.
LEVIN: So, an exceptional person. Well, I think we should hear from her. Here is computer scientist Ewin Tang. She’s a Miller Postdoctoral Fellow at UC Berkeley.
[Music plays]
LEVIN: Ewin, welcome to the show. We’re really excited to talk to you.
EWIN TANG: Thank you for having me.
LEVIN: So, algorithms are behind a lot of things, like dating apps and social media, and when I go to watch something, whether it’s one of the streaming platforms or YouTube, I get a list of recommendations, and I have to admit, I’m unsure how they’re picked. What’s happening literally behind the scenes with these recommendations?
TANG: So, recommendation systems are, of course, proprietary algorithms. We don’t exactly know what’s the secret sauce behind Netflix’s recommendation algorithm. We do have some sense about what it might be doing behind the scenes.
For example, there is a notable Netflix challenge where Netflix released some of their data and then challenged a bunch of academic teams to try to produce the best recommendations with their data set. A lot of these techniques are, I think, more or less pretty standard in the industry.
So, some things that some companies still use is like an item-based recommendation. This is more of a, you know, item-by-item basis where it says, like, “Okay, if you like this item, then you might like these other things.” So, this is very similar to what you would see on Amazon where, like, on the page for a particular item, you see a list of other items that are related.
But there’s also other things that you could do. The most standard theoretical abstractions of this kind of question are built around low-rank matrices. If I imagine every user’s preference as a vector, and I imagine the space of all these vectors, then this is going to be more or less low-rank in the sense that I can explain somebody’s preference with only a few pieces of information.
Like, if you take this Netflix data, and if you plot it along certain dimensions, you can explain a lot about what somebody’s preferences are by whether they like rom-coms or whether they like action movies. This is one axis in which you can describe somebody’s preference. And it turns out you don’t need that many axes in order to really pin down what somebody likes.
LEVIN: Let me try to pick this apart for people who don’t know vectors and ranks and matrices. So, let’s imagine I have a spreadsheet and across the top are a list of movies. And I have a bunch of rows that are users. The simplest kind of matrix is I’m just plotting whether or not each of those users watches these movies, for instance. And then I can get more complicated and say, “Oh, they like all these rom-coms, or they like to watch limited series, or they like French movies.” Is that right? Is that what you mean by a matrix?
TANG: The matrix I’m envisioning is that you have all your users as rows and then I have all of my videos as columns in my spreadsheet. And Netflix doesn’t actually do this, but you could imagine asking every user to rank their opinion about every particular video. And then they give some sort of score, maybe it’s 1 through 10, maybe it’s 0 or 1, whether they liked it or not.
And this is the matrix that people have observed empirically is low-rank in the sense that you can say my preferences are like a combination of these ten people’s preferences or something. And that does a very good job of explaining what my preferences are. And that actually gives you a lot of structure with which to run algorithms.
And so you see that a lot of the algorithms that people come up with for producing good recommendations are based around this low-rank structure. Of course, you don’t actually have their preference matrix in full, but you can still use that structure to get something from the incomplete data that you have.
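Tang’s low-rank picture can be sketched in a few lines of Python. The toy ratings matrix below is a made-up illustration, not any real platform’s data: each user’s row is built as a mix of just two “taste” vectors, so the whole matrix has rank 2, and a truncated SVD recovers it exactly.

```python
import numpy as np

# Hypothetical toy data: rows = users, columns = movies.
# Each user's ratings are a weighted mix of two underlying tastes
# (say, rom-coms vs. action), so the matrix is exactly rank 2.
tastes = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [0.7, 0.3]])          # user -> taste weights
genres = np.array([[5, 4, 1, 0],
                   [0, 1, 4, 5]])        # taste -> per-movie scores
ratings = tastes @ genres                # 3 users x 4 movies

# Truncated SVD keeps only the top-k singular values -- the
# low-rank structure that recommendation algorithms exploit.
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2
approx = U[:, :k] * s[:k] @ Vt[:k, :]

# Because the matrix really is rank 2, the rank-2 approximation
# reproduces it, and user 2's best movie is read off their row.
assert np.allclose(approx, ratings)
best = int(np.argmax(approx[2]))
```

In practice the observed matrix is mostly empty, so the interesting algorithms fit a low-rank approximation to incomplete data rather than decomposing a fully known matrix as this sketch does.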
LEVIN: So, this very abstract subject has now become very practical in some ways, and lots of people have heard the terminology algorithm, so, what even is an algorithm? I think people throw it around and aren’t really sure. So, what do people really mean when they’re discussing algorithms now?
TANG: There’s an informal notion of the word algorithm, which I throw around all the time. But more concretely speaking, typically you think of an algorithm as something that performs a task. So there’s some particular kind of input, and you want to compute some function of that input.
For example, I’m given a list of numbers, and the goal is to output a sorted list of numbers. And then the thing that goes from the input to the output is called an algorithm. You could imagine constructing an algorithm in a variety of ways. For example, you could imagine writing it in code, but that’s the basic idea.
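Tang’s sorting example maps directly onto code. Insertion sort is just one of many possible algorithms for the same input-output task: a list of numbers goes in, the sorted list comes out.

```python
def insertion_sort(nums):
    """A classic algorithm: input a list of numbers, output the sorted list."""
    out = list(nums)                 # leave the input unmodified
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        # Shift larger elements one slot right until key's spot is found.
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out
```

Any procedure with the same input-output behavior counts as an algorithm for this task; they differ only in how fast they get there.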
LEVIN: A little bit of a digression out of sheer curiosity, but do you think the human mind works algorithmically? We’re taking inputs, we’re processing, and then we have an output?
TANG: This is definitely above my pay grade, but I think people think that the universe is a quantum computer, right? People say this. And so, it’s not that far of a stretch to imagine what you’re doing is doing some kind of computation.
LEVIN: So, this issue of having an algorithm that’s trying to determine preferences is important enough that it was a big part of computer science, called the Recommendation Problem.
How did you get introduced to the Recommendation Problem? You’re a very young researcher. You’re interested in higher math, and you become interested in computer science. So, how did it come across your radar?
TANG: Right, so, I was interested in quantum computing. I think I was a junior. I took an intro course in quantum computing and quantum information taught by Scott Aaronson, and after the course I asked him whether he could supervise my senior thesis. He said yes. And then he gave me a few problems that were on his mind at the time. And one of these was this recommendation problem.
So, this was an instance where people were able to find some potential practical use for a quantum computer through this recommendation problem. And they are able to justify this sort of switch and try to argue that this thing could actually produce good recommendations and do it much, much faster than other classical algorithms can.
LEVIN: So, let me see if I understand. You start out. You’re interested in quantum computing. And around that time, there were kind of some breakthroughs. I mean, one of the most important bragging points for quantum computing was that they could really speed up this recommendation algorithm. There was some real excitement about that. Finally, quantum computing had proven it could do something that it had long claimed it might theoretically be able to do, which is speed up a computation or an algorithm.
TANG: Right. I think quantum computing arguably has some kind of problem with finding very wide-ranging applications. In the community, we have some sense that, okay, quantum computers, they could be used to perform various kinds of simulations of physical systems. And we also know that it can factor numbers and break modern public key cryptography.
But you might hope to have a much broader impact. It’s kind of striking that quantum computers can only solve what seems like maybe a fairly limited set of problems much faster than classical algorithms. So, recommendation systems is this attempt of trying to break into like a larger sphere of different kinds of algorithms, things related to manipulating data and things which are present everywhere in tech and in industry, not just a maybe more niche domain.
LEVIN: And so, is there a simple way to understand what’s meant by the difference between a classical algorithm and a quantum algorithm?
TANG: The difference between quantum and classical algorithms, I feel like it’s a nuanced question.
LEVIN: Welcome to the world of quantum.
TANG: Yeah. I think one of the simplest ways to understand what’s going on is that quantum computers work with these things called superpositions. But you can think about these as more advanced versions of just probabilities.
For example, like, if I have a list of numbers and I want to understand, okay, what’s the average of this list of numbers? If the list of numbers is really, really big, then it might take a long time for me to sum all these up and divide it by, you know, the number of numbers.
But something I could do to make it faster is I could randomly choose a bunch of these numbers and then take the average of this set of numbers. There’s this whole field of statistics which basically says that if I choose enough numbers, then it’ll more or less be close to the full average of the entire dataset, right?
So, this is sort of what you’re able to do with classical probabilities. You’re able to take these big, big datasets and somehow operate on them only touching like a subset of the data.
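The sampling idea Tang describes can be sketched as follows; the dataset, sample size, and seed are arbitrary choices for illustration. Averaging a random sample touches only a tiny fraction of the data yet lands close to the true mean.

```python
import random

def estimate_mean(data, n_samples=1000, seed=0):
    """Estimate the mean of a large list by averaging a random
    sample of its entries, rather than summing every element."""
    rng = random.Random(seed)
    sample = [rng.choice(data) for _ in range(n_samples)]
    return sum(sample) / n_samples

data = list(range(1_000_000))            # true mean is 499999.5
est = estimate_mean(data, n_samples=2000)
```

The standard error of this estimator shrinks like one over the square root of the sample size, which is why a few thousand samples suffice even for a million-element list.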
Now quantum algorithms are able to do this, but they have some additional powers, like interference and so on. There are situations in which I would really like it if two probabilities cancelled out. And on a classical computer, you really can’t do this, and so this is a genuine bottleneck that you experience.
On a quantum computer, you don’t have this issue. They’re doing these things using superposition in order to manipulate data, perform recommendations, and so on.
Even though it’s not obvious, it turns out that the quantum algorithm for this recommendation problem wasn’t really heavily using these sorts of quantum features of the superposition. And the classical algorithm is somehow able to do this by replacing the superpositions with probabilities. That’s more or less how I would explain it.
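One way to picture “replacing superpositions with probabilities”: a core primitive in dequantized algorithms is sampling an index of a vector with probability proportional to its squared entries, which mimics measuring a quantum state. The sketch below uses that kind of access to estimate an inner product; the function names and sample count are illustrative assumptions, not Tang’s actual construction.

```python
import random

def sample_from_squared(x, rng):
    """Classical analogue of measuring the state |x>: return index i
    with probability x[i]**2 / sum_j x[j]**2."""
    weights = [v * v for v in x]
    return rng.choices(range(len(x)), weights=weights, k=1)[0]

def estimate_inner_product(x, y, n_samples=5000, seed=0):
    """Estimate <x, y> using only squared-magnitude samples from x
    plus individual entry lookups -- the kind of data access that
    dequantized algorithms assume in place of a superposition."""
    rng = random.Random(seed)
    norm_sq = sum(v * v for v in x)
    total = 0.0
    for _ in range(n_samples):
        i = sample_from_squared(x, rng)
        total += y[i] / x[i]        # importance-weighted estimator
    return norm_sq * total / n_samples

est = estimate_inner_product([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
```

Each sampled term has expectation proportional to the true inner product, so averaging many of them recovers it without ever reading the whole vector, which is the sense in which probabilities stand in for superpositions.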
LEVIN: So this suggestion from Scott Aaronson, that you take on a range of problems and home in on the recommendation problem, was it to specifically jump in and solve the recommendation problem, or was it to see if the quantum computing claim was irrefutable?
TANG: I was hoping we could prove that no classical computer could do just as well as the quantum one for the recommendation systems problem. This would really show that there is a genuine, exponential speedup between quantum and classical here.
Scott sent me at the time, like, this article called “Read the Fine Print,” where he lays out the challenges of actually proving this sort of exponential speedup.
The hope was that this Kerenidis and Prakash algorithm — this algorithm for recommendation systems — resolved all of the complications that Scott Aaronson had brought up, and as a result, it could be used to then finally find this sort of separation between quantum and classical that we were hoping to find in machine learning tasks.
LEVIN: So, you were hoping to prove that you couldn’t do classically what these researchers had done with the quantum algorithm.
TANG: Yeah, yeah, I was trying to do it. I was getting very stuck.
LEVIN: Do you go to your advisor and say, look, “I’m stuck. I can’t do it.” Are you feeling defeated? Or how does this turn out to be something very exciting for you? How do you go from, you know, the perils of failure, right, to realizing, “Oh, maybe I actually have something in my inability to prove that I can’t do a classical algorithm?”
TANG: This is something I was spending a decent amount of time on in senior year of undergrad. It was a pretty difficult process, I guess.
At the time, Scott was on sabbatical. We talked a few times. I was getting most of my advice from people in his lab. I was just very much hitting a brick wall in terms of trying to prove this lower bound. I feel like these experiences were kind of new to me at the time with respect to, like…
LEVIN: Failure?
TANG: Yeah, well, no, no, no. Well, I think failure in research is definitely a different beast because it’s not necessarily even about, you spend a lot of time and you fail. It’s more like if you don’t have any ideas, then you can’t even spend that much time thinking about the problem.
And so I kept having to like push myself to actually think about this. Because I just literally had no footholds to attempt anything. So, what eventually ended up happening is that I started just reading the literature on related problems in the classical world.
And I had seen one paper that seemed maybe a little bit relevant in the sense that it was claiming to do something that was also much, much faster than your normal classical algorithm. It seemed sort of related, but had these weird kinds of assumptions and it was a bit tough. And so I put it down and I stopped thinking about it.
And then much later I decided to turn again to saying, okay, I can’t actually prove this lower bound. So let me just try to break up this problem into different pieces and decide, okay, which part do I believe is the hard part of this algorithm. Like which is the part that the classical algorithm can’t do, and try to isolate what that is.
And so at that point, I break up the problem into two parts. The first one is the actual linear algebra part, like a matrix approximation problem. And the second one is about sampling a recommendation. And when I break this up into two parts, I then realize that the paper that I’d seen before actually solves the first one outright. Or at least, like, you could squint at it and maybe believe that it could.
LEVIN: And so you’re still thinking you’re going to find it now in this more clearly isolated piece.
TANG: Yeah, I was happy because the second part was very simple. And so I was like, okay, this is like a concrete problem now. And I’m just trying to show that this simple, like, self-contained task is hard. And then I think about it some more, and then I find an algorithm for that part. And then at that point I start getting excited.
I start being able to think, okay, maybe I can actually put these two pieces together. If both of these parts are parts that I thought were hard but are actually easy, then maybe you could just make the whole thing classically easy. And then after that there was like, I guess months of doing math and trying to work out the details.
LEVIN: So is there a moment where goals shift to: “Oh, I’m not proving this can’t be done. I’m actually proving it can be done in the classical algorithm.” What was that moment like for you when you realized, “Oh, I think I actually did the opposite of what I set out to do, and maybe it’s even better than what I set out to do.”
TANG: Honestly, I sort of come at this without understanding the context that well. At the time I sort of thought, this is like a reasonable candidate for a quantum algorithm that could be fast. But, you know, whether it’s actually hard for classical computers or not was not something that was on my mind as being as big of a surprise as it turned out to be.
But there was definitely a point during the research process where I started feeling like it was much more productive, just in the sense of I kept on having ideas when I was trying to build an algorithm and I kept on having no ideas when I was trying to think how to prove hardness.
And so, the time I’m thinking, well this is kind of silly because I’ve been tasked to prove hardness and I’m out here doing something completely different. You know, I was just thinking, like, how am I gonna salvage this senior thesis?
But eventually I do realize, like, okay, maybe this is actually something that I could put together and get an actual result. And at that point I decided, okay, I’m actually going to understand this paper that I’ve been like skimming, then actually try to figure out the details and start writing things up.
LEVIN: So you, go to your advisor, you go to Aaronson, and what do you say? Is he following all along, or do you kind of surprise him with this change in direction?
TANG: I think I surprised him. There’s maybe one or two emails where I’m like, I think I might have an algorithm, but I’m not sure yet. And then I sent it to Scott and was like, I think this is how you have an algorithm. Maybe he didn’t believe me at first, I’m not really sure.
LEVIN: So, there was kind of a moment of disbelief.
TANG: It’s a surprising claim, yeah. So we sent it to Iordanis Kerenidis and Anupam Prakash, the authors of the quantum algorithm for the recommendation problem, and then I later on presented it at the Simons Institute, also here at Berkeley.
LEVIN: So, let’s talk about that presentation. You mentioned earlier that the architects of the quantum algorithm that had made kind of a big splash were also going to be there at this workshop where you were meant to present this result, that you had matched the quantum algorithm’s speed classically. That was not what anyone anticipated.
TANG: Yeah, it was maybe summer of 2018, I think, that I went to UC Berkeley and they were there, and some other people were there who were interested in quantum-machine-learning kinds of problems.
LEVIN: So, you’re an 18-year-old senior in college. Do they even know this? At the time?
TANG: I don’t know. I’m not quite sure.
LEVIN: Was that nerve wracking for you or is this just sort of this is just what’s done?
TANG: I definitely felt like this is a little strange. I mean presenting an entire paper, an entire set of proofs on the board. I think it’s always a nerve-wracking task.
LEVIN: At the best of times.
TANG: Yeah, yeah, even in the best of times. And, I mean I was thankful just because I didn’t know that many people, so later on I realized these are all, like, really big names in quantum computing.
LEVIN: Oh, that’s good.
TANG: But at that time, I didn’t really realize that.
LEVIN: So, you’re at Berkeley presenting this result, and then how long does this go on for?
TANG: I think it was, like, most of the day. I think it was, like, maybe, one and a half hours, and then lunch, and then another one and a half hours, and then talking after. We really did work through most of the details of this proof.
LEVIN: And what was the reaction?
TANG: Honestly, okay, I don’t remember that much. What I do remember is basically a rough agreement that it seemed correct. Thinking back about the way I presented the proof, it was kind of a mess, so the fact that people were able to parse it and sort of understand that it was probably correct is appreciated.
The main thing was they started suggesting follow-ups like, “Oh, you could use these techniques to solve x problem or y problem.” That led me down a big rabbit hole trying to solve other kinds of problems using similar techniques.
[Music plays]
STROGATZ: Very encouraging, actually, to hear the response, you know, because sometimes colleagues are dismissive or discouraging. It sounds like just the opposite here.
LEVIN: Yeah. In fact, there were two of the key people who originally had said they had cracked the problem, the recommendation problem, with a quantum algorithm. So there was real generosity in their sitting there and helping her figure out where to go further, to prove they didn’t really have a leg up with the quantum approach.
STROGATZ: Doesn’t that make you feel good to be a scientist where we do this kind of thing with each other?
LEVIN: Yeah. I think it also speaks to these kind of rabbit holes in scientific research. You end up following something which wasn’t your intended target often, don’t you think?
STROGATZ: It’s a great strategy. When you think something is true, try to think about proving the opposite thing, and sometimes that’s what’s true.
LEVIN: Yeah, exactly. And it really sounds like that’s what happened to her and that she was just open to the discovery, open to going down that path. There’s this industry now of dequantizing, of looking at these quantum algorithms and figuring out, “Hey, can I do this classically with just a little clever maneuver?”
LEVIN: So, in the same way that she went down this kind of unexpected rabbit hole, right now she’s working in a very interesting way, modeling natural systems, which kind of surprised me. More on this after the break.
[Music plays]
LEVIN: Welcome back to The Joy of Why. We’re here with Ewin Tang, who in her undergraduate thesis showed that the recommendation problem in computer science could be solved by a classical algorithm just as well as by a quantum one. And that was really just the beginning…
You’re able to reveal this in front of the world’s experts. And now there’s all these applications and there’s this whole area called dequantizing algorithms. Can you explain that a little bit? What is dequantizing of algorithms?
TANG: The process of designing quantum algorithms is always kind of a push-and-pull with the usual algorithms community, because generally what happens is that you want to find some problem that you can solve much faster with a quantum computer than you can a classical computer.
And in order to do this often you have to sort of change the problem that you want to solve a little bit, so like I want to solve linear regression or something. And, well, I can’t solve it outright, and so I sort of change the problem a little bit, I change the input a little bit, I change what I want the output to be a little bit, and then I can say, okay, when I change this thing, then I can get this really fast algorithm.
But when you do that, it leaves open room that you could have a classical algorithm that actually uses a slightly different input and is able to get this slightly different output just as fast as your quantum algorithm, right? Because you’re changing the problem, you’re introducing a problem that perhaps no classical algorithms person has ever studied.
And so the question is when you change this problem, do you make it too easy for a classical computer to solve and ruin your claimed exponential speedup? So, you know, there’s the quantum algorithm side, which is okay, we found this new great algorithm. And then there’s the dequantizing side, which is we have a classical algorithm that can do just as well.
LEVIN: So, are these communities duking it out?
TANG: Often it’s the quantum algorithms people who are doing both the proving and the dequantizing. And so, in that sense, maybe not. But there have been a couple of high-profile cases where a team of classical algorithms people stepped in to contest certain claims.
LEVIN: Do you think that quantum computing is going to fulfill its promise? I mean, I know that’s a huge question, because I don’t think anybody knows yet. But, given what you’ve been working on, do you think quantum computing will fulfill its promise?
TANG: I think quantum computing definitely has potential. They make these really big claims that I don’t necessarily believe will come to pass. But overall, I think the reasoning for why you could expect quantum computers to work is pretty sound, and I think the only reason that it will not come to pass is if there are, like, things that are, maybe sociological in nature and outside of the scope of my expertise.
LEVIN: So, like a real scientist, you’re not taking sides.
TANG: That’s right, yeah.
LEVIN: I actually want to talk about what comes next, which are future directions. I know you’ve been working more recently on physical systems and how all the work that has led to this point influences or allows you to make progress on thinking about more natural systems. Can you tell us about this?
TANG: I started working on physical systems, kind of as a consequence of the dequantizing work. You know, this dequantizing stuff made me feel a little bit, like potentially there could be other things that are more exciting or like have a potential for near-term impact.
And so I started thinking about these applications related to trying to simulate physics and so on.
We have some belief that quantum computers can simulate quantum physics faster than classical computers, and it is true for certain kinds of contexts, at least if you believe quantum computers work at all, then certain kinds of simulation problems can be done on these quantum computers.
But there’s actually a weird kind of gap between what quantum computers can do and if you ask somebody who’s interested in solving problems related to these systems, what they actually care about — the practitioners, I guess.
For example, the thing that we know how to do as quantum computing, quantum algorithms people is we know how to simulate dynamics of systems — so how a system evolves in time. Whereas typically if you talk to somebody who cares about, like, materials, superconducting materials, or chemical reactions, they care about ground states, they care about static properties of systems. These are the states that you get if you leave a system for a long period of time. These things are sort of different from the dynamical properties. And we actually don’t have nearly as good of an understanding of these static properties as we do these like dynamical properties, these dynamical algorithms.
And so my work has shifted to thinking about various tasks around manipulating these systems and trying to understand what we can do with them. And, to begin with we don’t actually know that much about quantum information theory, the behavior of these big quantum systems. And so, designing algorithms and trying to find good applications goes hand in hand with just trying to understand the objects in the first place.
LEVIN: So, let me see if I understand; you’re trying to use classical algorithms now to model quantum physical systems or are you also using quantum algorithms to understand quantum systems?
TANG: It’s a little bit of both. So, the stuff that I’ve been working on recently has been in the realm of learning and simulating quantum systems. And here the setup is that I have some system on a quantum computer that I don’t know, some unknown thing, and I want to extract information from it. I want to understand the underlying mechanics of the system. Or conversely, I have some mechanics of a system and I want to simulate on a quantum computer what it’s doing.
LEVIN: So, can you give me an example of the kind of system? What would the system be?
TANG: So, I’ve been thinking a lot about Gibbs states, or systems at thermal equilibrium.
LEVIN: Does this mean a room full of quantum particles that have come to equilibrium?
TANG: Yeah, yeah.
LEVIN: So, you’re modeling them, and what kinds of dynamical properties can you extract?
TANG: So, general simulation tasks. For these like dynamical properties, you can, for example, estimate the energy, you can estimate correlation functions, like what’s the correlation between two different particles that are far away or something. So, these are things that you can do if I just care about, like, what happens when I evolve my system with time.
But if I consider my system at equilibrium, then the question becomes a little bit more difficult and you have to solve a simulation task and actually be able to say, “Okay I’m gonna prepare my system, prepare my you know, quantum particles.” And then once I have that then I can just do the measurement that I want to do. Somehow this is the easier part than actually preparing the system in the first place.
LEVIN: When people hear scientists talk about, I’m preparing a system — it’s all very abstract. It’s hard to say what the physical system is, maybe these are quantum particles spinning. Why are we doing this? To people on the outside, it could sound like we set up a little game for ourselves. Here’s the game, right? The game is to take a spin system and extract properties.
But I think that we’re all actually motivated by trying to answer some bigger question. Why are we doing this? It’s more than just spending our time on a cool game, right? So, what are those bigger questions that are at stake when you’re looking at these quantum systems?
TANG: That’s a great question. I’m currently on the job market right now, so I’m having to answer questions like this pretty regularly. I guess I could take it from two different angles. One of them is the more practical angle, which is that there are certain kinds of problems that when you abstract details away, they are about the problem that I stated. For example, something that I think the Microsoft group is hoping to do with quantum computers is understand nitrogenases.
LEVIN: I don’t know what those are. New to me, new to me.
TANG: So nitrogenases are what allows you to make fertilizer. And like, when people talk about like adding nitrogen to the soil, what they mean is they’re taking some kind of nitrogen gas and they’re doing some chemical reaction and the nitrogen ends up in the soil. And actually, you know, we do this when we’re trying to extract nitrogen from food and things like this.
And the sort of behavior that underlies these sorts of reactions is not very well understood. For example, I think I saw a talk on this recently: There are certain kinds of transition metals that appear in the complex. And they're somewhat far away from where the action is happening in the chemical reaction.
But somehow they're very important in the behavior of these molecules in these reactions. And this is part of the search for better, more efficient ways of producing things like fertilizer. And producing fertilizer is actually really energy-intensive. So, understanding this scientific task can be reduced down to: Can we build a quantum computer that can actually slowly simulate this chemical reaction, sort of break it down into pieces that we can understand? And if we can understand this better, then maybe we can engineer better chemical reactions, engineer better ways to perform this procedure.
LEVIN: Ultimately, there are these real-world applications, consequences for people, not just what YouTube video they're going to find themselves losing their day to. So, are you often thinking about this interplay, or for you, is it really very much blue skies?
TANG: I do think about it. I mean, it feels like a common experience to me that you start off doing something because maybe the math is cool, and then you, like, have to ask yourself, is this actually going to make some impact on some other areas?
I'm finding this especially interesting to do in quantum computing because I have some kind of understanding of what matters in computer science. And then I look at what the physicists or the chemists are interested in, and I have to try to piece together: Is what I'm doing actually helpful for the potential applications that I'm envisioning in my head?
What are the different kinds of things that people care about? What actually leads to downstream impact? In what sense is that actually important for understanding the things two layers of abstraction down, or many layers of abstraction down? It’s a process of finding models and models and models from, you know, the real world, which is messy.
LEVIN: Yeah, I mean, that’s kind of the whole paradigm of life, right?
TANG: Yeah.
LEVIN: You start because it’s fun, and then you wonder what the actual value is. And I wonder, with all of the conversations that have gotten really heated lately around things like AI, how much of that is something that’s relevant for you? And do you ever wonder if some of the work you do will feed into that, whether it’s taken by you in that direction or by others in that direction?
TANG: Definitely I'm cognizant of this new era that we seem to be entering with regards to AI. If you ask me whether quantum computing has a role to play in this whole landscape, I would say I'm not sure. There are a lot of aspects of current AI that really, really use the sort-of weird nature of the computational objects that we've built up, you know. A lot of the architecture for these artificial intelligence systems is based around training these linear algebra algorithms, which are then run on GPUs, these, like, graphics processing units.
But I have no clue about where the technology of quantum computing will fit into this whole new thing. Will it be better than a GPU in any regard for an algorithm? Maybe the right algorithms are not based on these sorts of current techniques that are being used for classical computers. I mean, these are all possible. I don’t know if I would bet money on it.
LEVIN: So, I want to ask you a question that we sometimes ask here at The Joy of Why. And, that is, what about your research brings you joy?
TANG: I like the collaboration aspect of it. You’re sort of able to explore these uncharted territories of research, and you’re also able to do it while spending time with people that are really smart and fun. And the sort of business of academia is one that is nice enough to support this kind of enterprise and provides the whole thing some meaning.
LEVIN: That’s such an important point, particularly with what’s going on in the world now, and the hostilities towards not just science, but literally universities. This seeming abstract blue skies work has these important consequences for the world that we’re all living in.
TANG: Right, yeah. It’s like exploring for the sake of trying to find new ideas, the space of possibilities.
LEVIN: That’s beautiful. Thank you so much, Ewin. Really fun to talk to you.
[Music plays]
STROGATZ: Now, I would've thought the answer to "Will quantum computing make a big difference in AI?" is, of course it will. It's such an enormously powerful new way of computing, if it ever comes to pass, how could it not have an enormous impact? But she's more open-minded, and I think that's reflected in the results that you describe.
LEVIN: Yeah, and she described being so open-minded herself without really having a kind of a bet on it. You know, just being completely open to how it's gonna play out. And she described that as true of even the people who are in quantum computing: that a lot of them were playing both sides, looking at dequantizing as well as quantum algorithms, just in the spirit of openness.
STROGATZ: It’s very uplifting, actually, this discussion you had with her.
LEVIN: Absolutely, and I will say that element of blue skies research that's so misunderstood, that's crucial for the trickle-down practical consequences. It's actually imperative that we have dreamers who are just thinking about these very difficult, abstract problems.
STROGATZ: Yeah. I mean, because there’s this concept of mission-driven research where you know what you’re trying to do and then you throw money at it and you throw people and resources; and sometimes those things pan out and sometimes they don’t. But at least you know what you’re shooting for. Whereas pure curiosity-driven research, blue sky research, dreamy research often has the biggest payoff of all when it works. But a lot of times it’s a dud because you don’t know what you’re doing, you’re just dreaming.
LEVIN: But I mean, why should taxpayer money support research that people generally find incomprehensible?
STROGATZ: Yeah, it’s a fair question, and there was a visionary after World War II named Vannevar Bush, who had this idea that if you supported the research enterprise in the United States in a really big way with the generosity of the taxpayers, that good things would come from it. Really good things, but unpredictable good things, you know?
And so for a long time, that was the philosophy in this country. And I mean, we’ve got a lot of big payoffs from it, all kinds of cures for diseases. We’ve got semiconductors making the chips in our cell phones and computers. I mean, we could go on and on. All these things came from wild ideas that nobody anticipated. And it’s because genuine discovery is not predictable.
LEVIN: Mm-hmm, yeah.
STROGATZ: And I don’t know, you gotta trust us. We do deliver, we the scientists, but if you don’t let us do our thing, we’re not gonna do our thing.
LEVIN: Right, yeah. Interesting. Steve, thanks for hanging out with me again. I really appreciate it.
STROGATZ: It’s been a pleasure.
LEVIN: We’ll see you at the next episode.
STROGATZ: I can’t wait.
LEVIN: Bye.
STROGATZ: Bye bye.
[Music plays]
STROGATZ: Thanks for listening. If you’re enjoying The Joy of Why and you’re not already subscribed, hit the subscribe or follow button where you’re listening. You can also leave a review for the show. It helps people find this podcast. Find articles, newsletters, videos, and more at quantamagazine.org.
LEVIN: The Joy of Why is a podcast from Quanta Magazine, an editorially independent publication supported by the Simons Foundation. Funding decisions by the Simons Foundation have no influence on the selection of topics, guests, or other editorial decisions in this podcast or in Quanta Magazine.
The Joy of Why is produced by PRX Productions. The production team is Caitlin Faulds, Livia Brock, Genevieve Sponsler, and Merritt Jacob. The executive producer of PRX Productions is Jocelyn Gonzalez. Edwin Ochoa is our project manager.
From Quanta Magazine, Simon Frantz and Samir Patel provide editorial guidance with support from Matt Carlstrom, Samuel Velasco, Simone Barr, and Michael Kanyongolo. Samir Patel is Quanta’s editor in chief.
Our theme music is from APM Music. The episode art is by Peter Greenwood, and our logo is by Jaki King and Kristina Armitage. Special thanks to the Columbia Journalism School and the Cornell Broadcast Studios. I’m your host, Janna Levin.
If you have any questions or comments for us, please email us at [email protected]. Thanks for listening.