The Joy of Why

Are Robots About to Level Up?

Today’s AI largely lives in computers, but acting and reacting in the real world — that’s the realm of robots. In this week’s episode, co-host Steven Strogatz talks with pioneering roboticist Daniela Rus about creativity, collaboration, and the unusual forms robots of the future might take.
[Image: A robot screws a lightbulb into a socket held by a human. Credit: Peter Greenwood for Quanta Magazine]

Introduction

Within just a few years, artificial intelligence systems that sometimes seem to display almost human characteristics have gone from science fiction to apps on your phone. But there’s another AI-influenced frontier that is developing rapidly and remains untamed: robotics. Can the technologies that have helped computers get smarter now bring similar improvements to the robots that will work alongside us? In this episode, Daniela Rus, a pioneering roboticist at the Massachusetts Institute of Technology, talks to host Steven Strogatz about the surprising inspirations from biology that may help robots rise to new levels.

Listen on Apple Podcasts, Spotify, TuneIn or your favorite podcasting app, or you can stream it from Quanta.

Transcript

[Theme plays]

STEVE STROGATZ: Artificial intelligence systems and robots seem destined to become ever greater parts of the economy, the sciences, and even the arts. We’re all familiar with the AI doomsday scenarios that populate science fiction, but putting that aside, it’s obvious that there are better and worse ways of incorporating autonomous intelligence systems into our everyday lives.

So, how do we reduce potential friction in dealing with intelligent robots to bring out the best possible future? How can we make robots more collaborative with each other and with us?

I’m Steve Strogatz and this is “The Joy of Why,” a podcast from Quanta Magazine where I take turns at the mic with my cohost, Janna Levin, exploring some of the biggest unanswered questions in math and science today.

In this episode, we’ve got Professor Daniela Rus to talk to us about intelligent robots, how they’re built, and how we might come to value them as essential everyday collaborators.

[Theme fades out]

As a roboticist, computer scientist and director of MIT’s Computer Science and Artificial Intelligence Laboratory, Daniela is at the forefront of the science and engineering of autonomy. She’s a MacArthur Fellow, a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and the co-author of an exciting new book, The Heart and the Chip: Our Bright Future with Robots.

Welcome, Daniela.

DANIELA RUS: Thank you so much, Steve. I’m so happy to be here.

STROGATZ: Well, we are very happy to have you. I am looking forward to learning all kinds of things with one of the world’s great roboticists. So, let us talk first about what robots are, because I’m not even sure I know. What’s the difference between a robot and a machine?

RUS: Well, Steve, robots are programmable mechanical devices that can exert forces. Robots take input about the world through their sensors, reason about that input, and figure out some decisions that impact the world in kind of a physical way.
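
To make that sense-reason-act loop concrete, here is a minimal sketch of a robot control loop in Python. The sensor and motor functions are hypothetical stand-ins, not any real robot's API.

```python
import time

# Hypothetical stand-ins for real sensor and actuator drivers.
def read_distance_sensor():
    return 0.4  # meters to the nearest obstacle (stubbed value)

def set_wheel_speed(v):
    print(f"wheels at {v} m/s")

# The sense -> reason -> act cycle, run at roughly 10 Hz.
for _ in range(3):
    distance = read_distance_sensor()        # sense: take input about the world
    speed = 0.0 if distance < 0.5 else 1.0   # reason: stop if an obstacle is near
    set_wheel_speed(speed)                   # act: exert forces on the world
    time.sleep(0.1)
```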

STROGATZ: So the physical part is important, I guess. We keep hearing these days about bots in connection with ChatGPT, that kind of thing. But it sounds like you wouldn’t count them as robots.

RUS: Well, we could call them software robots, because they still do some activity in the digital world for us. But it’s important to realize that today we have these two extraordinarily impactful technologies. On one hand, we have artificial intelligence, which is amazing us with its decision-making and learning. But AI remains confined inside computers. On the other hand, we have robots. And robots can execute tasks independently and they have a physical presence, but they are not as intelligent, at least by the standard of intelligence we see in AI systems.

And so on one hand, you have extraordinarily powerful machines that can learn from data how to do tasks, but that stay in the digital world. Robots have a physical body, and they can do physical tasks for us.

And an interesting question is how do you put it all together? How do you get the mechanical prowess of robots to work together with the digital intelligence of AI?

STROGATZ: Oh, really clear distinction there. I’m suddenly having images of The Wizard of Oz and the character that says, “If I only had a brain.” It’s like these machines would be so powerful if they really had the kind of brain that AI could provide nowadays, or that future AIs could.

RUS: Yes, robots typically have a body and a brain. The body consists of all the hardware. The chassis, the sensors, the actuators, the computation, the batteries — all of that forms the hardware platform, or the body.

And when we think about robots these days, we often think about robots whose shape is inspired by the human form. So maybe humanoid robots or robot arms, or sometimes we talk about boxes on wheels, like cars.

But all of that is kind of the hardware package that the robot has. And the hardware package is important because the robot will only be able to do what the body is capable of doing. If you want a robot to climb stairs, you can’t have a box on wheels because that’s not going to climb stairs. On the other hand, the hardware platform is not enough, because without the brain, without the algorithms, without the software that gets the body to do tasks, you don’t actually have a capable machine.

So in order to have a physical machine that is capable of doing physical tasks, we need both sides. We need to think about the body and the brain and how they interact with each other and how they interact with people.

STROGATZ: Well, let’s talk about some of these examples now of robots in our everyday lives, just to make sure people are aware of the robots that are in current use. I recently had a trip to San Francisco and got to experience a box on wheels made by a company called Waymo. [laughs] So we were driving around in the city. At first, I was scared to do it, but we drove around with no driver in the car, just this autonomous vehicle, and it was a very smooth drive. I have to say I came away feeling like a believer.

So self-driving cars, we know, are one possible sort of example of what you’re talking about. What are some other examples of robots that are already in use?

RUS: So self-driving cars are definitely an example we have heard about extensively. I would like to say that self-driving cars operate in very special regions and under very special circumstances. They cannot work anywhere, anytime, for many reasons. But, where they are deployed, they can be very effective.

Similar boxes on wheels can deliver packages for you, or they can deliver medical supplies in hospitals. We have a lot of drones in use, and today’s drones already help with various inspection tasks. And there are companies that are using drones for delivering products in hard-to-reach areas. The products could be medical supplies or food or other products that are important for survival. We also have other interesting robots on the ground. So today we have millions of robots that are operating in factories doing everything from assembling computers to packing goods. We have drones that monitor air quality. We even have robots that milk our cows.

STROGATZ: [laughing] Is that right? Wow. Oh, my god. Now, your early work in this area had to do with robot arms, didn’t it?

RUS: Well, in fact, when I joined the Ph.D. program at Cornell — that was a couple of decades ago — I joined because John Hopcroft, my Ph.D. advisor, really inspired me about the possibility of computation that interacts with the physical world. Back in those days, computation was something you executed inside computer boxes and in the digital space.

The physical world, on the other hand, is so much more complex. And when I went to Cornell, the one robot we had was a huge industrial arm. It moved with much greater difficulty than what we see in today’s industrial robots. So we have come a very long way from those days.

But I remember, at one point, a group of us wanted to get this industrial arm to cut a cake for one of our professors’ birthdays. We spent all night writing the software. And the next day we invited everyone to come and see the robot cutting a cake. But we didn’t coordinate well. We programmed the robot for a soft, round sponge cake, and the person who was tasked to pick up the cake showed up with a square, hard ice cream cake. And the robot couldn’t cut it. The robot flailed wildly and nearly destroyed the cake. However, our professor calmly pushed the red “stop” button and said, “Yes, that was a control singularity,” which is a technical term in robotics.

But it’s actually an interesting experience to talk about because I learned then that the physical world with its physics laws and imprecisions is a much more demanding world than the digital world. And as you think about using robots in the physical world, there’s so much about the interactive aspect of robots in the world and robots and people that we need to get right. And, by the way, since then we’ve made so much progress. And today we have no trouble programming our robotic arms to cut cakes, whether they’re round or square.

STROGATZ: [laughing] That’s great. That’s such a vivid example. I really appreciate that story. It just shows how far we have come. The robots nowadays are so much more agile and flexible than what we had over 30 years ago.

RUS: We have come a very long way. We have improved our hardware. We have made so much progress on the algorithmic and software side of machines.

On the hardware side, it used to be that we built robots out of hard metals and hard plastics. And that lent itself to a certain type of machine. But in the recent past, we have begun to build robots out of a wide range of materials. We can build robots out of anything. In my lab, we have built robots out of silicone, out of paper, even out of food.

STROGATZ: For what reason? Why would you make a robot out of food?

RUS: Well, we wanted to build ingestible robots that could perform surgical procedures without incisions, without pain and without the risk of infection. And when this project started, my students did something really funny that convinced me how important this task was.

So they told me that every year in America, over 3,500 people accidentally swallow button batteries. And button batteries are very damaging to tissue: within half an hour, they become completely embedded in it. So, in fact, the students bought a steak and placed a button battery on it for me to see how, within half an hour, the battery gets completely enclosed in tissue.

STROGATZ: Just to ask, a button battery is what I think of as a little lithium battery that’s disc-shaped? That’s what you’re talking about?

RUS: Yes, the kind of battery you might find in your watch or in toys.

STROGATZ: OK.

RUS: And so that actually gave us the idea of building what we called the origami robot. This is a robot that’s made out of sausage casing. It’s kind of an accordion-shaped robot that is the size of a pill. The robot gets compressed to pill size. It gets placed in a pill mold that then gets some water added to it. The water gets frozen, and so you swallow this robot in the form of a frozen ice pill. The robot travels through the stomach, the ice melts, the robot deploys, and then we can control the location of the robot using a tiny magnet that is embedded in the robot.

And so we can direct this magnet to the button battery. We can pull it out of the tissue, and we can guide it down the digestive system. In this way, you can remove button batteries and other foreign objects that people accidentally ingest without pain, without incision, without the risk of infection. And you can do so much more.

You can also imagine imbuing the body of the robot with medicine. And what you can do is send the robots to the location of the wound, perhaps where the button battery was, and patch the wound or you can deliver medicine in a very precise way.

And so we wanted these robots to be made out of food because, if they stay in your stomach, it’s important that they are not toxic to the body. In fact, they could just sit there and get digested, and by the time the robot is digested, your wound is healed. At least that’s the vision.

STROGATZ: So, this vision that you’re articulating, is this something you’ve tested by having animals swallow your food robot? Or is it still just a theoretical idea, or have you actually tried it on a person?

RUS: Well, we need special approvals to do all that. And we are at a point in our research where we have developed the prototypes and we have tested them in simulated stomachs. So we have built the magnetic control infrastructure, which is very important because the robots themselves don’t have actuators, so you need to apply external actuation through the magnetic field.

And so we demonstrated that the robot structure is possible, that the control is possible. And now we’re in the process of getting approvals to do in vivo tests and, hopefully, in the future to do human tests.

STROGATZ: Wow. Now, OK, let me ask you about your process because I’m sure some of our listeners are thinking, “This is a very imaginative person.” Did you have this idea? Was this one of your grad students? What is the process of coming up with what seem like very unusual creative ideas?

RUS: Well, for me, building robots is about a few different aspects. It’s about designing the body. It’s about building the control system and the learning and reasoning system. And it’s also about the capabilities. And, to me, the most exciting projects are the ones that open new possibilities, that are doing things for the very first time, that don’t exist yet.

So in order to get there, you really have to ask big and bold questions. When my students arrive at MIT, I ask them to think about what it is that they want to be able to tell the world after a few years in the Ph.D. program. And we go through this exercise where I pretend to be a genie that grants research results. And I ask them to ask the genie for the results that they want to write about and talk about at the end of their Ph.D. And then I tell them, “Not bold enough, not big enough.” And so we gradually get to some big, crazy ideas that we then end up working on together.

STROGATZ: Very, very interesting. All right, well, we’re going to take a short break and we’ll be right back.

[Break insertion for ads]

STROGATZ: Welcome back to “The Joy of Why.”

We’ve been talking about the form that these futuristic robots might take, but let’s start talking a little more about the brain. I’m vaguely familiar with a class of machine learning models called liquid neural networks that I know you have worked on, and I’m not exactly sure what they are. Can you tell us first, what are these? What’s liquid about them?

RUS: Let me observe the following. First of all, if you want machine learning to run on physical machines like robots and sensors and other edge devices, we really need to innovate in the space of machine learning, because today’s models are too big.

For machine learning to run on a robot, the machine learning model has to run on a computer that can fit on the body of the robot. So we need to think about a way of building machine learning models that are capable and small.

Additionally, today’s machine learning makes mistakes. And the mistakes can be costly, especially in the case of safety-critical systems. For instance, if you have a robotic perception system that uses the latest computer vision algorithms, and these computer vision algorithms operate with, let’s say, 92% accuracy, that still means the system is going to make mistakes 8% of the time. And those mistakes are problematic.

And so inspired by these challenges, we developed a new approach to machine learning we call liquid networks. And liquid networks result in solutions that are much more compact and explainable than today’s traditional AI systems.

And by the way, we went about this research with inspiration from a worm called C. elegans [Caenorhabditis elegans], which has inspired a lot of work in neuroscience. Now, in stark contrast with the billions of neurons in the human brain, C. elegans has a very happy life on only 302 neurons. And biologists understand the math behind the computation of each of these neurons.

And so we developed a new machine learning model we call liquid networks, and we made essentially two big changes over what traditional AI models have. We changed the computation of the neuron, and we changed how the neurons are wired.

So let me tell you about the computation of the neuron. In a traditional machine learning system, the kind we see in many applications today, the artificial neuron takes in some numbers, adds them up, applies some basic math, and then passes along the result, which is 0 or 1. So the neuron is a kind of on-and-off system. And this gets complicated because this kind of computation is happening across thousands or even hundreds of thousands of computational units, depending on the size of the model.
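
As a rough illustration of that description, here is a classic threshold neuron in Python: a weighted sum of inputs followed by an on-or-off step. Modern networks usually use smoother activations, but the on/off version matches the simple picture above; the weights here are made up for the example.

```python
# A minimal sketch of the classic artificial neuron: weighted sum of
# inputs plus a bias, then a threshold that turns the neuron "on" (1)
# or "off" (0). All numbers are illustrative, not trained values.
def step_neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

print(step_neuron([0.5, -1.2, 3.0], [0.8, 0.1, 0.4], bias=-0.5))  # -> 1
```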

Now, in liquid networks, we have fewer neurons, but each neuron executes a much more complex operation. In fact, we model the neural computation and the artificial synapse using differential equations that are inspired by the math that biologists determined for the computation of the neuron in C. elegans. And by picking this math model and by changing the wiring between the neurons in order to allow for more effective information flow, we get extraordinary results.
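
For a flavor of what a neuron governed by a differential equation means, here is a toy sketch loosely in the spirit of the group’s published liquid time-constant models, where the neuron state x obeys something like dx/dt = −x/τ + f(x, I)(A − x). The specific nonlinearity and constants below are illustrative assumptions, not the trained values of any real network.

```python
import math

def f(x, i, w=1.0, b=0.0):
    # stand-in for the learned synaptic nonlinearity
    return 1.0 / (1.0 + math.exp(-(w * i + b - x)))

def liquid_step(x, i, tau=1.0, A=1.0, dt=0.01):
    # one explicit-Euler step of dx/dt = -x/tau + f(x, i) * (A - x)
    dxdt = -x / tau + f(x, i) * (A - x)
    return x + dt * dxdt

x = 0.0
for _ in range(1000):
    x = liquid_step(x, i=0.5)  # constant input: the state relaxes smoothly
print(round(x, 3))             # a bounded steady state, not a blow-up
```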

In traditional AI models, the model is frozen after training. So this means that it doesn’t continue to learn and adapt when you deploy it in an application in the physical world. You have to wait for the next release if you want any changes.

STROGATZ: Yeah, yeah. Now I see where you’re going with frozen to liquid.

RUS: Because of the way the neurons of our model are organized and because of what is happening inside the neuron, liquid networks continue to adapt after training, based on the inputs that they see. And this is a very important part of intelligence and of learning. Also, in liquid networks, we have loop structures that support reinforcement learning and offer significant advantages over the more static and statistical structures that we use in traditional AI models.

STROGATZ: That’s a really interesting point, that last one. I mean, I personally have been interested in the neural network of C. elegans because in my paper with my student Duncan Watts, when we were looking for examples of small worlds in the real world, we knew that the neural network of C. elegans had been fully mapped and it has, as you say, lots of these loop structures. It’s not just a feed-forward network.

RUS: Exactly.

STROGATZ: And so it always seems like anyone who has any bio background would think, “How absurd that so much of today’s AI is based on feed-forward architecture. How smart can you be if all you can do is feed-forward and never have feedback loops?”

But, of course, it’s very hard to analyze systems with feedback loops. So what is your trick? You somehow can deal with all that complication?

RUS: It’s a very interesting question, and I have multiple answers to this question. First of all, it turns out that the equations that we use have approximate closed-form solutions that are bounded. And so we can replace the differential equation with a closed-form equation that behaves beautifully. And so that’s one advantage.
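
To give a feel for why bounded closed forms are plausible, here is a simplified special case, not the exact construction in the published closed-form papers: if the synaptic nonlinearity is momentarily frozen at a constant value $\bar{f} \ge 0$, the liquid equation becomes linear and solves exactly:

$$\frac{dx}{dt} = -\frac{x}{\tau} + \bar{f}\,(A - x) \quad\Longrightarrow\quad x(t) = x_\infty + \bigl(x(0) - x_\infty\bigr)\,e^{-(1/\tau + \bar{f})\,t}, \qquad x_\infty = \frac{\bar{f}A}{1/\tau + \bar{f}}.$$

The solution decays smoothly toward a fixed point rather than blowing up, which is the kind of well-behaved, bounded dynamics Rus describes.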

Also, because of the way we have structured the network, we can apply dynamical systems theory and control theory to analyzing the network. And finally, because these models are so much smaller, we can extract decision trees that will tell you how the network reaches its decisions.
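
As a sketch of the decision-tree idea in its most generic form (this is the standard distillation recipe, not the lab’s actual pipeline): probe the trained model with inputs, record its answers, and fit a small tree that mimics them.

```python
# Generic "distillation" sketch: query a trained model, then fit an
# interpretable decision tree to its decisions. The network here is a
# hypothetical stand-in for a trained (liquid) model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))  # probe inputs

def network_predict(X):
    # stand-in for the trained network's decisions
    return (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, network_predict(X))
print(export_text(tree, feature_names=["s0", "s1", "s2", "s3"]))
```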

We haven’t finished the work on liquid networks. In fact, we started working on this topic only a few years ago, and there’s so much more to be done. But it’s very, very exciting, and I’m very happy that we have access to analysis tools from a wide range of areas of math and of robotics to apply to these new models.

STROGATZ: So one thing that really caught my attention that you said a few minutes ago because it feels like it’s a holy grail in this world. You said that your liquid networks not only were “compact,” but “more explainable.” Did I hear you right?

RUS: Yes. Explainable in the sense that we can understand the role of the individual neurons and we can understand the decision-making process in the form of a decision tree. You see, if you have a traditional AI system that consists of, let’s say, tens of thousands of artificial neurons, it is usually impossible to correlate their activity with the overall behavior of the model.

For example, we have applied liquid networks to a learning-based solution for driving a car. We give this car examples of how humans drive. So we give examples of road scenarios, and we train both traditional AI and liquid networks to teach the car how to steer. The traditional solution includes on the order of tens of thousands of artificial neurons and half a million parameters.

And you can try to visualize the decision-making engine, but it’s really impossible to correlate what happens inside that decision-making engine with the behavior of the vehicle. The liquid network solution has only 19 liquid neurons and a couple of thousand parameters. So, orders of magnitude fewer.

And since you only have 19 neurons, you can actually understand the role that each of these neurons plays with respect to different behaviors of the vehicle. And then you can map their decision-making flow into a decision tree, so that you can understand better what happens to the system as a whole.

STROGATZ: I just want to underline all of this for our listeners. This is such an important problem in AI and machine learning today. This question of explainability — some people call it interpretability — is so important because if we’re going to trust these AIs to do important things like drive us around without getting us killed or injured, or do surgery on us, or whatever, you want to make sure that they know what they’re doing and that you can trust them. And so often you hear these models described as “black box” because, I mean, in some cases, they can have billions or even trillions of parameters, I think. I don’t think that’s an exaggeration, right? Some of the large language models are now being measured in hundreds of billions of parameters, if not more.

So we don’t really know what’s going on under the hood. Whereas with these liquid networks that you’re describing, it sounds like they’re much more reassuring. We can understand them to a much greater extent.

RUS: Exactly, and this is a very important topic for what I call physical intelligence, for intelligence that’s associated with machines that operate in the physical world. And in my lab, we are looking at multiple ways of addressing this problem. So one is with these compact models, you can analyze and describe with decision trees.

Another way in which we are trying to ensure the safety of learning-based solutions is what we call “barrier net.” So in barrier net, we take a model, any kind of model. It could be a big, huge one. It could be a liquid network. And we’re able to add one last layer to the model that forces the output of that model into a safe region, and the safe region is determined according to the task.

Now, we still don’t understand how the network reached the decision, but at least we know that it’s a safe decision according to task specifications. So these are some ways in which we’re thinking about making machine learning safe for the physical world.
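
The simplest possible stand-in for that last-layer idea is a projection that clamps whatever the network outputs into a region declared safe for the task. Real barrier nets are built on control barrier functions; the limits below are made-up numbers for illustration.

```python
# A minimal sketch of a safety layer: force raw network outputs into a
# task-specific safe set. This simple clamp is only a stand-in for the
# control-barrier-function machinery of an actual barrier net.
def safety_layer(steering, accel,
                 steer_limit=0.3, accel_range=(-2.0, 1.0)):
    safe_steer = max(-steer_limit, min(steer_limit, steering))
    safe_accel = max(accel_range[0], min(accel_range[1], accel))
    return safe_steer, safe_accel

raw = (0.9, -5.0)          # hypothetical raw network output
print(safety_layer(*raw))  # -> (0.3, -2.0): projected into the safe set
```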

STROGATZ: Excellent. I’d like to focus this last segment on collaboration: the general area of collaborative robots, networks of robots, that kind of thing. So why is it that their performance and capability become so dramatically enhanced when they work as a team?

RUS: So, Steve, think of it this way. A single robot is like a single musical instrument. A group of robots is like an orchestra that can deliver so many more capabilities. For example, the robots in the network can share data and they can make decisions collectively. The robots in the network can share each other’s “aha!” moments and upgrades.

So let’s say a robot has just learned something new. By using network communication, that fact can be shared broadly across all the robots. And so this bypasses the slower individual learning process. You can get backup bodies, and you can get solutions for tasks that a single robot alone cannot do.

If the robot has to carry a heavy box, maybe a single robot cannot lift the box. But multiple robots together can do so much more. So there are so many exciting opportunities for using this notion of a group of autonomous robots — in search and rescue, in logistics, in many areas where division of labor allows for quicker and more efficient solutions.
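
Here is a toy sketch of that broadcast idea: one robot learns a skill and shares it over the network, so its peers skip the slow individual learning. The class and the dictionary-based skill store are purely illustrative, not any particular robotics framework.

```python
# Toy model of robots sharing an "aha!" moment: when one robot learns
# a skill, it broadcasts the upgrade to every robot in the fleet.
class Robot:
    def __init__(self, name, fleet):
        self.name, self.skills = name, {}
        fleet.append(self)
        self.fleet = fleet

    def learn(self, skill, how):
        self.skills[skill] = how
        for peer in self.fleet:              # broadcast the upgrade
            peer.skills.setdefault(skill, how)

fleet = []
a, b = Robot("a", fleet), Robot("b", fleet)
a.learn("open_door", "push handle down, then pull")
print(b.skills["open_door"])  # b never practiced, but already knows
```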

STROGATZ: While you were just describing that, about some object that was too heavy for an individual robot but maybe a group could deal with it, I suddenly had this vision of what’s happening in my kitchen as we’re speaking. Because we have ants right now that are crawling around, engaging in a lot of collective behavior. Very impressive, although not welcome, I have to say. Am I on the right track here thinking about, like, micro-robots that are insect-like?

RUS: Well, many people in robotics are using this kind of bio-inspiration, from ants and from the natural world, from insects, from fish, to develop brains for these networked collections of robots. And we can build robots that work together at all scales.

In fact, the car you drive today was probably assembled by a group of networked robots operating on an assembly line. So you can think about this task at any scale you want.

STROGATZ: So we’ve been talking mostly about the optimistic side of things here. Is there anything on the scarier side, or things we should be worried about, or as a society we need to address before these kinds of systems become widespread?

RUS: Well, look, I personally feel very optimistic from the point of view of developing technologies that support people with cognitive and physical tasks. So I imagine a world in which bespoke machines assist people in both basic and complex situations. And this is a future that is starting to mature in my lab, in the labs of my friends and colleagues from around the world, and in many forward-thinking companies.

Robots have already done so much for us. Robots have taken us to the bottom of the ocean, and through them we have learned so much about remote worlds like Mars. We have robots that are mowing our lawns. We have robots that help with rehabilitation and help with agricultural tasks. There are so many extraordinary ways in which robots, and also AI, can help us.

But the fact is that all the superpowers that empower us to use machines for good — and by machines I include both robots and AI — these superpowers also empower the supervillains. And so as we move forward, it’s important to remember that. And it’s important to consider the scenarios in which these machines can introduce disruptions and ethical dilemmas for us as a society.

How do we ensure that these extraordinary technologies we’re developing can be used for good and not for bad? And what kinds of incentives, laws and regulations do we introduce into the way society operates in order to make sure that the supervillains are not empowered?

STROGATZ: Certainly can’t argue with any of that. But there’s also a more mundane side that I know a lot of people, I think, are rightfully concerned about, aside from the supervillains. There’s just the impact on the economy. How many jobs would go to robots that are currently being done by people? What would you say about people who are worried about that? I mean, what are we supposed to do with ourselves when the robots can do everything we can do?

RUS: Oh, it’s going to be a long time until that happens, Steve. But the way I see the utility of robots is for very simple tasks that are too repetitive and that don’t make our jobs rewarding.

I visited a warehouse a few months ago, and I saw this really wonderful man who spent eight hours a day essentially taking a box from here, scanning it and placing it there. You can imagine doing that for eight hours a day is not good for your body. And it’s also the kind of repetitive, structured physical task that doesn’t give you joy, but at the same time can be done by machines.

And so the way I think about robots and AI in jobs is as automating tasks rather than professions, so that the low-level routine tasks can be done by machines, and we could spend time on interacting with each other, on applying expertise and on more critical thinking and strategic tasks.

But you’re absolutely right that the automation will come in and will have its impact. We should all worry about that and think about how to chart the path forward to make sure that the benefits of technology are good for all.

One thing I will tell you is that it is much easier for us to anticipate what goes away than to imagine what will come. In the year 2000, nobody anticipated social networking, cloud computing, social media. These are important aspects of the economy right now that were not being talked about seven years before they came online. And so now these areas of the economy are employing so many people and not just the geeks. There are jobs for designers, for artists, for content producers, for people who don’t necessarily find it exciting to learn about differential equations and how they can be used to control machines.

So I’m really optimistic from this point of view, but I do have to say that we need to approach this way forward with empathy and with care because during any transition period there are painful aspects of the transition. And so these can be addressed with more educational programs and skilling and reskilling programs in every industry and in every organization, in every company.

It used to be that people would go to school and study and study and study for a while, and then stop and say, “OK, now I’m going to enter the workforce.” But moving into the future, it is very important to be more flexible and to rethink the study-and-work cycle. I think it’s going to change from a sequential approach to studying and working to a more parallel approach, where we will have lifelong learning and working. And we will kind of blend our work with our learning. And this kind of approach will ensure that everyone is prepared, no matter what tools get introduced in the workspace.

STROGATZ: All right. Well, we’ve been speaking with visionary roboticist Daniela Rus. It’s been very, very interesting to talk to you and I thank you for joining us today.

RUS: Thank you so much, Steve.

[Theme plays]

STROGATZ: Thanks for listening. If you’re enjoying “The Joy of Why” and you’re not already subscribed, hit the subscribe or follow button where you’re listening. You can also leave a review for the show — it helps people find this podcast.

[Theme continues]

“The Joy of Why” is a podcast from Quanta Magazine, an editorially independent publication supported by the Simons Foundation. Funding decisions by the Simons Foundation have no influence on the selection of topics, guests or other editorial decisions in this podcast or in Quanta Magazine.

“The Joy of Why” is produced by PRX Productions. The production team is Caitlin Faulds, Livia Brock, Genevieve Sponsler, and Merritt Jacob. The executive producer of PRX Productions is Jocelyn Gonzales. Morgan Church and Edwin Ochoa provided additional assistance. From Quanta Magazine, John Rennie and Thomas Lin provided editorial guidance, with support from Matt Carlstrom, Samuel Velasco, Arleen Santana and Megan Willcoxon. Samir Patel is Quanta’s editor in chief.

Our theme music is from APM Music. Julian Lin came up with the podcast name. The episode art is by Peter Greenwood, and our logo is by Jacki King and Kristina Armitage. Special thanks to the Columbia Journalism School and Bert Odom-Reed at the Cornell Broadcast Studios.

I’m your host, Steve Strogatz. If you have any questions or comments for us, please email us at [email protected]. Thanks for listening.

[Theme ends]
