Q&A

The AI Pioneer With Provocative Plans for Humanity

While some fret about technology’s social impacts, Raj Reddy still believes in the power of artificial intelligence to improve lives.

Raj Reddy is one of the founders of what we now call artificial intelligence. He still feels optimistic about its power to help people.

Elliott Cramer for Quanta Magazine

Introduction

Before he became a decorated pioneer of artificial intelligence, Raj Reddy grew up far from a computer lab. As a child in the 1940s, he lived in rural Katur, Andhra Pradesh, India. His one-room schoolhouse had no paper or pencils, so he learned to write letters in a plot of sand. On hot nights in a home with no electricity or running water, he and six siblings cooled off by dragging their mattresses outside.

“The sky was beautifully clear, and I could see all the stars,” said Reddy, who smiles easily. “People have asked, ‘Oh my God, were you that poor?’ But I never felt deprived at all.” On the advice of an astrologer, his father sent him to college, with tuition paid by his uncle. Reddy bought his first pair of shoes for the occasion.

Reddy’s first encounter with a computer came later, at the University of New South Wales in Australia, when he was pursuing his master’s degree in civil engineering. Immediately he put it to work solving integration problems, amazing a classmate in the process. “If you’re willing to let your mind wander,” he told the classmate, “you can come up with a solution.”

Reddy soon got a job at IBM, where he read a paper by John McCarthy, the computer scientist who coined the term “artificial intelligence.” It changed the trajectory of his life. “That’s what I want to work on,” Reddy thought to himself. In 1963, he started as McCarthy’s doctoral student in the newly formed Stanford Artificial Intelligence Lab. Reddy’s early research on computer speech recognition, human-computer interactions, and robotics — depicted in an early, homegrown documentary — launched a lifetime of revolutionary work in AI.


Reddy grew up in poverty in colonial India. He rose to become a professor at Carnegie Mellon University in 1969, a job he’s held ever since.

Elliott Cramer for Quanta Magazine

After earning the first doctorate in computer science at Stanford, Reddy joined the faculty of Carnegie Mellon University and later served as founding director of the institution’s Robotics Institute, informally known as the “Raj Mahal.” He eventually received the Turing Award for pioneering the design and construction of artificial intelligence systems and demonstrating the practical importance and potential commercial impact of AI technology.

In addition to more than 50 years — and counting — of research at Carnegie Mellon, Reddy has long been a vocal advocate of technology in service of society. In his home state in India, he helped found the Rajiv Gandhi University of Knowledge Technologies, which serves rural youth. He also often appears on high-profile stages around the world: at the Association for Computing Machinery, he recently spoke about user interfaces for those at the bottom of the economic pyramid, and in Hong Kong this year he spoke about eliminating the literacy divide with AI.

Quanta caught up with Reddy in Germany at the Heidelberg Laureate Forum to discuss AI-enhanced productivity, forced altruism and government-mandated health monitoring. The interview has been condensed and edited for clarity.


Reddy argues that AI can be used benevolently to create a world with greater wealth and more resources, and that it’s our collective responsibility to make sure they’re distributed fairly.

Elliott Cramer for Quanta Magazine

You’ve been in computer science almost since the beginning. What was it like back then, nearly 60 years ago — did anyone understand what you did?

People were asking, “What is computer science?” There was a famous paper in which the authors made up a definition. They said, “Computer science is what we use computers to do.” It was like biologists saying, “We do biology.” [Laughs.]

I proposed a definition and submitted my paper, which was rejected promptly. [Laughs.] But I still think my definition is right. I said that engineering is a field that enhances the physical capabilities of the human being; computer science and AI are fields that enhance our mental capabilities. Anything you do with your mind, you can do faster, better, cheaper using computers. But I was a lowly grad student!

You’ve long highlighted these benefits for humanity, but how could that work practically?

With AI, everyone [could theoretically] do a day’s work in an hour. What will people do with that extra time? One possibility is that everyone will do 10 times more work in a day. If everybody does 10 times more work, then we’ll have a world in which we’ve created 10 times more wealth.

We could target the extra productivity to areas where there’s a major societal need. A lot of countries, cities and villages need food, water and electricity — even today. Can we have people displaced by AI work on manufacturing and installing solar cells to ensure there’s power in every village and every home? We need to set priorities and then say to the tech companies, “We need people in these areas. If you don’t need them, retrain them to work in these areas.”


To avoid future lockdowns, Reddy suggests governments monitor everyone’s health via mandatory smartwatches. “With appropriate actions, we could eliminate pandemic lockdowns,” he said. “Nobody says requiring a driver’s license is authoritarian.”

Elliott Cramer for Quanta Magazine

Tech companies are not usually known for their altruism. How would you make sure humanity is the focus, and not just profits?

[Governments] need to say that, for a period of 10 years, [workers disrupted or displaced by AI] must earn the same amount that they were earning before. Then, gradually, it can go down to 50%. It’s like pandemic-era employment assurance, except there’s a floor.

The companies might say, “Hey, this is a social policy, but I’m a capitalistic company. I can’t do that.” Then tax them more and use the money to repurpose workers in essential tasks. For example, they could teach sustainable agriculture to young people in Mali. Put the onus of repurposing workers to create more wealth on companies. That’s one answer.

There’s currently a lot of wealth in the world, unevenly distributed. How do you keep AI-enhanced productivity from only benefiting the wealthy?

The unequal distribution will always be there. But when you increase productivity and wealth by a factor of 10, there will be more money and opportunities for more people.

When I was growing up in my village, most people did not have shirts. They wore shorts and walked around semi-naked. Why? Because they didn’t have enough money to buy a shirt. Now if I go there, everyone is dressed very nicely. Where did that wealth come from? More money and more opportunity.


Reddy in his office at Carnegie Mellon.

Elliott Cramer for Quanta Magazine

Another criticism of AI today is how much energy it consumes. Isn’t that a problem?

This is a temporary phenomenon. Governments could easily say, “If you’re consuming more than a certain percentage of the national electric power generation, tough luck, you’re not gonna get any more power.” Then the companies would figure out how to optimize. Right now, the same databases are used over and over by everybody. There are many, many ways of looking at how to reduce computation. Deep learning in its full-blown glory works well, but you can’t consume all the power for optimal training. Good enough is good enough.

One specific use of AI you’ve championed is encouraging governments to eliminate future pandemic lockdowns by monitoring the health metrics of “everyone on the planet” via their smartwatches. How would that work?

In the long run, a visionary leader will say, “This is part of the digital infrastructure, fellas. You have to have a watch, and you have to give us access to your watch’s data. Here’s a group of eminent scientists who guarantee your data will be anonymized and not misused.” Anonymization technology exists now to largely protect privacy. Also, data is shared only by opting in.

Aren’t you worried about authoritarian governments taking on that role?

Nobody says requiring a driver’s license is authoritarian. Nobody says requiring a passport to enter a country is restricting global, free movement. Those are the laws. Every country has that. Even in the U.S., there are a set of states saying, “You cannot have an abortion.” It doesn’t matter why. There’s no free choice. That’s an authoritarian rule to me, but these are laws made by a democratic system.

With appropriate actions, we could eliminate pandemic lockdowns, which cause serious disruption to society. We need regulations saying that, if you want to move about or get on a bus or plane during a pandemic, you must have a clean passport on your watch.


“Engineering is a field that enhances the physical capabilities of the human being; computer science and AI are fields that enhance our mental capabilities,” Reddy said. “Anything you do with your mind, you can do faster, better, cheaper using computers.”

Elliott Cramer for Quanta Magazine

Do you expect to see such mandatory health-monitoring smartwatches in your lifetime?

It could happen if enough influential heads of state say, “There’s gonna be a pandemic in 10 to 20 years. We don’t want our people to die. We don’t want to lock you down. We want you to keep working so that the economy doesn’t take a hit. Therefore, this is going to be required.”

I’m planning to meet with political leaders in Qatar and the United Arab Emirates, where I’m convinced they might require everybody to wear a watch in the next pandemic. The cost of the watch could be $100 or less, but an individual would only pay $10 because the rest would be subsidized. Digital infrastructure, including a free smartwatch, should be publicly funded, just like roads, water, hospitals and libraries.

Isn’t this just a bit fantastical? Tech companies and governments have famously misused technology, especially among marginalized populations.

Technology has always caused problems. When cars came in, nobody had a driver’s license. People drove around and killed people. Finally, the government woke up and said, “No, you can’t drive a car without a driver’s license.”

Also, hundreds of years ago, you didn’t need a passport to go from country to country. All of these things evolve. You need permissions of various kinds.

The same thing should happen — will happen — with public health. When it will happen, I don’t know. Pandemics may only happen every 10 years, so people forget.

You’re known for the “full Raj,” in which you wrap an arm around someone when asking for help with ambitious AI goals. Will you deploy that for mandatory health monitoring?

The full Raj — I didn’t even know I did it! But yes, if I need somebody to do something, I would corral them and say, “Hey, let’s do this!”
