Is 2020 the Year That AI Finally Exerts Its Promised Power?

SEER is a compact humanoid robot developed through intensive research into the gaze and facial expressions of human beings. Photo credit: Ars Electronica / Flickr (CC BY-NC-ND 2.0).

Artificial intelligence is all the rage nowadays, but it actually has been with us for decades. A leading expert wonders if this is the year when it will change the world.


Every year for the past six years, the New York Times has held a major international conference on artificial intelligence (AI). Not climate change, or healthcare, or politics, but AI. China has made it a central focus of its national agenda. Yet US leaders have paid very little attention to what AI is or how it works. (Have you heard any of the candidates for president, other than Andrew Yang, devote a major speech to it?)

To kick off the new year, we talk with Stuart Russell, one of the world’s leading experts in the field of AI, who argues that the march toward superhuman intelligence is unstoppable. 

But will its success be our undoing? 

Moreover, he asks, what does success mean? And, if we “succeed,” what new problems does it portend that we need to think about solving? As we continue to lose faith in people and institutions, will machines take up the slack? 

While AI will certainly be the dominant technology of the future, does it mean that we really are introducing a second intelligent species to Earth? And how are we ever going to maintain power over something that is more powerful than we are? 


Full Text Transcript:

As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to time constraints, we are not always able to proofread them as closely as we would like. Should you spot any errors, we’d be grateful if you would notify us.

Jeff Schechtman: Welcome to the WhoWhatWhy Podcast. I’m your host Jeff Schechtman.

Every year for the past five years, the New York Times holds an international conference on artificial intelligence. Not climate change or politics or healthcare, but AI. The feeling is that this is really what can and will change the world. When HAL asked astronaut David Bowman to open the pod bay doors, it was as if our most primal fear of machines came rushing headlong into the 20th century.

Jeff Schechtman: We see on display every day our reliance on AI, on algorithms. We saw it at its worst in the 737 Max, and many see its promise in soon-to-arrive self-driving cars. It’s the full blossoming of the long-promised, brave new world of technology, but is there anything we should or could do about it? Is it out of control, or do we just need to lie back and surrender?

Jeff Schechtman: What are the consequences of creating machines that are smarter than we are, and at what point will we lose control of the world? We’re joined today on the WhoWhatWhy Podcast by one of the world’s experts, Stuart Russell. Russell is a Professor of Computer Science at the University of California, Berkeley. He served as Vice Chair of the World Economic Forum’s Council on AI and Robotics, and he is an advisor to the United Nations on arms control. He’s a fellow of the American Association for Artificial Intelligence and the author of numerous books and papers, including Human Compatible: Artificial Intelligence and the Problem of Control. And it is my pleasure to welcome Stuart Russell to the WhoWhatWhy Podcast. Stuart, thanks so much for joining us.

Stuart Russell: Nice to be here.

Jeff Schechtman: Talk a little bit about the degree to which those that are actively engaged in working in the field of artificial intelligence today are thinking beyond the experiments, thinking beyond the technology, and really thinking about the consequences of extreme artificial intelligence.

Stuart Russell: Well, that’s a great question. We’ve been doing AI officially for 63 years now; the field’s birthdate was 1956. And for most of that time it’s been a real struggle. We were able to solve fairly simple problems, word puzzles, board games and so on, and things like self-driving cars seemed to be a distant dream. And now some of these dreams are becoming reality. We have systems that understand our speech very well, so we can talk to our phones, we can talk to our cars.

Stuart Russell: We have systems that can translate between different languages. I have a flat in Paris, so I have a lot of French tax documents that I have to deal with and it’s very helpful to be able to translate those into English so that I can’t understand them in English either. So things are moving very fast, and that’s causing a lot of people to actually sort of look up from their desk, so to speak, and look ahead to the future, and anticipate what happens if we actually succeed.

Stuart Russell: Our goal in the field for the last 63 years has been to create human-level or superhuman intelligence in the general sense, not just something that’s good at one particular thing, like playing chess or driving a car, but something that’s general in the same way that human intelligence is. That’s the dream. And the answer to the question, what if we succeed, simply hasn’t been found. What is the nature of the problem that seems to arise if we do succeed, and then can we solve that problem? So this is starting to be something that the field is paying attention to, but there are still a lot of defensive reactions. It’s not surprising, perhaps. If you went to, let’s say, someone working in cell biology and you said, “You know, what you are working on could be the destruction of the human race.”

Stuart Russell: They’re likely, first of all, to tell you to go away, and then tell you that you have no idea what you’re talking about. And then, once you’ve convinced them that you know what you’re talking about, they’ll give you some technical explanation of why cell biology couldn’t possibly be a threat. And then when you explain more about disease organisms and the sort of gain-of-function research that people are doing to see if you can make these disease organisms even more virulent and even more dangerous and produce pandemics, well, then maybe they might pay attention. So AI is still going through that multistep process, where people first of all are in denial, and then they’ll say, “Okay, well maybe you have a point,” and then they actually start to take the point seriously.

Jeff Schechtman: And how much of the concern among lay people, and even within the realm of science, is driven by a kind of popular imagination and the way this has been thought about and talked about in that arena?

Stuart Russell: So in the media and in the movies, the threat that’s always described, and that’s always accompanied in the newspaper by a picture of a Terminator robot, is that the machines might become conscious and then do things that we don’t like. And they usually seem to decide that they hate human beings and want to get rid of us. And that’s a complete mistake. In fact, if you think about it, some software running on a computer is operating according to the laws of the computer, of the programming language and how that language runs on the computer, whether it’s Python or C++ or any other programming language. And if I write some software and I want to predict what it’s going to do, I analyze the software according to the laws of how the computer works and I predict what it’s going to do.

Stuart Russell: And if I find, based on my prediction, that in fact that software, just like a chess program that formulates a plan to defeat the grandmaster, is going to formulate a plan to take over the world and destroy the human race, that prediction is based on the software, on the rules of C++ or whatever programming language. And if someone then said, “Well, when you run that program, it’s going to become conscious,” is that going to change your prediction? Does that mean that somehow, magically, the rules of the programming language no longer apply and some mystical physics takes over? No, I think this is nonsense. So the fact that the computer may or may not be conscious actually doesn’t change anything about your predictions of whether or not it’s going to take over the world. The taking over the world part just comes from the fact that the machine is competent, not from the fact that it’s conscious.

Stuart Russell: So they’ve got completely the wrong end of the stick. And usually when people within AI want to pooh-pooh these issues of AI being an existential threat to the human race, they will often say, “Well, there’s no chance that the machine is going to become conscious and decide that it wants to take over the world. Therefore, there’s no risk.” So they actually buy into the consciousness myth, rather than looking at the real problem, which is that if we succeed in making machines that are more intelligent than us, then they’re going to be more powerful than us. Intelligence is what gives us power over the planet, over all the other species. So if you make something that’s more intelligent and more powerful than human beings, you face that question: how are you going to maintain power, forever, over something that’s more powerful than you? And so that’s the question that I pose in the book and the question that I try to answer.

Jeff Schechtman: The other part of that question that makes it different, I suppose, from something like cell biology or other sciences, is that it is possible within the design of the AI, within creating the algorithms, to really anticipate what the worst-case scenarios might be, and to build that anticipation into the AI’s design.

Stuart Russell: Yeah. The natural way that we have built AI since the beginning is to say, what do we mean by human intelligence? Okay. By human intelligence, we mean that what you do can be expected to achieve your objectives. So if you want to win a chess game but you play moves that are designed to lose, that’s not very intelligent chess playing. If you want to drive to work, but instead you crash your car into a tree, then that’s not very intelligent driving. So this is a very natural idea, and it goes back thousands of years in philosophy, and then later in economics: the idea of the rational person who behaves in a way that is expected to achieve their objective.

Stuart Russell: And we just took that idea and we translated it directly into the machine. So we said, okay, a machine is intelligent if it acts in a way to achieve its objective. And of course in that design, you have to put the objective in. It doesn’t invent its own objectives. And so the basic engineering approach is that you create machinery that knows how to achieve objectives, and then you put the objective in and off it goes. That’s how we build our chess programs. We basically tell them to checkmate the opponent. The app on your phone that tells you how to get to the airport: you put in the destination, and it figures out how to get there and tells you how to get there. So it’s a very natural way to translate this idea of human intelligence into the machine. The problem is that it’s wrong.
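To make the “standard model” Russell describes concrete, here is a minimal illustrative sketch (an editorial addition, not Russell’s code; the routes, numbers, and the “school zones” preference are all invented). It builds generic optimizing machinery, plugs in a fixed objective, and shows that anything left out of that objective simply never influences the machine’s choice.

    # Minimal sketch of the "standard model": generic optimizer plus a fixed objective.
    # All data here is invented for illustration.
    routes = {
        "downtown": {"minutes": 22, "school_zones": 3},
        "highway":  {"minutes": 25, "school_zones": 0},
        "backroad": {"minutes": 40, "school_zones": 0},
    }

    def best_route(objective):
        """The machinery: pick whichever route maximizes the stated objective."""
        return max(routes, key=lambda name: objective(routes[name]))

    # The objective we plug in: "get there as fast as possible."
    fastest = lambda r: -r["minutes"]
    print(best_route(fastest))  # "downtown": fastest, but it cuts through 3 school zones,
                                # and the optimizer has no reason to care; we never said so.

    # Only preferences we explicitly encode ever matter to the machine.
    fast_and_careful = lambda r: -r["minutes"] - 10 * r["school_zones"]
    print(best_route(fast_and_careful))  # "highway"

The King Midas story in the next answer is the same failure writ large: the objective handed to the optimizer is not the objective that was actually wanted.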

Stuart Russell: It’s fine when the machine is stupid and the machine is in the lab working on toy problems, or it’s just a little app on your cell phone and the only thing it can do is show you the answers to the questions that you ask it. But the thing that we’ve known for thousands of years is that we cannot specify the objective correctly. So the way that things go wrong when we succeed in building human-level or superhuman AI is that we are unable to specify the right objective. Thousands of years ago there was the legend of King Midas, who asked that everything he touched should turn into gold. And he gives this objective to a super-powerful machine, in this case the gods. And the gods grant his wish and they carry it out exactly. And of course his family and his food and his wine and his water all turn to gold, and he dies in misery and starvation.

Stuart Russell: He didn’t get a do-over. In the stories with the genie, the genie gives you three wishes and then carries them out very literally, and your third wish is always, please could you undo the first two wishes? Because I got them wrong. So this is something we’ve known for thousands of years, in pretty much every human culture there are legends like this. And the problem is as the machine gets more powerful, as it goes out of the lab into the real world, as it is already, then you start to get real negative consequences from specifying the objective incorrectly. And with a sufficiently intelligent machine that understands, for example, that it’s not going to be able to carry out the objectives if someone switches it off, then that machine is automatically going to defend itself against any attempts to interfere.

Stuart Russell: Whereas in the lab you can usually just reset it: “Oh, let me fix the objective and, you know, have my second wish.” In the real world, you may not get a second wish, because the system will be more powerful than you and it’s carrying out this objective that turns out to be the wrong one. So that’s the nature of the problem that we face. And then the solution actually is to take a different approach to AI altogether: not to build machines that require us to specify the objective correctly and implant it into the machine, but actually to have the machine understand that we have objectives and preferences about how the future should unfold, but that it doesn’t know what those are. Nonetheless, its obligation, its purpose in life, is to be a benefit to us and to help us realize the future that we want. But it knows that it doesn’t know exactly what that is.

Jeff Schechtman: Isn’t that more frightening in some respects? The fact that we’re relying on the machine to determine its own objective.

Stuart Russell: What we’re doing actually is saying that the objective is the future that we want, and we’re asking the machine to behave sensibly when it doesn’t have complete information about the objective. And what happens actually is almost counterintuitive: the uncertainty the machine has is what makes it behave in a way that’s deferential to humans. For example, if it knows that it doesn’t know what we like and don’t like, then it will allow itself to be switched off, because it doesn’t want to do whatever it is that would cause us to switch it off. It sees the ability of humans to switch it off as actually a good thing, because it prevents negative consequences for humans, and its goal in life is to be a benefit to us. So uncertainty is what gives us that margin of safety that will make sure the machine always allows us to switch it off.

Stuart Russell: And as soon as that uncertainty goes away, once the machine believes that it has the true objective, then it doesn’t need us anymore in some sense. Whatever actions are the right thing to do to achieve that objective are what it’s going to do. And that will typically include disabling its own off switch, so that we can’t switch it off anymore.
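The argument in the last two answers can be put in toy numerical form. The sketch below is an editorial illustration in the spirit of the off-switch point Russell is making, not his code, and every number in it is made up: a machine that is uncertain about how valuable its action really is to us does at least as well, by its own lights, by leaving the off switch in our hands, and that reason disappears exactly when it thinks it knows the objective for certain.

    # Toy illustration (invented numbers): why objective uncertainty makes
    # leaving the off switch alone look like a good idea to the machine.
    import random
    random.seed(0)

    def mean(xs):
        return sum(xs) / len(xs)

    # The machine's belief about U, the real value of its proposed action to the
    # human: probably good (mean +1.0), but possibly quite bad (wide spread).
    belief_about_U = [random.gauss(1.0, 2.0) for _ in range(100_000)]

    # Option A: disable the switch and act regardless. Expected payoff is E[U].
    act_regardless = mean(belief_about_U)

    # Option B: keep the switch and defer. A human who knows their own preferences
    # lets the action proceed when U > 0, and switches the machine off (payoff 0) otherwise.
    allow_off_switch = mean([max(u, 0.0) for u in belief_about_U])

    print(f"act regardless:   {act_regardless:.3f}")
    print(f"allow off switch: {allow_off_switch:.3f}")  # never lower; strictly higher
                                                        # whenever U might be negative.
    # If the machine were certain that U > 0, the two numbers would coincide and the
    # reason to preserve the off switch would vanish, which is the point above about
    # what changes once the machine believes it has the true objective.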

Jeff Schechtman: To what extent does this relate to the way in which these machines will learn, that they have their own learning curve once they’re sophisticated enough?

Stuart Russell: Yep. So in fact this is the way that we build most AI applications these days. Rather than programming explicitly all the good ideas we might have for playing chess, we now build systems that by themselves learn how to evaluate a chess position, which they then use to guide their play. And they learn that entirely by playing against themselves. The systems that are able to recognize pedestrians and stop signs in a self-driving car, again, those are trained by a learning process where we give them examples: okay, this is what a stop sign looks like. We don’t program in a stop-sign recognizer by hand. So the learning process is really taking place in two ways. One is that the machine is just learning more about how the world works, and in some sense the human race has been doing that for thousands of years; that’s why we do science, so that we know more about how the world works.

Stuart Russell: And that’s fine. That leads the machine naturally to be more effective in the decisions that it makes. The second kind of learning is learning about what it is that humans want. And this is really the part that’s been neglected in the field so far, because we’ve assumed that the humans are just going to plug in what it is that they want. And this is the mistake that we’re trying to avoid now. So to learn what human beings want, the evidence for that is basically what we do and what we don’t do. Every choice that humans make is evidence for what we want the future to be like.

Stuart Russell: Now, the problem with that, and I’m sure that listeners will have cottoned on to this problem already, is that sometimes there’s a bit of a gap between what we really want the future to be like and the actions that we take; they’re not always the right ones. For example, back in 1997, Deep Blue was the chess program that defeated the world chess champion Garry Kasparov. Garry Kasparov in those games chose moves that guaranteed that he was going to lose. So if you were trying to learn what it is that Garry Kasparov wants and you looked at his behavior and he’s making this losing move, then naively you would say, “Oh, well, when he made a losing move, I guess he wants to lose.”

Stuart Russell: But actually no, that would be wrong. The right answer is that Garry Kasparov wanted to win. But there’s a gap between his preferences and his actions, and he’s not able to always choose the right action, because chess is a really complicated game. Even he, perhaps the greatest chess player in history, was unable in all cases to choose the right action. And if we have to allow for Garry Kasparov’s failures in decision making, then we have to allow for everyone’s, because everyone is making decisions that are not actually in their own long-term best interest, whether it’s because of a computational failure, it’s simply too difficult to make those decisions, or because your emotions take over and you do something that you regret. This is a ubiquitous property of human behavior. So probably the biggest technical challenge is how machines can sort of reverse engineer human behavior to get an accurate picture of our underlying preferences about what we want the future to be.
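As a rough illustration of what that reverse engineering can look like, the snippet below is an editorial sketch with invented move values, not Russell’s method. It compares two goal hypotheses, wanting to win versus wanting to lose, against a game full of strong moves plus one Kasparov-style blunder. A model that allows for noisy, imperfect play still concludes the player wanted to win; a model that assumes perfect rationality would rule that out the moment it saw the single losing move.

    # Inferring a goal from imperfect behavior (invented values, for illustration).
    import math

    # How good each candidate move is *if the player's goal is to win*;
    # the "wants to lose" hypothesis just flips the sign.
    win_value = {"best": 2.0, "ok": 0.5, "blunder": -3.0}

    def likelihood(move, goal, beta=0.5):
        """P(move | goal) under noisy ("Boltzmann") rationality: better moves are
        more likely, but mistakes still have nonzero probability."""
        sign = 1.0 if goal == "win" else -1.0
        z = sum(math.exp(beta * sign * v) for v in win_value.values())
        return math.exp(beta * sign * win_value[move]) / z

    # A game of mostly strong moves with a single losing blunder.
    observed = ["best"] * 30 + ["ok"] * 5 + ["blunder"]

    posterior = {"win": 1.0, "lose": 1.0}  # uniform prior over the two goals
    for move in observed:
        for goal in posterior:
            posterior[goal] *= likelihood(move, goal)
    total = sum(posterior.values())
    for goal, p in posterior.items():
        print(f"P({goal} | moves) = {p / total:.6f}")
    # "win" ends up with essentially all the probability: the blunder is explained
    # as a lapse rather than as evidence of wanting to lose. A perfect-rationality
    # model would assign the blunder probability zero under "win" and get this wrong.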

Jeff Schechtman: Is there universality at this point in terms of these approaches to AI and the approaches to the dangers that we’ve been talking about? Or is it a more competitive landscape that really has within that competition the potential for danger?

Stuart Russell: So that’s a great question. I think there is beginning to be a consensus, although we’re sort of having to drag many people in the field kicking and screaming towards this consensus, that in fact the current approach to doing AI, this standard model based on a fixed objective, needs to be revised into an approach that actually guarantees safety in the long run. And I think it’s possible that we can achieve technical consensus within the field of AI that this is the way forward. But as you point out, it is a competitive landscape. Many giant corporations see an enormous economic upside, which translates into huge profits for them, in creating general-purpose AI. And many nations now see this as the technology that would give its owner domination over the world. It’s another story why you’d want domination over the world, but let’s assume for the sake of argument that that’s what people are after.

Stuart Russell: And so companies wanting to be the first to develop this, or nations wanting to be the first to develop this, might well want to cut corners and create general-purpose human-level AI without first solving this control problem, sticking to the current model, which I’ve argued is a flawed model. So that’s a danger. Fortunately, many major nations have already understood that this is a danger. And at the highest levels, the United Nations, the Politburo in China and so on, there is awareness of this issue, and people are trying to figure out how to develop some kind of international agreement. Corporations have their own organization, called the Partnership on AI, where they’ve agreed to work on safety and to share all of their work on safety. Because for a rational actor in a corporation or a country, it doesn’t make sense to be the first to create human-level AI and therefore to be the first to wipe out the human race.

Stuart Russell: That’s not what your shareholders want. That’s not what your citizens want. But sometimes people in the heat of competition will forget the risks and want to push ahead to get the benefits. But I think the fact is, and the evidence is already showing, that there are these real risks that have real downsides, and you won’t get the benefits. Just as in the nuclear industry, the safety failures at Chernobyl and Fukushima ended up destroying the nuclear industry. So yes, in some sense they got to build cheaper nuclear power stations and they made more money for a while. But in the end they destroyed their own industry; it was decimated by those events, and for most of the last 40 years people have stopped building nuclear power stations. So we didn’t get the benefits of nuclear energy, because they didn’t pay enough attention to the risks. I believe that wisdom will prevail.

Jeff Schechtman: I guess one of the central points to this is that unlike the nuclear industry, the barriers to entry are arguably substantially lower, which allows for potential rogue development.

Stuart Russell: Yeah, I think that’s a good point. And I point to some unsolved problems where I don’t really have much traction and I don’t have much to offer. One of those problems is certainly deliberate misuse: the rogue actor who literally doesn’t care whether the world is accidentally destroyed, but thinks they have a chance at world domination, and accidentally releases an AI system that we basically lose control of. And then things go south. The other issue is very much in the other direction, where we have nice, safe, powerful AI that’s a benefit to human beings and we overuse it, where we become overly dependent on the machines to run our civilization for us and we kind of forget how to do it ourselves. And you have to remember that the only way our civilization propagates over time is by putting it into the next generation of human beings.

Stuart Russell: And we have, cumulatively, about a trillion years of teaching and learning that has been carried out just to keep our civilization going forward in time. And if we don’t have to do that anymore, if we can manage our civilization by just passing all the knowledge into the machine instead of into the next generation of humans, that’s kind of an irreversible process, and basically a process of enfeeblement. If you’ve seen the movie WALL-E, where everyone is basically on a cruise ship that goes on forever, and they become obese and sort of stupid and totally unable to look after themselves, that’s not a desirable future. But it’s a slippery slope down which we might slide quite quickly as we develop machines that can take over more and more of our civilization.

Jeff Schechtman: Stuart Russell. Stuart, I thank you so much for kicking off the year with us here on the WhoWhatWhy podcast.

Stuart Russell: Thank you, Jeff.

Jeff Schechtman: Thank you. And thank you for listening and for joining us here on radio WhoWhatWhy. I hope you join us next week for another radio WhoWhatWhy podcast. I’m Jeff Schechtman. If you liked this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to whowhatwhy.org/donate.

Related front page panorama photo credit: Adapted by WhoWhatWhy from www.vpnsrus.com / Mike MacKenzie / Flickr (CC BY 2.0).

Author

  • Jeff Schechtman

    Jeff Schechtman's career spans movies, radio stations, and podcasts. After spending twenty-five years in the motion picture industry as a producer and executive, he immersed himself in journalism, radio, and, more recently, the world of podcasts. To date, he has conducted over ten thousand interviews with authors, journalists, and thought leaders. Since March 2015, he has produced almost 500 podcasts for WhoWhatWhy.

