There are two distinct questions: Is AI good or bad, or both? And, however we answer that, are its unfettered growth and dominance inevitable?
At this week’s WhoWhatWhy editorial meeting, attention was turned to the swirl of events and repercussions surrounding the firing of OpenAI co-founder and CEO Sam Altman. Editors and support staff discussed the implications of the turmoil and, more generally, the current and prospective role of AI in our culture, its potential upsides and downsides, and how to approach it as journalists. The transcript of our conversation has been lightly edited for clarity.
Jeff: There are two things going on here with the firing of [OpenAI co-founder and CEO] Sam Altman. One, it’s a great business story that we can get into the weeds on or not. But the other part of it, which underlies that, is really a debate about the future of AI, with a group within the board that felt it was moving too fast.
Another group, led by Sam, really thought it needed to move faster and needed to be commercialized faster. Amid all the boardroom negotiations going on, and the horrible way this played out because of how the board handled it — i.e., the business part of it — that debate is really the core issue here, the one that has, I think, a larger significance to the public.
I mean, there’s a limited number of us who are interested in the minutiae of the board battle. But the underlying issue that is relevant to everybody is the speed at which AI moves and the risks it takes.
Gerry: Your two uses of the word “need” sum up the difference. I mean, I’m taking you as the expert on what the board is interested in, and they said it needed to be slower because it was dangerous. Dangerous to humanity. And the other view, that it needed to go faster, was about profit. It had nothing to do with humanity one way or the other. I mean, that’s what Altman wanted. He wanted some profit.
Jeff: I would disagree with that — partly — in that profit is certainly a motivation, and it needed to go faster in his view for that reason. But the development also needed to go faster: There were other companies moving in on it, they would lose first-mover advantage, and there was also the Chinese competition. So the need, as it’s seen by Altman and [OpenAI co-founder Greg] Brockman, et al., is a little larger than just the profit motive.
Russ: If I understand correctly what Jeff is saying, it’s that there’s a competitive consideration that is not necessarily based primarily on a profit motive. In other words, they want to be in the driver’s seat, controllers, and not have China control it, or [have it] get out of hand. There’s another reason [besides profit] they believe that they need to move fast. That’s what I believe Jeff is saying.
Bill: I think what everybody had difficulty understanding is that the board does not answer to the investors. In other words, the investors have no control.
But the other question is, it now looks like OpenAI is being reconstituted inside Microsoft. I mean, the company is basically the people working for it, and they’ve all said they’re going to join Sam Altman at Microsoft. So it’s going to go on over there. I think what we could do, in writing about it, is to look at the long-term implications. In other words, what are the dangers that this thing can represent? And what is anybody going to do about it?
Jeff: Just to add two points to that. Remember, Microsoft is the single largest investor in OpenAI; they have $10 billion committed to it, although they haven’t put all that money in. Thrive Capital is the second largest investor. So there’s a reason for Microsoft’s involvement, and [Microsoft CEO Satya] Nadella was really the broker trying to put this Humpty Dumpty back together over the weekend.
But the other part of it addresses Bill’s last point about looking at the dangers, because that’s the knee-jerk reaction. And I don’t mean that pejoratively, Bill. But the knee-jerk “What are the dangers?” How about: “What are the positive things?” How about: Yes, there are dangers, but there’s also an equal number — and maybe more, in my view — of positive things that will come out of AI, things that will be really fantastic in terms of the culture, in terms of business, in terms of a lot of things? You can’t just start out by saying, “All right, let’s just talk about the dangers.”
Bill: I agree with you completely. But I think that the dangers are enormous. And one of the things that is happening right now in governments around the world — not just the United States or Israel or Ukraine or Russia or China, but also Argentina — is the question: “Can democracy work when the communication system is unreliable?” And what this [the rapid development of AI] threatens to do is to shake up the entire communication system. And you can’t have a democracy if everybody is being fed a different picture of what’s going on and it’s impossible to decide what’s right or what’s wrong.
I mean, [by way of analogy], the BBC just did a piece on lab-grown diamonds. You can now manufacture a diamond that’s indistinguishable from a real diamond, so the entire diamond market is, you know, going through the floor. These things, these disruptions, have a really huge effect on the entire world.
And I think that what we can do is certainly point out the good things, but also some of the things to look out for. In law and in medicine, I think ChatGPT is going to be a tremendous thing, because you can look at an immense amount of data and come up with a rational analysis of it. But it can also cause serious problems.
Jonathan: There are two very distinct stories here, and they’re related, but very distinct. The one story is: “Is AI good or bad, or both?” But I think the more provocative story involves inevitability. That is to say that we know behind all this lurks the profit motive, broadly defined. These are people who want either money or fame or some sort of power. Just group that under profit. And there’s this new toy, and the new toy is a very powerful toy, and it cannot be stopped. I think that is the interesting thing. You fire Sam and 12 hours later he pops up at Microsoft. Everybody goes with him. There’s a power here in technology that is almost impossible to keep in the bottle.
Jeff: You’re absolutely correct. It is like trying to hold back the ocean. You embrace it or you embrace it.
Bill: The Wall Street Journal on Saturday had a 49-minute video of Sam Altman and Mira Murati, [OpenAI’s] Chief Technology Officer [and now Interim CEO], being interviewed, and it’s very clear that they’re interested more in AI [itself] than they are in profit. They have an idea that this is going to change civilization, and they want to see the idea followed through. I don’t think it’s just profit. But I think it’s inevitable that the profit angle will drive this thing in the open market.
Ashley: I tend to agree with one of the more recent points, that it is inevitable, and that there are really great things about it but also potentially dangerous things. But I’m interested in seeing how, in trying to ride the wave of it coming, more can be done around the ethics of developing AI, and how to prevent some of the problems people are concerned about.
Rather than just throwing our hands up and saying, “Oh, well, it’s just coming, and it’s just going to get out of control,” we could take a view of how to be intentional about getting controls in place: ethics boards, governance, ways to grow it responsibly, checkpoints. That way the dangerous things are curtailed, or at least managed, so that we’re not waking up one day in a sci-fi movie, living out some of the more negative expectations.
It’s something that I have been fascinated with as it’s developed, and over the couple of months that I’ve used ChatGPT, I’ve seen it become more sophisticated. I just think it’s about trying to have it grow in a responsible way. It almost reminds me of when corporate social responsibility was really becoming a big thing; I think this is another opportunity for corporate social responsibility to come to the forefront.
Jyoti: There are good things about AI, but we have to use it very cautiously.
Laura: To me, it seems that applications of AI that are built and tailored for specific jobs and industries, to be used by the people doing those jobs, feel safer and more helpful than general-purpose versions that are available for everyone to use. I don’t really have any information to back that feeling up; that’s just my gut feeling.
Russ: That’s a perfectly good gut feeling you have. So let’s make a note of that as a subset of the CSR [corporate social responsibility] idea: the idea of maybe bifurcating the different uses of AI, you know, AI for work versus AI as a more general thing, and seeing where that goes.
Sean: One of the things I wanted to bring up in this conversation about AI, and I think I’ve made this point before about technology, is how we’re using our current society’s value system to make judgments, when we have no idea about the value systems that have made those judgments over the course of a million years.
I’m thinking specifically [as an example] about genetic alterations that are available through technology now. And most doctors and scientists still use the term “junk DNA” because they have no idea what it does. So with AI, those decisions are no longer going to be in the hands of the people, even with the values of the current society. Those [decisions] are going to be given away. And we can’t take that back.
So to Jeff’s point that there are good things: Absolutely, but the risks are so enormous that they undermine the really good things that could happen. If you look at every single technology that’s ever been developed, it’s put us in this same place. We are where we are right now on the planet because of the commercial uses of those technologies. You can shake your head [Jeff was shaking his head], but the reality is climate change and plastic pollution. Plastic is in everything, and we have no idea what it’s going to do to life on the planet in the next few decades, because there’s no way to get it out, and it’s only going to increase.
Now AI is going to exponentially increase the threat. And there doesn’t seem to be that sense of urgency about this issue (Jeff might disagree with this), and it’s enormous.
Jeff: In my opinion, we are risk-taking people, and we need to take those risks. That’s what moves progress and society along.
Bill: An example is this situation in Argentina. It just had a wacko [the far-right libertarian Javier Milei] elected as president, who basically wants to switch the entire economy over to dollars — which will end up being a money laundering thing for drug money coming out of Ecuador. So there are real problems around us that we need to look at. The AI thing is interesting. But it’s really a side issue. I mean, there are immediate threats that we have to deal with.
Jonathan: I want to go back to what Ashley said about the ethics of the development, the development process, the R&D process. And it strikes me that that’s exactly what the [OpenAI] board tried to do. The board tried to tap the brakes and the pushback was instantaneous and fierce.
So where are we going to get this ethics command and control, if just the slightest sort of effort to stop the greasing of the wheels meets this kind of pushback? It doesn’t bode very well for how this is going to be handled, and I think that fits in with the experience we’ve had of regulating all these new toys — whether they’re plastics or certain kinds of engines, or nuclear weapons for that matter.
It’s very dicey and very difficult, and my instinct, my intuitive sense, is that this one is going to be the most difficult of all to get any sort of command and control over.
And then there’s the issue that if somebody, some entity, has command and control, it gives them outsized power, and that also flies in the face of what we’re aspiring to in terms of democracy and the way we govern ourselves. So there are real problems. Whether AI is restricted or unrestricted may just come down to our fate. We may just be at that place on the [evolutionary] growth curve that is approaching the asymptote. After centuries of relative stability, we are accelerating, and that just may be where we are.
But if we’re going to think about it, I think we do have to consider that the problems of not restricting it, and the problems of restricting it such that it gives privileged users power over everyone else, are both very real existential threats.
Gerry: One thing to look at is history, and that’s something we can do and journalists should do. I recommend to anyone who’s thinking about this the latest issue of The New Yorker, the AI issue.
One of the immediate worries about AI is polluting the information stream and making it impossible to tell what’s real and what’s not real. There is a history to that, starting with the invention of photography. I won’t go into the details of it, but it’s interesting. And it’s not quite what you imagine.
And in general, looking to history, the other thing is controlling technology. The famous example is the ozone layer, where the threat was controlled by government intervention [the phaseout of CFCs] in what was a fairly successful business enterprise. But can we do things like that with AI as well?