David Kaye, UN special rapporteur on freedom of expression, distils the global challenges of our most consequential free-speech issue.
There are few First Amendment issues more pressing today than how online speech should be governed. It shapes our interpersonal relationships, our views of almost every aspect of society, and of course our politics. Now, absent an easy solution, Congress wants to dive in and claim it actually has a clue.
The internet was supposed to set a million voices free… It didn’t work out quite that way. In this week’s WhoWhatWhy podcast we talk to David Kaye, a UC Irvine law professor and the United Nations’ leading voice on freedom of expression and human rights. He serves as the UN’s Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression.
In our conversation, we examine the balance between free speech and the regulation of the internet and its leading companies, the impact that these companies have on public life, and the question of who should decide who gets censored.
Facebook’s refusal to take down the recent doctored video that made House Speaker Nancy Pelosi’s (D-CA) speech appear slurred shows how social media companies have set their own rules, and how the rest of us have no clue what those rules are.
The goals and standards of these profit-making companies, Kaye says, are going to be almost impossible to reconcile with the wide variety of national and international rules.
Kaye explains why these companies can never have enough people to moderate all their content, and why, contrary to hopes, artificial intelligence is not the answer.
While there has been a lot of talk recently about breaking up these companies, Kaye explains that they may in fact just need to be “broken down,” by which he means brought even closer to their end users. Speech can be self-policed, he says, only if there is a sense of a close and common community between users and the companies.
Full Text Transcript:
As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to time constraints, we are not always able to proofread them as closely as we would like. Should you spot any errors, we’d be grateful if you would notify us.
Jeff Schechtman: | Welcome to the WhoWhatWhy Podcast. I’m your host, Jeff Schechtman.
We’ve all heard the hype, sometimes in promoting a movie or a novel or even a new show or documentary, that it was “ripped from today’s headlines.” Usually, at best, it’s a bit of an exaggeration, but with respect to my conversation today with David Kaye, it is absolutely true. |
Jeff Schechtman: | Think about the real core issues today, the impact of social media, 24/7 bespoke news, live feeds of murder on social media, fake and doctored images pushed as real, even by the president of the United States, the self-feeding and self-perpetuating anger on Twitter, conspiracy theorists being censored and thrown off social media, whistleblowers and leakers feeding information to the press and the public, and the lack of media literacy with which to understand it. With websites catering to every social and political fantasy and fetish, never has the line between images and reality, between speech and action, been so blurred, and never has this problem so profoundly impacted our values, our core beliefs, and the fundamentals of democracy. |
Jeff Schechtman: | We’re going to talk about this today with my guest, David Kaye. David Kaye is the United Nations special rapporteur on the promotion and protection of the right to freedom of opinion and expression. He’s a clinical professor of law at the University of California Irvine and began his legal career with the State Department. His most recent work is Speech Police: The Global Struggle to Govern the Internet. |
Jeff Schechtman: | David Kaye, thanks so much for joining us here on the WhoWhatWhy Podcast. |
David Kaye: | Jeff, thanks for having me and thanks for that wonderful introduction. |
Jeff Schechtman: | Well, thank you. The Internet was supposed to be something that was going to liberate us, that would free us, that would increase the value of democracy, that would be very liberating and democratizing in terms of voices heard. What happened? |
David Kaye: | Well, that Internet still exists. It doesn’t always seem to exist on the surface, but I think that it continues to be the case that people get a lot of value out of online sources. People do get a different kind of debate. They get a democratized debate. Particularly outside the United States, particularly in environments where the State controls the media, sometimes it’s social media that provides the only access to honest information. |
David Kaye: | But as your introduction highlighted, there’s a different kind of Internet out there as well. There’s a lot of fakery, there’s a lot of abuse, there’s a lot of hate speech and propaganda and so forth. Part of that, I think, is because of the democratization of speech online, the cheapness of speech online, and the structure of the Internet today, which has become extremely centralized so that it gives tools both to companies to decide what’s legitimate and what’s not and to governments to decide what they want to censor. |
Jeff Schechtman: | How has that democratization of the Internet … What is the nexus between that and the increase in globalization? How have those two things grown up together in ways that have been both positive and negative with respect to the Internet? |
David Kaye: | Gosh, that’s a really fantastic question, and I haven’t really thought about it in those terms, but I think it’s really useful to think about it in that way. One of the things that I try to explain a little bit, and this is maybe pointed most directly at the American audience, is that part of what has happened over the last 15 years or so has been American companies gaining real dominance over public space around the world. In the United States when we think about social media issues, we often think about, well, these are private companies. They can compete with public media, they can compete with broadcast media and so forth, so they have the responsibility and really the right to do what they want on their platforms. |
David Kaye: | Globalization, though, in a way, gave the opportunity for these companies to expand their reach well beyond the borders of the United States. In many places, these are companies that have, in a sense, displaced the local public square. In a place like Myanmar or in many parts of Europe, it might be the case that these platforms, which have provided access to all sorts of media, really are making the rules for what’s legitimate and what’s not in these public spaces. That is, in part, a function of globalization, the possibility that American companies can have this kind of access to global markets. The truth is that these markets, these countries and individuals in these countries, are pushing back now. They’re not happy being governed by people who are based in northern California making a lot of money off of their public space. |
Jeff Schechtman: | I guess what we have to ask ourselves, and we tend to use these words interchangeably, is whether these are media companies or merely platforms. That seems to be at the core of some of the discussion. |
David Kaye: | Yeah, absolutely. I’ll be honest: I tend to find that discussion leads us into almost theological debates about how we want the platforms to be regulated. I think in some ways, it’s better to step away from the distinctions between whether they are publishers or mere conduits and hosts for information, and think instead about the impact that they have on public life. If we start there and don’t get too wrapped up in the way we typically think of media, then I think we can still get at what we want from social media and how we think government should regulate them. Just as one example, it’s pretty clear that in places around the world, including in the United States, social media companies have a huge impact on what’s considered legitimate speech and what’s considered the boundaries of public debate right now. And yet, when they make those decisions, they’re incredibly opaque to the public and to government. |
David Kaye: | Rather than saying, well, they should be treated like broadcast media, for example, another way of thinking about regulation is this: because of this impact, we need much more clarity about the rules that they’re making, about how they enforce the rules, about the consistency of the application of their rules. I think there’s a lot of room for government to regulate in that space. |
David Kaye: | But having government go in and regulate the content of what’s online, we could maybe imagine that being acceptable in certain countries that adhere to the rule of law, to democratic principles, and to human rights. My experience from seeing this happen in many, many places around the world, though, is that that kind of content regulation very quickly leads to censorship that I think many of your listeners, and many people in democratic societies, would be uncomfortable with. So I would frame it a little differently, rather than that binary, and really try to understand instead what social media is and what it is doing. |
Jeff Schechtman: | Is there a problem, though, that is inherent in the fact that social media itself, that the Internet itself, has an impact and we’re seeing almost every day how profound that impact can be on the very governments that then have the ability and the potential to regulate? It’s a little bit like the uncertainty principle, that when you begin to measure it, it impacts the thing itself. |
David Kaye: | I think that’s right. There’s a kind of feedback loop that you’re describing, and that’s absolutely true. In part, maybe that’s why I think that we all do need to step back and really assess these questions about the impact, and have public debate about how we want these companies to be regulated. And this is where it gets really complicated, certainly for the companies: that conversation should be happening not just at a global level, but in every country where these companies have an impact, and that really means pretty much every country in the world, with a few pretty big exceptions. |
David Kaye: | But those kinds of discussions about what we expect of social media should be taking place, and they should be public, publicly accessible, and open to all sorts of stakeholders: users, governments, corporate actors, advertisers, and experts, in particular on freedom of expression and media. All of those actors should be participating in the debate. It shouldn’t just be about government making decisions. |
Jeff Schechtman: | Of course, the other part of it that goes to the globalization discussion that we were talking about before is that every government is looking at it in a very different way. The impacts that social media and the Internet have are different, depending on what the political environment and political framework happen to be. |
David Kaye: | That’s absolutely true. In the book, I try to distinguish how the debates are unfolding in different environments. To give one example, in the European context there’s quite a bit of debate around issues like disinformation and propaganda and hate speech and terrorist content, and to a large extent, I think that’s a really important set of conversations to be taking place. Some of them are taking place in the European Union, some are taking place in capitals like Berlin and Paris and London, and those are important. |
David Kaye: | One of the problems, though, is that the European discussion, which really tends to focus on the dark side of the Internet, particularly false information and hate speech, sometimes gets taken out of context in places outside of Europe, outside of democratic space, in places like Singapore, Kenya, and many others. What those countries often take from that debate is: oh, the Internet is a dangerous place, so we need to regulate, for example, false information, and not just regulate it but criminalize it. |
David Kaye: | I think you’re exactly right to identify that this debate is unfolding in different ways in different places. One of my concerns is that the nature of the debate in democratic societies is actually having a harmful effect on the ability of human rights advocates, small-d democrats around the world, to influence the debates in really productive ways, because the debate gets framed around the dark issues and not around how we preserve the good stuff that the Internet was built to protect. How do we protect that while also dealing with the difficult problems that are very clearly dominating the discussion today? |
Jeff Schechtman: | Equally important in this discussion, and kind of the overlay to this, is that the companies involved, the Facebooks, the Googles, the Twitters, et cetera, are all profit-making public companies with their own stakeholders, employees, and shareholders. They have a different fiduciary responsibility, which doesn’t always line up with these things that we’re talking about. |
David Kaye: | That’s exactly right. That’s one of the reasons why government regulation is going to be important. Although I think these companies probably overstate the importance of maximizing shareholder value, there are still principles in corporate law and corporate governance related to the responsibility of companies as well. At the least, the internal corporate discussions may be deformed a little bit by a discussion that focuses only on shareholder value. But I think you’re identifying a really important problem, which is that these companies have a particular way of making money. Let’s be honest, they have become like cash machines. They’re making massive amounts of money, and given that, they have not only a responsibility to their shareholders, but also a responsibility to the public. |
David Kaye: | Part of this has been developed over the last 20 or 30 years in the context of corporate social responsibility, which is something of a movement in the United States, but there’s also globally something called the UN’s Guiding Principles on Business and Human Rights, which basically says that companies have to consider what kind of impact they have on the rights of people in the places where they operate. I think that’s something that companies should be taking into account just as much as they’re taking into account how they’re going to be making money. |
David Kaye: | Just to kind of wrap up the point, I don’t think that their business model, their business approach, can be an excuse for failing to protect human rights, freedom of expression, and public institutions in the face of things like disinformation. They still have those responsibilities and, in a way, because they do make so much money, they have the resources to deal with many of these problems. They clearly need the push of government to get them to do the right thing. |
Jeff Schechtman: | The other thing that seems to be lacking is a depth of understanding of the problem. It’s fine for these companies to say, and you’ll hear Zuckerberg say it and others will say it, not to pick on him: yes, now that we’ve been punished and all this has happened, we understand that we have a responsibility to the public. It’s fine to say that, but what you don’t get the sense of among most of these companies is any real depth of understanding, any social and political and historical context to that understanding. |
David Kaye: | Yes. I agree with you. I think that, first of all, many of the companies have been in reactive mode for just far too long. Now one thing is true: all of the companies, and by all the companies here I’m particularly talking about Facebook and YouTube and Twitter, have developed these, in a way, bureaucratized approaches to content on their platforms. They all have very extensive rules. They have up to tens of thousands of contractors and employees who are working to basically clean their platforms of content that they don’t want up and to enforce their rules. They do all of this, and yet, when they’re dealing with particular countries and the politics and the language that’s used to discuss politics in different countries around the world, often they have very little insight. |
David Kaye: | Maybe they have some language experts who know Amharic in Ethiopia or Burmese in Myanmar, but do they have the real ability to understand what’s happening in those countries? These are countries where they dominate the public space. Do they really understand the impact that their creations are having on politics, on lives in these places? I doubt that they do. I think the idea that they can actually deal with difficult content in all the places they operate around the world is really a stretch. I’m not sure how they can do it without the full engagement and devotion of much more in terms of resources to these kinds of efforts. |
Jeff Schechtman: | Right. When you’re talking to them, just to use Facebook as an example, what, two and a half, three billion users around the world, there’s no way that there can ever be enough moderators anywhere in the world to really look at that content. Of course, what we hear is that AI is the promise, that that’s how it’s going to get sorted out. That’s questionable as well. |
David Kaye: | Well, the thing with AI is that at the end of the day, you still need to have input into those rules. You still need to have, for example, a way to distinguish hate speech from reporting on hate speech. You still need to have a way to distinguish terrorist content from reporting on terrorism. You still need to have a way to distinguish the false information from satire. AI on its own doesn’t solve the actual problem of deciding what’s legitimate and what’s not. |
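To make Kaye’s point concrete, here is a toy sketch, in Python, of the kind of word-matching filter that sits beneath the AI hype. Everything in it is an illustrative assumption rather than any platform’s real system: the term list, the threshold, and the example texts are invented. What it demonstrates is exactly his point: the machine only applies categories a human has already defined, and it scores an attack and a news report quoting that attack identically.

```python
# A toy sketch, NOT any platform's real system: the term list, threshold,
# and example texts are all illustrative assumptions.

FLAGGED_TERMS = {"<slur>", "eradicate"}   # chosen by human policy writers
THRESHOLD = 1                             # also a human policy choice

def toxicity_score(text: str) -> int:
    """Count human-designated terms; the scorer has no notion of who is
    speaking or why."""
    return sum(1 for w in text.lower().split()
               if w.strip(".,'\"") in FLAGGED_TERMS)

hate_post = "Eradicate them, the <slur>."
news_report = "The senator condemned a leaflet urging readers to 'eradicate' the <slur>."

# Both trip the filter: word-matching cannot distinguish hate speech
# from reporting on hate speech without context a human must supply.
for text in (hate_post, news_report):
    print(toxicity_score(text) >= THRESHOLD, "-", text)
```

Real systems use statistical models rather than word lists, but the dependence on human-defined labels and the blindness to speaker intent carry over.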
David Kaye: | At the moment, it’s the companies that make those decisions. And you’re right, at the scale at which these companies operate, it seems virtually impossible for them to do this, which is why, in some respects, we’ve seen the debate over the last several weeks around whether Facebook should be broken up. Chris Hughes, one of the co-founders, makes this long argument in the New York Times that that might be appropriate. |
David Kaye: | But another way of thinking about it might be: rather than break up, is there a way to break down the companies? Is there a way to get them closer to their users, to the environments in which they operate, so that there can be a kind of autonomy, almost a self-governance around the world? Because the communities are going to have a better understanding of what’s problematic and what’s not. The risk is you give government more control, and you want to avoid that, but this is really at the center of the debate in many ways. How do the companies get to these communities so that they can do a decent enough job of regulating the public space that they have so much power over? |
Jeff Schechtman: | Talk a little bit about what people mean by this self-policing idea, because you can make the argument, and I’ve heard it made, that if you threaten to take it all away, or if you threaten government censorship of it, that will create pressure on individual users to do the policing for the companies. What does that argument look like? |
David Kaye: | Yeah. Really, from the beginning of the Internet, we’ve had a kind of self-regulatory model, and on social media that’s something that anybody who’s been on any of these platforms understands, because they see it in the form of flagging. Anybody can report content that they think violates the platform’s rules. They can report that to the platform. That’s putting the onus on users to police the platform. There’s some good in that, right? It allows for some measure of autonomy and some measure of crowdsourced enforcement of the rules. |
David Kaye: | But that’s never going to be enough, and it’s also problematic in its own way, because it’s very easy for there to be a kind of mob rule around flagging: coordinated flagging of content that is, say, merely critical of the government, where the government has basically developed a way to get the companies to take it down by making it appear that there’s a public outcry over certain material. That’s one potential way of thinking about it. |
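Here is a minimal sketch, under assumed rules, of how the threshold-based flag handling Kaye describes can be gamed. The threshold and the auto-hide behavior are hypothetical, not any platform’s documented policy; the point is that a raw flag count cannot distinguish sincere complaints from a coordinated campaign.

```python
# A sketch under assumed rules: the threshold and auto-hide behavior are
# hypothetical, not any platform's documented policy.

from collections import Counter

AUTO_HIDE_THRESHOLD = 50      # hypothetical policy number
_flag_counts = Counter()      # post_id -> number of distinct flags
_already_flagged = set()      # (post_id, user_id) pairs already counted

def report(post_id: str, user_id: str) -> None:
    """Record one user's flag on a post, at most one flag per user per post."""
    if (post_id, user_id) not in _already_flagged:
        _already_flagged.add((post_id, user_id))
        _flag_counts[post_id] += 1

def should_auto_hide(post_id: str) -> bool:
    # The counter cannot tell sincere complaints from a coordinated campaign:
    # 60 sockpuppet accounts look identical to 60 genuinely offended users.
    return _flag_counts[post_id] >= AUTO_HIDE_THRESHOLD

# A coordinated campaign against a post critical of a government:
for i in range(60):
    report("post-critical-of-government", f"sockpuppet-{i}")
print(should_auto_hide("post-critical-of-government"))   # True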
David Kaye: | The other way that we’ve really been working this over the last several years, really since the beginning of social media, has been much more of a self-regulatory environment in which the companies have immunity, under both European law and American law, to make decisions about what content is legitimate and what’s not. They have immunity to take down any content they want and immunity to leave up content, with some exceptions like child exploitation and terrorism. That’s the environment that we live in right now, and basically, people are not happy with that. |
David Kaye: | People are not happy for companies that, as you point out, basically operate on a profit motive to have the responsibility to do this. That’s why I think the discussion has moved pretty quickly over the last couple of years from thinking only about self-regulation into this space of maybe government regulation at some level, maybe other forms of regulation that involve companies and governments and civil society working together on these issues. The field is, in a way, quite open right now in terms of the debate over what regulation looks like and who should be doing it. |
Jeff Schechtman: | It’s also a moving target, because the nature of social media continues to shift and change and will continue to shift and change over the next several years. All we have to do is look at YouTube, a good example in video, and at Facebook now. Those things are growing exponentially, and there’ll be something else changing two or three years from now. |
David Kaye: | Yes, that’s absolutely true. The way I think it’s important for us to think about some of these issues is what are the rules that we want to be applicable online? Who do we want to be making those rules? How can we frame them so that we’re not continually rethinking the rules as the platforms and as the Internet more generally evolves? Because I think you’re absolutely right. What we think of right now as a kind of static environment, as social media, it’s recent. It’s incredibly recent. |
David Kaye: | The whole environment is very, very new, and it’s likely going to change over the next five to ten years. We might, for example, have a new entrant into the social media market. We might have new models of decentralization such that people can be freer to choose how they engage online. Maybe there will be a new market for algorithms so that people can have more control over the kind of information that they see. There’s a huge amount of uncertainty over the coming years, and so as we think about regulation and we think about who we want to be doing that regulation, I think it’s really critical for those principles to be developed thinking in terms of the possibility and the likelihood of change over the coming years. |
Jeff Schechtman: | Yeah. To your point, just think about the change, if users could control their own algorithms, how that would change things. |
David Kaye: | Yeah. That’s the kind of approach that I think the companies would strongly resist, because it would interfere with their business model of basically getting users and advertisers together. That would be a deep problem for the companies. On the other hand, that might be one of the approaches. I don’t think there’s any kind of one-size-fits-all approach here. I think there’s going to be a combination of approaches, but that might be one part of the approach, in which government can basically say, as a matter of regulation, that people have the right to choose the information environment in which they exist. They have the right to see, in clear, accessible formats, what the algorithm is presenting to them, how that algorithm works, and how they can change it. Those are the kinds of things that government, and the public thinking about what principles should govern this space, need to be thinking through in a really careful way. |
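One way to picture that “right to choose the information environment” is a pluggable ranking interface, where the user’s settings rather than the platform select the feed algorithm. The sketch below is illustrative only; the type names and fields are invented for the example.

```python
# An illustrative sketch of a "market for algorithms": the feed exposes a
# ranking interface and the user, not the platform, picks the implementation.
# All names and fields here are invented for the example.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    text: str
    timestamp: float      # seconds since epoch
    engagement: int       # likes/shares, the usual platform signal

RankingAlgorithm = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    """Newest first, ignoring engagement entirely."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_first(posts: List[Post]) -> List[Post]:
    """What ad-driven platforms tend to optimize for."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

# The user's settings, not the platform, select the ranker.
user_choice: RankingAlgorithm = chronological

def build_feed(posts: List[Post]) -> List[Post]:
    return user_choice(posts)

feed = build_feed([
    Post("older, widely shared", timestamp=1.0, engagement=900),
    Post("newest, barely seen", timestamp=2.0, engagement=3),
])
print([p.text for p in feed])   # chronological: newest first
```

A transparency rule of the kind Kaye describes could then require each ranker’s logic, and the user’s current choice, to be disclosed in plain language.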
David Kaye: | Unfortunately, careful debate is not the best way to describe what’s happening in Washington these days, and we’re missing an opportunity at a moment of real change to think through what we want in terms of our self-governance and the impact of online sources. Some of that is happening in Europe, but the problem in Europe is that they’re basically saying, “We want these rules to apply. Now you companies go and figure out how to adjudicate them.” That’s also a problem, because it gives the companies even more power. But we really do need to think about the principles that we want to apply in this space and how we should think about those moving forward. |
Jeff Schechtman: | Of course, the other thing that we know doesn’t work … We’re almost out of time, but the other thing we know doesn’t work is punishment, because the punishment that’s been meted out, even in the billions-of-dollars category, is like a rounding error to these companies. |
David Kaye: | Yeah, exactly. I really don’t think that penalties are going to be the way to move forward. There are the antitrust kinds of penalties, there are privacy-violation penalties, there are different penalties that we’ve seen develop in Europe. But the penalties themselves, I think you’re right, are basically chump change to most of these companies. It’s hard to get a penalty high enough to have an impact on the companies. |
David Kaye: | Rather than structuring the approach to the companies in that way, I think we need to be thinking about what’s the form of regulation? What is the change in behavior that we want from the companies, and how do we do that in a way that’s consistent with our democratic values, our human rights values, our constitutional values? And how do we do that in a way that really takes into account the public’s need for protection and the public’s need to continue to have space for freedom of expression and other basic fundamental rights? |
Jeff Schechtman: | Finally, David, where should this conversation take place? Is there a framework that you think is ideally suited to have this conversation? |
David Kaye: | Well, I think we all need to be having this conversation, and the conversation around the doctored Nancy Pelosi video was pretty instructive here. There was a lot of debate, a lot of pushing to take the video down, which I get. I totally get that at an emotional level. I share it. But we still need to be thinking as a society about who should make the decisions about what is doctored, what is false, what is satire, what is harmful. Right now, we’re saying the companies should make those decisions. |
David Kaye: | I get that people think that that’s where the outcome should be, but I do believe that we need to be having this conversation as a public. We need to have regulation around transparency so that the companies are required to share exactly what they’re doing, and that allows all of us as a public, as users, even those of us who are not users of online social media, to participate in the conversation and to understand exactly what’s happening, because the way it’s happening now, the companies have all the information and we have very little of it. That’s a real key thing that needs to change for the public to be participating legitimately in this debate. |
Jeff Schechtman: | David Kaye. If you want to understand more of David’s ideas, his book Speech Police: The Global Struggle to Govern the Internet is just out from Columbia Global Reports. David, I thank you so much for spending time with us here on the WhoWhatWhy Podcast. |
David Kaye: | Jeff, thanks so much. I really enjoyed it. |
Jeff Schechtman: | Thank you. And thank you for listening and for joining us here on radio WhoWhatWhy. I hope you join us next week for another radio WhoWhatWhy Podcast. I’m Jeff Schechtman. If you liked this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to whowhatwhy.org/donate. |
Related front page panorama photo credit: Adapted by WhoWhatWhy from Facebook / Wikimedia, edisona / Pixabay, YouTube / Wikimedia, Instagram / Wikimedia and geralt / Pixabay.