The Internet Is Killing Democracy

Facebook Is the Shiny Object, but the Danger Is Much Larger


We have seen endless stories about Facebook and Cambridge Analytica. We have endured two days of Mark Zuckerberg explaining the Facebook business model. Social media and its role in politics is on everyone’s mind. However, none of the current clamor speaks to the broader impact of the internet, or of big tech in general.

In this week’s WhoWhatWhy podcast, Jamie Bartlett, director of the Centre for the Analysis of Social Media, reminds Jeff Schechtman that the internet was supposed to be a democratizing force. The widespread availability of digital technology was to allow freedom of information and communication on a scale never thought possible before.

The reality, Bartlett argues, is that every aspect of the internet and its culture is feeding the worst of humanity’s tribal instincts.

It’s becoming clear, says Bartlett, that internet technology is simply antithetical to democracy. Perhaps it’s no coincidence that, as the internet grows, authoritarian regimes proliferate. As tech companies get bigger, the institutions of democracy come under greater pressure.  

Bartlett talks about how monopolistic practices are built into the DNA of tech companies, and reveals the intended and unintended consequences of maintaining those monopolies. He explains how hyper-targeted internet advertising is merely the most recent iteration in a long history of efforts to manipulate our decision-making. As those efforts advance in sophistication, the ability of a small minority to distract and ultimately control a technically naive majority poses a grave threat to the fundamental exercise of free choice.

This week’s podcast is a sobering look behind the headlines and noise about tech.

Jamie Bartlett is the author of The People Vs Tech: How the Internet Is Killing Democracy (And How We Save It) (Ebury Digital, April 5, 2018).




As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to time constraints, we are not always able to proofread them as closely as we would like. Should you spot any errors, we’d be grateful if you would notify us.

Jeff Schechtman: Welcome to Radio WhoWhatWhy. I’m Jeff Schechtman. Emma Gonzalez and David Hogg understood how to maximize social media to achieve the highest in democratic ends. The Russians and Cambridge Analytica used that same social media to undermine democracy, to spread lies, and to manipulate facts. This week, we’ve seen Mark Zuckerberg and members of Congress musing about the business model of Facebook and the holy grail of hyper-targeted advertising. All of this, good and bad, misses the larger point.
In a world that is totally interconnected via the Internet, when every aspect of the Internet and Internet culture feeds steroids to the human tribal instinct, when information moves at the speed of light, and when there is more of it than we have the evolutionary ability to process, is this technology simply antithetical to traditional ideas of democracy? Particularly to the system that our founders passed down to us.
Perhaps it’s no coincidence that as the Internet grows, so do authoritarian regimes. As tech companies get bigger, democratic institutions become smaller. What is the nexus to all of this and if it’s true, do we have to change tech or change the very idea of democracy?
We’re going to talk about this with my guest, Jamie Bartlett. He’s the bestselling author of The Dark Net, an examination of the hidden corners of the Internet. He is the Director of the Centre for the Analysis of Social Media at Demos, where he’s written about the Internet and democracy. He also writes on technology for The Spectator and several other publications and has focused on how the Internet is changing politics and society. It is my pleasure to welcome Jamie Bartlett here to talk about the people versus tech. Jamie, thanks so much for joining us here on Radio WhoWhatWhy.
Jamie Bartlett: Well, Jeff, thanks for having me. What a brilliant opening summary that was! I mean, I think I might have recorded that, and I’m going to use it myself. You summed it up perfectly. It is antithetical because it’s a completely new system, and it’s antithetical to that old model of democracy that we’ve built, especially the one you’ve built with the Constitution.
Jeff Schechtman: In many ways, if we look back to the earliest days of tech, the Internet was supposed to set us free. Information wanted to be free; the Internet was supposed to be, and was sold to us as, the ultimate democratizing force. What happened?
Jamie Bartlett: What happened indeed, and you’re absolutely right. It was always sold to us that way, and I don’t think it was sold to us by devious people. I think the people involved in the early technology boom, especially of the ’90s, but even before that in the late ’60s, truly did believe in its emancipatory powers. They imagined above all that with more information and more connectivity between people, it would be kind of a naturally democratizing force if you just let it be.
Unfortunately, that is not exactly how things have turned out, and there are two very obvious reasons. One is that digital technology, I think, tends towards monopolization because it allows for a rapid winner-takes-all sort of market where, for example, Google gets better the more data it has, and the better it gets, the more data it attracts. Very, very quickly, because of what’s known as the network effect, you end up with these enormous monopolies emerging.
“Why should I use the second best search engine when I can use Google for free as well?” The second reason, main reason anyway, is that I think a lot of the early pioneers had a slightly naive view of human nature. They just imagined that if we had more information and we were more connected with each other, we would become more informed, our politics would be nicer, it would be wiser.
Maybe that’s true of the highly, highly educated professorial types who were behind a lot of the early Internet, but for a lot of us, it’s kind of overwhelming when you’re just given so much information and rather than turning politics rational, I think it’s turned it more emotional and more tribal, which is one of the reasons we’re seeing this drift towards ever more angry, divisive politics.
Jeff Schechtman: That was one of the things that arguably nobody could have anticipated: that it would play into the worst of our tribal instincts. That the more information there was, the more data there was, the more incredibly self-referential it would become in feeding that tribal instinct.
Jamie Bartlett: Well, Marshall McLuhan predicted it. The great Marshall McLuhan, the great cultural theorist of the 1960s, while he was a great techno optimist and he was excited about it, he did also say that as we transition into what he called “electronic man,” we’ll become more tribal because we’ll … The medium of electronic information is more naturally emotional, it’s visual, and it will lead us into our little tribes as our traditional, I guess, our traditional identities collapse and we desperately search around for groups to belong to.
This is what I think is happening to politics today. Of course, the problem was in the early ’90s, what techno optimist, what early business person, what future looking politician wanted to say, “I think we’re heading towards angry, divisive, tribal politics”? It’s much easier to say, “It’s great, it’s wonderful, it’s going to work, it’s fantastic.” Because that’s our faithful view on technology. A few people, a few voices were skeptical, but they were drowned out by, I guess, the rest of us.
Jeff Schechtman: Do we make too big a deal in worrying about how big these tech companies have gotten today? Because if we look historically, both in the tech sector and in other business sectors as well, that there is a natural churn to these things. I mean, you look at some of the huge companies like IBM or AT&T or Kodak that once dominated so many areas, and look at today, they’re mere shadows of what they used to be, and it’s certainly possible that the Facebooks and the Googles and others will be in that same position 20 years from now.
Jamie Bartlett: It’s certainly possible, and you’re right. There’s always the possibility of new upstarts turning up and disrupting the disrupters. However, it is always worth being on high alert for this because once established, monopolies love to maintain themselves. You’ve already seen a great transfer of economic power into political power from Facebook, Google, and the others. They’ve massively increased the amount of lobbying dollars that they spend. But also, I think, the direction of travel of tech is towards much more artificial intelligence, much more Internet-enabled devices, and I think both of those technologies also tend towards big monopolies.
Here’s the other thing that I think’s really important, Jeff, which is if you were to split Google up, unlike traditional companies, Google would become far less efficient. The market wouldn’t be as good if Google was split. The reason Google is so good is because it’s like a monopoly, because it has all the data, which means it can get better and stronger results all the time. The weird thing with some of the modern digital tech firms is that they become almost natural monopolies, and at that point they are very hard to break up because it leads to a decrease in efficiency.
Mark Zuckerberg, in his testimony, said something very, very interesting hardly anyone picked up on, which was that in the competition we face with China, Facebook is going to be very, very important. In other words, national security and economic well-being might even one day depend on these massive tech monopolies, so there might be a degree of reliance on them that we’ve not really seen before with traditional companies. Bear in mind as well, of course, that some of these platforms really are the places where we have our public debates.
I was never on a big oil firm having a public debate, but these companies own a lot of the places where we talk politically about them, and that gives them, I think, an even greater power.
Jeff Schechtman: There’s another thing I want to talk to you about with respect to artificial intelligence that also came out of Zuckerberg’s testimony the other day, where he constantly referred to AI as the solution to some of these problems, that it’s impossible to monitor all of this by humans, but that really more and more Facebook and, I guess, other companies are going to have to rely on AI to do the monitoring. What he leaves out is that for that AI to be successful, it means more and more and more data.
Jamie Bartlett: Absolutely right. I think he’s right about that. When you’re dealing with a hundred billion interactions a day, you can’t possibly, even with the 20,000 moderators that they’re going to be employing, you cannot possibly manage all of that content. You have to rely on artificial intelligence. For artificial intelligence to work, the more data you have, the better it gets.
A lot of the recent hearing was about him talking about data and how he views his own data, and how he doesn’t mind maybe if there are more rights for users over their own data, but in the end, the platform depends on data, artificial intelligence depends on data. There is this tension here that I don’t think is easily resolved, which is that Facebook won’t exist if it doesn’t collect personal data.
Jeff Schechtman: Not only doesn’t it exist without personal data, according to what Zuckerberg said and further to what you’re saying is that it can’t even police itself without more data.
Jamie Bartlett: Yeah, that’s a really good point, it’s a really good point. The tendency towards more artificial intelligence you see in all sorts of different directions. I think what’s going to be quite scary in the years ahead is that I don’t think we’re going to be able to police society as a whole without artificial intelligence. Forget about just Facebook. I think our police forces are going to come to rely on it more and more. Because let’s be honest, I can’t speak to what’s going on in Northern California, but in the UK at least, there is a crisis in policing. They cannot control the digital streets. So much crime is going unpunished online. One solution to that is going to be more and more police AI.
And I can see big positive improvements coming in health. Where are those going to come from? Artificial intelligence, again. More and more of our well-being in society will come to rely on AI, and then the question gets even more important. How do we hold AI accountable? How do we monitor it? How do we make sure it’s not being manipulated? How do we make sure everyone gets the benefits from it? These, I think, are the really big questions for us in the next five to ten years.
Jeff Schechtman: What is the nexus, if any, that you see between the rise of all of these issues that we’re talking about and the clear rise of authoritarian regimes around the world?
Jamie Bartlett: Yeah, this is the kind of scary thing about it. The potential for authoritarian regimes to exercise unbelievable forms of hard and soft surveillance on their citizens is frankly fairly terrifying. We are introducing more and more devices into our homes, into our bedrooms, into our cars, into our pockets, which can easily be turned around and used to track us everywhere, in everything we do. Then as well, artificial intelligence might one day predict the crimes that we might commit. I know this sounds like a Hollywood movie, like “Minority Report,” but science fiction often does predict the future.
This is, I think, where it’s going, which is why I fear greatly for the prospects of democracy in authoritarian systems because I think the potential for surveillance in places like China is so great now that it’s going to be very, very difficult to break out of it.
What worries me is that the democracies of the world will follow that pattern because it will seem to offer stability, law and order, tax-raising powers, and people will look over at China and think, “Wow, they seem to have everything great, whereas our society’s really struggling here.” I worry that technology is not in fact going to liberate us, with society collapsing into some anarchic paradise; instead it flips the other way, and we accidentally end up following China into some kind of horrible techno-authoritarianism.
Jeff Schechtman: I guess the question that that brings up is which is the more powerful force, democracy or technology? I mean, technology always seems to find a way, as nature does in some respects. Is that the more powerful force and do we have to change governance to adjust to technology?
Jamie Bartlett: Yeah, it’s a great question. Well, what do you think, Jeff? What do you think is the more powerful force?
Jeff Schechtman: I think technology is the more powerful force. I think to the extent that technology implies or creates the illusion of a certain degree of freedom, it’s the more powerful force. The only thing that’s more powerful than technology as a force would be the force for freedom, but if people think they’re free and have their technology, it’s an unbeatable combination.
Jamie Bartlett: Yeah, that’s a really good point, and that’s exactly the sort of combination that I think we’re facing at the moment. I agree with you. I think the two things pitted against each other, which is essentially democracy, a system not just of freedom but of control and law, and technology, that is the great battle of our times.
I don’t think that technology can be beaten or will be beaten by democracy, but I think it can have its edges smoothed, that if it’s more regulated, if it’s more somehow accountable, if there are active measures taken by governments to make sure the benefits of technology are spread more fairly, then at the very least, we don’t have a kind of collapse of trust in democracy as a system of government.
That’s my big concern, that people stop trusting and believing in democracy as a way of organizing society in an age of massive big data, intense micro-targeting, powerful machines and artificial intelligence. Democracy has to somehow still be able to work in that context too. Which means it’s going to have to get a bit of an upgrade, but it’s going to have to exert some kind of control over technology as well. Which is why I think we will probably, in the end, as the senators were suggesting when questioning Mark Zuckerberg, need some more regulation.
Jeff Schechtman: More regulation for technology, but the other side of it, which you touched on, is what do we have to do to create democracy 2.0?
Jamie Bartlett: This is such a difficult question. I think it’s going to be the most important one for the next decade or so. How can we upgrade democracy? One very, very obvious way of doing it that I hear people suggest a lot is more and more votes. For the first time since ancient Greece, it is now going to be possible to have secure, regular voting for citizens on every single subject. It’s not particularly difficult to do, we could do that, and I think there’s going to be more demand for that.
We have a representative democracy, so I don’t think we want to go down the path of having this direct democracy system where every subject is voted on, because I think it will make us even more divided as a society. I think we need to look at, how do we improve representative democracy? Are there ways of making and allowing parliamentarians and senators and congress people slightly more informed about tech?
Are there ways that they can have some more plebiscites but not too many? Are there going to be new ways of governments using bitcoin or blockchain technology to improve the way they raise taxes? Are there going to be ways that governments can embrace tech and really push forward the exciting innovation, but build a mechanism by which they can go in and check the algorithms of these powerful technologies to make sure they are democratic and fair and just? These are the things that I think we need to look into.
Jeff Schechtman: Did you get a sense in listening to the questions that Zuckerberg answered, that most of the people that were questioning him had a clue as to the depth of these issues that we’re talking about?
Jamie Bartlett: Yeah, well, there was the one senator who, it seemed, didn’t know how Facebook’s basic business model worked, right? I think he said, “How do you make money?”
Jeff Schechtman: Right.
Jamie Bartlett: Mark Zuckerberg said, “We sell adverts, Senator.” He said, “Oh, okay, fine.” I almost couldn’t believe that. I think you had a great variety of knowledge, and the politicians that had really been thoughtful, that had taken the time to understand it, you could really tell, because they were able to grill him more effectively. I don’t want every politician to be a computer scientist. That would be the worst thing imaginable.
Non-technologists have to be able to be involved in this discussion. They need to understand the basics of how this technology now works and what it means because if they don’t, how on earth are they supposed to reflect people’s concerns in a meaningful way and hold these companies to account? I think that Congress is slightly better, but still a lot of room for improvement.
Jeff Schechtman: Jamie Bartlett, thanks so much for spending time with us here on Radio WhoWhatWhy.
Jamie Bartlett: Really enjoyed the conversation, Jeff. I’m going to take away some of your ideas, actually, and use them for myself. Hope you don’t mind.
Jeff Schechtman: Feel free. Thank you very much. Jamie Bartlett, thanks a lot.
Jamie Bartlett: Thank you.
Jeff Schechtman: Thank you for listening and for joining us here on Radio WhoWhatWhy. I hope you join us next week for another Radio WhoWhatWhy podcast. I’m Jeff Schechtman. If you liked this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to WhoWhatWhy.org/donate.

Related front page panorama photo credit: Adapted by WhoWhatWhy from Jamie Bartlett (TED Conference / Flickr – CC BY-NC 2.0) and The People vs. Tech (Ebury Press).


