AI’s Dirty Secret: Sweatshops, Carbon, and the Race to the Bottom - WhoWhatWhy

Google Data Center in Council Bluffs, IA. Photo credit: Chad Davis / Flickr (CC BY 2.0)

Behind AI’s rapid rise: exploited workers in digital sweatshops and mounting environmental costs.

AI’s rapid advancement comes with a hidden human cost. Not just the vast number of jobs that may be eliminated, but the little-known digital sweatshops that are crucial to the ongoing development of AI itself.

In an eye-opening conversation on this week’s WhoWhatWhy podcast, James Muldoon, associate professor of management at the University of Essex and co-author of Feeding the Machine, argues that the rapid advancement of AI technology comes with a significant human toll. Research by Muldoon and his colleagues exposes the often-overlooked exploitation of human capital in the Global South that lies behind the high-tech rollout of AI.

Muldoon describes factories where workers earn less than $2 an hour, performing tasks like data annotation and content moderation. Since the tech giants wield immense power over these workers, the result is a competitive race to the bottom for wages and working conditions. Additionally, AI data centers consume massive amounts of electricity, with dire environmental impact.

While Muldoon acknowledges some of AI’s potential benefits in areas like health care and scientific research, he emphasizes the failure so far to account for the debilitating consequences for workers and the environment.



Full Text Transcript:

(As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to a constraint of resources, we are not always able to proofread them as closely as we would like and hope that you will excuse any errors that slipped through.)

Jeff Schechtman: Welcome to the WhoWhatWhy podcast. I’m your host, Jeff Schechtman. Artificial intelligence is undeniably one of the most transformative technologies of our time. Its impact is being felt across industries, reshaping how we work, communicate, and even think. While concerns about new technologies are as old as innovation itself — from Plato’s objection to the written word to fears about the automobile’s effect on society, to early skepticism about the internet — AI has persisted and continued to evolve at a breathtaking pace.

History has shown us time and again that technological progress is a force unto itself. It finds a way to move forward, often despite our best efforts to control or contain it. Attempts to halt or even slow down such progress frequently lead to unintended consequences, not all of them positive. We’ve seen this pattern repeat with various innovations throughout human history. As with any paradigm-shifting technology, AI has its proponents and its critics. Today we’re joined by Dr. James Muldoon, associate professor of management at the University of Essex and co-author of the book Feeding the Machine — a book that takes a critical look at the global impact of AI, particularly focusing on the human cost.

While opinions on this topic vary, and many would argue that the benefits of AI far outweigh its drawbacks, it’s crucial that we examine all perspectives as we navigate this new technological landscape. Dr. Muldoon’s work provides a counterpoint to the often-utopian visions of AI’s future, reminding us to consider the human elements in this digital revolution. It’s worth noting that, despite criticism, AI is here to stay. Like the internet before it, AI has become deeply ingrained in our daily lives and global economy.

The question isn’t whether AI will continue to grow, but how we can shape its growth responsibly — acknowledging that our ability to control its trajectory may be limited. It is my pleasure to welcome Dr. James Muldoon to talk about his new book, Feeding the Machine. James Muldoon, thanks so much for joining us here on the WhoWhatWhy podcast.

James Muldoon: Thanks for having me.

Jeff: Well, it is a delight to have you here. It seems that so much of our conversation, particularly this dystopian conversation about AI, sounds familiar, that we have been here before so many times. Even going back, as I say in the introduction, to the times of Plato, when new technology comes along there is always resistance. There is always this drive to find its problems.

James: Yes, I think that’s correct. And really, what we wanted to show with this book was that it’s not just a question of, are you for or against technology. It’s about understanding the hidden cost of how this technology is produced. So in our book, Feeding the Machine, we wanted to tell the story of AI from the perspective of the workers who produce it.

And so we take the reader behind the scenes to look at all the myriad workers that are involved in producing AI — from the data annotation to the content moderation, those who work at data centers — and try to get to the bottom of what kinds of conditions they’re working under. And to what extent can we think that AI is actually people?

Jeff: Is that similar to the conversation that we had for so long about our iPhones, for example, and the way they were manufactured, and so much of technology and the way it has come to evolve and be accepted today?

James: Yes, I think there are really interesting parallels there. When we think of your standard consumer products like chocolate or coffee, I think most people are accustomed to the idea that there are these supply chains that make those products possible. We can think of the plantations where cocoa is growing. We can think of the people who are working, often under pretty miserable conditions, to get these products to you. We have a whole industry to think about how you can consume these products in an ethical manner — buying from certain sources, making sure that people have certain conditions.

I think it’s less common for us to think about these kinds of topics when we imagine technology, because we think about technology generally as something that exists ephemerally up in the cloud through the internet — something that’s just floating about.

And what we want to show in this book is that often AI and other forms of digital technology rely on this very deep-seated infrastructure that has this human element to it. These human beings are doing millions of hours of work to make AI possible, often under really appalling conditions that you could liken to digital sweatshops.

Jeff: Is it fair to think of AI and what you refer to as digital sweatshops in the same way that we think about the meatpacking industry, or that we think about other kinds of labor? Life in a steel mill, for example. Is that a fair comparison?

James: Yes, I think there are similarities and differences with how this type of work takes place. When we visited some of these so-called digital sweatshops, we went to rural Uganda, northern Uganda, where the Lord’s Resistance Army used to be. We went to Nairobi, Kenya, to some of the slums in that city. And what we saw are these really large factories with rows and rows of computers with people who are working 10-hour shifts.

They’re going back for unpaid overtime in certain cases. They’re getting paid less than $2 an hour to do this kind of work. And their every step is being tracked and monitored, both through computer software that’s monitoring their screens and through swipe cards that monitor all their movements around the factory. So there are certainly parallels with all kinds of low-income work where people are working under these really grueling conditions.

Jeff: Is it fair to also think about what this kind of work did in transforming China, for example, from a Third World country to a country with an exploding middle class?

James: Yes, I think that it brings back a lot of the debates that were happening, really, in the 1990s, around the idea of sweatshops and outsourcing, where American companies or European companies would send various types of work and production facilities offshore to where labor conditions were cheaper. To where they could get this work done for less money.

And I think that it does raise really interesting points, because something we did find when we spoke to a lot of these workers is that, in some cases, yes, this digital work was, in many respects, preferable to the available alternatives.

So we met people whose previous work was selling juice at the market, or vegetables at the market, or mobile phone chips on the side of the road. And many of them really liked, in theory, the idea of participating in creating this technology.

I think the real problem lies in the fact of the completely uneven relationship between these tech companies that are centered predominantly in the United States and workers who are based in the Global South. And that could be in Kenya, the Philippines, India, all these different countries where a lot of this digital work is outsourced to.

And the problem is the tech companies have a really big say in the conditions of these workers. It would be very, very easy for them to set minimum standards, to set conditions that were fair and equitable, and gave them a chance to have a future in these jobs and to have a sense of dignity in their work. And I think when I talk to these workers, this is the real element that’s missing: The tech companies are basically trying to stimulate a global race to the bottom on pay and conditions, where they’re pitting these countries against each other, and they’re trying to make these outsourcing centers do the work as cheaply and as efficiently as possible, without much concern for the workers’ well-being.

Jeff: Talk a little bit about the kind of work you’re talking about with respect to these digital sweatshops.

James: Yes, I can give you one example. So a lot of this is different forms of data annotation or preparing the data sets that help train AI models. So let’s take the example of an autonomous vehicle, a self-driving car that we have seen a little bit around San Francisco and elsewhere. For these cars to be able to see the road and understand the traffic conditions, the AI models that drive them need to be trained on hours and hours of road footage in order to be able to interpret what they’re actually seeing. So they need to basically have a camera that’s facing the road and be able to tell the difference between a child, and a tree, and a street sign.

And in order to do that, these videos of street scenes have to be manually annotated by human beings who have to essentially draw boxes around different objects in the videos. And so, to produce one hour of annotated video footage of some kind of street scene for these AI models, it takes 800 human hours to manually code all the different objects that are being seen in the video. So this is one example, the example of computer vision, of the type of really menial grinding work that a lot of these data annotators have to do.

Jeff: Is there the possibility, though, that this work will ultimately be able to be done by AI itself?

James: Yes. That’s a really interesting question, and I think the jury is still out on what will happen five, ten, even more years into the future. There are some limitations on the possibility of AI automating this, as it were. The first is, there have been certain studies that have shown that, what’s called in the industry, synthetic data, which is data that’s essentially produced by computers, when it’s fed to a language model like ChatGPT, it eventually leads to the model being corrupted and collapsing. So we might not be in a situation in which AI can start to do all the work.

There’s certainly the possibility that some of this work can be taken over, but what we saw on the ground was that it’s really tricky to do this kind of stuff. And it’s very difficult to automate because a lot of it is very fiddly. It requires a lot of back and forth. It requires a lot of contextual understanding of what is being shown in the data, which computers are just not really able to understand and to grasp.

And this whole story we’re telling, of human beings having to step in and effectively do all these tasks that we probably imagine are done by computers, really shows you some of the limitations of AI — that actually, behind the scenes, what we think is automated is so often just something that’s being done by a human worker.

Jeff: We are still at really the earliest stages of AI. Your point being correct in terms of the way some of these LLMs [large language models] get corrupted over time, we still don’t completely understand why that’s happening and how to deal with it. And the assumption, I suppose, has to be that eventually, we will figure that out.

James: Well, I think that’s a pretty strong assumption. I think you’re asking the right questions, and I think there’s a real point to what you’re saying, but I think it’s too early to say what exactly will be the future of LLMs. There are a lot of different possibilities that are open. We might actually just not get that much further with the current generation of LLMs, with the transformer architecture that they have and the kind of technology that’s behind them. It might actually require a bit of a step change to think of a completely different way of understanding and building artificial intelligence in order to get another jump.

It’s certainly possible that chatbots and the kind of technology that’s being built off the back of LLMs could significantly improve from what we have at the moment. But I don’t think there’s any guarantee that it’s going to be radically different from what we have today.

And I think what we can say is, there’s no immediate sign that in the next decade we will no longer need human workers to do this kind of work. So the need to address how this AI is being produced, and the conditions of the workers right across the world who are doing it, is still a really pressing one.

Jeff: To what extent should we be looking at the competition for these workers, the increasing need for them, as a rising tide that lifts up wages and conditions?

James: Well, I think what we actually saw in our fieldwork was the exact opposite, that actually, what we see is that the more people get involved in this, the more conditions are actually reduced across the globe. And I’ll give you an example of this.

A large tech company — let’s say Meta, or Tesla, or one of these American companies — will give out this work to let’s say eight or ten different suppliers across the globe. And one example that we witnessed was Meta outsourcing its content moderation. And the outsourcing center and the managers that we spoke to said that they were in direct competition with seven other suppliers, and every month they would get ranked.

And the top firm would receive a slight bonus and slightly better conditions in their contract. But it required them to constantly be monitoring their workers, to be trying to cut costs where they could, to make things more efficient. And it led to what we saw as a global race to the bottom in terms of pay and conditions.

Jeff: Again, coming back to what we said at the outset, we saw this with the manufacturing of so much other electronic technology, and eventually those wages and conditions did improve, were forced to improve.

James: Well, that certainly hasn’t been the case with what we’ve seen here. I think that the story of China’s rise out of poverty, and the import substitution method that a lot of the Southeast Asian countries adopted, raises really complicated questions about global economics, and how trade works, and things like that.

But I think with the story of the hidden human cost of AI, the question is more of a simple one. We have these very poor working conditions that are part of supply chains that are being led by these tech companies, and they have all the power in the world to have a say in how those are organized.

They exercise so much power in these global networks that the companies themselves and the governments behind the scenes both have the possibility of setting minimum standards for workers. What kinds of working conditions do we want to see here? And the reason they do this, and the reason they get away with it, is that they think “out of sight, out of mind.” That the majority of their consumers in the wealthy North American and European markets are not going to care very much about how workers are treated in Nairobi, Kenya.

And so the question is really, what can we do to incentivize these companies to make their conditions better, to encourage governments to create regulations to improve these conditions, and to support these workers in their own struggle for more dignity in their work and for better conditions in how their work is organized?

Jeff: When you talk to these large tech companies that are at the core of this, is there any kind of response to this issue?

James: Well, companies don’t really want to go on the record about this because they want this to be out of the media. When the big story broke by Time magazine about these Meta moderators in Kenya, it was a big embarrassment for the company. And so most people will not go on the record about this. They don’t want to talk about it.

Sometimes you’ll hear people like Sam Altman, CEO of OpenAI, talk about how in five or ten years’ time, we won’t need any of these people. Everything will be automated. But we’ve seen from how this work takes place on the ground that that’s just not the case, that there’s no sign of that in the immediate future.

There are companies that are setting up entire industries around this process that do not see themselves as being out of a job in a few years’ time. So it’s not something that the companies want you to think about. No one wants you to know how the sausage gets made. They just want you to enjoy the barbecue.

Jeff: These are companies that are equivalent to the Foxconn of the future.

James: Yes. And we already touched upon how there are similarities in the working conditions of the digital workers — people who, a decade ago, would’ve been making iPhones in the special economic zones in China. And we see great similarities in how that work is organized in these kinds of digital sweatshops or outsourcing centers that are now all over the world.

Jeff: One of the other things that you talk about in the book is the environmental impact of AI.

James: Yes. So it’s also something that isn’t really at the forefront of the debate about AI, but what is really coming into view at the moment is just how much of an impact a lot of this AI has on the environment.

One example is that a search on ChatGPT uses 10 times as much electricity as a Google search. We already know that the internet and data centers require incredible amounts of energy — and, in particular, electricity. But this is only going to increase with the growing adoption of AI right across the world. So we know, for example, that global data center electricity demand is set to double by 2026. It’s a whole other level. And the world will need 10 to 20 times more data centers by 2035.

So when you think about just the crazy demand that this is going to place on the environment, you can see that it’s already starting to have enormous political implications for countries right across the world.

We did some fieldwork in Iceland. And in that country, the data centers actually consume more electricity than all of the country’s households combined. Ireland, which is home to an enormous number of big tech companies and is a huge center for data centers, is already trying to place a moratorium on the building of new data centers because they just don’t think that the electricity grid will be able to handle it.

And when you hear some of these tech people and executives talking about the energy demands of these AI data centers, their plans are through the roof. They know exactly what it’s going to take. Sam Altman was talking about a very audacious plan to spend up to $7 trillion to produce energy from nuclear fusion.

So these things have an enormous carbon footprint that everyone is very well aware of.

Jeff: And then the question is, the degree to which alternative energy sources will become more available for these data centers.

James: Yes. So that’s something that a lot of the tech companies are very interested in. They are making a lot of noise about various forms of renewable energy. And it will have to be the case that a lot of these facilities will have to be powered by various forms of renewable energy. One of the issues is the discrepancy between the press releases of these tech companies and what actually happens in practice, because it was in 2018 and 2019 that all of the big tech companies were making a lot of noise about their desires, and their promise to go carbon neutral or carbon negative.

I think four or five of the biggest companies all signed pledges that by 2030 they would’ve reduced their emissions to almost zero, that they’d all be operating on renewable energy. And I think Microsoft perhaps even pledged to counter all the emissions it has produced since the historical origins of the company itself, so that all emissions throughout Microsoft’s lifetime would be offset. But the reality, and what we’ve seen, is that their emissions have gone up radically — at Microsoft, emissions have increased by 30 percent over the past three years, and Google recently reported an increase of 50 percent, I believe.

And so AI, despite what they’re saying, is radically increasing emissions, and to a very large extent. We can talk a lot about renewable energy and the possibility of it helping to counter some of the increase in carbon. But the reality is that it’s just not enough, and this is still leading to huge increases in the carbon emissions of these leading tech companies.

Jeff: Do we need to look at this in a larger frame? Do we need to also look at it, not to take anything away from these problems, but look at it in the mix of the way AI is potentially valuable in solving environmental problems in more efficient resource allocation and climate modeling, and even the optimization of energy use?

James: Yes. And this is often the terms of the debate. And I do think there is a role for AI in essentially optimizing energy use. It has been found to be incredibly useful in internal company operations in finding out more efficient ways of using energy and using systems.

That will certainly offset some of this, but there are limits to what can be saved through those kinds of optimization processes. I think it’s an important part of the debate, but really, what we’re seeing on the ground is that AI is actually leading to much greater emissions because of just how energy-intensive using the technology is.

Jeff: The other side of some of these broader arguments is the way in which AI is potentially leading to a boost in global GDP and its value in health care and scientific research, education, and a number of other areas. We have to balance all of this together.

James: Well, yes, I think that it remains to be seen precisely what impact AI will have on global GDP. Let’s see, basically, I would like to be optimistic and say that they’ll lead to these huge efficiency boosts for companies and a huge productivity increase. The creators of AI would like us to believe that’s the case, but we haven’t seen that in the economy just yet.

And I think that there is a debate to be had on precisely how useful a lot of these products will be, because there are a lot of promises. And I’ve experienced the kind of utility of chatbots like ChatGPT, but there are also quite hard limits on precisely what roles can be delegated to chatbots.

There are huge issues with trust, with hallucinations, with accuracy.

So let’s basically see what the benefits will be. But the costs are much more quantifiable at the moment, because we know very well how much energy is being used. We know the human costs borne by those producing it: the greater carbon footprint, the impact on workers, the impact on artists, creatives, and writers who are all having their creative work used in the training of ChatGPT and other software.

So the costs are very real, and they’re here right now. I think we need to be careful when we weigh those against perceived benefits that might arise in the future but may prove to be illusory. I think it’s an open question.

Jeff: It certainly may prove to be that, but part of it is, as we referred to before, we’re really still in the early minutes of this game, and we really don’t know where all of this might be going. And in fact, some of its highest and best uses may not even have made themselves available or visible yet.

James: That’s completely true, but we may also be in a situation where we’ve actually discovered a lot of the use cases, and there might be a decade or so of relative stagnation.

And one example could be the creation of social media. When it first came onto the scene in the 2000s, people were talking about all these fantastic use cases for social media and how it was just the beginning, and it was going to be this revolutionary new technology. And when it came to things like Google’s search engine and Facebook’s social network, all of that foundational technology that emerged in the 2000s, we didn’t really see that much innovation over the next two decades.

It wasn’t really until AI and large language models that we had a next big thing for Silicon Valley. So, we can be optimistic, but I think we need to be realistic about the possibility that not all technology keeps exponentially improving. Not all technology is taken up and used in the ways that its founders would like it to be. And the future does remain open to a range of different possibilities.

And so I think it’s important to focus on what’s happening right now, what evidence we have before us, and really base our analysis on that.

Jeff: Only to the extent that that doesn’t stand in the way of continued development to find what these opportunities might be, that the techno-optimism really does have a place in this.

James: Well, I think, yes, we need a degree of curiosity and willingness to explore the potential use cases. But I think we need to be hardheaded and realistic about the negative consequences and the side effects of this technology.

I wouldn’t want my doctor or my lawyer just to be typing everything into ChatGPT, nor would I want my financial advisor or teacher to do the same. So I think we need to understand the limitations of the technology, the real human cost that it’s causing to those who are building it, and the environmental damage as well that’s being done — because none of those things are illusory, none of those things are uncertain. They’re all happening right now.

Jeff: There are areas, though, that certainly are positive. You mentioned doctors a moment ago. Certainly, we’ve seen, with respect to radiology, that there’s a huge potential benefit there.

James: Yes. And I think it has some great use cases in the medical profession in general. And yes, some doctors and nurses do warn about its possible use in caring for patients and perhaps not being able to provide the best possible care. But I think that certain cases of, particularly medical imaging and diagnosis, tests have shown that it performs better than a human doctor. And of course, we would want to use and develop that technology in those use cases where it’s proved really beneficial for us.

Jeff: And finally, is there a concern, or do you have any concern, that overregulation can prevent the evolution of this in ways that make the current problems and the current sacrifices ultimately pay off?

James: It really depends on the case-by-case basis. So when we look at regulation, in the early days a lot of what we’re seeing is attempts to really just regulate the absolute leading actors in this — the largest models, those who we perceive to be building things that could be a potential threat to humanity. So it’s really only touching a few of the largest companies when we think about what’s happening, at least in the US.

The EU is a slightly different case because there’s, as you know, the new EU AI Act, which is regulating a range of different use cases for AI and is really trying to classify them into different levels of risk, such that those that are producing the highest risk will probably be banned from doing that. And those producing medium risk will have a high level of regulation, and those producing low-risk software will have a much lighter touch.

But I think that we need a lot more regulation of this area at the moment. It’s a real Wild West right now, but the regulation has to be fit for the cases it’s intended to cover. So we don’t want sweeping blanket rules that don’t really take into consideration the different use cases in different industries, the different types of technology, and how they’re being developed. So I think what we want to see more of in the future is regulation that is much more precise and specific about how the AI is being made.

But we also need more regulation of how it’s being produced because, currently, none of the AI acts that are being written or implemented across the globe are taking into consideration the kind of supply chains that are necessary to produce AI.

So none of this is going to cover all of the workers that we interviewed as part of our book, because it’s really just focused on the products and how they’re being implemented. So I think we need more regulation when it comes to AI supply chains and the kinds of workers that are needed to produce the current generation of AI systems.

Jeff: Dr. James Muldoon. His book is Feeding the Machine. James, I thank you so much for spending time with us today here on the WhoWhatWhy podcast.

James: Thank you very much. It’s a pleasure to be here.

Jeff: Thank you. And thank you for listening and joining us here on the WhoWhatWhy podcast. I hope you join us next week for another radio WhoWhatWhy podcast. I’m Jeff Schechtman. If you like this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to whowhatwhy.org/donate.


Author

  • Jeff Schechtman

    Jeff Schechtman's career spans movies, radio stations, and podcasts. After spending twenty-five years in the motion picture industry as a producer and executive, he immersed himself in journalism, radio, and, more recently, the world of podcasts. To date, he has conducted over ten thousand interviews with authors, journalists, and thought leaders. Since March 2015, he has produced almost 500 podcasts for WhoWhatWhy.

