
The Objectivist Drug Party, MU exhibition. Photo by Hanneke Wetzer / Flickr (CC BY-NC-ND 2.0)

Why and how algorithms are taking over our lives, why we should care, and what we can do about it.

When the computer “HAL 9000” refused astronaut David Bowman’s request to “Open the pod bay doors, HAL” in 2001: A Space Odyssey, it was as if our most primal fears of thinking machines came rushing headlong into the 20th century.

Today, algorithms and artificial intelligence are creating a much more frightening world. The planes we fly in, the cars we drive — as well as those that will soon drive us — and every convenience in our homes are all governed by algorithms that the average person knows very little about, but that have enormous and growing control over us.

In this WhoWhatWhy podcast, we talk with Kartik Hosanagar, the John C. Howard Professor of Technology and Digital Business at the Wharton School, and discuss how algorithms have become self-learning, more unpredictable, biased, and potentially harmful.

In many cases today, Hosanagar reminds us, algorithms are supplanting expert systems, even as they do their job of “serving humankind.”

Hosanagar argues that we need an algorithmic bill of rights and more insight into all the artificial intelligence that surrounds us, and that anyone “impacted by decisions made by algorithms should have the right to a description of the data used to train them, and details as to how that data was collected.”


Click HERE to download the MP3.


Full Text Transcript:

As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to time constraints, we are not always able to proofread them as closely as we would like. Should you spot any errors, we’d be grateful if you would notify us.

Jeff Schechtman: Welcome to the WhoWhatWhy podcast. I’m your host, Jeff Schechtman.

When astronaut David Bowman asked HAL to open the pod bay doors, it was as if our most primal fear of machines came rushing headlong into the 20th century. Today, in our 21st-century world, we understand the artificial intelligence behind HAL. We see it on display every day in our reliance on automation, AI, and algorithms, in flying airplanes, and soon in our self-driving cars. It’s the full blossoming of the promised brave new world.

Jeff Schechtman: But is there anything we should or could do about it? Do we have any control left, or do we just surrender to the algorithm? To talk about this, I’m joined by Kartik Hosanagar. He is the John C. Howard Professor of Technology and Digital Business, and a professor of Marketing, at the Wharton School. He’s the co-founder of four different ventures, and his writing has appeared in numerous publications, including Forbes and the Harvard Business Review. He’s the author of the book A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives, and he is co-host of the SiriusXM show The Digital Hour. He earned his PhD in management science and information systems at Carnegie Mellon. And it is my pleasure to welcome Kartik Hosanagar here to the WhoWhatWhy podcast. Kartik, thanks so much for joining us.

Kartik Hosanagar: Jeff, thanks for having me.

Jeff Schechtman: When we think about algorithms, we think about them in kind of these scientific terms, that somehow they are neutral in so many respects. But as you point out, they reflect the biases and the inherent proclivities both of the individual or individuals creating them, and of society.

Kartik Hosanagar: That’s right. I mean, I think we think of a lot of these algorithms, and machines in general, as being objective, rational decision-making engines. And in many ways they are, but we’re also starting to see many examples where these algorithms are prone to the kinds of biases and limitations that we see in human decision making. For example, there was a study done by ProPublica of some of the algorithms used in courtrooms in Florida. And they found that these algorithms were being used by judges and parole officers to make sentencing and parole decisions, because these algorithms essentially predicted the likelihood that a criminal will re-offend, and would guide these decision makers on what kind of sentences to give the defendants.

Kartik Hosanagar: It turns out from the study they found that the algorithm had a race bias, and it was twice as likely to mislabel a black defendant as being likely to commit a crime. Then last year there were examples of resume screening algorithms having a gender bias. And so now we’re starting to see that these algorithms also have some of the same kinds of limitations that we sometimes see in human beings. And so clearly they provide a lot of value, and we should use them. But I think we also need some checks and balances in place, because they are prone to biases and misjudgments as well.

Jeff Schechtman: I guess the broader question is, is there a way to really tease that out of the algorithm? Can we create algorithms that are not biased?

Kartik Hosanagar: Yeah, so I think that’s an interesting question. To tease out the bias, we need to understand where that bias is coming from. And there are many ways in which these kinds of biases can come. But I think we can explain algorithm behavior in much the same way that we explain human behavior. With human behavior we think of nature and nurture as being two drivers of our actions and our behavior. Nature meaning our genetic code and nurture meaning our environment, our friends and our other influences.

Kartik Hosanagar: And algorithms are similar. They have nature and nurture as well. Nature is essentially their code, the logic that a human programmer puts into the algorithm. Nurture is the data from which they learn. And it’s highly unlikely, when we talk about biases, that an engineer is coding in those biases. It’s not like there’s a line of code that says, if gender is female, then don’t invite them for a job interview. It’s actually often in the nurture, because increasingly we are moving towards algorithms that are learning more and more from data. So for example, if we wanted to create an algorithm to drive a car, instead of writing every rule into the algorithm, we might use a data set that has videos of thousands of people driving, and we might have the system learn the patterns in there. So it looks at the videos, and it learns when to brake, when to change lanes, and so on.

Kartik Hosanagar: If we want to screen job applications, we again have data on hundreds of thousands of people who might have applied for a job in an organization. Who was interviewed, who actually got the job, who was promoted at the workplace. And we say, learn from this data. And now what happens is, if there are biases in the data, the algorithm picks them up. So if there was a gender bias in the organization, and women were less likely to get the jobs, or less likely to get promoted, the algorithm learns that, and almost institutionalizes that bias. And that’s where the problem comes in. And that’s why I like to say, sometimes when we see rogue algorithmic behavior, that it’s kind of because our algorithms are hanging out with the wrong data.
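
To make that mechanism concrete, here is a minimal sketch of the kind of thing Hosanagar is describing: a screening model trained on deliberately biased, synthetic hiring records picks the bias up on its own. The data, the feature names, and the choice of a logistic regression model are illustrative assumptions, not anything taken from the interview.

```python
# A minimal sketch (not from the interview) of how a screening model trained on
# biased historical hiring data picks that bias up. The data is synthetic and
# the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
years_experience = rng.normal(5, 2, n)
is_female = rng.integers(0, 2, n)

# Historical outcome: equally qualified women were hired less often (the bias).
hired = (years_experience + rng.normal(0, 1, n) - 1.5 * is_female) > 4.5

X = np.column_stack([years_experience, is_female])
model = LogisticRegression().fit(X, hired)

# For two candidates with identical experience, the trained model now gives
# the woman a noticeably lower predicted chance of being hired.
print(model.predict_proba([[5.0, 0], [5.0, 1]])[:, 1])
```

No single line of code says “prefer men”; the pattern is inherited entirely from the historical outcomes the model was trained on, which is the “wrong data” problem Hosanagar describes.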

Jeff Schechtman: When we create larger and larger and larger data sets, is that a way to get past this problem?

Kartik Hosanagar: Well, having more data is clearly helpful in an era where algorithms are learning from data, because more data implies that we’re more likely to see unique cases, and more likely to learn about situations that are rarely encountered but do come up. Data is helpful. But having more and more data alone will not solve these problems, like the kinds I mentioned: gender bias, race bias, or, you know, if we talk about, let’s say, Facebook’s news feed algorithm circulating fake news stories. Some of that requires additional checks and balances. And engineers usually have, you know, very direct measures and tests of the predictive accuracy of these algorithms, but rarely go beyond technical measures to socially important measures, like: is the algorithm biased in some way? Is it violating privacy in some way? So we need to think about auditing algorithms in a more holistic way.

Jeff Schechtman: You mentioned Facebook, for example, and that’s just one. But one of the things that we see it does, and you talk about this, is it creates these echo chambers.

Kartik Hosanagar: Yes. And we have all witnessed this, and it did get a lot of airtime around the 2016 presidential elections, where a lot of people expressed this concern that we’re each essentially seeing more and more of the views that we already hold, and not the other perspectives, and that’s causing society to become more polarized. And one reason, excuse me, one reason that was happening is that the algorithms try to personalize the media they curate for us. They’re trying to find for us more and more of the kinds of content that we already consume. And that puts us in an echo chamber. That makes it less likely that we see other perspectives, less likely that we can appreciate other viewpoints, and that makes society more polarized.

Kartik Hosanagar: And so we need to make sure that when algorithms are designed, we’re not looking just at whether this is content the user will engage with. We have to think about other measures, like: is there a social contribution, and what is the impact of this algorithm on society?
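
As a rough illustration of that trade-off, here is a toy sketch of a feed ranked purely by predicted engagement versus one that also rewards viewpoint diversity. The stories, scores, and weighting are invented for illustration; this is not a description of any real ranking system.

```python
# A minimal sketch (not from the interview) contrasting engagement-only ranking
# with a ranking that also rewards viewpoint diversity. All items and scores
# are hypothetical toy data.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    predicted_engagement: float  # how likely the user is to click/read
    viewpoint_distance: float    # 0 = echoes the user's past reading, 1 = very different

stories = [
    Story("Story echoing the user's views", 0.90, 0.05),
    Story("Neutral explainer", 0.60, 0.50),
    Story("Opposing-viewpoint analysis", 0.40, 0.95),
]

# Engagement-only feed: the echo chamber.
feed_engagement = sorted(stories, key=lambda s: s.predicted_engagement, reverse=True)

# Feed that trades some engagement for exposure to other perspectives.
def blended_score(s, diversity_weight=0.6):
    return (1 - diversity_weight) * s.predicted_engagement + diversity_weight * s.viewpoint_distance

feed_blended = sorted(stories, key=blended_score, reverse=True)

print([s.title for s in feed_engagement])
print([s.title for s in feed_blended])
```

The only difference between the two feeds is what the ranking function is asked to optimize, which is the design choice Hosanagar is pointing at.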

Jeff Schechtman: Is there a danger that the algorithms and the processes by which they work continue to become more and more complex, and therefore so much harder to understand, not only for the average person, but even for the people that are creating them?

Kartik Hosanagar: I think we’re already at a point where algorithms are certainly getting so complex that users cannot understand them, and increasingly to a point where even the people creating them struggle with it. And let me mention a couple of examples. There was the crash of the Ethiopian Airlines flight; there was a problem with the autopilot algorithm. And one of the things that’s happening now is that these systems are so complex that highly trained pilots also don’t know sometimes how to react fast enough when the algorithm fails. They have been used, for a long period of time, to working in a setting where the algorithm takes complete control, and now they don’t know how to take back control in some senses.

Kartik Hosanagar: But going back to the algorithms themselves, we are also moving towards a class of algorithms, for example, a type of machine learning algorithm known as neural networks, and these are modeled on the human brain. And they’re very complex. And essentially what’s happening in these algorithms is they take lots of data, and they learn how to make decisions like humans have made. But the engineer cannot explain why the decision was the way it was, because the engineer did not code the rules; the engineer just gave massive amounts of data, and the algorithm picked up the pattern. So why was a person denied the loan? Why did the algorithm suggest this person should get, you know, a higher sentence than another person? The engineer might not even know that, because there’s lots of data, and the algorithm is almost a black box, even to the engineer. And so that becomes a problem, and that’s why I’ve argued for things like transparency as a way to address some of the challenges with highly opaque algorithms.

Jeff Schechtman: I mean, I guess the question is, how do you create transparency in a world that is getting so complex that, as you say, even the engineers who create these algorithms don’t fully understand them?

Kartik Hosanagar: Yeah, so with transparency, I think we can discuss that concept in terms of who the transparency is for. And we can talk about transparency for end users and transparency for auditors differently. Transparency for end users actually is very basic. It’s not like an end user wants to know the inner details of every algorithm, you know, we use. But we would actually benefit from knowing what’s going on at a high level. For example, what kinds of data are being used by the algorithms to make decisions? Even, for example, just knowing that an algorithm made the decision. Often we don’t even know that. You apply for a loan, and the loan was rejected, or you apply for a job, and you did not get invited. Was an algorithm used?

Kartik Hosanagar: And secondly, what kinds of data did the algorithm use? Knowing, for example, that the algorithm not only used the data I provided in my job application, but also looked at my social media posts. That’s helpful information. Knowing some of the critical variables that the algorithm focuses on. For example, for credit approval, here are the five most important variables the algorithm looks at. Those kinds of explanations can be provided even today.

Jeff Schechtman: Mm-hmm (affirmative).

Kartik Hosanagar: Now as far as auditors are concerned, that audit has to be more detailed than for the end user. I’ve argued that companies should be having their algorithms audited, especially when they’re deploying these algorithms in socially important settings. And that audit could be done by an internal team, but that team has to be independent of the team that developed the algorithm, and that will help ensure that, you know, there is some level of audit done. Those audits will also look into things like explanations regarding decisions. Even though engineers cannot today explain decisions, there’s a whole field of research in artificial intelligence known as interpretable machine learning, or explainable AI, and it’s focused on how you get explanations out of black box systems. And I think there is hope for progress, and I think we should require firms to look more carefully into those kinds of explanations. Otherwise, you know, we will lose control of transparency completely.
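
One concrete example of what those explainable-AI techniques can produce is permutation importance: shuffle each input to a trained “black box” model and measure how much its accuracy drops. The sketch below uses synthetic loan data, hypothetical feature names, and a random forest purely to show the idea; it is not drawn from any system mentioned in the interview.

```python
# A minimal sketch (not from the interview) of one explainability technique,
# permutation importance. The loan data is synthetic and the feature names
# are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 5_000
income = rng.normal(60, 15, n)
debt_ratio = rng.uniform(0, 1, n)
zip_code_group = rng.integers(0, 5, n)  # a proxy variable an auditor might worry about
approved = (income / 100 - debt_ratio + rng.normal(0, 0.1, n)) > 0

X = np.column_stack([income, debt_ratio, zip_code_group])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, approved)

# Shuffle each column in turn and see how much accuracy degrades: the bigger
# the drop, the more the model relies on that variable.
result = permutation_importance(model, X, approved, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "zip_code_group"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

An output like this is roughly the kind of “five most important variables” explanation discussed above for end users, and a starting point for an auditor checking whether a sensitive or proxy variable is driving decisions.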

Jeff Schechtman: Are we looking at algorithms that in fact audit the other algorithms?

Kartik Hosanagar: Absolutely. That’s one of the areas I’m personally pushing forward, and I’m working on research related to that. I’m also advocating this to companies, because, you know, the algorithms are constantly evolving with data. As they get more data, they become more sophisticated. It also means that, you know, the audit process has to be somewhat continuous, because the algorithm is constantly evolving as it gets more data. And such continuous audits will be hard and expensive for humans to accomplish. So you might need algorithms to monitor algorithms as well. And I think that will be part of the solution.
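
In its simplest form, an “algorithm monitoring an algorithm” could be a script that checks each batch of a deployed model’s decisions for a large gap in outcomes between groups and flags it for human review. The metric, threshold, and records below are hypothetical; this is a sketch of the idea, not a description of any real audit system.

```python
# A minimal sketch (not from the interview) of an automated, continuous audit:
# each batch of live decisions is checked for a large gap in approval rates
# between groups and flagged for human review if the gap crosses a threshold.
def audit_batch(decisions, threshold=0.2):
    """decisions: list of (group, approved) pairs from the deployed model."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [approved for g, approved in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "flag_for_review": gap > threshold}

# Hypothetical batch where group B is approved far less often than group A.
batch = ([("A", True)] * 70 + [("A", False)] * 30 +
         [("B", True)] * 40 + [("B", False)] * 60)
print(audit_batch(batch))  # gap of 0.3 exceeds the threshold, so it is flagged
```

Because the check runs on every batch, it keeps working as the underlying model keeps learning from new data, which is the continuous-audit point made above.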

Jeff Schechtman: Is there a field that looks at the idea of programming into these algorithms a more human approach? That there is a way to create algorithms that reflect more humanity, in a way, I suppose?

Kartik Hosanagar: Well, today I’m going to say that there isn’t one field that’s focused on that. There are a few fields that are looking at related ideas, but not exactly what you’re saying. So there’s the whole idea of human-computer interaction, which is looking at how people use the technology and trying to modify the software or the machine interface so that it’s based on how users actually use it in practice.

Kartik Hosanagar: But I think there’s a greater need for a study of how humans and algorithms can work together. And that will include understanding not just the engineering details, but also understanding social sciences, understanding the psychology of decision makers. Understanding what drives trust in these machines. Understanding what are some socially important outcomes of interest, so that we audit these algorithms against those socially important outcomes, like fairness and so on. I think that field hasn’t yet emerged. That’s an area where I work, and it’s at the intersection of multiple disciplines. But it is going to be extremely important as we roll out algorithms in more and more important settings going forward.

Jeff Schechtman: It really relates to this whole idea you touched on a moment ago of decision making, and really how human decisions are made, how machine decisions are made, and how the two, as you say, can work together.

Kartik Hosanagar: That’s actually right. We have a whole field in psychology that’s focused on human decisions, called Judgment and Decision Making. And I think we need to think about decision making with algorithms in a similar vein, especially keeping in mind that these algorithms are being deployed and used by humans, and for humans. And so we need to bring in those perspectives into the design. We need to also not only make sure the design is informed by knowledge of the human users, but also correspondingly the knowledge of the design is suitably conveyed to human users. I think we have to figure out how to train users to use these systems better, and help ensure the algorithm serves them better.

Kartik Hosanagar: Think of, for example, ads. You know, today in some places we have this ability to click and say, don’t show me ads like this. Or with music recommendations, we can say, “Hey, show me music like this, but don’t show me music like that.” And that’s a way for us to give feedback. And I think we should think about our interactions with algorithms similarly in many other settings, where, you know, we don’t eliminate the human from the loop completely. The goal, I don’t think, is to be completely autonomous and completely eliminate humans. I think the goal is to figure out how we complement the human, and how we make sure the human and the algorithm are working very efficiently together. And so that means keeping the human in the loop for the things where the human can be helpful.

Jeff Schechtman: And I guess the biggest, or one of the biggest obstacles to that is that human interaction is pretty constant. Humans aren’t changing that much. But the algorithms, the way they’re created, the technological side of it, continues to change, continues to evolve. And trying to keep those things in sync seems to be the greatest challenge.

Kartik Hosanagar: It’s a big challenge. It’s a big challenge for users, it’s a big challenge for regulators. By the time we’ve caught up on the technology, that technology has evolved even further. And so it’s a hard challenge, and I think, you know, at some level we need to make sure algorithm and technology literacy is part of the school curriculum, and it’s sort of basic knowledge that people have. Because we are going to use these systems so much that we have to understand them at a deeper level, and we can’t be passive about it anymore, because the consequences are very significant, whether we’re talking about democracy and algorithms curating news stories for citizens, or we’re talking about use by doctors in medicine, or use in the courtroom, and so on. So I think we need to rethink how we do education, we have to rethink how we do regulation, and firms also need to stand up and do a better job of auditing and taking responsibility as well.

Jeff Schechtman: Kartik Hosanagar, thank you so much for spending time with us.

Kartik Hosanagar: Jeff, thanks for having me on your show.

Jeff Schechtman: Thank you. And thank you for listening and for joining us here on Radio WhoWhatWhy. I hope you join us next week for another Radio WhoWhatWhy podcast. I’m Jeff Schechtman, and if you like this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes.

You can also support this podcast and all the work we do by going to whowhatwhy.org/donate.


Related front page panorama photo credit: Adapted by WhoWhatWhy from Viking and Anna Rchie / Flickr (CC BY-NC 2.0).
