

A look at modern data breaches — and the misguided laws and technology that are failing a society for which online data is essential.

All your personal data is online somewhere. If you didn’t put your financial information online, your bank did. If you don’t think your health data is already in the cloud, ask your health care providers why they put it there. 

Today, living a full life requires that your data be online, and your mobile phone will soon be the only way to travel, arrange for health services, or transact business.

Meanwhile, the number of data breaches increases every year. The embarrassing breaches at Yahoo, Facebook, Marriott, First American Financial, and Equifax are telling examples.  

On this week’s WhoWhatWhy podcast we talk with Northeastern University professor of law and computer science Woodrow Hartzog, whose latest book, co-authored with Daniel Solove, is Breached! Why Data Security Law Fails and How to Improve It.

He explains how data security, privacy, and cybersecurity are very different things that need different legal and technological approaches.

Hartzog contends that humans are the weakest link in the data security chain. Because most legislation and technologies to protect data are designed with an inadequate understanding of human behavior, he argues that we need a whole new legal vision of data security — one that holds all participants accountable, legally limits the data that companies gather and store, and provides a roadmap for how our laws must change to reflect today’s reality. 

Oh yes — he also makes a case against biometrics and against cryptocurrency, which he says is actually contributing to ransomware attacks.

Apple Podcasts | Google Podcasts | RSS | MP3


Full Text Transcript:

(As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to resource constraints, we are not always able to proofread them as closely as we would like, and we hope that you will excuse any errors that slipped through.)

Jeff Schechtman: Welcome to the WhoWhatWhy podcast. I’m your host, Jeff Schechtman. In an era where our financial, medical, and everyday data lives online, when we are, in fact, required to access and use it via mobile just in order to live our lives, the opportunity for that data to be breached or hacked creates new levels of personal anxiety. The data breaches at Yahoo!, Facebook, Marriott, First American Financial, and Equifax, to name a few, are telling examples.

So where does the fault lie and what, if anything, are the solutions? Is the fault in our own online behavior? Is the solution in legislation that can get companies to do more to protect us? Is it with technologists who have to deal with hackers who are always one step ahead, or does the reality of international hacking and technological sophistication on all sides simply make this something we have to live with as an acceptable risk of the modern age? We’re going to look at all of this today with my guest, Woody Hartzog.

Woody Hartzog is a professor of law and computer science at Northeastern University School of Law and the Khoury College of Computer Sciences. He is the author of the previous book Privacy’s Blueprint, and his research on privacy, media, and robotics has been published in both scholarly and popular publications. He’s testified multiple times before Congress and has appeared in numerous media outlets, including NPR, the BBC, and The Wall Street Journal. He’s also the author of the new book, Breached! Why Data Security Law Fails and How to Improve It. It is my pleasure to welcome Woody Hartzog here to the WhoWhatWhy podcast. Woody, thanks so much for joining us.

Woody Hartzog: Thank you so much for having me.

Jeff: One of the things that certainly seems to be true, or certainly feels that way, is that, every year, the number of hacks, the number of these breaches, seems to grow geometrically. Talk about that first.

Woody: Sure. So when Dan Solove and I first started to research this book, we wanted to create a history of data breaches. It turns out that “data breach” is actually a relatively new term that we use to describe when unauthorized parties access our personal information, and it didn’t really start coming up as a normal term until the ’90s and the 2000s. And one of the things that we noticed when we started doing some research is that, starting in about 2007, people began calling the year “The Year of the Data Breach.” 2007 was the year of the data breach, and then 2008 was the year of the data breach.

Every year it would go on and they would say, “No, actually, 2012 was the year of the data breach,” and then, “We were wrong. 2017 is the year of the data breach.” And it turns out that every year since probably around 2006 or 2007, someone has looked back at all the breaches that occurred and called it “The Year of the Breach.” And it’s only getting worse; the line is only going up. And so what Dan and I wanted to look into is, why are we having this one-way escalation of breaches, only getting worse, while our laws are not changing?

And we wondered, how can we put that together? Because it’s not just the number of breaches that are occurring, but the size of those breaches. While in 2004 or 2005 a data breach of 1,000, maybe 2,000, personal records would have been front-page news, now businesses can suffer breaches of millions of records and it barely makes the back page. We’ve almost become inured to it, and we think that that is also a problem. And a lot of the reason why is that we’ve really failed to learn the lessons that all these breaches have to teach us.

Jeff: And to what extent have we become almost numb to them at this point? To what extent does that have an impact on the way we approach them, the responsibility that individuals take for them, and the way in which legislators and those who shape public policy look at it?

Woody: That’s such a great question. So one of the ways in which our repeated exposure to breaches becomes problematic is that we’ve learned to look only for the harms that are really visible. Most people listening have probably, at some point in their lives (and if you haven’t, you’re one of the very fortunate few), come home one day and opened up a letter that says, “We’re sorry to inform you that we suffered a breach,” right?

Someone lost a laptop or someone accessed credentials that they shouldn’t have. And they offer things like free credit monitoring or they say, “Change your passwords,” or something like that. And we maybe make those changes, maybe we take the credit monitoring, which is, I think, of very limited effectiveness. And then we’ve learned to say, “Okay, well, no harm, no foul,” right? And what we don’t see in the background is the way in which our personal information is leveraged to breach other people, or used in the future to cause a harm that we wouldn’t trace directly back to the breach.

And courts also struggle to recognize that. Courts are taking a no-harm-no-foul approach too, but what they’re missing is that the harm is either delayed significantly in time or imposed upon someone else, or it’s some other attenuated kind of harm that’s still very much worth addressing in our laws. But because we look around and say, “Okay, they lost my information in the Target breach or whatever and I seem fine,” then it must not be a problem. But it very much is.

Jeff: And that is another part of this: the fact that it is reported without follow-up. There is never a story about the harm that’s caused, or, occasionally, there’ll be the one individual whose information got used. But, generally, there’s this sense that nothing came of this, so why should I worry about it?

Woody: Yes, exactly. Right. One of the things that we try to talk about in the book is the misguided way in which law and industry try to respond to these data breaches, where you get the letter and they say, “Hey, a bad thing happened. Maybe just watch out for some stuff.” And it’s very individually focused as well. It’s like, “Watch out for your own accounts and your own things,” and they don’t then tell you more about what to do, right?

They give you the credit monitoring, but that’s a very limited sort of thing, and then it becomes out of sight and out of mind. And what you don’t think about is the fact that maybe the breach contained a very particular kind of record that could be used, for example, to send you an email in the future disguised as someone that was in your contact address book, right?

And you might think it was a real link and you would click on it, right? And that causes an additional breach years from now. But, of course, that’s far removed from this approach of breach notification, which is one of the main ways in which the law tries to respond to data breaches, and it does have some good components. But by and large, breach notification, even if it’s necessary, is far from sufficient.

Jeff: The other part of this, not to put too fine a point on it, is that legislators and those that shape public policy respond to public pressure. And to the extent that our response to this has become almost routine, there seems to be less of an incentive for legislatures and public policymakers to really do the things that need to be done.

Woody: I think that’s exactly right. Even though (and I use that word carefully) Dan and I say we’re in the middle of an epidemic of breaches, one that is only getting worse, because it is so dispersed and because the harms are hidden from view or attenuated, we really haven’t gotten the kind of public pressure that you would hope would be necessary to encourage lawmakers to respond at a structural, holistic level.

Dan and I make the argument in the book that we’re thinking about data security breaches the wrong way, that we’re really focused on this individual breach and what’s wrong with this one person’s data being exposed, and what’s the worst that could happen. And we draw upon the work of other scholars who have proposed what they call a public health approach to data security law.

The idea is that we need to stop focusing on individuals, like, “Did you get attacked or hacked or breached?” and think about the overall health of information systems. We even use some of the same language; viruses come up in public health law as well as in data security law. And, of course, public health scholars will tell you it’s also very hard at the structural level to convince lawmakers to make these big sweeping changes.

Jeff: And part of this is that there are differences, as you write about them, between the way we look at cybersecurity, the way we look at data security in general, and the way in which we look at privacy. And those are really different things.

Woody: That’s right. So the law tends to draw this line between cybersecurity and privacy. And for many reasons, that’s a good idea, right? So in the book, we actually distinguish the concept of cybersecurity, which has everything to do with literally securing all of the information systems, including the power grid and machine-to-machine IoT devices for supply chain, right? There are all sorts of information technologies that need to be secured under the realm of cybersecurity.

It also involves going after hackers and nation-state cybersecurity concerns, but data security specifically lives in a weird place between cybersecurity and the law of privacy. And the reason why is because it has to do with protecting our personal information, which is typically the area of privacy law. And what has happened is that because there’s this hard distinction between cybersecurity and privacy, data security law has evolved on its own without incorporating the best wisdom of cybersecurity and privacy law.

And what we try to do in the book is make an argument that cybersecurity professionals and privacy professionals should give each other high fives more often and share their wisdom, and that lawmakers should consider more holistic approaches that actually combine the wisdom of privacy to data security laws. For example, one of the most important data privacy rules is the concept of data minimization. And data minimization is very simply the idea that companies should not collect more information than they need to provide a particular kind of service.

And so there’s a lot of opportunities to collect personal information, but privacy law holds that maybe companies should hold off on collecting everything just because they can. And we think that that’s also a very important data security component. And the reason why is because information that doesn’t exist cannot be breached. And so we think that these are mutually-reinforcing goals in many different ways. And we make the case to better incorporate security and privacy into a more healthy system.

Jeff: How important is it for those who shape policy in this area to understand the technology that is inherent in this? One of the things you talk about in terms of these breaches is the way into these systems, through the front door, through the back door, et cetera, and that there’s a technological component. And without an understanding of that and of the way humans interact with it, it’s hard to shape legislation.

Woody: I think that’s exactly right. And you really hit the point when we talk about the way in which humans interact with technology. The understanding that Dan and I argue for is actually even more about human nature than it is about technology. When many people hear about so-and-so getting hacked, or read about it, often what comes to mind is the image from popular television shows of someone in a hoodie sitting in front of a screen who types five strokes on a keyboard and says, “I’m in,” right?

And that’s the vision that we have of the modern hacker and data security law. But one of the things that we try to do in the book is demystify some of that and show that, really, where a lot goes wrong is that we have rules that are not built around the way in which people react to dealing with information technologies, right? And we all make mistakes and we all make errors.

I don’t know about you, but I certainly have a lot going on. And so when lawmakers create rules built around unrealistic notions of what humans are capable of, things go poorly for data security. And let me give you an example of that. Many of you have probably had the experience of having to change your password regularly for your login, maybe at work or for some other service that you use.

And there are all these complex rules about, “Oh, you have to capitalize. Use a capital letter and an exclamation point and you can’t repeat it. You have to change it all the time.” Those are well-intentioned rules meant to protect against a very specific kind of attack, which is, essentially, what’s known as the brute-force attack, which is just hackers trying to guess your password through repeated attempts.

But the problem is that when you make those rules without thinking about what people are actually going to do with them, you actually create vulnerabilities, because one of the things that we saw happen regularly when these new password requirements were put into place is that people just started writing their passwords down on a piece of paper because they can’t remember them.

Who among us could remember all of the different passwords that we’re asked to create for our services? And so we either use the same password across all the services, which is not good, or, and we saw this happen a few times, people write it down on a Post-it note and stick it to the side of the monitor, which is how certain breaches have occurred. And that resulted not necessarily from a lack of understanding about the way that the technology works.

Certainly, a fundamental understanding of how the technology works is great, but the failure here was more about not considering how people would respond to these constraints or obligations put on them when they’re just trying to live their lives, right? I don’t know about you, but I feel like all of us are just barely holding it together on a day-to-day basis. The last thing we need is to have to memorize 45 different passwords.
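(Editor’s note: a minimal, back-of-the-envelope Python sketch, not from the book, of the brute-force attack Hartzog describes. The guessing rate is an assumed figure for illustration only; the point is that password length does far more to slow a brute-force attacker than the composition rules that push people toward sticky notes.)

def search_space(charset_size: int, length: int) -> int:
    """Worst-case number of candidate passwords a brute-force attacker must try."""
    return charset_size ** length

# Assumed guessing rate for an offline attack on weakly hashed passwords
# (purely illustrative; real rates vary enormously with the hashing scheme).
GUESSES_PER_SECOND = 10_000_000_000

for label, charset, length in [
    ("8 lowercase letters", 26, 8),
    ("8 characters with caps, digits, symbols", 94, 8),
    ("16 lowercase letters", 26, 16),
]:
    seconds = search_space(charset, length) / GUESSES_PER_SECOND
    print(f"{label}: ~{seconds:,.0f} seconds to exhaust")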

Jeff: Is biometrics an answer to any of this?

Woody: So I will say that in certain instances, I think that biometrics can help increase the security of systems. For example, using the thumb reader on people’s phones has probably been good, not necessarily because of the biometric itself, though that’s helpful, but because it adds an additional factor of authentication. You may have heard the term “two-factor authentication,” and we argue for this a lot in the book. We say that this is actually a very good intervention.

And just to explain it very briefly, two-factor authentication means that to log into a particular kind of service, or to authenticate someone’s identity, you have to have more than just one thing. You have to have two things, right? And the one thing we usually have is something that we know, typically a password. But the additional factor is usually something that we have, where we get a text, for example, or something that we are, like a fingerprint. That could be a second factor of authentication.

Now, I want to be a little judicious when I say this because I also have been very vocal about the pernicious creep of facial recognition technology. I’m also worried about normalizing facial recognition because I think it’s actually a very dangerous technology, but I do see a place for two-factor authentication and potentially use of thumbprints, for example, as a way to help keep our information safe.
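(Editor’s note: for the curious, here is a small illustrative Python sketch, not from the book, of how the “something you have” factor commonly works in practice: a time-based one-time code, of the kind an authenticator app displays, derived from a secret shared between your phone and the service. The secret shown here is a made-up example value.)

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password in the style of RFC 6238."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # changes every 30 seconds
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# The server and the user's phone both hold the shared secret; typing the
# current code proves possession of the phone in addition to the password.
print(totp("JBSWY3DPEHPK3PXP"))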

Jeff: Talk about your concerns with respect to facial recognition.

Woody: So I think that facial recognition (this is a little bit beyond the scope of the book, but something I’ve worked on in some of my other research) is a very dangerous technology that offers, in my opinion, relatively limited benefits that are really outweighed by the dangers of surveillance and the chilling effects that can come with being watched all the time. We know that facial recognition can lead to harassment, and that it’s an easy sort of thing to plug into existing systems.

And so a lot of people will be using it very quickly. There’s a lot of very unreliable science being used to tout the benefits of facial recognition, like the companies that say they can use facial recognition in job interviews, for example, over Skype or Zoom to determine whether you’re a good job applicant based on whether you frown at certain questions or something like that. And so I think that it’s incredibly dangerous and that we should be really skeptical of building it into existing systems.

Jeff: How much of the concern goes to the software and the hardware in some cases that companies and individuals are using today and how much of the solution to any of this lies in both the software and the hardware that we use?

Woody: Oh, that’s a great question. So I think that any solution to the data breach epidemic that we’re talking about is going to have to draw upon both technological protections as well as legal protections. I do know that there are some very interesting interventions being done at the technological level. And this is actually, again, where good privacy engineering can actually help benefit data security. A really good example of that is the concept of on-device processing.

So as computers get a lot faster and our storage becomes larger, they’re able to take on a lot of processing that typically is done on other people’s computers, which is another word for the cloud, or servers, or what they might call server-side. And keeping information on the phone is actually a really good way of protecting our data, because if some remote company’s servers get hacked into, that information isn’t there. In fact, the biometric information for lots of phones, like your face or your thumbprint, is stored on your phone.

And so if that phone company were to get hacked, you wouldn’t have to worry about your faceprints being lost to the cloud, which, by the way, is really important. One of the reasons that Dan and I actually caution against relying too much upon biometrics for security purposes is that they don’t have the benefits of passwords, which is that if it’s compromised, it’s easy to generate another one.

If your face or your faceprint or your thumbprint get compromised in a data breach, it’s not as though we live in some sort of science-fiction world where we can easily replace our eyes or our fingerprints, right? That’s just compromised forever. And so we think that combining legal rules with good technological developments is going to be key to, I think, solving this problem.

Jeff: Does encryption go any way toward solving this problem?

Woody: Yes. So we argue that encryption is an incredibly important tool for lawmakers to rely upon to help secure our data. And just to explain encryption really quickly for those that don’t know, it’s essentially really fancy math that renders legible characters illegible. It turns it into unreadable gobbledygook of symbols and numbers and letters. And one of the things that we really argue for in the book is for lawmakers to quit trying to break encryption because that’s really important for data security.

Part of the data security debate needs to be about preserving encryption because, for years, governments have at least occasionally made the argument that they need what they call a backdoor into encryption, or a golden key. They make the argument, “Dear technology companies that create encryption, we would like for you to give us backdoor access to all this encrypted data.” And the reason they want that, of course, is that the encrypted data could be evidence of wrongdoing. It could be evidence of a crime, and very important to catch and prosecute wrongdoers.

And then they make the promise, “Just give us the golden key and we won’t share it with anybody else and you can trust us with it.” But the problem with that is that you can’t just create a technological system that only keeps the bad guys out and lets the good guys in, right? That’s just not how it works. And there’s been an ongoing debate about that. And we incorporate that into the book and say part of how to improve data security is to, once and for all, make a commitment to not weaken encryption because it really is important.
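(Editor’s note: a minimal illustration, not from the book, of the kind of symmetric encryption being described, using the third-party Python “cryptography” package. Anyone holding the key can turn the gobbledygook back into the original text; anyone without it cannot, which is why a mandated golden key would weaken security for everyone.)

# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # the secret; whoever holds it can decrypt
f = Fernet(key)

token = f.encrypt(b"sample medical record: blood type O+")
print(token)               # unreadable gobbledygook of letters, numbers, symbols
print(f.decrypt(token))    # the original text again, but only with the key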

Jeff: To what extent has this become part of the discussion with respect to crypto and blockchain and that concern about security? And to the extent that that is something that we’re dealing with today, does that have more practical benefit to everyone?

Woody: Yes. So, actually, I think it’s quite the opposite, I’m sad to say. Cryptocurrency has probably been an unfortunate development for data security and cybersecurity purposes. And the reason why is that it is driving the rise of what’s known as ransomware, whereby criminals break into and then encrypt entire folders or whole computer systems, denying the owners of that system access to it unless they pay some kind of ransom.

This is why it’s called ransomware. And that ransom is typically paid in some form of cryptocurrency, which is the chosen form of currency for criminals and fraudsters precisely because it is relatively difficult to trace and relatively easy to exchange into different kinds of currency. And so we talk a fair bit about ransomware in the book. And our ultimate view is that, unfortunately, the rise of cryptocurrency has probably contributed to the rise in breaches and other compromises of our personal information that we’ve seen.

Jeff: Much of the law that exists today that relates to these breaches really, as you talk about, comes into play after a breach has happened. What do we need to do in terms of specific policy and laws at this point in your view?

Woody: So the main thesis of our book is that in order to improve the law and policy of data security, we actually need to get beyond the breach. And the reason why we say that is that if you look at the existing data security laws right now, they almost all are focused on this very particular moment when some entity is holding the hot potato, the personal information, and gets breached, and the law springs into action.

It says, “Hey, you breached entity. You have to send out the breach notification and you’re going to get sued. And there’s going to be an action taken against you for failure to protect the personal information.” And all of this, we argue, is well and good, but it’s myopic. It really misses the bigger picture. And so there are really four things that we argue lawmakers should focus on to put us in a better state.

The first thing they should focus on is distributing responsibility among the actors that create data security risks. It takes a village to produce a data breach. And in the book, we talk about lots of different actors: designers of software, those that create insecure hardware, those that create systems that amplify data security risks. We know that ad networks have often acted as distributors of malware, even if unintentionally.

And so the law needs to take a broader view as to who needs to be held accountable for data breaches. The second thing we argue for is that the lawmakers should look to reduce the harm of inevitable data breaches. So one of the things that we learned while researching is that you can do everything right as a company and you can still get breached. If a hacker has enough resources and incentive, then they can still get in even if you do gold-standard data security.

And so we need to start focusing on systems that will blunt the impact of those breaches. And let me give a specific example just to make this more clear. We talk about in the book, the worst password ever created. And when I talk about this with my students, I say, “What do you think is the worst password ever created?” And I always get some sort of comical answers, something along the lines of, “Oh, 1-2-3-4-5.”

Jeff: 1-2-3-4, right. [chuckles]

Woody: Right, exactly. Right. Yes, exactly. And all of that is true. But, actually, the worst password ever created is Social Security numbers. Somewhere along the way, entities decided to start using Social Security numbers as authentication mechanisms, as passwords. And that has actually been really bad when those numbers ultimately get breached. And the reason why is because one of the virtues of passwords is that they’re disposable, right?

We can just create a new one and it’s easy, but Social Security numbers are, in fact, very hard to change. And so that makes a data breach containing that information worse. Another thing that we focus on in the book is how easy it is to extend credit based on almost no information at all. We talk about how this guy once got his dog a credit card, right? And that also makes data breaches worse, because criminals are then able to take breached information and amplify the harm significantly.

The third thing that we argue lawmakers should do in the book is remove the silos between privacy and security, which I’ve already talked about a little bit. But, specifically, one of the things that we argue for is a robust embrace of the concept of data minimization. This should exist broadly in the United States, meaning there should be a default rule that companies cannot collect more information than what they need to accomplish the stated purpose of the service they provide.

And then, finally, we need to accommodate rather than deny human behavior. And this has to do with making sure that when we do create rules, we fully consider the ways in which humans will respond to them, rather than creating password rules that nobody can actually follow. If the law does all four of these things, then we think it will be focusing on the whole data ecosystem rather than just the one entity that was subjected to the breach in any given scenario.
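(Editor’s note: a minimal Python sketch, not from the book, of what a data-minimization default might look like in application code. The field names and the stated purpose are hypothetical, chosen only to illustrate the rule that data which is never collected or stored cannot be breached.)

# Hypothetical allowlist for a sign-up form: collect only what the stated
# purpose (creating an account and shipping an order) actually requires.
ALLOWED_FIELDS = {"email", "shipping_address", "display_name"}

def minimize(submitted: dict) -> dict:
    """Drop everything the service does not need before it is ever stored."""
    return {k: v for k, v in submitted.items() if k in ALLOWED_FIELDS}

raw = {
    "email": "user@example.com",
    "shipping_address": "1 Main St",
    "display_name": "Pat",
    "date_of_birth": "1980-01-01",   # not needed for this purpose
    "ssn": "000-00-0000",            # never needed; never store it
}
print(minimize(raw))   # only the three allowed fields survive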

Jeff: To bring it back, though, to where we started, in terms of the almost apathy about this and the degree to which it’s at least taken for granted: one of the things that we see in the corporate world is the degree to which this is approached from a risk management perspective, and that going beyond a certain point becomes costly, so it’s a risk that companies are willing to take.

Woody: Oh yes, absolutely. So we think that a couple of things really are necessary to change that state of affairs, because you’re right. A company maybe takes the risk because it can be expensive to implement meaningful data security practices, though we argue it is actually less expensive than some might think, depending upon a couple of factors, including how much personal data you’re collecting, the overall risk you’re creating, and your resources. But there does need to be some increased accountability.

We argue that administrative agencies like the Federal Trade Commission should be not only given more resources to hold companies more accountable, but they should be holding companies accountable before the breach even happens. Because right now, what seems to be the state of affairs is that it makes sense to sit back and be like, “Well, as long as we don’t get breached, we should be fine,” right? And then if you do, then you scramble and you pay the money. It’s like, “Oh, well, we were unlucky.”

But that’s not the way in which we treat restaurants, for example, right? We don’t wait until someone gets salmonella and then send in the health inspector, right? We have a system where we hold all public dining establishments accountable regularly for being up to code, essentially, with sanitation and health. And we think that we need to turn to that sort of system in order to have a better approach. And, of course, we should also note at this point that companies don’t have to be perfect, that we wouldn’t even want perfect security if we had it. And that’s an important thing to remember as well.

I tell my students, there’s really only one perfect way to get data security. And that’s to take the database, the hard drive that has all the data in it, and put it in a wood chipper, right? That’s the way in which you can do it, but, of course, that denies us the benefit of data. So it does have to be an approach that allows companies to absorb a certain amount of risk. But, frankly, the law simply needs to be better about articulating what the tolerable amount of risk is, and then holding companies accountable for staying within it.

Jeff: Does the law have to be structured in a way in which minimization is incentivized?

Woody: Yes, I believe that it does, and I believe that data security can actually help out privacy in this regard. This is where we think the relationship between privacy and security is much closer than many originally theorized. As many of you know, the debate in privacy can be very contentious. Some people want very robust rules. Others say these rules are not necessary.

Congress has yet to create a national privacy law in the United States, much to the dismay of many people, including me. And so we’ve had trouble getting traction, and data minimization, in my opinion, would be a very important part of a national privacy law. However, one of the things that we have seen on Capitol Hill recently is that data security tends to be very much a bipartisan issue. Everyone looks at data security and says, “This is bad. We don’t want hacks. We want our information systems to be safe and secure.” And so there seems to be a little more political will.

And so one of the things that Dan and I have proposed is that instead of trying to make robust nationwide data minimization part of an omnibus privacy law, which may never happen, perhaps the Federal Trade Commission can make rules requiring data minimization from all companies as part of a data security package. And we think that would be in line with what the FTC has already articulated as fair, reasonable, and truthful trade practices. And we think that would be a really good way, a really important step, in moving us in the right direction for data security.

Jeff: And, finally, is it folly to think that the solution to any of this beyond the laws is technological, or is it simply, no matter what has evolved from a technological point of view for software or hardware or for these companies, that it’s just part of a race that the bad guys will always catch up with?

Woody: Yes, so I think it is always part of a race, but that’s a very important race to run. And the better way of thinking about that is probably to think about data security as a process rather than an end state achieved through technology, right? It must always be adaptive to new threats because there will always be new threats. And one of the things that we should really be responsive to is making sure that our approach to the design of information technologies is broad enough to include all of those threats.

So, certainly, things like encryption, things like good password protocols, standard technological cybersecurity approaches (in the law, they call them administrative, physical, and technical safeguards), all of that will remain important. But we need to make sure that we also include things like really confusing user interfaces, or, as I mentioned, systems that use Social Security numbers as authentication mechanisms or grant credit too easily. There are lots of things that contribute to data security risks that are actually non-technological, and we need to focus on those things as well.

And to close, I’ll just say the real narrative that I want to push against is this idea that we as individuals are responsible for protecting our own security. Because while there are things that we can do (protect our passwords, use two-factor authentication, don’t click on suspicious links, and all of those sorts of things that we’ve been trained to do), there’s only so much that people can do. And so we really need a much broader, more human approach to data security, so that we’re not asking too much of either companies and their technological approaches or individuals and the limits of us as humans.

Jeff: Woodrow Hartzog, his book is Breached! Why Data Security Law Fails and How to Improve it. Woody, I thank you so much for spending time with us here on the WhoWhatWhy podcast.

Woody: Oh, it’s such a pleasure. Thank you so much for having me on.

Jeff: Thank you. And thank you for listening and joining us here on the WhoWhatWhy podcast. I hope you join us next week for another radio WhoWhatWhy podcast. I’m Jeff Schechtman. If you liked this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to whowhatwhy.org/donate.

