
Photo credit: Monstera Production / Pexels

Feeling anxious in chaotic times? Embracing life’s unpredictability can lead to resilience and impact. A new framework for thriving in an uncertain world.

In a world that feels increasingly chaotic and beyond our control, how can we find true agency and purpose? 

On this week’s WhoWhatWhy podcast we explore this profound question with Brian Klaas, author of the groundbreaking book Fluke: Chance, Chaos, and Why Everything We Do Matters.

As political upheaval, global crises, and personal challenges upend our lives at a breakneck pace, we often feel anxious and powerless. 

But what if embracing the inherent unpredictability of life is actually the key to resilience and empowerment?

Drawing on a kaleidoscope of disciplines, from social science to evolutionary biology, Klaas challenges our basic assumptions about how the world works. He argues that by accepting our limited control and leaning into the unexpected, we can discover a deeper sense of meaning and agency.

Klaas shares a framework for navigating the chaos of the modern world. Whether you’re grappling with political turmoil or personal problems, he argues, you can harness the power of chance and make a difference, no matter the odds.



Full Text Transcript:

(As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to a constraint of resources, we are not always able to proofread them as closely as we would like and hope that you will excuse any errors that slipped through.)

Jeff Schechtman: Welcome to the WhoWhatWhy podcast. I’m your host, Jeff Schechtman. As we find ourselves in the midst of perhaps the most critical election of modern times, a sense of anxiety and unease pervades all of us. From political upheaval to personal uncertainty, it often feels like the ground is shifting beneath our feet.

We refresh our newsfeeds incessantly. We argue with friends and family. We lie awake at night worrying about the future. It feels like everything is spinning out of control, and our quest for stability and certainty can leave us exhausted and anxious. But what if our attempt to tightly control every aspect of our lives is actually making us less resilient? What if embracing the inherent unpredictability of the world can help us navigate life’s challenges with greater ease, and even a sense of purpose?

This is the provocative argument at the heart of my guest Brian Klaas’s new book, Fluke. Drawing on a wide range of disciplines, from social science and evolutionary biology to chaos theory and philosophy, Brian challenges our assumptions about how the world works, and our place in it. He argues that, while we may have limited control over the outcome of elections, or even the next family crisis, everything we do matters and has an influence, often in ways we can’t predict, or even comprehend.

By accepting this reality, and leaning into the unexpected, we can reduce our anxiety, and find a greater sense of agency and impact. In a political moment that feels so high stakes and unpredictable, where events move at a breakneck pace, and flukes like pandemics, insurrection, and assassinations upend our expectations, Brian’s framework just might be a way to stay grounded and engaged.

With degrees from Oxford, and now a professor of Global Politics at University College London, Brian is not just an academic. He’s a contributing writer for The Atlantic, host of the award-winning Power Corrupts podcast, and a frequent guest in national and international media. His field research spans the globe, and he has advised major politicians and organizations, including NATO and the European Union. It is my pleasure to welcome Brian Klaas here to the WhoWhatWhy podcast. Brian, thanks so much for joining us.

Brian Klaas: Thanks for having me on the show.

Jeff: Well, it is a delight to have you here. In many ways, everything we do matters, but in some ways, everything we do doesn’t matter. Talk about that.

Brian: Well, I think everything we do does matter. And I mean that quite literally. I open early on in the book, there’s a story that is set in a– It’s a true story set in a farmhouse in rural Wisconsin, where a woman who has four young children snaps, and has a mental breakdown, and tragically, decides to kill her four kids, and then take her own life. And her husband comes home, and finds the family dead.

And the reason I’m talking about this story is because this is my great-grandfather’s first wife, and he eventually remarried, to my great-grandmother. And the amazing thing is that I didn’t know about this until I was in my mid-20s, when my dad told me about it. And, of course, I was bewildered to realize that I wouldn’t have existed but for this mass murder 119 years ago.

The more perplexing thing, when you think about it, is that you wouldn’t be listening to my voice but for this mass murder 119 years ago, so now it’s affecting your life too. And I think this is something that, of course, my great-grandfather’s first wife could never have anticipated producing a podcast episode, or a radio show, 119 years later, and yet, here we are.

And so, I think this is the kind of stuff where the ripple effects may be on short time scales or long time scales, but the ripple effects are always there, and I don’t think there’s any throwaway action that has no impact on the world.

Jeff: In many cases, we have no agency as these things evolve. I mean, you tell the story, and it’s in The Atlantic excerpt from your book, about Henry Stimson.

Brian: Yes. So, this is a story that opens Fluke, and it’s a story of a couple that goes to Kyoto, Japan. [An] American couple goes to Kyoto, Japan, in 1926. And they fall in love with the city. They just find it utterly charming. And normally, you’d think, “Okay, a vacation doesn’t change history.” But 19 years later, Henry Stimson ends up as the American Secretary of War. And the Target Committee, which is comprised mostly of generals, decides that Kyoto is their top pick for where to drop the first atomic bomb in August of 1945.

And Stimson, when he gets his memo, springs into action, and goes to meet with President Truman. He actually meets with him twice, and convinces him to take Kyoto off the list clearly because he has a soft spot for the city, derived from his vacation 19 years earlier. And the second bomb is supposed to go to a place called Kokura, but when the bomber arrives, there’s briefly this cloud cover. And so it circles, and circles, and, eventually, has to get diverted to the secondary target, which is Nagasaki.

So, you have hundreds of thousands of people living and dying between these different cities, because of a 19-year-old vacation and a cloud. And I think we, intuitively, actually understand this is the way the world works, because when we think about our own lives, there’s all these little chance events that we can pinpoint. What’s really, really astonishing is to imagine the invisible ones.

And this is where this term “Kokura’s luck” comes in, because the people of Kokura, the intended bomb site of the second bomb, had no idea for decades that they nearly were all incinerated. And so, I think this is happening to us constantly. There are things where we are escaping catastrophe, or we’re potentially getting some massive benefit. And we’re completely blind to the dynamics, because we can only see one future, which is the one that we actually inhabit in the way our lives unfold. And so, that’s the really, I think, bewildering part that comes out of this worldview.

Jeff: Not only bewildering, but a sense that we realize the limits of our own power. That we think we have the world in an orderly way, and that we have a sense of agency in it. But when we think about, particularly, these invisible events, we realize that there’s an awful lot that we don’t control. In fact, most of it that we don’t control.

Brian: Yes. So, I mean, the first thing I would say is that, probably, the most important things that determine my life’s trajectory, I have literally zero control over. So, that was where I was born, when I was born, who my parents were, and what my brain was like. And those things are– I had literally no effect on any of those things, and they almost certainly dictated the majority of my life outcomes.

I do research, for example, in Madagascar. If I was born in rural Madagascar, where the average person makes $1.90 a day, I don’t think we’d be talking. So, this is the kind of stuff where there’s a limit to how much we can control. And I have this statement throughout Fluke, riffing off this idea from Scott Page, a complex systems theorist, who says, “We control nothing, but we influence everything.”

And this is the kind of stuff where I think it depends on how you think the world is supposed to work. So, the way that we’re told the world is supposed to work, is we’re supposed to control everything, and determine our pathway through life, and if we don’t get the exact pathway we want, then we should be upset. And I think if you start to relinquish a little bit of control, and celebrate the influence you have, even though you don’t have much control, you start to accept the flukes a little bit more, and you actually enjoy the ride a bit more.

And I found this for myself: my worldview really did shift when I researched this book. I think about the world really differently from how I did three or four years ago, and I feel so much happier because of it. Because I just let go, a little bit, of trying to live what I used to call a checklist existence, with life hacks for everything. And sometimes stuff happens to me, and I don’t know why, and I just enjoy the ride a bit more, and that allows you to focus on the things that you care about, as opposed to trying to be a control freak in an uncontrollable world.

Jeff: Talk about how that lines up with human nature, particularly in the West, and the way we’ve grown up, and what we’ve learned, and how we have to really go against that in some respects, to get to this enlightened state that you’re talking about?

Brian: [chuckles] I wouldn’t describe myself as enlightened, but I think there is a nice side to this worldview that flows out of the chaos. The problem with Western modernity, to my mind, is that we are told that we’re in charge of everything, and that we’re the main character in our own lives, and no one else really matters. So, I think this– I call this the delusion of individualism. And I think it’s a myth.

I think it’s something where our lives are stitched together like a tapestry with a lot of different threads, many of which we’re unaware of. Mine was probably stitched with this mass murder that I had no idea existed until I was in my mid-20s. And that aspect of it is important to acknowledge. But I also think there’s elements of this where I quote Hartmut Rosa, who is a philosopher and sociologist. And he says, “The maxim of modern life is to accumulate as large a share of the world as you possibly can.” And I think it’s a bad way to live.

I’m not saying that you shouldn’t try to be comfortable in life, and so on, and, of course, we should strive for things we like. But I question this idea that you’re just supposed to capture more and more of the world for yourself. To my mind, in writing this book, a lot of the things that I enjoyed the most, I was thinking about the ideas as I was walking my dog, or going camping. And, [chuckles] those things don’t have any status or money associated with them. And they were some of the best moments of our lives.

I think when I talk to people, they also intuitively understand this, that we’re in this rat race in a lot of our daily lives, where we’re chasing stuff and status, and that’s what the Western mentality, and modern life is telling us to do. And I think when people say what they really find meaning in, it’s often not those things. And so, there’s this juxtaposition between the way that we intuitively get this, but then the message of what society is supposed to tell us is a successful life.

And I think the way you can reconcile those things is to understand the underlying dynamics of what’s going on, which are actually driven by chaos a lot more. That, I think, makes you accept this viewpoint a little bit more easily.

Jeff: Does it also, though, make you shun responsibility?

Brian: No, I think actually the opposite. It’s really interesting. So, some people, when they first encounter my ideas, or first start reading Fluke, worry they’re going to become nihilists. Nothing matters, right? But the third part of the subtitle is Why Everything We Do Matters, and I think the point that I’m making there is that you are constantly reshaping the world.

I think it’s the most empowering message imaginable. And I can convince you of this, I think, with ironclad logic that you’re going to change the world in one way or another, especially if you have children, or are going to have children. Because the moment a child is conceived, if there’s a microsecond difference, a different person gets born. So, if you stop to have a sip of coffee that day, a different person gets born.

And that goes on and on the day before that, and the day before that. Everything had to stitch together in exactly the right way for your kid to be your kid. And I think that’s the stuff where it’s not like children being produced has some unique cause and effect, it’s just the way the world is. I think the other thought experiment that I think persuades a lot of people is when you imagine traveling back in time, people are like, “Don’t squish a bug. Don’t talk to your parents, because you might actually change history, or change the future, or even delete yourself from existence.”

And we accept that these small changes can have profound effects in the future. Then, when we get into [the] present, we just completely throw away that mentality. We don’t think about the squished bugs, or the conversations, we think all of that just gets washed out. So, to my mind, if we are constantly reshaping the future with our actions, the responsibility is much greater because everything is important.

So, I feel like I have more responsibility now that I’ve thought about chaotic dynamics, than in the past when I was told, there’s a difference between the signal and the noise, and the noise is unimportant because that gives you the license to not worry about a lot of stuff. I think we should be living deliberately in a way that is tied to this responsibility of reshaping the future that we all get to be part of.

Jeff: How does this measure up, or line up against science and technology, which is about predictability, and consistency, and absolutes?

Brian: Well, I think the worlds of science and technology are wonderful. I also think that our attempt to control the world has resulted in a more unstable environment than any human has navigated in history. And I’m not anti-science; I think we absolutely need to use science and technology to the maximum. But when you think about the 21st century, it’s a series of shocks, because we tried to create these straitjacket systems that try to contain a really complex world.

And so, the 21st century, in a nutshell: 9/11, the Iraq War, the Financial Crisis, the Arab Spring, the rise of Trump, Brexit, the pandemic, the wars in Gaza and Ukraine. How many calamities are happening on a regular basis now? We’ve engineered a society that is much more prone to flukes, and to chance events than ever before. And that’s because, I think, we had this hubris to believe we can control the world, so we optimize the world to its absolute limit.

So, the best way of explaining this is like, our supply chains, for example, have no slack in them. And so, when a gust of wind hits a single boat in the Suez Canal in 2021, it can wipe out $54 billion of economic productivity in an instant, which was never possible before, because we used to have more slack in our system. So, I think science and technology can solve a lot of problems, but if we worship at the altar of control, and think that science and technology are the solution to everything, then we actually get a hubris that I think leads us down a dangerous path.

Jeff: One of the things science and technology has done arguably is create a world of greater complexity. Talk about that in this broader context, the fact that things are more complex today.

Brian: Yes, so, I mean, I don’t think we understand how the world works, and I don’t mean this in some flippant way. I think that there are a lot of systems that exist in modern life that we fundamentally do not understand. And one of the examples I briefly touch on in Fluke is a guy who wasn’t malicious. He was a trader in West London, and he wanted to just see if he could manipulate the stock market in an interesting way, so he was just playing with things.

And he wiped out like a trillion dollars of wealth in five minutes. Now, there was an audit done of why this happened, and how it occurred. And I feel like [after] two or three years of the SEC and other government agencies investigating, they couldn’t really come up with an explanation. And it’s because he was able to manipulate this algorithm that was used for high frequency trading and so on, in ways that were just beyond the complexity of what a human brain can necessarily understand easily, and trace backwards.

And so, I do worry about this, that we’re creating systems that are more prone to not just shocks, but also inexplicable shocks. And also I think one of the things that I talk about in the book a lot is that our brains are designed for pattern detection. This is something that evolution has forged in us, but the patterns we used to have to discern were really straightforward.

Rustling in the grass equals saber-tooth tiger, therefore run, that’s a simple cause and effect model. Trying to understand modern finance or geopolitics, our brain is just not equipped to deal with eight billion interacting people in these unbelievably complicated systems. So, I think that’s where the systemic risk comes in, in that we imagine we can control these things.

We assert some sort of intervention into the system, and then there’s unintended consequences. And I think the lesson that I’m trying to push to people is that, if we accept there’s less control, then you have a little bit less optimization, and a little bit more resilience. And the resilience is something that we should focus on more in modern life to avoid those black swan events that have blindsided us throughout the 21st century.

Jeff: It’s interesting that we’re trying to deal with some of this complexity by more complexity. In many ways, a lot of the goals or the efforts that people lay out for AI are to try and deal with the effects of some of that complexity. And yet you talk to scientists in the area today, and they will tell you that even they don’t understand how it all works.

Brian: Yes. And AI is an interesting case, because one of the things that’s a problem is that a lot of our world runs on models that look at past patterns as a way to predict the future. Now, if you’ve got a stable system, that’s fine, but if the system is evolving and changing over time, it’s a really big problem. And we all know the system’s changing because you have things like, oh, there’s a 100-year flood, or a 100-year fire, but it’s happening every three years now, right?

And that’s because the underlying dynamics of the climate have shifted. So, the past pattern is no longer predictive of the future outcome. Now, the problem with AI is that it’s built on training data from past outcomes, and the underlying causal dynamics are shifting faster than they ever have before in human history. Our world is changing faster than it ever has in human history.

So, if you embed a lot of systems into a procedure that basically assumes the past equals the future, you have created a world of systemic risk by design. And I’m not anti-AI, I think there’s serious benefits that are going to be derived from it, but I do think that we have to have some limits to the understanding of how much we should rely on it for critical infrastructure, and other things that make society function because of the problems I’m describing.

Jeff: But we’re no better, human beings are no better at pattern recognition than the AI, and arguably-

Brian: Sure.

Jeff: -the AI is taking more history, more past patterns into account.

Brian: Yes. And in some ways that can be really good. So, if you’ve got a stable system, like if you’re trying to do diagnoses in a medical context, there’s no question AI is going to obliterate human diagnoses. Because a doctor might see 10,000 X-rays in his career, and the machine learning model will see 10 million, so it’ll just get better. But I think the problem is that with AI you don’t always understand the decision-making process, and you also are operating at a speed that is difficult to counteract when a cascade happens where things break down.

And so, I think that aspect of society is that we’re told that speed is always good, and in some contexts it is. We want to have Amazon deliver the package as quickly as possible, fine, but I think in other contexts speed can be dangerous if there are mistakes. And this is where high frequency trading is the example I come back to, because it’s so fast that our economies can be totally wrecked in a period of minutes before anyone can do a circuit breaker.

Now, they’re starting to build some of these circuit breakers into the trading system, but AI is getting faster and faster and faster. And at some point, I think there’s going to be some– This cat and mouse game is going to be a losing contest for all of us, because the speed is going to be so great with some of these models and decision-making processes that we’re not even going to know it’s happened until it’s too late. And I do worry about that sort of fetishization of, if we can only be a microsecond faster, then everything will be better.

Sometimes slow but stable is a better bet.

Jeff: Does speed in all these areas increase the potential, increase the number of flukes, and is there some inherent good in that?

Brian: Well, I don’t think there’s inherent good in it, because I think the aspects of flukes are often destabilizing. What I would say is that, speed makes it more likely that there will be contingency, which is a related idea, which is to say that a small change can have a big effect.

And I think that that is something that is absolutely clear about the way the world operates now. So, it is slightly in the weeds, but I think that there’s a really important point that I try to highlight in Fluke, which is that all past humans lived in a world in which the uncertainty they navigated was in their day-to-day life, and not the way the world worked.

So, they didn’t know where to get the next meal, and they might get eaten by a saber-tooth tiger. But the parents and children were going to live in the same world, generation after generation, and the dynamics of cause and effect were unchanging. We’ve flipped that world, and it’s with speed as well. So, we can beckon Amazon to our houses in exactly the right hour, and we can decide to go to Starbucks anywhere in the country, and it ends up being the same coffee.

So, we’ve ironed out all the uncertainty in day-to-day life for most of it, but we’ve created massive instability in how the world operates. So, I sort of say, it’s a bad bet to have Starbucks unchanging, while rivers dry up, and democracies collapse. And I think that’s the peril of a really fast, complex world where you try to control it, but you’re actually getting worse and worse over time at harnessing the benefits of some of these things that are tied to speed and technological advancement.

Jeff: And it’s interesting because so many of the things that we’re doing, and AI is an example of that, particularly to try and gain speed and operate more quickly in the world, are a poor attempt, I suppose, to try and have more control.

Brian: And I think this is something where there’s a difference between systems that we fully understand, and systems that we don’t fully understand. So, the economy is a system we don’t fully understand, and even economists will acknowledge this. There’s all sorts of unexpected things that happen all the time. There’s fundamental debates about how to control inflation in the best way, et cetera.

Systems that we do understand, you can optimize to the absolute limit using technology and data. So, my favorite example of this is moneyball in baseball. Baseball is a closed system where the same teams compete every year, the same rules, roughly, and there’s a set of outcomes that are possible. We know that the Minnesota Vikings football team is not going to win the World Series. So, there’s a set of rules, and a set of constrained outcomes.

When you moneyball that system, and use data to optimize for performance, it works really well. So much so that it made the game a little bit boring, because it was so optimized. But it’s a closed system, right? And so, I think there’s certain systems, with medicine as well, absolutely, we should optimize to the limit. If you can do anything with a cancer drug that’s going to stop the cancer from spreading, don’t have some attitude of, “Oh, well, let’s not try to control it.” You absolutely should.

It’s the complex systems that are the bigger metastructure of our societies that we don’t understand, where it becomes dangerous to imagine that we do understand them, and therefore to decide, “Yes, we’re going to do this,” and put all our eggs in that basket. So, I think it’s useful to differentiate between systems that are well, well understood, and systems that are poorly understood. And the degree of control you assert is proportionate to how much you think you understand the system.

Jeff: And how is evolution impacted in all of this? To what extent does the changing nature of human beings play a role?

Brian: Well, the main place where I talk about evolution a lot in Fluke is how our brains have evolved to be allergic to events that are described as being random. So, we have a psychology that has overlearned pattern detection, and is really resistant to any explanation that says this is just random chance. And there’s a whole series of reasons I talk about from the neuroscience, evolutionary biology, and evolutionary psychology literature about this, and why it’s happened.

But the point is that our brains are pattern detection machines. And so, one of the issues is that when random things do happen, and they do, we instead infer a cause and effect that fits into a neat and tidy story, and that makes us misunderstand the world. So, I think there’s some stuff also where, interestingly, this is differentiated by what kind of news we hear.

So, if you hear a positive development in your life, like you won the lottery, and it was random, people are pretty willing to accept that that might have been just arbitrary. There was no grand cause or anything like that. When people get bad news, they are almost universally immune from random explanations. And so, of course, cause and effect operates the same way, whether it’s positive or negative news, but our brains have evolved to try to cope with a world of randomness by inferring rational reasons.

And so, I think there’s some stuff where the way we navigate the world is partly a byproduct of a brain that was forged by evolution to survive long enough to reproduce. And that’s an arbitrary brain. So, I think the way we make sense of the world is not always 100 percent accurate, but it’s important to recognize those cognitive biases, because recognizing them is the first step to avoiding decision making mistakes.

Jeff: We seem to be caught up forever in this battle between correlation and causality.

Brian: Yes. This is something where it’s very difficult to go from correlation to causation. I’m a social scientist. I have a lot of skepticism of some of the modern social science models, and their ability to accurately describe an extraordinarily complex world. But it’s still worth trying. I think it’s worth trying to determine when there are patterns, because sometimes there are patterns, and it’s really useful to understand them.

I think it’s just this aspect of, I don’t think we’re ever going to have perfect causation because I think– What I was talking about with chaos theory previously, I think with chaos theory, it’s an issue where when you think about it, there are an infinite number of causes. So, why are we having this conversation? Well, there’s a whole bunch of things that had to happen, but one of them is a mass murder in Wisconsin in 1905.

So, when you think that way, there’s not five or six variables in a model that explain an outcome, there’s an infinite number of variables that explain an outcome. So, I think causation is basically something we’re never going to fully get. I think we’re going to get better at hopefully anticipating what are key variables, but we’re never going to have perfect causation in anything that we understand.

So, the question is more about usefulness to me, it’s more about, “do we have an ability to at least understand the world well enough to avoid catastrophes?” And that I think should be the benchmark for how our politics, our economics, and our social science navigates the world. It should be goal oriented to try to reduce the harm that is avoidable in the world.

Jeff: One of the best examples of not ever having perfect causality is the very fact that we as individuals, or nations make the same mistake over and over again, which arguably we wouldn’t do, if we understood causes perfectly.

Brian: We don’t always learn from our errors. I would say that for sure. And I think one of those, it springs back to the previous part of the conversation, is like, the financial crisis wiped out a huge amount of wealth in the United States and around the world, and then things go back to seemingly normal, and we start to unlearn the lessons and embed more systemic risk into finance, and it’s going to happen again.

And so, I think there is this aspect of it where I would prefer that we lived in a world that had more stability, and slightly lower growth, for example. I’m not saying growth is bad, by any means, but I would trade a world where you have five percent growth but then a catastrophe every five years for one that has, let’s say, 3.5 percent growth, but it always is 3.5 percent growth.

And I think that’s the stuff where we have to think a little bit differently. We have to think about the potential risks in our systems, and make sure that we make resilience a core policy priority when we think about both our lives, and also the way that our societies are run.

Jeff: Inherent in the notion of randomness and chaos is at the same time the element of risk, and the more random things are, it seems the more risk we might be willing to take.

Brian: What I would say is that when you have a worldview that is dominated by chaos and the uncertainty of life, one of the best lessons is not about taking more risks, but more experimentation. This also calls back to evolution, because the wisdom of evolution is basically undirected experimentation that comes up with really novel solutions to tricky problems.

And that’s why you have these crazy life forms all around us, right? From bizarre plants to bizarre animals and so on. And I think there’s a lesson for humans there too, that the way to navigate uncertainty is to experiment, to try new things, and you’ll end up discovering something that works better for you. And so, I think this is also true for public policy. I think we should be experimenting a lot more with public policy, because we don’t know the answers to a lot of questions.

And that was exposed really clearly by COVID. We just didn’t understand how to deal with the pandemic. And I think experimentation was something that was really important where you start to develop ideas and strategies, and some of them work, and some of them don’t. It’s much better than a top down sense of control where you just pick one idea and do it.

And a lot of politics is actually that way. A lot of politics is like, one ideology says, “We think this will work,” one ideology says, “We think that will work.” You elect them and then you only try one thing. And we never know whether it was the best outcome because we didn’t experiment.

Jeff: And we see that in business too. I mean, bookstore shelves are filled with business books that will tell you this is the way to organize to do X. And in fact, that eschews experimentation.

Brian: With business, you basically have a series of experiments through innovation. Then, also, sometimes they arbitrarily get locked in, right? So, there’s this classic example of VHS versus Betamax in the 1980s where– It’s like these technologies are experimented with, both of them sort of work, but VHS took an early lead in how many people bought it, and all of a sudden, boom, that was the end of Betamax.

So, it got locked in and it won the battle. And I think timing is part of this, experimenting is part of it. All these things add up to a point of view where the idea of top-down control is simply a myth. And so, if that’s the case, then in business you’re navigating an uncertain environment, the world’s changing. You don’t know what the next big thing will be, or what consumers are going to want. You had better have experimentation as part of your playbook. And if you don’t, those companies basically die, because they’re set for a world that then is outdated as soon as it changes. And that’s the peril of not being nimble and flexible and accepting uncertainty.

Jeff: And finally, Brian, how did your worldview change as you continue to work on Fluke?

Brian: Oh, it’s [changed] profoundly. I’ve written a few books. This is the only book that’s changed how I think about the world. I now view myself very firmly as a cosmic accident, which I think is true. And it’s really actually liberating, because the life lesson of a cosmic accident is to enjoy life. [chuckles] It’s to do things that you enjoy with people that you love, and not to sweat the stuff that you can’t control as much.

I also wrote a lot of this book after I would go on long walks with my dog, and I really enjoyed that. And it’s one of those things where the ideas came – I wasn’t sitting in front of a computer trying to force an idea – from exploring the world a bit more. I’ve just stopped doing a little bit– I’ve dialed down the degree to which I try to assert control in my life. I’ve dialed up the degree to which I try to enjoy things and explore.

And I think that’s the really nice thing about this: the philosophy that flows out of this worldview is that it just frees you up to accept that you don’t have the ability to completely shape and dictate the terms of your life, so you might as well enjoy the ride. And I know it sounds really simple and cliché, but it’s how I feel after writing the book, and it’s made me a happier person.

Jeff: Brian Klaas. The book is Fluke: Chance, Chaos, and Why Everything We Do Matters. Brian, I thank you so much for spending time with us here on the WhoWhatWhy podcast.

Brian: Thanks so much for having me on the show.

Jeff: Thank you. And thank you for listening and joining us here on the WhoWhatWhy podcast. I hope you join us next week for another Radio WhoWhatWhy Podcast. I’m Jeff Schechtman. If you like this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to whowhatwhy.org/donate.


Author

  • Jeff Schechtman

    Jeff Schechtman's career spans movies, radio stations, and podcasts. After spending twenty-five years in the motion picture industry as a producer and executive, he immersed himself in journalism, radio, and, more recently, the world of podcasts. To date, he has conducted over ten thousand interviews with authors, journalists, and thought leaders. Since March 2015, he has produced almost 500 podcasts for WhoWhatWhy.

