So much of what we do in everyday life involves managing risks, whether we explicitly think about risk management or not. Risk management includes personal decision-making regarding inter-personal relationships and our health, as well as broader societal decision-making regarding health care policy, the economy, and international conflicts. We like to assume we’re effectively managing these risks, either as individuals or as societies, but all you have to do is pick up the daily paper to find plenty of examples of risk management failures: a horrific drunk driving accident, hundreds of dead in a Brazilian night club fire, a large cruise ship running aground, or a gas pipeline exploding in a residential neighborhood. We’re constantly surprised by these accidents (or failures of risk management), and it is often hard for us to understand how things could have gone so wrong!
The definition of risk we use here is:
Risk = Hazard × Consequence × Probability
- Hazard: the underlying cause of a potentially negative outcome. A poorly designed car, a faulty airbag or gas tank indicator, a drunk driver, and black ice are all examples of driving hazards.
- Consequence: the significance of the range of outcomes associated with a given hazard. Inconvenience, an insurance claim, injury, or even death are all potential consequences of driving hazards. Consequences get complicated quickly; a drunk driver AND a faulty airbag can lead to a very different consequence than one or the other in isolation.
- Probability: the likelihood of particular hazards and consequences combining to produce a particular outcome. Like consequences, probabilities get complicated quickly. The probability of encountering a drunk driver, for example, varies by time of day or night, day of the week, and even specific days of the year.
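To make the definition concrete, here is a minimal numerical sketch of how hazards, consequences, and probabilities combine into a risk estimate. All hazard names, loss values, and probabilities are illustrative assumptions for the driving example, not data from any actual study:

```python
# A minimal sketch of Risk = Hazard x Consequence x Probability.
# All hazards, consequence values (in arbitrary "loss units"), and
# probabilities are illustrative assumptions, not real data.

scenarios = [
    # (hazard, consequence if realized, annual probability)
    ("black ice",                        5_000, 0.02),
    ("drunk driver",                    50_000, 0.005),
    ("faulty airbag",                   20_000, 0.001),
    # Hazards can combine into a distinct scenario with its own, much
    # larger consequence -- the drunk driver AND faulty airbag case.
    ("drunk driver AND faulty airbag", 500_000, 0.00005),
]

for hazard, consequence, probability in scenarios:
    expected_loss = consequence * probability
    print(f"{hazard:34s} expected annual loss: {expected_loss:9.2f}")

total = sum(c * p for _, c, p in scenarios)
print(f"{'TOTAL':34s} expected annual loss: {total:9.2f}")
```

Note how the combined scenario carries a much larger consequence but a much smaller probability than either hazard alone, which is exactly why consequences and probabilities “get complicated quickly.”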
Assessing, quantifying, and managing risk as defined above is difficult. First, in evolutionary terms we haven’t progressed very far since the time when we used our “risk brains” primarily for basic pattern recognition, e.g. searching for lions in the Serengeti, and the resulting “fight or flight” responses that served our ancestors well. Second, humans are particularly poor at internalizing and reacting to probabilities. When searching for lions, this may not be a significant hindrance since you don’t have to run until you see the lion; but it’s a bigger issue when assessing the risks of driving a car. Third, it turns out that we have a whole series of hard-wired cognitive biases that influence how we perceive, prioritize, and choose to manage risks. A few of these cognitive biases include:
- Availability Heuristic: We tend to perceive something as more likely to occur in the future if we have experienced it in the past, particularly the recent past.
- Confirmation Bias: We tend to search for or interpret information in a way that confirms our previously held perceptions.
- Overconfidence Effect: We tend to have excessive confidence in our own answers and in ourselves (we are all above-average drivers!).
- Neglect of Probability: We tend to disregard probabilities almost entirely when making decisions under uncertainty, responding to whether an outcome is possible rather than to how likely it is.
- Positive Outcome Bias: We tend to overestimate the probability of good things happening.
- Semmelweis Reflex: We tend to reject new evidence that contradicts an established paradigm we already hold.
- Subjective Validation Bias: We tend to perceive things to be true if our underlying beliefs (e.g. political or religious beliefs) demand them to be true.
- Expectation Bias: When experimenting, we tend to believe results that agree with our expectations of the experiment, and to disbelieve contrary results.
And to cap it all off:
- Bias Blind Spot: We tend to see ourselves as not being the ones suffering from biases, and so we do not compensate for our cognitive biases.
From Daniel Kahneman’s documentation of our shortcomings in perceiving and measuring risk in Thinking, Fast and Slow, to Douglas Hubbard’s catalog of failing risk management systems in The Failure of Risk Management: Why It’s Broken and How to Fix It, we understand reasonably well why we commonly mischaracterize risks and why risk management often veers off course. In fact, our understanding of the “why” behind risk management failures is much more advanced than our ability to counter those failures through modified behaviors and risk management processes. The “human factor” is a particularly vexing challenge in risk management.
When we get into our cars every morning, for example, we implicitly conclude that the risk-reward ratio of doing so favors our safety. If asked, most of us would be unable to say what that risk-reward ratio really is, since it effectively requires quantifying a long list of potential hazards, consequences, and probabilities. But we would be quite confident that we are making the right choice.
What if every driver had to review data on traffic accidents before starting their car, and read about the 40,000 deaths and millions of injuries every year in U.S. car accidents alone? We might log fewer vehicle miles, but not necessarily. After all, we’ve heard those numbers before; how we process them is the real issue. Dire traffic mortality statistics tend to create cognitive dissonance in our minds, reflecting the inherent conflict between wanting to assume we’re safe getting into our car and the fact that millions of people get hurt every year doing just that. Interestingly, the cognitive biases introduced above have in all likelihood evolved to help us avoid the paralysis of inaction that can result from cognitive dissonance. When we get into our car, for example:
- The Availability Heuristic reminds us that we’ve never been in a bad car accident, so it must be pretty unlikely for us.
- The Neglect of Probability Bias prevents us from translating national traffic statistics into personally relevant probabilities that might influence our behavior (see the back-of-the-envelope calculation after this list).
- The Positive Outcome Bias reassures us that we’ll get where we’re going safely, even if other people might not.
- The Bias Blind Spot means that we won’t question ourselves about any of the biases just mentioned, thereby avoiding the possibility of arriving at a result we don’t like (and a more honest assessment of risk).
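As a concrete illustration of the probability translation that the Neglect of Probability Bias lets us skip, consider a back-of-the-envelope calculation. The 40,000 annual deaths figure comes from the text above; the driver count and driving lifetime are round-number assumptions for illustration only:

```python
# Translating national traffic statistics into a personally relevant
# probability. The death toll is from the text; the driver count and
# driving lifetime are assumed round numbers, not sourced figures.

annual_deaths = 40_000   # U.S. traffic deaths per year (from the text)
drivers = 230_000_000    # assumed number of U.S. drivers (illustrative)
driving_years = 50       # assumed driving lifetime

p_year = annual_deaths / drivers
p_lifetime = 1 - (1 - p_year) ** driving_years  # at least one fatality

print(f"Annual fatality probability:   1 in {1 / p_year:,.0f}")
print(f"Lifetime fatality probability: 1 in {1 / p_lifetime:,.0f}")
```

Under these assumptions the annual probability is roughly 1 in 5,750, but the lifetime probability climbs to roughly 1 in 115, a very different number from the one our intuition supplies.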
What’s perhaps most remarkable about risk is not that we encounter so many failures of risk management, but rather that we successfully manage any risks at all! Not surprisingly, the risks we manage most effectively are risks where something analogous to “fight or flight” is an option, as opposed to risks requiring in-depth risk assessment or the interplay of probabilities.
Managing Business Risks
The psychology underlying business risk decision-making is not fundamentally different from the psychology underlying individual risk management. Today’s companies, however, are expected to do a better job of managing risks than individuals, even as the “risk list” has expanded from the most basic operational risks to include brand risks, regulatory risks, environmental risks, financial risks, and many others.
Yet the amount of time in the “risk management day” has not changed; it still maxes out at 24 hours. Decision-makers have to scan more risk ground, delegate more risk-scanning effort than they used to, and become more selective in where to focus their “risk attention.”
The sheer amount of information relating to each of those risks has also exploded; by some estimates, more information has been produced in the last three years than in the whole of human history up to that point. Thinking only about risk, whether it’s local front-page coverage of an accident or widespread discussion of potential hazards, consequences, and probabilities in hundreds of specialized publications and thousands of websites, there is little question that most companies are in “risk information overload.”
One way to manage this overload is through the use of information filters that help insulate decision-makers from the overwhelming volume of information. This creates an obvious problem, since the decision-makers being shielded from this overload are exactly those people who are ultimately responsible for identifying and managing business risks. Since risks are always evolving, what if a risk that was being largely filtered out turns into a big risk without anyone noticing? This happens frequently and is why risk managers end up characterizing many risks as “black swans.” The term “black swan” was developed in the financial world to characterize a significant negative outcome that could not have been realistically anticipated. The 2008 financial crisis, for example, has been commonly referred to as a black swan. There is a huge difference, however, between events that “weren’t anticipated,” and events that “couldn’t have been anticipated.” The former reflects a failure of risk management; the latter doesn’t. Most events characterized today as black swans, including the 2008 financial crisis, are probably nothing of the sort, but it makes us feel better about ourselves as risk managers to characterize them as such.
Climate Change Risk as Business Risk
Climate change entered the public risk consciousness some 25 years ago; since then, the risks of climate change have been the focus of a steadily growing scientific consensus. Current forecasts suggest that by the year 2100 the world could experience an average temperature increase of as much as 6–8°C, a figure that can be contrasted with the fluctuations of less than 1°C within which human civilizations have evolved over the last 10,000 years. Current science also suggests that during the next several decades we are likely to commit the planet to 3 feet of sea level rise, while simultaneously acidifying the oceans to the point of fundamentally changing some oceanic ecosystems and food chains. We might even trigger “tipping points” that could accelerate climate change (e.g., the release of large volumes of methane from melting Arctic permafrost, or the rapid melt of the Greenland ice sheet).
There is little question in the scientific community that climate change constitutes an enormous societal risk. After 25 years of discussion, however, we have yet to see policy measures put into place that are likely to materially slow climate change. We can trace much of the gap between climate science and climate policy back to the cognitive biases introduced earlier:
- The Availability Heuristic tells us that we’ve never witnessed climate change before, so it must be unlikely as compared to natural weather variations.
- Our Confirmation Bias helps us conclude, based on legitimate uncertainties associated with climate change and climate forecasting, that there really isn’t a scientific consensus after all.
- The Neglect of Probability Bias makes it easier for us to ignore the dangerous “long tail of the risk distribution” when considering climate change impacts.
- Our Positive Outcome Bias helps convince us that technology will come to the rescue, so we shouldn’t be too hasty in pursuing expensive mitigation efforts.
- Subjective Validation Bias allows many of us with strong political or religious beliefs to reject climate change as incompatible with those beliefs.
- Our Expectation Bias, in assessing the massive amount of temperature and other data being produced every year, leads us to search for results that seem to support our skepticism regarding the business materiality and priority of climate risk.
Not surprisingly, these same factors have influenced corporate perceptions of the business risks posed by climate change. When combined with the growing “risk list” and “risk information overload” companies already face, and the short time frames within which most business risk assessment tends to occur, it is easy to understand why companies have not embraced and acted upon climate risks in the ways that many stakeholders would have liked.
The danger for companies is that climate risks will sneak through their risk filters and manifest in unanticipated but material ways. A very real question for business climate risk management is whether companies want to end up characterizing future manifestations of climate risk as “black swans” (as something they couldn’t have anticipated), or whether they want to address climate risks proactively. The financial crisis offers a useful analogy: some people made a lot of money by avoiding the allure of “black swan” thinking. They anticipated the looming problem, acted on their forecasts, and profited accordingly!
Corporate Risk Management in a 2.0 Risk Environment
It is not surprising that corporate decision-makers perceive and respond to climate change risks in very different ways. A small fraction already incorporate both physical and policy risks into their strategic planning. Many more see a problem but question their ability to do anything about it, and another fraction discounts climate change itself and sees regulatory mandates as just one more governmental initiative to resist for as long as possible. Corporate assumptions and behaviors that tend to bias corporate decision-making towards inadequate risk management at best, and risk management inaction at worst, include:
- Assuming that the future of climate policy risk looks very much like the past, meaning little action for many years to come;
- Assuming that climate change will not materially manifest itself within a time frame relevant to current decision-making needs and processes;
- Overlooking improvements in global climate modeling that increasingly allow climate impact forecasts to be robustly downscaled;
- Overlooking key modeling unknowns regarding potential “climate tipping points” that risk substantially accelerating climate change;
- Assuming that physical and policy climate risks will evolve slowly and linearly, rather than through sudden and game-changing events and “tipping points”;
- Assuming that climate risks are really no different from other business risks in terms of the nature of the expected regulatory response, notwithstanding the unique nature of these risks based on the potential scale and irreversibility of climate change;
- Focusing only on “expected risk” rather than on more damaging but less likely outcomes within the “fat tail” of the climate risk distribution (the simulation sketch after this list illustrates the gap);
- Underestimating even “expected risk” due to psychological predispositions in human decision-making that are increasingly well understood; and
- Focusing only on narrow slices of climate risk (e.g., localized extreme events or the direct impacts of a carbon price), rather than on the complicated regional and global interplay of the full range of climate-related risks (e.g., through the workings of global supply chains), as well as changing consumer perceptions and behaviors.
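The gap between “expected risk” and the fat tail referenced above can be made concrete with a small simulation. This is a hedged sketch: the lognormal distribution and its parameters are arbitrary choices made only to produce a heavy tail, and are not calibrated to any climate model or dataset:

```python
# Why "expected risk" understates a fat-tailed risk distribution.
# The lognormal parameters below are arbitrary assumptions chosen to
# produce a heavy tail; they are not calibrated to any real data.

import random

random.seed(42)
N = 100_000

# Simulated annual losses drawn from a heavy-tailed distribution.
losses = sorted(random.lognormvariate(1.0, 2.0) for _ in range(N))

mean_loss = sum(losses) / N        # the "expected risk"
p99_loss = losses[int(0.99 * N)]   # a 1-in-100 tail outcome

print(f"Expected (mean) loss: {mean_loss:8.1f}")
print(f"99th-percentile loss: {p99_loss:8.1f}")
print(f"Tail-to-mean ratio:   {p99_loss / mean_loss:8.1f}x")
```

A planner who budgets only for the mean loss will be unprepared for tail outcomes an order of magnitude larger, which is the essence of the fat-tail concern.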
What does it mean for corporate decision-makers to internalize climate risk into their risk assessment and management processes, and why is it so difficult? Exploring climate risk at the corporate level first means recognizing the different ways that business climate risk can manifest itself, including:
- Physical risks, including direct impacts of climate change on a company’s operations, supply chains, and financial performance;
- Brand risks, including impacts on corporate competitiveness of changes in stakeholder perceptions regarding a company’s corporate social responsibility or corporate preparedness when it comes to climate change risks;
- Policy risks, including direct impacts of climate change policy and regulatory mandates on a company’s competitiveness;
- Structural risks, including changes in the supply of and demand for a company’s products and services based on changing market forces; and
- Liability risks, including the possibility that litigation or legislation could assign corporate liability for an organization’s greenhouse gas emissions, potentially retroactively.
With these different aspects of climate risk as a foundation, decision-makers will need to dive into the three elements of risk as they apply to their specific company and business model (one way to organize that assessment is sketched after the list below):
- What hazards underlie these risk outcomes, and how significant could they prove to be? For a problem like climate change, scenarios need to look beyond conventional wisdom and break out of the constraints of the “availability heuristic” that tries to convince us that the future is simply an extension of the past.
- What are the potential consequences of the hazards, and how might they manifest themselves at the level of an individual facility, an individual company’s operations, or a company’s global supply chain? Few companies have truly thought through the full range of ways in which hazards underlying multiple climate risks could manifest themselves.
- What is the probability of these consequences during the time frame of relevance to an individual company, and how can these probabilities be most effectively assessed?
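One way to organize this company-level walkthrough of hazards, consequences, and probabilities is a simple climate risk register. The sketch below is hypothetical: the field names, example entries, loss figures, and probabilities are all illustrative assumptions, not prescriptions from the text:

```python
# A hypothetical climate risk register covering the risk types named
# above. All example entries, loss figures, and probabilities are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ClimateRiskEntry:
    risk_type: str      # physical, brand, policy, structural, liability
    hazard: str         # underlying cause of the potential outcome
    consequence: float  # estimated loss if realized (arbitrary units)
    probability: float  # likelihood within the planning horizon
    horizon_years: int  # time frame of relevance to the company

    @property
    def expected_risk(self) -> float:
        return self.consequence * self.probability

register = [
    ClimateRiskEntry("physical",   "coastal flooding at main plant", 2e7, 0.05, 10),
    ClimateRiskEntry("policy",     "carbon price on key inputs",     5e6, 0.40, 10),
    ClimateRiskEntry("structural", "demand shift away from product", 1e7, 0.15, 10),
]

# Rank entries so scarce "risk attention" goes to the largest items
# first; a fuller version would also track tail outcomes, not just
# expected values.
for entry in sorted(register, key=lambda e: e.expected_risk, reverse=True):
    print(f"{entry.risk_type:11s} {entry.hazard:33s} {entry.expected_risk:12,.0f}")
```

Even a structure this simple forces the three questions above to be answered explicitly, entry by entry, rather than left to intuition.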
Individual companies may not be able to mitigate all potential climate change hazards, but they often can influence consequences and probabilities of potential risk outcomes. This is the basis, for example, of the field of climate change adaptation planning.
At the end of the day, corporate decision-makers looking at new risks like climate change need to decide:
- Is it worth it to me as a corporate decision-maker to tackle this topic?
- If it is worth it, do I have the ability to achieve my stated objectives, risk reduction or otherwise?
The problem facing many companies is that to answer these crucial decision-making questions, they require well-organized information regarding the climate hazards, consequences, and probabilities underlying different climate risks. And it’s not necessarily an easy risk assessment to do, if only because it is so difficult to extrapolate from global climate modeling to localized climate forecasts that can support corporate hazard and consequence assessment. It is also not unusual for new but important studies and findings to appear so frequently as to confound corporate learning efforts.
Corporate decision-makers are often asked to answer the all-important questions of “is it worth it” and “can I do it” without having the best available information at their disposal. This makes corporate climate change risk management an ideal application of knowledge management tools to support improved decision-making. While the volume of knowledge relevant to answering a particular company’s “is it worth it, can I do it” questions is large and always expanding, decision-makers tend to have access to a small and constantly shrinking fraction of that knowledge. It is a situation that almost guarantees sub-optimal decision-making outcomes, and in some cases may put the company’s very future at risk.