Building a Resilient Tomorrow

15 Apr 2010

In a 21st century world of uneven growth, disruptive technology, climate danger and chaotic politics, we must build a society that’s transparent, diverse and able to look ahead – and embracing a philosophy of resilience will help get us there.

The myriad challenges facing us in the first half of the 21st century – including climate disruption, the emergence of transformative bio-, nano- and neuro- technologies, key resources on the brink of collapse and a global economy whipsawing between boom and bust – have one key feature in common: They underscore how brittle our civilization has become. Our capacity to respond effectively to emerging crises is increasingly tenuous. Any one of these issues, if handled clumsily, could push us into a spiral of disaster; in combination, they present us with an almost impossible task.

Almost, but not entirely. We can meet these challenges – and others yet to emerge – if we embrace a philosophy of resilience. Resilience is the capacity of a system to withstand unexpected shocks, to rebuild itself when necessary and to thrive when possible. It’s a concept that historically was discussed in disciplines such as material science and psychology; increasingly, it’s a perspective that is finding purchase in the worlds of environmental science, sociology and national security. Across this diverse set of fields, we can see a growing emphasis on preparation over prevention, on decentralization over monocultures and on agility over strength.

At the core of the resilience concept is a simple argument: Failure happens, so we need to be ready. Yet strategies that depend upon complete, ongoing success – and that collapse under pressure – are distressingly common. We saw it in Iraq war planning that paid insufficient attention to the potential for post-war instability and in financial models that assumed that home prices only go up; we see it now in environmental arguments that assert that our only option is an immediate, complete cessation of carbon emissions. This way of thinking – call it the “aspirational” model – has us ask one big question: “What can we do to maximize our results?” When everything works as desired, this approach can be quite efficient and sometimes enormously successful.

But what if things don’t go as planned? What if the results we desire fail to materialize or are ephemeral? All too often, reality has the impertinence to take a different path; failure (of systems, of infrastructure, of people) can and will happen. Insurgencies erupt, housing prices fall, and there is a very real possibility that even an immediate cessation of carbon emissions would come too late to avoid climate disaster. In this kind of world, a resilience perspective forces us to ask a very different question: “What can we do to minimize harm?”

Admitting that failure happens may sound humble, but in many ways it’s actually quite a bit more ambitious than simply hoping for the best. Resilience requires that we structure our societies, our economies and our behavior in ways that can account for uncertainty and cushion us when problems arise. It asks us to sacrifice a measure of efficiency in order to see improvements in safety. It requires that we put the needs of the future ahead of the desires of the present – something that human society hasn’t always been very good at doing.

Resilient systems have three fundamental characteristics in common: transparency, diversity and foresight.

Transparency

When you can see how a system operates, you’re in a better position to spot problems before they turn catastrophic.

In general, resilient systems tend to be more open and transparent, for both social and functional reasons. Transparency means, in the resilience context, that it’s possible for stakeholders to see and understand how the system works and, ideally, contribute to its continued growth and function. In a resilient society, we are all stakeholders.

The value here is threefold:

• Transparent systems that allow stakeholder collaboration gain the advantage of a wider variety of perspectives and ideas, increasing the likelihood of discovering and adopting optimal responses.

• Stakeholder collaboration increases the perceived importance of the system to users and contributors, who see themselves as having a greater role in achieving desired outcomes.

• Transparent systems allow for more thorough, ongoing evaluation. In the words of the open-source software movement, “given enough eyeballs, all bugs are shallow.” That is, the more people examining a system, each with differing knowledge and experiences, the more likely it is that subtle and obscure – but still potentially dangerous – problems within the system will be found.

Simply put, transparent systems tend to be more resilient because they can rely on the people using the systems to help uncover flaws and, if possible, to contribute to their solution. This isn’t to say that a closed, opaque system could never be considered resilient – but the effort required to maintain its level of resilience would be substantially greater.

Diversity

Resilient systems also tend to comprise a mix of differing components, loosely connected and decentralized. Resilient systems are more often structured as networks, rather than as hierarchies. This structure increases a system’s ability to respond to the unexpected and to better withstand the loss of components. Any system that includes elements deemed “too big to fail” can be conclusively said to be non-resilient.

One way to think about diversity as an aspect of resilience is to conceive of it as “avoidance of monocultures.” Monocultures – ecosystems consisting of a single species, often clones of a single individual – can be found in a number of industries, from tree farming to corporate IT. In stable environments, monocultures can maximize efficient returns. The risk of monocultures, however, is that they are quite brittle: If one member of an ecosystem is vulnerable to an attack, all members of that ecosystem are equally vulnerable. In an office with identical PCs, for example, a computer virus can jump unhindered from machine to machine. Conversely, it’s more difficult for a virus to spread in an office with a mix of platforms.
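The office example above can be made concrete with a small, deliberately simplified sketch (the platform names and counts here are hypothetical, chosen only for illustration): a virus that exploits a flaw in one platform compromises every machine in a monoculture, but only a fraction of a mixed fleet.

```python
def infected_count(platforms, virus_target):
    """Count machines compromised by a virus that only affects one platform."""
    return sum(1 for platform in platforms if platform == virus_target)

# A monoculture office: twenty identical machines.
monoculture = ["win"] * 20
# A mixed office: the same headcount, split across three platforms.
mixed = ["win"] * 7 + ["mac"] * 7 + ["linux"] * 6

print(infected_count(monoculture, "win"))  # total compromise: 20
print(infected_count(mixed, "win"))        # damage contained: 7
```

The mixed fleet costs more to administer – three platforms to patch instead of one – which is exactly the efficiency-for-safety trade the essay describes.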

Decentralization and differentiation strengthen resilient systems against a broad range of risks. Redundancy, sometimes thought of as “slack,” offers further protection. Through redundancy, resilient systems avoid “single point of failure” problems; ideally, this will rely on components that offer parallel services without being identical. If one part of the system fails, the substitute component can move right in. The clearest example of useful redundancy is the data backup system for your computer.
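The “no single point of failure” idea can be sketched in a few lines of code (a minimal illustration, assuming hypothetical `primary` and `backup` components): the system tries each parallel, non-identical component in turn, so the failure of one does not become the failure of the whole.

```python
def fetch_with_failover(sources):
    """Try each (name, component) pair in turn; return the first success.

    The components offer parallel services without being identical, so a
    failure cause in one is unlikely to be shared by the others.
    """
    errors = []
    for name, component in sources:
        try:
            return component()
        except Exception as exc:
            errors.append((name, exc))
    raise RuntimeError(f"all components failed: {errors}")

def primary():
    raise IOError("primary storage offline")   # simulated failure

def backup():
    return "data restored from backup"         # differentiated substitute

print(fetch_with_failover([("primary", primary), ("backup", backup)]))
```

Running two components where one would do is the “slack” the essay describes: wasteful when everything works, invaluable when it doesn’t.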

Diversity, with differentiated and redundant components, can have costs. When seeking to maximize results, diversity can easily be interpreted as a waste of resources and a reduction in efficiency. But its value is clear when we seek to minimize harm, as the very factors that reduce efficiency also serve to blunt the impact of system failures.

Foresight

Finally, resilient systems tend to integrate foresight. The capacity of a system to withstand unexpected shocks is boosted by mechanisms that enhance the early detection of (and preparation for) these shocks. This foresight doesn’t need to be 100 percent accurate to be a useful part of resilience; many of the steps that might be taken to prepare for one emergency can help a system prepare for many different emergencies.

Embedded foresight often takes the form of scenario-based planning. In brief, scenario planning builds multiple divergent stories of the coming years, with the goal of helping an organization devise strategies that would be effective across this spectrum of possibilities. These scenarios offer plausible alternative futures, not as predictions, but as tools for evaluation. Resilient foresight practices may include mechanisms to test how the system responds to imagined, plausible shocks, such as “war gaming.”

In my work, I often argue that foresight serves as something of an immune system for a resilient organization. In a biological immune system, a small sample of a given pathogen is sufficient to trigger the development of antibodies to defend the host. In a social system, foresight allows the safe sampling of different threats (and opportunities!) in order to trigger the development of strategies to avoid or take advantage of these possibilities.

A Resilient Future

What would a more resilient society look like? Superficially, it might appear to be less efficient and slower-moving, with fewer big, complex components and more emphasis placed on planning for future risks. I think those living in that society, however, would have quite a different experience. A resilient society would give citizens a greater sense of security, and perhaps greater license to take entrepreneurial risks. A society that works to minimize the harm of failure would provide greater opportunities for innovation and experimentation.

The “aspirational” model in which we currently live was well-suited for the 20th century world of steady growth, gradual technology change, a seemingly stable environment and predictable politics. The world of the 21st century, however, no longer offers that kind of environment. In a world of uneven growth, disruptive technology, climate danger and chaotic politics, we’re far better off building a society that’s transparent, diverse and able to look ahead.

A successful future is a resilient future.
