#163 – COMPLEXITY: THE WAGER – ANALYSIS OR INTUITION? – GEARY SIKICH

Introduction

Business continuity professionals need to rethink some of the paradigms of the practice.  All too often we fall back on what are considered the tried-and-true ways of doing things.  This essentially leaves us in two camps: the first evolved out of information technology and disaster recovery; the second evolved out of emergency preparedness (tactical planning), financial risk management (operational) and strategic planning (strategic).  Both camps leave something to be desired.

The first, having renamed disaster recovery as business continuity planning (BCP), still retains a strong focus on systems continuity rather than true business continuity; but this is not a bad thing.  The second has begun a forced merger of sorts, combining the varied practices at three levels (tactical, operational and strategic) and renaming the result enterprise risk management (ERM).  The second group still retains strong perspectives on risk management; that is why I have divided it into three sub-groups (tactical, operational and strategic).

Complexity, in Effect, is Changing the Business Continuity Paradigm

Complexity cannot be solved with a dependence on mathematics alone.  Nor can it be solved with intuition alone.  A balance has to be developed between the two in order to get a perspective on risk, threat, hazard, consequence and business impact.  As you assess, you have to start mapping the complexity that evolves out of the identification of a risk, threat, hazard, etc.  You need to think in three dimensions – strategic, operational and tactical.  Each feeds into the others and gives you a list of issues that you can relate to the identified risk, threat, hazard, etc.
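To make the three-dimensional mapping concrete, here is a minimal sketch of how one identified hazard might be related to issues at each level.  The hazard and the issues are hypothetical placeholders, not part of any published methodology:

```python
# Minimal sketch: relate one identified hazard to issue lists in the three
# dimensions described above. Hazard and issues are illustrative only.
hazard = "regional power outage"
issue_map = {
    "strategic":   ["market-share loss if competitors recover first"],
    "operational": ["data-center failover", "supplier production halts"],
    "tactical":    ["staff safety", "generator fuel logistics"],
}
for dimension, issues in issue_map.items():
    # Each dimension's issues feed the others and relate back to the hazard.
    print(f"{dimension}: {issues} (relate to '{hazard}')")
```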

Taken as a continuous, never-ending process, you begin to realize that risk, threat, hazard, etc. can only be buffered, and that the buffering process has to be refreshed in order to maintain risk parity.  This is not a once-and-done process, as many in the BC sphere seem to think (and practice).  It requires constant analysis and asset allocation to maintain parity.

Three conditions for Risk, Threat, Hazard, Vulnerability (RTHV) acceptance:

  • The benefits of buffering (spending) must be greater than the costs;
  • The effort must be directed at projects needing combined efforts;
  • The overall effort must be sustainable over time with the resources available.

These three criteria are applied independently and all must be satisfied in order to justify the buffering efforts.  Difficulties arise when costs and benefits are not well defined and when intuition substitutes for analysis in the decision making process.
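As a minimal sketch of applying these three tests, consider the following; the class name, fields and numbers are hypothetical illustrations rather than a prescribed formula:

```python
# A minimal sketch of the three RTHV-acceptance tests above. Names,
# fields and numbers are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class BufferingProposal:
    expected_benefit: float      # value protected if the RTHV is realized
    cost: float                  # up-front spend required to buffer
    needs_combined_effort: bool  # directed at a project needing combined efforts
    annual_upkeep: float         # recurring cost to keep the buffer refreshed
    annual_budget: float         # resources available over time

def accept_buffering(p: BufferingProposal) -> bool:
    """Apply the three criteria independently; all must be satisfied."""
    benefits_exceed_costs = p.expected_benefit > p.cost
    sustainable = p.annual_upkeep <= p.annual_budget
    return benefits_exceed_costs and p.needs_combined_effort and sustainable

print(accept_buffering(BufferingProposal(500_000, 120_000, True, 40_000, 60_000)))  # True
```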

Are buffering efforts sustainable?  Key factors for Primary Buffering Sustainability (PBS) are listed below, with an illustrative scoring sketch after the list:

  • Costs
  • Effectiveness over Time
  • Amplification of Change (positive or negative) on desired effect
  • You can quantify trends, but you cannot forecast or predict them
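One hedged way to combine these factors into a single score is sketched below.  The weighting, the exponential-decay model of effectiveness, and all numbers are assumptions for illustration only; no specific formula is prescribed here:

```python
# Hypothetical PBS score combining the key factors listed above: cost,
# effectiveness over time, and amplification of change. Illustrative only.
import math

def pbs_score(cost: float, budget: float,
              initial_effectiveness: float, decay_rate: float,
              amplification: float, horizon_years: int) -> float:
    """Average effectiveness over the horizon, scaled by amplification of
    change and penalized by cost relative to budget. Higher is better."""
    # Effectiveness decays over time unless the buffer is refreshed.
    avg_effectiveness = sum(
        initial_effectiveness * math.exp(-decay_rate * t)
        for t in range(horizon_years)
    ) / horizon_years
    affordability = max(0.0, 1.0 - cost / budget)
    return avg_effectiveness * (1.0 + amplification) * affordability

# A buffer that starts 80% effective, decays 20%/year, slightly amplifies
# the desired effect, and consumes half the budget:
print(round(pbs_score(50_000, 100_000, 0.8, 0.2, 0.1, 5), 3))
```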

Complex systems can go from sub-critical to critical spontaneously, as Per Bak’s research on “self-organized criticality” discovered.  When we think of business analysis today, we must realize that complexity adds to uncertainty regarding outcomes, and that modeling complexity is near impossible because the size of the worst event that can happen is an exponential function of the system scale; i.e., the larger the system (touchpoints), the greater the impact of events due to cascade effects and to changes that may be too minute to be observed.
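Per Bak’s own illustration of self-organized criticality was the sandpile model, which is easy to simulate.  The sketch below is a minimal Bak-Tang-Wiesenfeld sandpile; the grid size, threshold and run length are arbitrary choices:

```python
# Minimal Bak-Tang-Wiesenfeld sandpile, the model behind Per Bak's
# "self-organized criticality". Grains are dropped at random; when a cell
# exceeds the threshold it topples onto its neighbors, sometimes triggering
# system-wide cascades: rare avalanches dwarf the typical event.
import random

N, THRESHOLD = 20, 4
grid = [[0] * N for _ in range(N)]

def drop_grain() -> int:
    """Drop one grain and return the avalanche size (number of topples)."""
    grid[random.randrange(N)][random.randrange(N)] += 1
    topples = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(N):
            for j in range(N):
                if grid[i][j] >= THRESHOLD:
                    grid[i][j] -= THRESHOLD  # grains at the edge fall off
                    topples += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < N and 0 <= j + dj < N:
                            grid[i + di][j + dj] += 1
    return topples

sizes = [drop_grain() for _ in range(10_000)]
print("typical avalanche:", sorted(sizes)[len(sizes) // 2])
print("largest avalanche:", max(sizes))
```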

Embrace Risk Parity

Risk parity is a balancing of resources against a risk, threat, hazard, vulnerability (RTHV), etc.  You identify a RTHV and then balance the resources you allocate to buffer against the RTHV being realized (that is, occurring).  This is done for all RTHV that you identify, and it is a constant process of allocating resources to buffer each RTHV based on the expectation of its occurring and on the velocity, impact and ability to sustain resilience against its realization.  You then constantly reassess to determine what resources need to be shifted to address the RTHV.  This can be a short-term or long-term effort.  The main point is that achieving risk parity is a balancing of resources based on assessment of RTHV realization and potential consequences to the organization.  Risk parity is not static, because RTHV and consequences are not static.
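A minimal numeric sketch of this balancing act follows.  The RTHV list, likelihoods and impacts are invented, and allocating the buffer budget in proportion to expected consequence is just one plausible reading of risk parity, not a prescribed method:

```python
# Illustrative risk-parity allocation: spread a fixed buffer budget so each
# identified RTHV receives resources in proportion to its expected
# consequence (likelihood x impact). All names and numbers are hypothetical.
rthv = {
    "cyber breach":     {"likelihood": 0.30, "impact": 2_000_000},
    "supplier failure": {"likelihood": 0.15, "impact": 1_200_000},
    "key-staff loss":   {"likelihood": 0.40, "impact":   300_000},
}
budget = 500_000

expected = {name: v["likelihood"] * v["impact"] for name, v in rthv.items()}
total = sum(expected.values())
allocation = {name: budget * e / total for name, e in expected.items()}

for name, amount in allocation.items():
    print(f"{name:18s} -> {amount:,.0f}")
# Re-run after each reassessment: buffering an RTHV changes the RTHV.
```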

When I say RTHV is not static, I mean that when you identify a RTHV and take action to mitigate it, the RTHV changes in response to your action.  The RTHV may increase or decrease, but it changes because of the action taken.  You essentially create a new form of RTHV that you have to assess with regard to your action to mitigate the original RTHV.  This can become quite complex, as others will also be altering the state of the RTHV by taking actions to buffer it.  The network that your organization operates in (i.e., its “Value Chain” – customers, suppliers, etc.) reacts to actions taken to address RTHV; all are reacting, and this results in a non-static RTHV.

A good example would be the purchase of, say, 100 shares of a stock.  You have a RTHV that the stock will decline in value (downside RTHV); you might decide to sell a call option to offset the downside RTHV, or place a stop-loss order to minimize your loss.  In essence you have changed the RTHV (non-static).  The call option also creates a new RTHV: the stock may be called away if its price rises above the strike price.  This will limit your profit on the stock (upside RTHV).  In any event, you have altered the RTHV paradigm, and it has become non-static due to your actions and/or the actions of others within and external to your network.  This gets us to non-aligned RTHV, that is, RTHV that is influenced by nonlinear reaction.
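The payoff arithmetic of that covered-call example can be sketched in a few lines.  The share count, entry price, strike and premium below are hypothetical:

```python
# Payoff sketch for the covered-call example above: 100 shares hedged by a
# sold call. Prices and premium are hypothetical. Selling the call caps the
# upside (the new RTHV) while the premium cushions the downside.
SHARES, ENTRY, STRIKE, PREMIUM = 100, 50.0, 55.0, 2.0

def covered_call_pnl(price_at_expiry: float) -> float:
    stock_pnl = (price_at_expiry - ENTRY) * SHARES
    # If the stock closes above the strike, shares are called away at STRIKE.
    call_loss = max(0.0, price_at_expiry - STRIKE) * SHARES
    return stock_pnl - call_loss + PREMIUM * SHARES

for px in (40, 50, 55, 70):
    print(f"expiry at {px}: P&L = {covered_call_pnl(px):+,.0f}")
# Upside is capped at (55 - 50 + 2) * 100 = 700, however high the stock goes.
```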

I think that “relevance” is a very significant word with respect to KRIs (Key Risk Indicators).  You can have an extensive list, but if they are not relevant to the organization and its operations they do little to enhance risk management/business continuity efforts.  That said, we have to assess non-linearity and opacity with regard to their potential to obfuscate “relevance”.

Beware of Experts and Algorithms

As Daniel Kahneman puts it in Thinking, Fast and Slow: “Why are experts inferior to algorithms?  One reason is that experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions.  Simple combinations of features are better.”  This statement covers more, but let’s focus on the topic of overthinking: can people be led to avoid overthinking?
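The “simple combinations” idea echoes Robyn Dawes’ improper linear models: standardize each predictor and add them with equal weights rather than hunting for clever interactions.  A toy sketch, with invented data:

```python
# Illustration of "simple combinations of features": z-score each predictor
# across candidates and sum with equal (unit) weights. Data are invented.
from statistics import mean, stdev

candidates = {
    "A": {"track_record": 7, "liquidity": 3, "controls": 8},
    "B": {"track_record": 5, "liquidity": 9, "controls": 6},
    "C": {"track_record": 9, "liquidity": 4, "controls": 4},
}
features = ["track_record", "liquidity", "controls"]

# Standardize each feature, then add with no tuned weights or interactions.
stats = {f: (mean(c[f] for c in candidates.values()),
             stdev(c[f] for c in candidates.values())) for f in features}
scores = {
    name: sum((vals[f] - stats[f][0]) / stats[f][1] for f in features)
    for name, vals in candidates.items()
}
print(max(scores, key=scores.get), scores)
```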

When we define overthinking, are we not applying a bias to the process?  Can one overthink?  I would say that thinking is a constant process.  That said, if this constant process leads to decision paralysis – a failure to make a decision because new information is constantly being added to the data matrix – then I would agree: people can be led to overthink.  Algorithms are also flawed, by the very nature of having been developed by humans.  A good example is trading algorithms in stock markets.  Too many permutations lead to potential catastrophic events – not “Black Swans”, but catastrophes nonetheless.

New RTHV+C Paradigm: Efficient or Effective?

Uncertainty, in a certainty-seeking world, offers surprises to many people and, to a very select few, confirmation of the need for optionality.  When we identify RTHV and apply buffering measures, we alter the potential consequences of the RTHV being realized.  A focus on the potential consequences of the RTHV realized (RTHV+C) allows for the development of effective scenarios that can be used to validate plans and to effect cultural change within an organization and its complex web of connectivity.

The risk dynamics of the real world can provide unparalleled scenario-generation opportunities.  You can also explore the roles of the organization against real-world events, current issues, complexity and the volatility of your operational markets.

Chartis reports that operational risk has overtaken credit risk as the most important risk type.  So shouldn’t your scenarios reflect operational risk issues?  Fraud, cyber-threats, geopolitical uncertainties, human capital issues, supply chain disruptions, non-aligned business risks/impacts, etc. should be brought to the forefront of scenario planning.

Expanding your scenario planning horizons will allow for the incorporation of inputs from other disciplines within your organization and, perhaps, external to your organization.  Since risk is a non-static element, you will be able to add perspectives that allow you to better understand the nature of the risk that your organization is facing and to expand the operational risk management function to encompass more of the organization.

You Need to be Able to Connect the Dots

I have written several articles on the concepts of “FutureProofing” and “Touchpoint Analysis” that have appeared in print and on the Internet.  I will briefly summarize five key assumptions that have been used as a basis for the developmental framework of the Logical Management Systems, Corp. “FutureProofing” methodology.  These are:

  • Assumption #1: Modern business and government organizations represent complex systems operating within multiple networks;
  • Assumption #2: There are many layers of complexity within organizations and their “Value Chains”;
  • Assumption #3: Due to complexity, active analysis of the potential consequences of disruptive (positive and/or negative) events – RTHV buffering, risk parity, cascade analysis, etc. – is critical to survivability;
  • Assumption #4: Actions in response to disruptive events need to be coordinated with all touchpoints;
  • Assumption #5: Resources and skill sets are key issues that need to be recognized and addressed.

The recent World Economic Forum publication “Global Risks 2015” provides some excellent examples of connecting the dots.  Figure 3 of the WEF report, entitled “The Risks-Trends 2015 Interconnections Map,” is an example of what can be done when you connect the dots.

Conclusions

Risk taking is central to the functioning of any organization.  Excessive risk taking, combined with a simultaneous decline in the risk-absorption capacity of the organization, can lead to catastrophic results (e.g., the financial system in the financial crisis).  One can never achieve true certainty when assessing RTHV unless the probabilities are reduced to zero or one.  Opacity – constant uncertainty and changing factors – makes getting a clear picture of RTHV realities nearly impossible.  To overcome opacity you need to constantly monitor the RTHV+C environment.  It is all about targeted flexibility: the art of being prepared, rather than preparing for specific events.  Being able to respond, rather than relying on forecasts, facilitates early warning and proactive response to shifts in your market segment.

We live in a world full of consequences.  Our decisions need to be made with the most information available, with the recognition that all decisions carry flaws due to our inability to know everything.  Our focus should be on how our flawed decisions establish a context for flawed RTHV assessments, leading to flawed plans, resulting in flawed abilities to execute effectively.  If we change our thought processes from chasing symptoms and ignoring consequences to recognizing the limitations of decision making under uncertainty, we may find that the decisions we make have more upside than downside.

We’re limited not by the amount of RTHV we can identify, but by how inventively we think about RTHV and how much we’re willing to do to buffer against RTHV realization.  Here are seven identified needs for today’s risk/continuity managers:

  • Techniques for identifying permanent versus cyclical changes in the external operating environment;
  • Techniques for spotting and buffering risks so that the organization can leverage RTHV management activities for competitive advantage;
  • Tools for stimulating the creation of options, particularly where change is occurring rapidly and the scope for risk management action is shifting;
  • Tools for stimulating the understanding of opaque risk forces that are truly dynamic, with multiple orders of consequence effects;
  • Proven tools for improving strategy, risk management, business continuity and competitive intelligence processes, breaking inertia, and jolting conventional risk management thinking;
  • Techniques for generating and harnessing insights from big data about risks that customers, competitors, and suppliers present to the organization;
  • Techniques for identifying and focusing the top team’s attention on new or poorly understood risks – before it is too late and the risks materialize (risk realization).

Here are five factors affecting decision making under uncertainty:

  • Interconnectedness: Opportunities for risk contagion (geographic, category, geopolitical);
  • Asymmetry: Small events that can create disproportionate and unexpected effects;
  • Time Compression: “Just in time” processes have little leeway with effects of risk realization being felt rapidly;
  • “Noise”: Salient facts that are not noticed at the time of the event (a failure of critical thinking);
  • Information Vetting: Misinformation or inadequately provided information that has not been properly validated can lead to greater risk exposure and skewed responses.

I will close with a quote from Alexander Hamilton, who lived from 1755 to 1804 and was the first U.S. Treasury Secretary.  Hamilton said: “A nation which can prefer disgrace to danger is prepared for a master, and deserves one.”  Will a failure to connect the dots and rethink RTHV+C and uncertainty lead to your demise?

Bio:

Contact Information: E-mail: G.Sikich@att.net or gsikich@logicalmanagement.com.  Telephone: 1-219-922-7718.

Geary Sikich is a seasoned risk management professional who advises private and public sector executives on developing risk-buffering strategies to protect their asset base.  With an M.Ed. in Counseling and Guidance, Geary’s focus is human capital: what people think, who they are, what they need and how they communicate.  With over 25 years in management consulting as a trusted advisor, crisis manager, senior executive and educator, Geary brings unprecedented value to clients worldwide.

Geary is well-versed in contingency planning, risk management, human resource development, “war gaming,” as well as competitive intelligence, issues analysis, global strategy and identification of transparent vulnerabilities.  Geary began his career as an officer in the U.S. Army after completing his BS in Criminology.  As a thought leader, Geary leverages his skills in client attraction and the tools of LinkedIn, social media and publishing to help executives in decision analysis, strategy development and risk buffering.  A well-known author, his books and articles are readily available on Amazon, Barnes & Noble and the Internet.

REFERENCES

Apgar, David, “Risk Intelligence: Learning to Manage What We Don’t Know,” Harvard Business School Press, 2006.
Davis, Stanley M. and Meyer, Christopher, “Blur: The Speed of Change in the Connected Economy,” 1998.
Decision Theory: the Wikipedia article summarizing the major approaches to choice under uncertainty, including mention of Pascal, with extensive references; an excellent starting place to learn something of decision theory.
EyeWitness to History, “The Suicide of Socrates, 399 BC,” www.eyewitnesstohistory.com, 2003.
Jones, Milo and Silberzahn, Philippe, “Constructing Cassandra: Reframing Intelligence Failure at the CIA, 1947–2001,” Stanford Security Studies, 2013, ISBN-13: 978-0804785808.
Kahneman, Daniel, “Thinking, Fast and Slow,” Farrar, Straus and Giroux, 2011.
Kami, Michael J., “Trigger Points: How to Make Decisions Three Times Faster,” McGraw-Hill, 1988, ISBN 0-07-033219-3.
Klein, Gary, “Sources of Power: How People Make Decisions,” MIT Press, 1998, ISBN-13: 978-0-262-11227-7.
Sikich, Geary W., “Graceful Degradation and Agile Restoration Synopsis,” Disaster Resource Guide, 2002.
Sikich, Geary W., “Integrated Business Continuity: Maintaining Resilience in Times of Uncertainty,” PennWell Publishing, 2003.
Sikich, Geary W., “‘Transparent Vulnerabilities’: How We Overlook the Obvious, Because It Is Too Clear That It Is There,” 2008.
Sikich, Geary W., “Risk and Compliance: Are You Driving the Car While Looking in the Rearview Mirror?” 2013.
Sikich, Geary W., “Risk and the Limitations of Knowledge,” 2014.
Tainter, Joseph, “The Collapse of Complex Societies,” Cambridge University Press, 1990, ISBN-13: 978-0521386739.
Taleb, Nassim Nicholas, “Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets,” Random House, 2005; updated edition, 2008, ISBN-13: 978-1400067930.
Taleb, Nassim Nicholas, “The Black Swan: The Impact of the Highly Improbable,” Random House, 2007, ISBN 978-1-4000-6351-2; 2nd edition, 2010, ISBN 978-0-8129-7381-5.
Taleb, Nassim Nicholas, “Common Errors in Interpreting the Ideas of The Black Swan and Associated Papers,” NYU Poly Institute, October 18, 2009.
Taleb, Nassim Nicholas, “Antifragile: Things That Gain from Disorder,” Random House, 2012, ISBN 978-1-4000-6782-4.
