When applying Enterprise Risk Management (ERM), as in much else in life, the devil is in the details. The details are especially critical when applying standards such as ISO 31000 to software- and IT-intensive systems. ISO 31000 describes principles, a framework, and a high-level process for ERM. Clause 5 of ISO 31000 identifies the process steps listed below; in this article I will focus on risk assessment and risk treatment as they apply to software- and IT-intensive contexts.
- 5.2 Communication and consultation
- 5.3 Establishing the context
- 5.4 Risk assessment
  - 5.4.2 Risk identification
  - 5.4.3 Risk analysis
  - 5.4.4 Risk evaluation
- 5.5 Risk treatment
As summarized in the table below, IT risks fall into three broad categories: software development risks, software operation (usage) risks, and infrastructure risks. For each category I have identified a few of the more significant subcategories, along with several of the more common risk drivers and associated recognition and remediation strategies.
In this article I will focus on software development risks (which are more fully explored in my book Managing the Black Hole: The Executive’s Guide to Software Project Risk).
| IT Risk Category | Sub-Category | Risk Drivers | Risk Recognition & Remediation |
|---|---|---|---|
| Software Development | Delivered Quality | Defect Containment | Metrics, Phase Containment |
| | Schedule | Estimates | Sizing |
| | Cost | Quality, Schedule | Focus on Quality |
| Software Operation | Security | Standards Adherence | NIST 800 Series |
| | Data Corruption | Fallback Procedures | ITIL |
| | Recoverability | Configuration Management | ITIL |
| Infrastructure | Capacity | Volume Estimates, Scaling | ITIL |
| | Reliability | Monitoring, Maintenance | ITIL |
| | Recoverability | Disaster Recovery Plan | e.g., http://www.ready.gov/ |
SOFTWARE IS CENTRAL TO YOUR ABILITY TO RUN YOUR BUSINESS EFFECTIVELY
Software projects are risky – failures are common. Fewer than one third of all software projects (purchased or built) are fully successful, where success means delivered on time, on budget, and with the intended features and functions. The average software project overruns its budget by around 50% and its schedule by around 80%, and delivers less than 70% of planned features and functions. These statistics have not changed significantly over at least the last 20 years!
Large projects are the high-risk group: they account for around 80–90% of total software spending, even though they constitute only around 8–10% of all software projects. Software development risk identification in the context of software-intensive systems therefore largely means identifying the large projects.
SOFTWARE DEVELOPMENT RISK ASSESSMENT
Three subcategories of risk predominate: quality risk, schedule risk, and cost risk.
What are the consequences to the enterprise if delivered quality is poor? Industry benchmark data suggest average delivered quality will result in post-delivery defect remediation costs equal to the original development cost – is your organization prepared to absorb that cost and tie up scarce development resources for one or more years post-delivery?
What are the consequences to the enterprise if, as is typical, the actual project duration is 80% longer than forecast? What are the consequences for your competitive position? For compliance with government mandates?
What are the consequences to the enterprise if, as is typical, the actual cost of the project is 50% greater than planned? And if as a consequence, budget is no longer available to do the next critical project?
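To make the exposure concrete, the typical figures cited above can be turned into a back-of-the-envelope calculation. The sketch below is illustrative only: the overrun multipliers come from the industry averages in this article, while the project figures and the function name are hypothetical.

```python
# Illustrative sketch: translate the typical overrun figures cited above
# into a rough risk exposure for a planned project. The default multipliers
# reflect the article's industry averages; the project inputs are made up.

def expected_outcome(planned_cost, planned_months,
                     cost_overrun=0.50,       # average ~50% budget overrun
                     schedule_overrun=0.80,   # average ~80% schedule overrun
                     remediation_factor=1.0): # post-delivery fixing ~= dev cost
    """Return a naive 'typical outcome' cost, duration, and remediation bill."""
    likely_cost = planned_cost * (1 + cost_overrun)
    likely_months = planned_months * (1 + schedule_overrun)
    remediation = planned_cost * remediation_factor
    return likely_cost, likely_months, remediation

cost, months, fix_cost = expected_outcome(1_000_000, 12)
print(f"Likely cost: ${cost:,.0f}; duration: {months:.1f} months; "
      f"post-delivery remediation: ${fix_cost:,.0f}")
```

A nominally $1M, 12-month project thus carries a plausible exposure of $1.5M over roughly 22 months, plus another $1M of post-delivery defect remediation under these assumptions.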
SOFTWARE DEVELOPMENT RISK TREATMENT
ISO 31000 suggests using the ‘4-Ts’ (tolerate, treat, transfer, terminate) to decide how to respond to identified risk. In virtually every software development case it is essential to treat the risks. Very briefly, treatment means:
- Require a formal quantification of the ‘size’ of a project (using a generally accepted method such as “function points”[1]) before budget or schedule commitments are made.
- Require a detailed quality plan that includes a forecast of the number of defects likely to be ‘inserted’ during each phase of a project. Industry benchmarks are available that enable estimation of the number of requirements, design, code, and bad-fix defects likely to be ‘inserted’ per unit of size during the development process. The quality plan must specify a specific target for delivered quality (e.g., 99% of forecast defects will be removed prior to delivery[2]). The quality plan must include a credible estimate of the effort and methods planned to find and fix the expected number of defects. Again, industry benchmarks provide a basis for evaluating the plausibility of the plan. Testing alone will NEVER be sufficient. Require periodic independent monitoring of forecast vs. actual. 40–60% of total software cost is typically associated with finding and fixing defects – best-in-class groups may reduce that to 20–30%.
- Require independent estimates of cost and schedule that are derived using a proven estimating model that uses size and other factors as inputs.
- NEVER dictate a schedule – unrealistically compressed schedules invariably lead to failure – don’t deny the “laws of physics” – if no one else has done a project of the indicated size on the schedule you want, your team won’t either.
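The quality-plan arithmetic described above can be sketched as follows. The per-function-point insertion rates are placeholder assumptions, not actual industry benchmarks (substitute figures from a published source such as Capers Jones's data); the TCE calculation follows the footnoted definition of Total Containment Effectiveness as the fraction of forecast defects removed before delivery.

```python
# Hedged sketch of the quality-plan arithmetic: forecast defects inserted
# per phase from project size, then compute Total Containment Effectiveness.
# The rates below are ASSUMED placeholders, not real benchmark values.

ILLUSTRATIVE_DEFECTS_PER_FP = {  # assumed insertion rates, demo only
    "requirements": 1.00,
    "design": 1.25,
    "code": 1.75,
    "bad_fix": 0.40,
}

def forecast_defects(size_fp, rates=ILLUSTRATIVE_DEFECTS_PER_FP):
    """Forecast defects inserted per phase for a project of size_fp function points."""
    return {phase: rate * size_fp for phase, rate in rates.items()}

def total_containment_effectiveness(found_pre_delivery, total_forecast):
    """TCE: fraction of forecast defects removed before delivery."""
    return found_pre_delivery / total_forecast

forecast = forecast_defects(1000)       # a 1,000 function point project
total = sum(forecast.values())          # 4,400 defects under these assumed rates
tce = total_containment_effectiveness(4200, total)
print(f"Forecast total defects: {total:.0f}; TCE = {tce:.1%}")
if tce < 0.95:
    print("Below the 95% TCE threshold at which cost and schedule are minimized")
```

Comparing the computed TCE against the 95% threshold gives a simple go/no-go check on whether the planned defect-removal activities are plausible before delivery.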
[1] Function Points are defined by an ISO standard (ISO/IEC 20926:2009). Function Point counting is labor intensive and is based on known requirements; in practice, size is needed before requirements are fully known. Capers Jones has devised an approach known as Software Risk Master that uses an analogy process to determine size very early – see www.namcook.com
[2] This generally means using a metric known as “Total Containment Effectiveness” (TCE). Industry data show cost and schedule are minimized when TCE is above 95%.
Bio:
Gary Gack is the founder and President of Process-Fusion.net, a provider of assessments, strategy advice, training, and coaching relating to integration and deployment of software and IT best practices. Mr. Gack holds an MBA from the Wharton School, is a Lean Six Sigma Black Belt, and is an ASQ Certified Software Quality Engineer. He has more than 40 years of diverse experience, including more than 20 years focused on process improvement. He is the author of many articles and a book entitled Managing the Black Hole: The Executive’s Guide to Software Project Risk. LinkedIn profile: http://www.linkedin.com/in/garygack