#40 – IN SOFTWARE (AND IN CYBER) ‘FASTER IS SLOWER’ – GARY GACK

“Faster is Slower” is one of the “Laws” formulated by Peter Senge in his book The Fifth Discipline. This particular “law” plays out with a vengeance in larger software development projects, and often has a secondary negative impact on cyber security. Let’s look at a rather typical scenario that illustrates this dynamic.

El Supremo has decided it’s time to build a new software system to automate the xyz function of the business, and has directed the IT team to deliver “it” in no more than 9 months. “It” is defined by a set of 14 napkins carefully constructed in a couple of hours at the local watering hole. The IT team rushes off to see if they can translate those napkins into something a bit more specific. They spend the next few weeks interviewing some of the prospective users, prepare a 40-page “requirements specification”, and use that as a basis to estimate what it will take to get this done. They then review the spec and their estimate (9 months, of course – these guys aren’t dumb) with Le Grand Fromage and get a sign-off to proceed.

WHAT TYPICALLY HAPPENS?
Four months into this effort the IT team recognizes they are already 2 months behind the schedule they originally planned – it turns out they didn’t fully understand what was required, and they now think the project is 50% bigger than the original estimate. Oh my! What shall we do?

Simon Legree, the software team leader, decides we’ll just buckle down, work some overtime, and get this done on the original schedule.  Nice try, but no cigar.  Six more weeks go by and even Legree realizes there’s no way all of the planned requirements can be completed within the nine month schedule – so, he decides to drop some of the planned functionality – roughly 30% of the intended features.

Seven months into the project the team is finally ready to start testing (remember healthcare.gov?).  When testing does start everyone is very surprised at the number of bugs being found.  Even worse, the rate of bug discovery is not dropping at the rate we usually expect.  In fact, the number of bugs found each week through eight weeks of testing has remained constant.  We’re now nine months into this effort and we have a product that is so full of bugs we don’t feel at all good about releasing it – and, by the way, we’ve not had time to do any security testing at all.
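A simple way to make this warning signal concrete is to look at the trend in weekly defect-discovery counts. The sketch below (with made-up weekly counts, purely illustrative) fits a least-squares slope to the counts: a healthy test phase shows a clearly negative slope as discovery tapers off, while a flat slope – as in the scenario above – says many undiscovered defects remain.

```python
def discovery_trend(weekly_counts):
    """Least-squares slope of defects found per week.

    A slope near zero (or positive) means bug discovery is not
    tapering off -- a strong argument against releasing."""
    n = len(weekly_counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(weekly_counts) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, weekly_counts))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

# Hypothetical eight-week testing histories:
healthy  = [40, 33, 25, 18, 12, 7, 4, 2]       # tapering off: nearly done
troubled = [40, 42, 39, 41, 40, 43, 38, 41]    # flat: nowhere near done

print(round(discovery_trend(healthy), 2))      # strongly negative
print(round(discovery_trend(troubled), 2))     # close to zero
```

Nothing about this requires fancy tooling – a spreadsheet trendline over the weekly bug counts tells the same story. The point is to look at the trend at all, rather than celebrating raw “bugs fixed” numbers.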

In many cases the end result is a 3 or 4 month delay in the actual launch, with 30% less functionality than planned, and with very low confidence in cyber security.  In a few cases, like healthcare.gov, El Supremo decides we’ll launch “on time” and deal with the consequences as they arise.  We all know how well that can play out.

WHAT COULD (AND SHOULD) HAPPEN?
As a first step, perhaps we can collectively acknowledge the FACT that the vast majority of failures like the one described above are MANAGEMENT failures, rarely the result of technical incompetence. We could have, and should have, “bullet-proofed” El Supremo’s shoes with a few simple, but rarely applied, best practices.

  • Get an independent estimate of size, effort, and schedule prepared by estimating specialists.  Very few software development teams are estimating experts.  Software estimating is a highly specialized field that uses a set of tools and methods that are the province of only a few firms (names available on request). Independence is essential – job security is clearly at risk if the internal team contradicts El Supremo’s wishes.  In the example above the original schedule never had any chance of being met – no software team in history has built a system of that size in nine months.
  • Establish an explicit goal for delivered quality – how many defects are acceptable? What kind of defects are acceptable – e.g., how firm are cyber security requirements?  How is delivered security to be validated?
  • Create an explicit quality plan.  How many defects are likely to be “inserted” at each stage of the process – in requirements, in design, in the code?  How many defects are planned to be found and fixed at each stage?  What methods are to be used to find those defects?  What level of effort is planned to implement those methods? Industry benchmarks and expertise exist that allow independent experts to evaluate the feasibility of the proposed quality plan.  Finding defects early is essential – finding them all in testing always results in schedule and cost overruns.
  • Establish “leading indicator” reporting to monitor “quality-adjusted” progress. Work products that have not been evaluated for quality may appear complete, but in reality they are not complete until their defects have been removed.
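The quality-plan arithmetic in the list above can be sketched in a few lines. The numbers below are illustrative assumptions, not industry benchmarks: defects “inserted” at each stage, and the fraction the planned reviews, inspections, or tests at that stage are expected to find and fix. Comparing a plan with early defect removal against a test-only plan shows why finding everything in testing guarantees overruns.

```python
def escaped_defects(phases):
    """Walk the phases in order, accumulating inserted defects and
    removing the planned fraction at each phase.  The result is the
    estimated number of defects delivered to users."""
    remaining = 0.0
    for name, inserted, removal_efficiency in phases:
        remaining += inserted
        remaining *= (1.0 - removal_efficiency)
    return remaining

# (phase, defects inserted there, fraction found & fixed there)
with_early_removal = [
    ("requirements", 100, 0.50),
    ("design",       150, 0.50),
    ("code",         300, 0.60),
    ("test",           0, 0.85),
]

test_only = [
    ("requirements", 100, 0.00),   # no requirements reviews
    ("design",       150, 0.00),   # no design inspections
    ("code",         300, 0.00),   # no code reviews
    ("test",           0, 0.85),   # testing carries the whole load
]

print(escaped_defects(with_early_removal))  # a couple dozen escapes
print(escaped_defects(test_only))           # several times as many
```

A plan like this also makes the quality goal negotiable in concrete terms: if the delivered-defect number is unacceptable, the conversation turns to which stage gets more review effort – before the schedule is committed, not after testing blows up.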

None of these steps is rocket science.  All can and should be applied to every important software project.  Applying them, however, requires specific expertise rarely found in most software teams.  Just as there is a big difference between “architects” and “engineers”, estimating and quality management are very different areas of expertise from software development itself.

Bio:

Gary Gack, is the founder and President of Process-Fusion.net, a provider of Assessments, Strategy advice, Training, and Coaching relating to integration and deployment of software and IT best practices. Mr. Gack holds an MBA from the Wharton School, is a Lean Six Sigma Black Belt and an ASQ Certified Software Quality Engineer. He has more than 40 years of diverse experience, including more than 20 years focused on process improvement. He is the author of many articles and a book entitled Managing the Black Hole: The Executive’s Guide to Software Project Risk. LinkedIn profile: http://www.linkedin.com/in/garygack

He can be contacted at: 904.579.1894 or ggack@Process-Fusion.net
