#38 – FIVE YEAR SOFTWARE TARGETS – CAPERS JONES

Following is a collection of 20 goals, or targets, for software engineering progress developed by Namcook Analytics LLC for the five years between 2014 and 2018.  All of these goals are achievable in 2014 and in fact have already been achieved by a small selection of leading companies.

Unfortunately, fewer than 5% of U.S. and global companies have achieved any of these goals, and fewer than 1% have achieved most of them.  None of the author’s clients have achieved every goal.

The author suggests that every major software-producing company and government agency develop its own set of five-year targets, using the current list as a starting point.

Twenty Software Engineering Targets from 2014 Through 2018

  1. Raise defect removal efficiency (DRE) from < 90.0% to > 99.5%.  This is the most important goal for the industry.  It cannot be achieved by testing alone but requires pre-test inspections and static analysis.  DRE is measured by comparing all bugs found during development to those reported in the first 90 days by customers.  (A small calculation sketch covering goals 1 through 3 follows this list.)
  2. Lower software defect potentials from > 4.0 per function point to < 2.0 per function point.  Defect potentials are the sum of bugs found in requirements, design, code, user documents, and bad fixes.  Requirements and design bugs often outnumber code bugs.  Achieving this goal requires effective defect prevention such as joint application design (JAD), quality function deployment (QFD), and certified reusable components, together with a complete software quality measurement program and better training in the common sources of defects found in requirements, design, and source code.
  3. Lower cost of quality (COQ) from > 45.0% of development to < 20.0% of development.  Finding and fixing bugs has been the most expensive task in software for more than 50 years.  A synergistic combination of defect prevention, pre-test inspections, and static analysis is needed to achieve this goal.
  4. Reduce average cyclomatic complexity from > 25.0 to < 10.0.  Achieving this goal requires careful analysis of software structures, and of course it also requires measuring cyclomatic complexity for all modules.  (A rough complexity counter is sketched after this list.)
  5. Raise test coverage from < 75.0% to > 98.5% for risks, paths, and requirements.  Achieving this goal requires using mathematical design methods for test case creation, such as design of experiments.  It also requires measurement of test coverage.
  6. Eliminate error-prone modules in large systems.  Bugs are not randomly distributed: error-prone modules (EPM) are usually less than 5% of total modules but receive more than 50% of total bugs.  Achieving this goal requires careful measurement of code defects during development and after release, with tools that can trace bugs to specific modules.  Some companies such as IBM have been doing this for many years.  Prevention is the best solution; existing error-prone modules in legacy applications may require surgical removal and replacement.
  7. Eliminate security flaws in all software applications.  As cyber crime becomes more common, the need for better security grows more urgent.  Achieving this goal requires use of security inspections, security testing, and automated tools that seek out security flaws.  For major systems containing valuable financial or confidential data, ethical hackers may also be needed.
  8. Reduce the odds of cyber attacks from > 10.0% to < 0.1%.  Achieving this goal requires a synergistic combination of better firewalls, continuous anti-virus checking with constant updates to virus signatures, and increasing the immunity of software itself by means of changes to basic architecture and permission strategies.
  9. Reduce bad-fix injections from > 7.0% to < 1.0%.  Not many people know that about 7% of attempts to fix software bugs contain new bugs in the fixes themselves, commonly called “bad fixes.”  When cyclomatic complexity tops 50 the bad-fix injection rate can soar to 25% or more.  Reducing bad-fix injection requires measuring and controlling cyclomatic complexity, using static analysis for all bug fixes, testing all bug fixes, and inspecting all significant fixes prior to integration.
  10. Reduce requirements creep from > 1.5% per calendar month to < 0.25% per calendar month.  Requirements creep has been an endemic problem of the software industry for more than 50 years.  While prototypes, agile embedded users, and joint application design (JAD) are useful, it is technically possible to also use automated requirements models to improve requirements completeness.
  11. Lower the risk of project failure or cancellation on large 10,000 function point projects from > 35.0% to < 5.0%.  Cancellation of large systems due to poor quality and cost overruns is an endemic problem of the software industry, and totally unnecessary.  A synergistic combination of effective defect prevention and pre-test inspections and static analysis can come close to eliminating this far too common problem.
  12. Reduce the odds of schedule delays from > 50.0% to < 5.0%.  Since the main reasons for schedule delays are poor quality and excessive requirements creep, solving some of the earlier problems in this list will also solve the problem of schedule delays.  Most projects seem on time until testing starts, when huge quantities of bugs begin to stretch out the test schedule to infinity.  Defect prevention combined with pre-test static analysis can reduce or eliminate schedule delays.
  13. Reduce the odds of cost overruns from > 40.0% to < 3.0%.  Software cost overruns and software schedule delays have similar root causes; i.e. poor quality control combined with excessive requirements creep.  Better defect prevention combined with pre-test defect removal can help to cure both of these endemic software problems.
  14. Reduce the odds of litigation on outsource contracts from > 5.0% to < 1.0%.  The author of this paper has been an expert witness in 12 breach of contract cases.  All of these cases seem to have similar root causes which include poor quality control, poor change control, and very poor status tracking.  A synergistic combination of early sizing and risk analysis prior to contract signing plus effective defect prevention and pre-test defect removal can lower the odds of software breach of contract litigation.
  15. Lower maintenance and warranty repair costs by > 75.0% compared to 2014 values.  Starting in about 2000 the number of U.S. maintenance programmers began to exceed the number of development programmers.  IBM discovered that effective defect prevention and pre-test defect removal reduced delivered defects to such low levels that maintenance costs were reduced by at least 45% and sometimes as much as 75%.
  16. Improve the volume of certified reusable materials from < 15.0% to > 75.0%.  Custom designs and manual coding are intrinsically error-prone and inefficient no matter what methodology is used.  The best way of converting software engineering from a craft to a modern profession would be to construct applications from libraries of certified reusable material; i.e. reusable requirements, design, code, and test materials.  Certification to near zero-defect levels is a precursor, so effective quality control is on the critical path to increasing the volumes of certified reusable materials.
  17. Improve average development productivity from < 8.0 function points per month to > 16.0 function points per month.  Productivity rates vary based on application size, complexity, team experience, methodologies, and several other factors.  However, when all projects are viewed in aggregate, average productivity is below 8.0 function points per staff month.  Doubling this rate needs a combination of better quality control and much higher volumes of certified reusable materials; probably 50% or more.
  18. Improve work hours per function point from > 16.5 to < 8.25.  Goal 17 and this goal are essentially the same but use different metrics.  However, there is one important difference: work hours will be the same in every country.  For example, a project in Sweden with 126 work hours per month will have the same number of work hours as a project in China with 184 work hours per month, but the Chinese project will need fewer calendar months than the Swedish project.  (A worked conversion follows this list.)
  19. Shorten average software development schedules by > 35.0% compared to 2014 averages.  The most common complaint of software clients and corporate executives at the CIO and CFO level is that big software projects take too long.  Surprisingly it is not hard to make them shorter.  A synergistic combination of better defect prevention, pre-test static analysis and inspections, and larger volumes of certified reusable materials can make significant reductions in schedule intervals.
  20. Raise maintenance assignment scopes from < 1,500 function points to > 5,000 function points.  The metric “maintenance assignment scope” refers to the number of function points that one maintenance programmer can keep up and running during a calendar year.  The range is from < 300 function points for buggy and complex software to > 5,000 function points for modern software released with effective quality control.  The current average is about 1,500 function points.  This is a key metric for predicting maintenance staffing for both individual projects and corporate portfolios.  Achieving this goal requires effective defect prevention, effective pre-test defect removal, and effective testing using modern mathematically based test case design methods.  It also requires low levels of cyclomatic complexity.  (A staffing sketch follows this list.)
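
To make the first three targets concrete, here is a minimal Python sketch of how DRE, defect potential, and COQ can be computed from the definitions above.  The function names and all sample counts and costs are illustrative assumptions, not Namcook benchmark data.

```python
# Minimal sketch of the metrics behind goals 1-3.  All sample counts
# and costs are illustrative assumptions, not Namcook benchmark data.

def defect_removal_efficiency(internal_bugs: int, field_bugs_90_days: int) -> float:
    """DRE: bugs removed before release divided by total bugs, where the
    total includes customer-reported bugs from the first 90 days."""
    total = internal_bugs + field_bugs_90_days
    return internal_bugs / total if total else 1.0

def defect_potential_per_fp(requirements: int, design: int, code: int,
                            documents: int, bad_fixes: int,
                            function_points: float) -> float:
    """Defect potential: the sum of bugs from all five origins,
    normalized per function point."""
    return (requirements + design + code + documents + bad_fixes) / function_points

def cost_of_quality_share(defect_cost: float, total_dev_cost: float) -> float:
    """COQ expressed as a fraction of total development cost."""
    return defect_cost / total_dev_cost

# Example: a hypothetical 1,000 function point project.
print(f"DRE: {defect_removal_efficiency(950, 50):.1%}")        # 95.0%
print(f"Defect potential: "
      f"{defect_potential_per_fp(1000, 1200, 1400, 300, 350, 1000):.2f} per FP")  # 4.25
print(f"COQ: {cost_of_quality_share(450_000, 1_000_000):.1%}") # 45.0%
```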
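
Goal 4 requires measuring cyclomatic complexity for every module.  As a rough illustration, the sketch below approximates complexity for Python source as decision points plus one; the counting rules here are simplified assumptions, and dedicated measurement tools implement McCabe’s graph-based definition more rigorously.

```python
# Rough cyclomatic complexity counter using Python's ast module.
# Approximation: decision points + 1.  Counting each BoolOp once is a
# simplification of McCabe's definition, so treat results as indicative.
import ast

_DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                   ast.BoolOp, ast.IfExp, ast.comprehension)

def cyclomatic_complexity(source: str) -> int:
    """Count decision points in the source and add one."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, _DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            return "found"
    return "none"
"""
print(cyclomatic_complexity(sample))  # 5: two ifs, one loop, one BoolOp
```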
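
The worked conversion below illustrates goal 18, using the Swedish and Chinese work months quoted in the goal itself; the 1,000 function point project size and the 16.5 hours per function point starting rate are illustrative inputs.

```python
# Worked conversion for goal 18.  The 126 and 184 hour work months come
# from the text; the project size and starting rate are assumed inputs.

WORK_HOURS_PER_FP = 16.5      # current industry average per goal 18
PROJECT_SIZE_FP = 1_000       # illustrative project size

total_hours = WORK_HOURS_PER_FP * PROJECT_SIZE_FP   # 16,500 hours in any country

for country, hours_per_month in [("Sweden", 126), ("China", 184)]:
    months = total_hours / hours_per_month
    print(f"{country}: {total_hours:,.0f} work hours -> {months:.1f} staff months")
# Sweden: 16,500 work hours -> 131.0 staff months
# China:  16,500 work hours -> 89.7 staff months
# With equal team sizes, the Chinese project also needs fewer calendar months.
```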
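
Finally, a small staffing sketch for goal 20: dividing a portfolio’s size by the assignment scope predicts maintenance headcount.  The one million function point portfolio is an assumed example, not a measured benchmark.

```python
# Staffing sketch for goal 20.  The portfolio size is an assumed example.

def maintenance_staff(portfolio_fp: float, assignment_scope_fp: float) -> float:
    """Predicted maintenance headcount: portfolio size / assignment scope."""
    return portfolio_fp / assignment_scope_fp

portfolio = 1_000_000  # corporate portfolio size in function points
print(f"At today's ~1,500 FP scope: {maintenance_staff(portfolio, 1_500):,.0f} programmers")
print(f"At the 5,000 FP target:     {maintenance_staff(portfolio, 5_000):,.0f} programmers")
# 667 versus 200 maintenance programmers for the same portfolio
```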

Note that the function point metrics used in this paper refer to function points as defined by the International Function Point Users Group (IFPUG).  Other function points such as COSMIC, FISMA, NESMA, unadjusted, etc. can also be used but would have different quantitative results.

The technology stack available in 2014 is already good enough to achieve each of these 20 targets, although few companies have done so.  Some of the technologies associated with achieving these 20 targets include but are not limited to:

Technologies Useful in Achieving Software Engineering Goals

  • Use early risk analysis, sizing, and both quality and schedule/cost estimation before starting major projects, using tools such as Namcook’s Software Risk Master (SRM).
  • Use effective defect prevention such as Joint Application Design (JAD) and Quality Function Deployment (QFD).
  • Use pre-test inspections of major deliverables such as requirements, architecture, design, code, etc.
  • Use both text static analysis and source code static analysis for all software.
  • Use the SANS Institute list of common programming bugs and avoid them all.
  • Use the Fog and Flesch readability tools on requirements, design, etc.
  • Use mathematical test case design such as design of experiments.
  • Use certified test and quality assurance personnel.
  • Use function point metrics for benchmarks and normalization of data.
  • Use effective methodologies such as agile and XP for small projects; RUP and TSP for large systems.
  • Use automated test coverage tools.
  • Use automated cyclomatic complexity tools.
  • Use parametric estimation tools that can predict quality, schedules, and costs.  Manual estimates tend to be excessively optimistic.
  • Use accurate measurement tools and methods with results accurate to within 3.0%.
  • Consider applying automated requirements models, which seem to be effective in minimizing requirements issues.
  • Consider applying the new SEMAT method (Software Engineering Methods and Theory) which holds promise for improved design and code quality.  SEMAT comes with a learning curve so reading the published book is necessary prior to use.

It is past time to change software engineering from a craft to a true engineering profession.  It is also past time to switch from partial and inaccurate analysis of software results to results with high accuracy for both predictions before projects start and measurements after projects are completed.

The 20 goals shown above are positive targets that companies and government groups should strive to achieve.  But “software engineering” also has a number of harmful practices that should be avoided and eliminated.  Some of these are bad enough to be viewed as professional malpractice.  Following are six hazardous software methods, some of which have been in continuous use for more than 50 years without their harm being fully understood:

Six Hazardous Software Engineering Methods to be Avoided

  1. Stop trying to measure quality economics with “cost per defect.”  This metric always achieves the lowest value for the buggiest software, so it penalizes actual quality.  The metric also understates the true economic value of software by several hundred percent.  This metric violates standard economic assumptions and can be viewed as professional malpractice for measuring quality economics.  The best economic measure for cost of quality is “defect removal cost per function point.”  Cost per defect ignores the fixed costs for writing and running test cases.  It is a well-known law of manufacturing economics that if a process has a high proportion of fixed costs, the cost per unit goes up as the number of units goes down.  The urban legend that it costs 100 times as much to fix a bug after release as before is not valid; the costs are almost flat if measured properly.  (A worked example follows this list.)
  2. Stop trying to measure software productivity with “lines of code” (LOC) metrics.  This metric penalizes high-level languages.  It also makes non-coding work such as requirements and design invisible.  This metric can be viewed as professional malpractice for economic analysis involving multiple programming languages.  The best metrics for software productivity are work hours per function point and function points per staff month.  Both can be used at activity levels and for entire projects, and for non-code work such as requirements and design.  LOC metrics have limited use for coding itself but are hazardous for larger economic studies of full projects, because they ignore the costs of requirements, design, and documentation, which are often larger than the costs of the code itself.  (See the second worked example after this list.)
  3. Stop measuring “design, code, and unit test” or DCUT.  Measure full projects including management, requirements, design, coding, integration, documentation, all forms of testing, etc.  DCUT measures encompass less than 30% of the total costs of software development projects.  It is professionally embarrassing to measure only part of software development projects.
  4. Be cautious of “technical debt.”  This is a useful metaphor but not a complete metric for understanding quality economics.  Technical debt omits the high costs of canceled projects and it excludes both consequential damages to clients and also litigation costs and possible damage awards to plaintiffs.  Technical debt only includes about 17% of the true costs of poor quality.  Cost of quality (COQ) is a better metric for quality economics.
  5. Avoid “pair programming.”  Pair programming is expensive and less effective for quality than a combination of inspections and static analysis.  Do read the literature on pair programming, especially the reports by programmers who quit jobs specifically to avoid it.  The literature in favor of pair programming also illustrates the general weakness of software engineering research: it does not compare pair programming to methods with proven quality results such as inspections and static analysis, but only compares pairs to single programmers, without any discussion of tools, methods, inspections, etc.
  6. Stop depending on testing alone without effective methods of defect prevention and effective methods of pre-test defect removal such as inspections and static analysis.  Testing by itself without pre-test removal is expensive and seldom tops 85% in defect removal efficiency.  A synergistic combination of defect prevention and pre-test removal such as static analysis and inspections can raise DRE to > 99% while lowering costs and shortening schedules at the same time.
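
The fixed-cost arithmetic behind hazard 1 can be shown in a few lines.  In the sketch below all dollar figures are invented for illustration; the pattern, not the numbers, is the point: as defect counts fall, cost per defect rises even though the true cost per function point falls.

```python
# Fixed-cost arithmetic behind the "cost per defect" hazard.
# All dollar figures are invented for illustration.

FIXED_TEST_COST = 10_000   # writing and running test cases (fixed)
COST_PER_FIX = 100         # variable cost of repairing one defect
FUNCTION_POINTS = 100

for defects in (100, 10):  # a buggy release versus a high-quality release
    total = FIXED_TEST_COST + COST_PER_FIX * defects
    print(f"{defects:>3} defects: ${total / defects:,.0f} per defect, "
          f"${total / FUNCTION_POINTS:,.0f} per function point")
# 100 defects: $200 per defect, $200 per function point
#  10 defects: $1,100 per defect, $110 per function point
# The buggier release looks cheaper per defect even though its true
# cost of quality per function point is almost twice as high.
```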
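
A similar worked example shows how LOC metrics penalize high-level languages, as noted in hazard 2.  The LOC-per-function-point ratios and effort figures below are rough illustrative assumptions, not published benchmarks.

```python
# How LOC metrics penalize high-level languages.  The LOC-per-FP ratios
# and effort figures are rough illustrative assumptions.

FUNCTION_POINTS = 100          # same application in both languages
WORK_HOURS_PER_MONTH = 132     # assumed work month

projects = {
    # language: (assumed LOC per function point, assumed total work hours)
    "Assembly": (320, 3_000),
    "Java": (53, 1_200),
}

for lang, (loc_per_fp, hours) in projects.items():
    loc = loc_per_fp * FUNCTION_POINTS
    months = hours / WORK_HOURS_PER_MONTH
    print(f"{lang}: {loc / months:,.0f} LOC per month, "
          f"{FUNCTION_POINTS / months:.1f} FP per month")
# Assembly: 1,408 LOC per month, 4.4 FP per month
# Java: 583 LOC per month, 11.0 FP per month
# Assembly looks over twice as "productive" by LOC, yet Java delivers the
# same functionality with far less effort; function points show this.
```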

The software engineering field has been very different from older and more mature forms of engineering.  One of the main differences between software engineering and true engineering fields is that software engineering has very poor measurement practices and far too much subjective information instead of solid empirical data.

This short paper suggests a set of 20 quantified targets that if achieved would make significant advances in both software quality and software productivity.  But the essential message is that poor software quality is a critical factor that needs to get better in order to improve software productivity, schedules, costs, and economics.

REFERENCES

Beck, Kent; Test-Driven Development; Addison Wesley, Boston, MA; 2002; ISBN 10: 0321146530; 240 pages.

Black, Rex; Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing; Wiley; 2009; ISBN-10 0470404159; 672 pages.

Chess, Brian and West, Jacob; Secure Programming with Static Analysis; Addison Wesley, Boston, MA; 2007; ISBN 13: 978-0321424778; 624 pages.

Cohen, Lou; Quality Function Deployment – How to Make QFD Work for You; Prentice Hall, Upper Saddle River, NJ; 1995; ISBN 10: 0201633302; 368 pages.

Crosby, Philip B.; Quality is Free; New American Library, Mentor Books, New York, NY; 1979; 270 pages.

Everett, Gerald D. and McLeod, Raymond; Software Testing; John Wiley & Sons, Hoboken, NJ; 2007; ISBN 978-0-471-79371-7; 261 pages.

Gack, Gary; Managing the Black Hole: The Executive’s Guide to Software Project Risk; Business Expert Publishing, Thomson, GA; 2010; ISBN 10: 1-935602-01-9.

Gack, Gary; Applying Six Sigma to Software Implementation Projects; http://software.isixsigma.com/library/content/c040915b.asp.

Gilb, Tom and Graham, Dorothy; Software Inspections; Addison Wesley, Reading, MA;  1993; ISBN 10: 0201631814.

Hallowell, David L.; Six Sigma Software Metrics, Part 1.; http://software.isixsigma.com/library/content/03910a.asp.

International Organization for Standardization; ISO 9000 / ISO 14000; http://www.iso.org/iso/en/iso9000-14000/index.html.

Jacobson, Ivar; Ng, Pan-Wei; McMahon, Paul; Spence, Ian; Lidman, Svante; The Essence of Software Engineering: Applying the SEMAT Kernel; Addison Wesley, 2013.

Jones, Capers; “A Short History of Lines of Code Metrics”; Namcook Analytics LLC, Narragansett, RI 2014.

Jones, Capers; “A Short History of the Cost per Defect Metric”; Namcook Analytics LLC, Narragansett RI 2014.

Jones, Capers; The Technical and Social History of Software Engineering; Addison Wesley Longman, Boston, Boston, MA; 2014.

Jones, Capers and Bonsignour, Olivier; The Economics of Software Quality; Addison Wesley, Boston, MA; 2011; ISBN 978-0-13-258220-9; 587 pages.

Jones, Capers; Software Engineering Best Practices; McGraw Hill, New York; 2010; ISBN 978-0-07-162161-8; 660 pages.

Jones, Capers; “Measuring Programming Quality and Productivity”; IBM Systems Journal; Vol. 17, No. 1; 1978; pp. 39-63.

Jones, Capers; Programming Productivity – Issues for the Eighties; IEEE Computer Society Press, Los Alamitos, CA; First edition 1981; Second edition 1986; ISBN 0-8186-0681-9; IEEE Computer Society Catalog 681; 489 pages.

Jones, Capers; “A Ten-Year Retrospective of the ITT Programming Technology Center”; Software Productivity Research, Burlington, MA; 1988.

Jones, Capers; Applied Software Measurement; McGraw Hill, 3rd edition 2008; ISBN 978-0-07-150244-3; 662 pages.

Jones, Capers; Critical Problems in Software Measurement; Information Systems Management Group, 1993; ISBN 1-56909-000-9; 195 pages.

Jones, Capers; Software Productivity and Quality Today – The Worldwide Perspective; Information Systems Management Group, 1993; ISBN 1-56909-001-7; 200 pages.

Jones, Capers; Assessment and Control of Software Risks; Prentice Hall, 1994;  ISBN 0-13-741406-4; 711 pages.

Jones, Capers;  New Directions in Software Management; Information Systems Management Group;  ISBN 1-56909-009-2;  150 pages.

Jones, Capers; Patterns of Software System Failure and Success; International Thomson Computer Press, Boston, MA; December 1995; ISBN 1-850-32804-8; 292 pages.

Jones, Capers;  Software Quality – Analysis and Guidelines for Success; International Thomson Computer Press, Boston, MA; ISBN 1-85032-876-6; 1997; 492 pages.

Jones, Capers; Estimating Software Costs; 2nd edition; McGraw Hill, New York; 2007; 700 pages.

Jones, Capers; “The Economics of Object-Oriented Software”; Namcook Analytics; Narragansett, RI; 2014.

Jones, Capers; “Software Project Management Practices: Failure Versus Success”; Crosstalk, October 2004.

Jones, Capers; “Software Estimating Methods for Large Projects”; Crosstalk, April 2005.

Kan, Stephen H.; Metrics and Models in Software Quality Engineering, 2nd edition;  Addison Wesley Longman, Boston, MA; ISBN 0-201-72915-6; 2003; 528 pages.

Land, Susan K.; Smith, Douglas B.; Walz, John Z.; Practical Support for Lean Six Sigma Software Process Definition: Using IEEE Software Engineering Standards; Wiley-Blackwell; 2008; ISBN 10: 0470170808; 312 pages.

Mosley, Daniel J.; The Handbook of MIS Application Software Testing; Yourdon Press, Prentice Hall; Englewood Cliffs, NJ; 1993; ISBN 0-13-907007-9; 354 pages.

Myers, Glenford; The Art of Software Testing; John Wiley & Sons, New York; 1979; ISBN 0-471-04328-1; 177 pages.

Nandyal, Raghav; Making Sense of Software Quality Assurance; Tata McGraw Hill Publishing, New Delhi, India; 2007; ISBN 0-07-063378-9; 350 pages.

Radice, Ronald A.; High Quality Low Cost Software Inspections; Paradoxicon Publishing, Andover, MA; ISBN 0-9645913-1-6; 2002; 479 pages.

Royce, Walker E.; Software Project Management: A Unified Framework; Addison Wesley Longman, Reading, MA; 1998; ISBN 0-201-30958-0.

Wiegers, Karl E.; Peer Reviews in Software – A Practical Guide;  Addison Wesley Longman, Boston, MA; ISBN 0-201-73485-0; 2002; 232 pages.
