#12 – EARLY SIZING AND EARLY RISK ANALYSIS FOR SOFTWARE PROJECTS – (C) CAPERS JONES

Most methods of software sizing are based on software requirements and design documents, or on the source code itself.  For both new applications and enhancements this means that substantial funds will have been expended before sizing takes place. 

Many risks are proportional to software size.  As a result of delayed sizing, software project risk analysis usually occurs after development methods are locked in place.   When problems or risks are identified, it is often too late to introduce improved methods so the only options are staffing increases, unpaid overtime, or deferred functionality.

A new method of software sizing based on pattern matching can be used prior to requirements.  This new method permits very early risk analysis before substantial investments are made.  These risk studies are also performed early enough to be of use in selecting software development methods and practices.

This paper discusses early sizing via pattern matching and early risk detection, followed by risk prevention and risk mitigation.

INTRODUCTION
It is widely known and supported by empirical data that many kinds of software risks are proportional to software application size.  Small projects below 100 function points in size have few risks and are usually completed on time and within budget.  Large systems above 10,000 function points, on the other hand, have numerous severe risks.

Large systems above 10,000 function points are seldom completed on time or within budget, and about 35% are never completed at all due to being so late that their return on investment (ROI) switches from positive to negative.

The large systems that are delivered often contain many high-severity defects and do not operate effectively for a year or more.  Indeed, some clients of commercial software packages routinely wait for release 2.0 on the assumption that the first release will be buggy and unreliable.

Because of the strong correlation between software application size and application risks, early sizing would obviously be advantageous for analyzing potential risks and avoiding them before they occur.  The problem is that, until recently, it was seldom possible to ascertain application size before the requirements were known, and sometimes not until the middle of the design phase.

The older forms of software sizing include sizing by analogy with similar projects, function point analysis, Monte Carlo simulations, and using a few external indicators such as numbers of files or reports.

For legacy applications with existing source code there are several sizing methods available.  The oldest is to use code-counting tools that generate counts of physical lines of code, logical code statements, or both.

In the 1970s A.J. Albrecht and his colleagues at IBM developed a method called “backfiring” that converts counts of logical source code statements into function points.  Backfiring is fast, but not very accurate due to differences in individual programming styles.
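Mechanically, backfiring is just a table lookup and a division: statements per function point vary by language.  The sketch below uses illustrative conversion ratios only; published backfire tables differ by source, and individual programming style adds wide error bars, which is exactly the accuracy problem noted above.

```python
# Sketch of "backfiring": converting logical source statements into
# approximate IFPUG function points via language-level ratios.
# The ratios below are illustrative averages for this sketch only;
# published tables vary, and programming style adds wide error bars.

STATEMENTS_PER_FP = {
    "C": 128,
    "C++": 55,
    "Java": 53,
    "SQL": 13,
}

def backfire(logical_statements: int, language: str) -> float:
    """Return an approximate function point count for a code base."""
    return logical_statements / STATEMENTS_PER_FP[language]

# A 64,000-statement C program backfires to roughly 500 function points.
print(round(backfire(64_000, "C")))
```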

Newer tools are even more sophisticated: they analyze the source code to extract hidden business rules and then use proprietary algorithms to generate function point counts that come fairly close to the precision of manual counts by certified function point counters.  However, this method only works on software that already exists: it does not work for future software that has not yet been developed.

Sizing by analogy is useful, but few companies have enough data for the method to be widely used.  Monte Carlo simulations are basically sophisticated guesses.  Function point analysis from requirements and specifications is quite accurate, but requires written requirements and design documents so it cannot be performed early.  Also, because of the slow speed and high costs, function point analysis is seldom performed for applications much larger than 10,000 function points.

Application Sizing Using Pattern Matching
The pattern matching approach is based on the same methodology as the well-known Kelley Blue Book that is used to ascertain the approximate costs of used automobiles.  With the Kelley Blue Book, customers use checklists of manufacturers, models, and equipment such as satellite radios to form a pattern.  Users can then look up local and regional sales prices for automobiles that have the same pattern.

For software sizing via pattern matching a standard multiple-choice questionnaire is used (Appendix 1) that includes these topics:

Table 1:  Patterns for Application Sizing and Risk Analysis

  1. Local average team salary and burden rates
  2. Planned start date for the project
  3. Desired delivery date for the project
  4. Development methodologies that will be used (Agile, RUP, TSP, etc.) *
  5. CMMI level of the development group *
  6. Programming language(s) that will be used (C#, C++, Java, SQL, etc.) *
  7. Nature of the project (new, enhancement, etc.) *
  8. Scope of the project (subprogram, program, departmental system, etc.) *
  9. Class of the project (internal use, open-source, commercial, etc.) *
  10. Type of the project (embedded, web application, client-server, etc.) *
  11. Problem complexity ranging from very low to very high *
  12. Code complexity ranging from very low to very high *
  13. Data complexity ranging from very low to very high *

Note: Asterisks “*” indicate factors used for pattern analysis.

All of these topics are usually known well before requirements.  All of the questions are multiple choice questions except for start date and compensation and burden rates.  Default cost values are provided for situations where such cost information is not known or is proprietary.  This might occur if multiple contractors are bidding on a project and they all have different cost structures.

The answers to the multiple-choice questions form a “pattern” that is then compared against a knowledge base of more than 13,000 software projects.  As with the Kelley Blue Book and automobiles, software projects that have identical patterns usually have about the same size and similar results in terms of schedules, staffing, risks, and effort.
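The lookup itself can be pictured as a dictionary keyed by the answer pattern.  The sketch below is a minimal illustration with an invented two-entry knowledge base; the real SRM knowledge base of 13,000+ projects and its matching algorithm are proprietary.

```python
# Minimal sketch of sizing by pattern matching: the multiple-choice
# answers form a tuple (the "pattern") used as a key into a knowledge
# base of completed projects; the estimate is the median size of the
# projects sharing that pattern.  Knowledge-base contents are invented.

from statistics import median

# (nature, scope, class, type, complexity) -> sizes of matching projects
KNOWLEDGE_BASE = {
    ("new", "departmental system", "internal", "client-server", "average"):
        [9_400, 10_200, 10_800, 9_900],
    ("enhancement", "program", "internal", "web", "low"):
        [750, 820, 690],
}

def size_by_pattern(pattern: tuple) -> float:
    """Median function point size of projects matching the pattern."""
    return median(KNOWLEDGE_BASE[pattern])

p = ("new", "departmental system", "internal", "client-server", "average")
print(size_by_pattern(p))   # median of the four matching projects
```

Because the lookup cost does not depend on the size of the application being estimated, this also explains why a 100,000 function point system can be sized as quickly as a small one.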

Sizing via pattern matching can be used prior to requirements and therefore perhaps six months earlier than most other sizing methods.  The method is also very quick and usually takes less than 5 minutes per project.  With experience, the time required can drop down to less than 2 minutes per project.

The pattern matching approach is very useful for large applications > 10,000 function points where manual sizing might take weeks or even months. With pattern matching the actual size of the application does not affect the speed of the result and even massive applications in excess of 100,000 function points can be sized in a few minutes or less.

This method of sizing by pattern matching is covered by a U.S. utility patent application submitted to the Patent Office in January of 2012.   The algorithms for sizing by pattern matching are included in the author’s tool Software Risk Master™ (SRM).

Additional patent applications include rapid and early defect and defect removal predictions, rapid and early defect removal cost predictions, and rapid and early schedule, effort, and cost predictions for software development.  All of these methods can be used prior to requirements.

The method of sizing by pattern matching is metric neutral and does not depend on any specific metric.  However, because a majority of the author’s clients use function point metrics as defined by the International Function Point Users Group (IFPUG), the primary metric supported is IFPUG function points, counting rules 4.2.  More projects have been measured with IFPUG function points than with any other metric.

Many additional metrics can also be based on sizing via pattern matching including but not limited to:

Table 2:  Metrics Supported by Pattern Matching

  1. IFPUG function points
  2. Non-functional “function points” based on SNAP rules
  3. COSMIC function points
  4. FISMA function points
  5. NESMA function points
  6. Simple function points
  7. Mark II function points
  8. Unadjusted function points
  9. Function points “light”
  10. Engineering function points
  11. Feature points
  12. Use-case points
  13. Story points
  14. Lines of code (logical statements)
  15. Lines of code (physical lines)
  16. RICE objects
  17. Other metrics as needed

As a general observation, the software industry has too many metrics and too little empirical data.  All of these variations on function points make it difficult to do serious economic studies, because large samples of data usually include multiple function point styles.  Even lines of code have multiple counting methods: for specific languages, counts of physical lines and logical code statements can differ by more than 500%.
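The divergence between physical and logical counts can be illustrated crudely.  In the sketch below, logical statements are approximated by counting semicolons in a one-line C fragment; real code-counting tools use full parsers, so this is an illustration of the ambiguity, not a counting method.

```python
# Crude illustration of why physical and logical line counts diverge.
# Three C statements packed onto one physical line: a physical-line
# counter reports 1, a (naive, semicolon-based) logical counter reports 3.

c_fragment = "a = 1; b = 2; c = a + b;"

physical_lines = c_fragment.count("\n") + 1    # 1 physical line
logical_statements = c_fragment.count(";")     # 3 logical statements

print(physical_lines, logical_statements)
```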

The pattern matching approach depends upon the availability of thousands of existing projects to be effective.  However now that function point metrics have been in use for more than 35 years there are thousands of projects available.

One additional feature of pattern matching is that it can provide size data on requirements creep and on deferred functions.  Thus the pattern-matching method predicts size at the end of the requirements phase, creeping requirements, size at delivery, and also the probable number of function points that might have to be deferred to achieve a desired delivery date.

In fact the pattern matching approach does not stop at delivery, but can continue to predict application growth year by year for up to 10 years after deployment.

Because the pattern matching approach uses external characteristics rather than internal requirements, it can size applications such as open-source applications, commercial software, or proprietary software applications where actual requirements might not be available.  A few samples of sizing by pattern matching include:

Table 3:  Examples of Software Size via Pattern Matching

Using Software Risk Master ™

Application Size in IFPUG Function Points

  1. Oracle 229,434
  2. Windows 7 (all features) 202,150
  3. Microsoft Windows XP   66,238
  4. Google docs   47,668
  5. Microsoft Office 2003   33,736
  6. F15 avionics/weapons   23,109
  7. VA medical records   19,819
  8. Apple iPhone   19,366
  9. IBM IMS data base   18,558
  10. Google search engine   18,640
  11. Linux   17,505
  12. ITT System 12 switching   17,002
  13. Denver Airport luggage (original)   16,661
  14. Child Support Payments (state)   12,546
  15. Facebook     8,404
  16. MapQuest     3,793
  17. Microsoft Project     1,963
  18. Android OS (original version)     1,858
  19. Microsoft Excel     1,578
  20. Garmin GPS navigation (hand held)     1,518
  21. Microsoft Word     1,431
  22. Mozilla Firefox     1,342
  23. Laser printer driver (HP)     1,248
  24. Sun Java compiler     1,185
  25. Wikipedia     1,142
  26. Cochlear implant (embedded)     1,041
  27. Microsoft DOS circa 1998     1,022
  28. Nintendo Gameboy DS     1,002
  29. Casio atomic watch        933
  30. Computer BIOS        857
  31. SPR KnowledgePlan        883
  32. Function Point Workbench        714
  33. Norton anti-virus        700
  34. SPR SPQR/20        699
  35. Golf handicap analysis        662
  36. Google Gmail        590
  37. Twitter (original circa 2009)        541
  38. Freecell computer solitaire        102
  39. Software Risk Master™ prototype          38
  40. ILOVEYOU computer worm          22

The ability to size open-source and commercial applications or even classified weapons systems is a unique feature of sizing via pattern matching.

Early Risk Analysis
The main purpose of early sizing is to be able to identify software risks early enough to plan and deploy effective solutions.  If risks are not identified until after the requirements are complete, it is usually too late to make changes in development methods.

There are more than 200 software risks in total when technical risks, financial risks, and sociological risks are all considered.  Not all of these risks are directly related to application size.  The major risks where application size has been proven to be a major factor in application costs, schedules, and quality include but are not limited to:

Table 4:  Software Risks Related to Application Size

  1. Project cancellations
  2. Project cost overruns
  3. Project schedule delays
  4. Creeping requirements (> 1% per month)
  5. Deferred requirements (>20% of planned features)
  6. High defect potentials
  7. Low defect removal efficiency
  8. Error-prone modules in applications
  9. Odds of litigation for contract projects
  10. Low customer satisfaction levels
  11. Expensive and slow installation of application
  12. Long learning curves by clients and users
  13. Frequent user errors when learning new systems
  14. High cost of learning (COL)
  15. High cost of quality (COQ)
  16. High maintenance costs
  17. High warranty costs
  18. Excessive quantities of rework
  19. Difficult enhancement projects
  20. High total cost of ownership (TCO)

All 20 of these software risks are proportional to application size, so early sizing is a useful precursor for risk avoidance and risk mitigation.

Early Risk Prevention and Elimination
The main purpose of early sizing is risk detection followed by risk prevention and risk elimination.  There are two primary risks that are major causes of many other risks:

  1. Poor quality control.
  2. Unexpected creeping requirements.

Let us consider a large project in the nominal size range of 10,000 function points at the end of the requirements phase.  Let us assume a planned development schedule of 36 calendar months.

Worst-Case Scenario for 10,000 Function Points
Let’s start the worst-case scenario by assuming CMMI level 1 and waterfall development.  In this worst case, creeping requirements will add about another 25% of unplanned features and bring the total function points at delivery up to about 12,500.  These unplanned requirements will add about 8 calendar months to the schedule and hence cause the development schedule to stretch from 36 to 42 calendar months.  Of course it might be possible to defer 40% of the planned features, but that would lower the usefulness of the initial release to marginal levels.

There is yet another and bigger problem!

These large applications have defect potentials of about 6.00 defects per function point or 60,000 possible defects that need to be eliminated from requirements, design, code, and documents.  Unless pre-test inspections and static analysis are used prior to testing, there will still be at least 30,000 latent defects present when testing begins.  Test defect removal efficiency will be only about 80% or less.

The nominal test schedule for 10,000 function point applications is about 12 calendar months, but due to the huge number of latent defects the test schedule will probably double and extend to 24 months. (Normal test stages for 10,000 function points include unit test, function test, regression test, security test, performance test, usability test, component test, system test, and acceptance or Beta test.)

Even with unpaid overtime the unanticipated defects will add about 10 calendar months to the testing schedule, so the final schedule for the application would be close to 52 calendar months instead of the desired schedule of 36 calendar months.

The combination of unplanned requirements and inadequate quality control will probably cause such huge cost overruns and such long schedule delays that the return on investment (ROI) switches from positive to negative at about month 42.  If the ROI turns strongly negative, the entire project is likely to be terminated without completion.

However, if the project goes to completion and is delivered after 52 months, the total number of latent bugs at delivery might be 6,000, of which 1,200 will be of such high severity as to interfere with operation or produce incorrect results.   This means that successful use of the system may be delayed by another 12 months after installation.

If the application were created under contract or as an outsource project, the combination of delays, cost overruns, and poor quality leads to more than a 75% chance of litigation by the disgruntled client.

Of course the problems just described don’t have to occur.  There are known technologies that can eliminate them.  Let us now consider a best-case scenario.

Best-Case Scenario for 10,000 Function Points
For the best case we can assume CMMI level 3.  Assume that the same project of a nominal 10,000 function points utilized joint application design (JAD) for requirements analysis and gathering. Assume small prototypes of key features.  Assume that requirements inspections were used.  Assume the Team Software Process (TSP) was the primary development method.  Under such assumptions defect potentials would drop from 6.00 defects per function point to about 5.00 defects per function point.

Assume that design inspections were used to eliminate design problems.  Assume that static analysis was used for code defect detection, combined with code inspections for critical modules.  Assume that test cases were designed using mathematical methodologies.  Under these assumptions pre-test defect removal efficiency would be about 90%.

Under these assumptions requirements creep would be less than 1,000 function points and would add about 2 months.  Due to defect prevention and pre-test defect removal the number of bugs present when testing starts would be less than 5,000 instead of 30,000.  As a result the test schedule would shrink from 12 calendar months to 9 calendar months.  Testing defect removal efficiency would be about 90%.  The project would be delivered with a total elapsed schedule of only 35 calendar months instead of the planned schedule of 36 calendar months.

Less than 500 defects would be present at delivery of which about 60 might be serious.  The serious defects would be found in the first 90 days of usage.
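Using only the round numbers quoted in the two scenarios, the defect arithmetic can be sketched as a simple pipeline of removal percentages.  This is an illustration of the figures in the text, not SRM's actual algorithms; the 50% worst-case pre-test removal is the value implied by the 60,000-to-30,000 reduction described above.

```python
# Defect pipeline for the two 10,000 function point scenarios, using
# the round numbers from the text (a sketch, not SRM's algorithms).

def delivered_defects(fp, defects_per_fp,
                      pretest_removal, test_removal):
    """Latent defects remaining at delivery."""
    potential = fp * defects_per_fp
    after_pretest = potential * (1 - pretest_removal)
    return after_pretest * (1 - test_removal)

# Worst case: 6.0 defects/FP, ~50% pre-test removal (implied by the
# text's 30,000 latent defects at test start), ~80% test removal.
worst = delivered_defects(10_000, 6.0, 0.50, 0.80)

# Best case: 5.0 defects/FP, 90% pre-test removal, 90% test removal.
best = delivered_defects(10_000, 5.0, 0.90, 0.90)

print(round(worst), round(best))   # roughly 6,000 vs. 500 latent defects
```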

These two scenarios indicate why early sizing followed by risk analysis and risk prevention are urgently needed by the software industry.  All of the topics in the two scenarios are predicted by Software Risk Master™: schedules, defect potentials, pre-test and test defect removal efficiency, delivered defects, high-severity defects, and security flaws.  SRM also predicts project staffing, project costs, total cost of ownership (TCO), cost of quality (COQ), and cost of learning (COL).

For contract and outsource projects SRM will predict the odds of litigation, the probable month the litigation might be filed, and the probable expenses for both the plaintiff and the defendant.

The purpose of the Software Risk Master™ (SRM) tool is to combine very early sizing with risk analysis and risk prevention.  By running the SRM tool several times using alternative scenarios it is easily possible to see how requirements creep and grow, how various kinds of inspection and static analysis combine with testing, and how many bugs or defects are likely to be present when the software is released to customers.

Lifetime Sizing with Software Risk Master™
Although this article concentrates on quality and the initial release of a software application, the Software Risk Master™ sizing algorithms actually create 15 size predictions.  The initial prediction is for the nominal size at the end of requirements.  SRM also predicts requirements creep and deferred functions for the initial release.  After the first release SRM predicts application growth for a 10 year period.  To illustrate the full set of SRM size predictions, table 5 shows a sample application with a nominal starting size of 10,000 function points.  All of the values are in round numbers to make the patterns of growth clear:

Table 5: Software Risk Master™ Multi-Year Sizing
Copyright © 2011 by Capers Jones & Associates LLC
Patent application 61434091.  February 2011.

Nominal application size in IFPUG function points: 10,000

  1. Size at end of requirements     10,000
  2. Size of requirements creep       2,000
  3. Size of planned delivery        12,000
  4. Size of deferred functions      -4,800
  5. Size of actual delivery          7,200
  6. Year 1                          12,000
  7. Year 2                          13,000
  8. Year 3                          14,000
  9. Year 4                          17,000
  10. Year 5                         18,000
  11. Year 6                         19,000
  12. Year 7                         20,000
  13. Year 8                         23,000
  14. Year 9                         24,000
  15. Year 10                        25,000
As can be seen from table 5 software applications do not have a single fixed size, but continue to grow and change for as long as they are being used by customers or clients.
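One way to reproduce the round numbers in table 5 is roughly 1,000 function points of routine enhancement per year, plus a larger release every fourth year.  This is an assumption based on reading the table's pattern, not SRM's published growth algorithm.

```python
# Reproducing the post-release growth pattern of table 5:
# 1,000 FP of routine enhancement per year, plus an extra 2,000 FP
# "major release" every fourth year.  An interpretation of the table's
# round numbers, not SRM's actual algorithm.

def growth_after_release(delivered_fp=12_000, years=10):
    sizes = []
    size = delivered_fp
    for year in range(1, years + 1):
        if year > 1:
            size += 1_000          # routine enhancements
            if year % 4 == 0:
                size += 2_000      # periodic major release
        sizes.append(size)
    return sizes

print(growth_after_release())
# matches years 1-10 of table 5: 12,000 ... 25,000 function points
```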

Economic Modeling with Software Risk Master
Because Software Risk Master can predict the results of any methodology used for any size and kind of software project, it is in fact a general economic model that can show the total cost of ownership (TCO) and the cost of quality (COQ) for a variety of software development methods and practices.

For example SRM can show immediate results in less than one minute for any or all of the following software development methods, either alone or combined:

  1. Agile development
  2. Anti-patterns (harmful practices performed repeatedly)
  3. Capability Maturity Model Integrated (CMMI)™ – all 5 levels
  4. Clean-room development
  5. Consortium of IT Software Quality (CISQ) standards
  6. Cowboy development (unstructured; no formal process)
  7. Custom user-defined development methods
  8. Crystal development
  9. Data State Design (DSD)
  10. Department of Defense DOD 2167A software standard
  11. Dynamic systems development method (DSDM)
  12. Evolutionary Development (EVO)
  13. Extreme programming (XP)
  14. FDA Software validation
  15. Feature-driven development (FDD)
  16. Flow-based programming
  17. Formal methods and proofs of correctness
  18. Formal inspections (combined with other methods)
  19. Hybrid development (features from several methods)
  20. Iconix
  21. IEEE/EIA 12207 software lifecycle
  22. Information Engineering (IE)
  23. ISO 9001 quality standards
  24. ISO/IEC 9126 quality standard
  25. ISO/IEC 12207 development process standards
  26. Iterative development
  27. Jackson Development Method
  28. Kanban, Kaizen, Poka-Yoke, quality circles
  29. Lean software development (alone or in combination)
  30. Mashup software development
  31. Merise
  32. Microsoft Solutions Framework
  33. Mil-Std 498 military software standard
  34. Model-driven development
  35. Object-oriented development (OO)
  36. Object-Oriented Software Process (OOSP)
  37. OlivaNova model-based development
  38. Open-source development models
  39. Pair programming
  40. Pattern-Based Development
  41. Peer reviews (combined with other methods)
  42. Personal software process (PSP)
  43. Prince2
  44. Rapid application development (RAD)
  45. Rational unified process (RUP)
  46. Reusable components and artifacts (various levels of reuse)
  47. SEMAT – Software Engineering Methods and Theory
  48. Six Sigma (lean, software six sigma, etc.)
  49. Structured Analysis and Design Technique (SADT)
  50. SCRUM (alone or with other methods)
  51. Spiral development
  52. Structured Systems Analysis and Design Method (SSADM)
  53. Team software process (TSP)
  54. Test driven development (TDD)
  55. T-VEC requirements modeling
  56. V-model development
  57. Waterfall development

It takes less than one minute to switch Software Risk Master from one methodology to another, so it is possible to examine and evaluate 15 to 30 alternative methods in less than half an hour.

A useful feature of the Software Risk Master is a side-by-side comparison mode which allows any two methodologies to be compared and their differences highlighted.  Table 6 illustrates side-by-side risk analyses for two versions of a software application of 1000 function points in size.  One version uses the Team Software Process (TSP) and the other version uses Agile with Scrum:

Table 6:  Side-by-Side Risk Comparisons of Two Development Methodologies

                                           Team Software    Agile with    Differences in
SRM II Risk Predictions                    Process (TSP)    Scrum         Risk Profiles

Odds of schedule delay                        10.00%          15.00%         -5.00%
Odds of cost overrun                           7.00%          12.00%         -5.00%
Odds of cancellation                           2.00%          13.00%        -11.00%
Odds of poor quality                           4.00%          20.00%        -16.00%
Odds of poor reliability                       3.00%          20.00%        -17.00%
Odds of poor security protection              12.00%          35.00%        -23.00%
Odds of post-release cyber attack              5.00%           9.00%         -4.00%
Odds of poor user satisfaction                 4.00%          22.00%        -18.00%
Odds of difficult learning curve               6.00%          10.00%         -4.00%
Odds of poor stakeholder satisfaction          7.00%          18.00%        -11.00%
Odds of litigation – breach of contract        1.00%          12.00%        -11.00%
Odds of litigation – patents                   3.00%           3.00%          0.00%
Odds of high warranty repairs                 10.00%          10.00%          0.00%
Odds of excessive service calls               10.00%          20.00%        -10.00%
Odds of slow maintenance turnaround            4.00%          25.00%        -21.00%
Odds of competitive software                   5.00%           5.00%          0.00%

Average of all risks                           5.81%          15.56%         -9.75%

Software Risk Master can also model new, custom, proprietary, and hybrid methodologies, but in these cases it may be necessary to utilize special tuning parameters.  These tuning parameters take several minutes to adjust, but they allow Software Risk Master to match historical data to within about 1%.

Software Risk Master can also model any level of development team experience, management experience, tester experience, and even client experience.

Software Risk Master can also show the results of any programming language or combination of programming languages for more than 100 languages such as ABAP, Ada, APL, Basic, C, C#, C++, CHILL, COBOL, Eiffel, Forth, Fortran, HTML, Java, Javascript, Objective C, PERL, PHP, PL/I, Python, Ruby, Smalltalk, SQL, Visual Basic, and many other languages.   In theory Software Risk Master could support all 2,500 programming languages, but there is very little empirical data available for many of these.

To add clarity to the outputs, Software Risk Master can show identical data for every case, such as showing a sample application of 1000 function points and then changing methods, programming languages, CMMI levels, and team experience levels.  Using the same data and data formats allows side-by-side comparisons of different methods and practices.

This allows clients to judge the long-range economic advantages of various approaches for both development and total cost of ownership (TCO).

Software Outsource Contracts and Litigation for Contract Failures
One reason for early sizing and early risk analysis is to create workable and effective contracts between outsource vendors and their clients.  From the author’s work as an expert witness in a number of breach of contract lawsuits, four common problems occur in almost every case:

  1. The initial estimates were excessively optimistic for schedules and quality
  2. Quality control was inadequate and bypassed pre-test inspections and static analysis
  3. Unplanned requirements changes exceeded 2% per calendar month
  4. Monthly status tracking was poor and did not reveal problems until too late

Software Risk Master ™ is a useful tool for outsource contract analysis because it predicts both software defects and software requirements changes.  Thus both clients and outsource vendors will have an early understanding of the combined impact of rapid requirements growth and software defect removal methods on project schedules, costs, and success rates.

It is an unfortunate fact that about 5% of outsource contracts for large systems in the 10,000 function point size range end up in court for breach of contract.  Either the projects fail and are not completed, or they do not work when delivered, or their schedule and cost overruns are so large that litigation is filed by the disgruntled client.

The published data that goes into Software Risk Master has been used in breach of contract litigation since the 1990’s.  Some outputs from Software Risk Master are in current use in both arbitration and litigation.

Although Software Risk Master can be used to provide data for litigation, it would be a much better situation to use the same kind of data during early contract negotiations and thereby minimize or eliminate the chance of litigation later on.

Summary and Conclusions
Large software projects are among the riskiest business ventures in history.  The failure rate of large systems is higher than that of other kinds of manufactured products.  Cost overruns and schedule delays for large software projects are endemic and occur on more than 75% of large applications.

Early sizing via pattern matching combined with early risk analysis can improve the success rates of large software applications by alerting managers and software teams to potential hazards while there is still enough time to take corrective actions before significant funds are expended.

Sizing via pattern matching using the Software Risk Master™ demonstration web site

For additional information, a demonstration version of the Software Risk Master™ sizing method can be found at the following URL:

http://www.namcook.com

The web site contains a variety of reports and information that can be downloaded.  To use the Software Risk Master™ sizing tool a password is needed.  Contact the authors to arrange a demonstration if that would be of interest.

For more information about early risk analysis and early risk avoidance contact Capers Jones, president of Capers Jones & Associates LLC or Ted Maroney, president of Namcook Consulting LLC:

  1. Email: Capers.Jones3@Gmail.com
  2. Email: Ted.Maroney@Comcast.net.
  3. Web:   www.Namcook.com

References and Readings

Jones, Capers and Bonsignour, Olivier; The Economics of Software Quality; Addison Wesley Longman, Boston, MA; ISBN-10: 0-13-258220-1; 2011; 585 pages.

Jones, Capers; Software Engineering Best Practices; McGraw Hill, New York, NY; ISBN 978-0-07-162161-8; 2010; 660 pages.

Jones, Capers; Applied Software Measurement; McGraw Hill, New York, NY; ISBN 978-0-07-150244-3; 2008; 662 pages.

Jones, Capers; Estimating Software Costs; McGraw Hill, New York, NY; 2007; ISBN-13: 978-0-07-148300-1.

Jones, Capers; Software Assessments, Benchmarks, and Best Practices;  Addison Wesley Longman, Boston, MA; ISBN 0-201-48542-7; 2000; 657 pages.

Jones, Capers;  Conflict and Litigation Between Software Clients and Developers; Software Productivity Research, Inc.; Burlington, MA; September 2007; 53 pages; (SPR technical report).

 APPENDIX 1

SOFTWARE RISK MASTER™ INPUT QUESTIONNAIRE

Copyright © 2010-2011 by Capers Jones & Associates LLC.  All rights reserved

Draft 10.0 August 11, 2011

SECTION I:            SOFTWARE TAXONOMY AND PROCESS ASSESSMENT

 

Security Level                     _____________________________________________

Project Name                       _____________________________________________

Project Description (optional)     _____________________________________________

Industry or NAIC code (optional)   _____________________________________________

Organization (optional)            _____________________________________________

Location (optional)                _____________________________________________

Manager (optional)                 _____________________________________________

Data provided by (optional)        _____________________________________________

Current Date (MM/DD/YY)            _____________________________________________

Project Start (MM/DD/YY)           _____________________________________________

Planned Delivery (MM/DD/YY)        _____________________________________________

Actual Delivery (MM/DD/YY)         _____________________________________________
(If known)

PROJECT COST STRUCTURES                                      Default

Development monthly labor cost per staff member:             $7,500         ______
Burden rate percentage:                                      50%            ______
Average monthly loaded labor cost:                           $11,250        ______

Maintenance monthly labor cost per staff member:             $7,000         ______
Burden rate percentage:                                      50%            ______
Average monthly loaded labor cost:                           $10,500        ______

User monthly labor cost per staff member:                    $6,000         ______
Burden rate percentage:                                      50%            ______
Average monthly loaded labor cost:                           $9,000         ______

Additional costs or fees, if any:                            ______
(COTS acquisition, legal fees, consulting, function point analysis, patents, etc.)

Default effective work hours per staff month:                132
Project effective work hours per staff month:                ______
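The loaded labor rates above all follow the same arithmetic: loaded monthly cost = base monthly cost × (1 + burden rate), and dividing by the effective work hours per staff month gives an hourly cost. A minimal sketch of that calculation (the function names are illustrative, not part of Software Risk Master):

```python
def loaded_monthly_cost(base_monthly, burden_rate):
    """Loaded monthly labor cost: base salary plus burden (benefits, overhead)."""
    return base_monthly * (1 + burden_rate)

def cost_per_work_hour(loaded_monthly, work_hours_per_month=132):
    """Effective hourly cost using the default 132 work hours per staff month."""
    return loaded_monthly / work_hours_per_month

dev = loaded_monthly_cost(7500, 0.50)  # 11250.0
print(dev, round(cost_per_work_hour(dev), 2))
```

With the development defaults, $7,500 at a 50% burden rate yields the $11,250 loaded cost shown above.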

 

 

PROJECT GOALS:                                                                        _______

1. Smallest staff with schedule delays

2. Smaller than average staff

3. Average staff; average schedule (default)

4. Larger staff, with schedule reduction

5. Largest staff; shortest schedule

 

CURRENT CMMI LEVEL (or equivalent): _______

  1. Initial
  2. Managed
  3. Defined
  4. Quantitatively Managed
  5. Optimizing

 

DEVELOPMENT METHODOLOGY:                                                ________

  1. Mashup
  2. Hybrid
  3. OlivaNova
  4. TSP/PSP
  5. RUP
  6. XP
  7. Agile/Scrum
  8. Data state design
  9. T-VEC
  10. Information engineering (IE)
  12. Object Oriented
  13. RAD
  14. EVO
  15. Jackson
  16. SADT
  17. Spiral
  18. SSADM
  19. Iterative
  20. Flow based
  21. V-Model
  22. Prince2
  23. Merise
  24. DSDM
  25. Clean room
  26. ISO/IEC
  27. Waterfall
  28. Pair programming
  29. DoD
  30. Proofs of correctness
  31. Cowboy
  32. None

 

SIZING NEW APPLICATIONS AND ENHANCEMENTS

PROGRAMMING LANGUAGE(S):                                                __________________

PROGRAMMING LANGUAGE(S) LEVEL:                                    __________________

(Note: there are currently more than 2,500 programming languages, and most
applications use several of them.  For multiple or mixed languages an
“average” level of 8.5 can be used, although calculating a level from the
specific languages in use is more accurate.)

 

Software Risk Master™ Logical Source Code Sizing

Language    Programming Languages       Logical
Level       (Alphabetic)                LOC/FP

  4.00      ABAP                          80.00
  6.50      Ada 95                        49.23
  3.00      Algol                        106.67
 10.00      APL                           32.00
 19.00      APS                           16.84
 13.00      ASP NET                       24.62
  5.00      Basic (interpreted)           64.00
  1.00      Basic Assembly               320.00
  3.00      Bliss                        106.67
  2.50      C                            128.00
  6.25      C#                            51.20
  6.00      C++                           53.33
  3.00      Chill                        106.67
  7.00      CICS                          45.71
  3.00      COBOL                        106.67
  3.00      Coral                        106.67
  8.00      DB2                           40.00
 11.00      Delphi                        29.09
  7.00      DTABL                         45.71
 14.00      Eiffel                        22.86
  0.10      English text               3,200.00
  4.50      ESPL/I                        71.11
 50.00      Excel                          6.40
 18.00      Forte                         17.78
  5.00      Forth                         64.00
  3.00      Fortran                      106.67
  3.25      GW Basic                      98.46
  8.50      Haskell                       37.65
  2.00      HTML                         160.00
 16.00      IBM ADF                       20.00
  6.00      Java                          53.33
  4.50      JavaScript                    71.11
  1.45      JCL                          220.69
  3.00      Jovial                       106.67
  5.00      Lisp                          64.00
  0.50      Machine language             640.00
  1.50      Macro Assembly               213.33
  8.50      Mixed Languages               37.65
  4.00      Modula                        80.00
 17.00      MUMPS                         18.82
 12.00      Objective C                   26.67
  8.00      Oracle                        40.00
  3.50      Pascal                        91.43
  9.00      Pearl                         35.56
  6.00      PHP                           53.33
  4.00      PL/I                          80.00
  3.50      PL/S                          91.43
  5.00      Prolog                        64.00
  6.00      Python                        53.33
 25.00      QBE                           12.80
  5.25      Quick Basic                   60.95
  6.75      RPG III                       47.41
  7.00      Ruby                          45.71
  7.00      Simula                        45.71
 15.00      Smalltalk                     21.33
  9.00      Speakeasy                     35.56
  6.00      Spring 2.0                    53.33
 25.00      SQL                           12.80
 20.00      TELON                         16.00
 12.00      Visual Basic                  26.67
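The two numeric columns in the table are tied together by a fixed constant: logical LOC per function point equals 320 divided by the language level (level 4.00 → 80.00, level 6.50 → 49.23, and so on). A short sketch of the conversion, assuming that relationship:

```python
LOC_PER_FP_BASE = 320.0  # level-1 (basic assembly) logical statements per function point

def loc_per_fp(level):
    """Logical source statements per function point for a given language level."""
    return LOC_PER_FP_BASE / level

def size_in_loc(function_points, level):
    """Convert a function-point size to logical source code statements."""
    return function_points * loc_per_fp(level)

print(round(loc_per_fp(6.25), 2))      # 51.2, matching C# in the table
print(round(size_in_loc(1000, 6.0)))   # 53333 logical statements at level 6 (e.g. Java)
```

This is the same ratio that makes a high-level language "smaller" than assembly for the same functionality.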

 

 

LEGACY APPLICATION SIZING (Standard method):                                                _________

  1. < 10 KLOC or < 200 function points
  2. 10 to 100 KLOC; 200 to 2,000 function points
  3. 100 to 1,000 KLOC; 2,000 to 20,000 function points
  4. 1,000 to 100,000 KLOC; 20,000 to 200,000 function points
  5. > 100,000 KLOC; > 200,000 function points

 

SIZING LEGACY APPLICATIONS (Advanced method)

LEGACY APPLICATION SIZE IN LOGICAL CODE STATEMENTS:            _________

PRIMARY LANGUAGE (LANGUAGE 1):                                                            _________

PRIMARY LANGUAGE LEVEL:                                                            _________

PERCENT OF LEGACY CODE IN PRIMARY LANGUAGE:                                                                         _________

 

SECONDARY LANGUAGE (LANGUAGE 2):                                                                                          _________

SECONDARY LANGUAGE LEVEL:                                                                                                            _________

PERCENT OF LEGACY CODE IN SECONDARY LANGUAGE:                                                                        _________

 

PERCENT OF APPLICATION CHANGED IN THIS RELEASE:                                                                        _________

(If only portions of the legacy application are being changed,
specify the percentage; if the entire application will be changed,
enter 100%.)
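Given the language percentages and levels requested above, a legacy size in function points can be estimated by "backfiring": divide each language's share of the logical code by its LOC-per-function-point ratio and sum the results. A hypothetical sketch (the function and its parameters are illustrative, not the SRM implementation):

```python
def backfire_function_points(total_loc, languages):
    """Estimate function points from logical LOC for a mixed-language application.

    languages: list of (percent_of_code, language_level) tuples.
    Uses LOC/FP = 320 / level, consistent with the sizing table above.
    """
    fp = 0.0
    for percent, level in languages:
        loc = total_loc * percent / 100.0
        fp += loc / (320.0 / level)
    return fp

# 100,000 logical statements: 70% COBOL (level 3), 30% SQL (level 25)
print(round(backfire_function_points(100_000, [(70, 3.0), (30, 25.0)])))  # 3000
```

Note how the SQL portion, despite being only 30% of the code, contributes most of the function points because of its high language level.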

PROJECT NATURE: __

  1. New software application development
  2. Minor enhancement (small change to current application)
  3. Major enhancement (large change to current application)
  4. Minor package customization
  5. Major package customization
  6. Maintenance or defect repairs *
  7. Conversion or adaptation (migration to new hardware) *
  8. Conversion or adaptation (migration to new software) *
  9. Reengineering (re-implementing a legacy application) *
  10. Package installation with no customization *
  11. Package installation, data migration, and customization *

 

Note: Nature entries marked with an asterisk (*) are not supported in the SRM prototype.

 

PROJECT SCOPE: __

  1. Algorithm
  2. Subroutine
  3. Module
  4. Reusable module
  5. Disposable prototype or very small enhancement
  6. Evolutionary prototype or small enhancement
  7. Subprogram or medium enhancement
  8. Standalone program or large enhancement
  9. Multi-component program or major enhancement
  10. New component of a system with new features
  11. Release of a system with multiple changes
  12. New departmental system (initial release)
  13. New corporate system (initial release)
  14. New enterprise system (initial release)
  15. New national system (initial release)
  16. New global system (initial release)

 

 

PROJECT CLASS: __

  1. Personal program, for private use
  2. Personal program, to be used by others
  3. Academic program
  4. Internal program, for use at a single location
  5. Internal program, for use at multiple locations
  6. Internal program, for use on an intranet
  7. Internal program, developed by contractor
  8. Internal program, with functions used via web
  9. Internal program, using military specifications
  10. External program, to be put in public domain
  11. External program, to be placed on the Internet
  12. External program, leased to users
  13. External program, bundled with hardware
  14. External program, unbundled and marketed commercially
  15. External program, developed under commercial contract
  16. External program, developed under government contract
  17. External program, developed under military contract

 

 

PROJECT TYPE: __

  1. Nonprocedural (generated, query, spreadsheet)
  2. Batch application
  3. Web application
  4. Interactive application
  5. Interactive GUI applications program
  6. Batch database applications program
  7. Interactive database applications program
  8. Client/server applications program
  9. Computer game
  10. Scientific or mathematical program
  11. Expert system
  12. Systems or support program; “middleware”
  13. Service-oriented architecture (SOA)
  14. Communications or telecommunications program
  15. Process-control program
  16. Trusted system
  17. Embedded or real-time program
  18. Graphics, animation, or image program
  19. Multimedia program
  20. Robotics or automation program
  21. Artificial intelligence program
  22. Neural net program
  23. Hybrid project (multiple types)

Primary type:      ____________

Secondary type:    ____________

 

 

 

PROBLEM COMPLEXITY:                                                ________

  1. No calculations or only simple algorithms
  2. Majority of simple algorithms and calculations
  3. Majority of simple algorithms; some complex
  4. Algorithms and calculations of simple and average complexity
  5. Algorithms and calculations of average complexity (default)
  6. A few difficult algorithms mixed with average and simple
  7. More difficult algorithms than average or simple
  8. A large majority of difficult and complex algorithms
  9. Difficult algorithms and some extremely complex
  10. All algorithms and calculations are extremely complex

 

CODE COMPLEXITY:                                                 _________

  1. Most “programming” done with pull down controls
  2. Simple nonprocedural code (queries, spreadsheet)
  3. Simple plus average nonprocedural code
  4. Built with program skeletons and reusable modules
  5. Average structure with small modules and simple paths (default)
  6. Well structured, but some complex paths or modules
  7. Some complex modules, paths, and links between segments
  8. Above average complexity, paths, and links between segments
  9. Majority of paths and modules are large and complex
  10. Extremely complex structure with large modules and many calls

 

DATA COMPLEXITY:                                                _________

  1. No permanent data or files required by application
  2. Only one simple file required, with few data interactions
  3. One or two files, simple data, and little complexity
  4. Several data elements, but simple data relationships
  5. Multiple files and data interactions of average complexity (default)
  6. Multiple files with some complex data elements
  7. Multiple files, complex data elements and interactions
  8. Multiple files, majority of complex data and interactions
  9. Multiple files, complex data elements, many data interactions
  10. Majority of complex files, data elements, and interactions

 

 

EXPERIENCE INPUTS FOR SOFTWARE RISK MASTER ™ II

Note:  These inputs were not used in the original SRM I prototype.

CLIENT EXPERIENCE WITH SOFTWARE PROJECTS:                                    _______

  1. Very experienced clients
  2. Fairly experienced clients
  3. Clients of average experience
  4. Fairly inexperienced clients
  5. Very inexperienced clients

 

PROJECT MANAGEMENT EXPERIENCE:                                                _______

  1. Very experienced management
  2. Fairly experienced management
  3. Management of average experience
  4. Fairly inexperienced management
  5. Very inexperienced management

DEVELOPMENT TEAM EXPERIENCE:                                                _______

  1. All experts
  2. Majority of experts
  3. Even mix of experts and novices
  4. Majority of novices
  5. All novices

TEST TEAM EXPERIENCE:                                                _______

  1. All experts
  2. Majority of experts
  3. Even mix of experts and novices
  4. Majority of novices
  5. All novices

QUALITY ASSURANCE TEAM EXPERIENCE:                                                _______

  1. All experts
  2. Majority of experts
  3. Even mix of experts and novices
  4. Majority of novices
  5. All novices

CUSTOMER SUPPORT TEAM EXPERIENCE:                                                _______

  1. All experts
  2. Majority of experts
  3. Even mix of experts and novices
  4. Majority of novices
  5. All novices

MAINTENANCE TEAM EXPERIENCE:                                                _______

  1. All experts
  2. Majority of experts
  3. Even mix of experts and novices
  4. Majority of novices
  5. All novices

METHODOLOGY EXPERIENCE:                                                _______

  1. All experts
  2. Majority of experts
  3. Even mix of experts and novices
  4. Majority of novices
  5. All novices

PROJECT VALUE

NOTE: Value data is optional and may be supplied if known.  If value data is supplied, Software Risk Master will use it to predict ROI.

If value data is not supplied, Software Risk Master will calculate the minimum value needed to recover the total cost of ownership (TCO).

Direct revenues            _______________________

Indirect revenues            _______________________

Cost reductions            _______________________

TOTAL VALUE            _______________________
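The value logic in the note above reduces to simple arithmetic: if value data is supplied, ROI is total value divided by total cost of ownership; if not, the break-even value is the TCO itself. A minimal sketch (the names are illustrative, not the SRM implementation):

```python
def project_roi(total_value, total_cost_of_ownership):
    """ROI expressed as value returned per dollar of TCO."""
    return total_value / total_cost_of_ownership

def minimum_breakeven_value(total_cost_of_ownership):
    """With no value data, the minimum value needed to recover
    costs equals the total cost of ownership (ROI = 1.0)."""
    return total_cost_of_ownership

print(project_roi(3_000_000, 1_000_000))  # 3.0
```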

Send comments about questionnaire features to:

Capers Jones & Associates LLC

Email:  Capers.Jones3@gmail.com

 

 

 

 
