Most methods of software sizing are based on software requirements and design documents, or on the source code itself. For both new applications and enhancements this means that substantial funds will have been expended before sizing takes place.
Many risks are proportional to software size. As a result of delayed sizing, software project risk analysis usually occurs after development methods are locked in place. When problems or risks are identified, it is often too late to introduce improved methods, so the only remaining options are staffing increases, unpaid overtime, or deferred functionality.
A new method of software sizing based on pattern matching can be used prior to requirements. This new method permits very early risk analysis before substantial investments are made. These risk studies are also performed early enough to be of use in selecting software development methods and practices.
This paper discusses early sizing via pattern matching and early risk detection, followed by risk prevention and risk mitigation.
INTRODUCTION
It is widely known and supported by empirical data that many kinds of software risks are proportional to software application size. Small projects below 100 function points in size have few risks and are usually completed on time and within budget. Large systems above 10,000 function points, on the other hand, have numerous severe risks.
Large systems above 10,000 function points are seldom completed on time or within budget, and about 35% are never completed at all due to being so late that their return on investment (ROI) switches from positive to negative.
The large systems that are delivered often contain many high-severity defects and do not operate effectively for a year or more. Indeed some clients of commercial software packages routinely wait for release 2.0 due to the assumption that the first release will be buggy and unreliable.
Because of the strong correlation between software application size and application risks, it is obvious that early sizing would be advantageous for analyzing potential risks and avoiding them before they occur. The problem is that until recently it was seldom possible to ascertain software application size before requirements were known, and sometimes not until the middle of the design phase.
The older forms of software sizing include sizing by analogy with similar projects, function point analysis, Monte Carlo simulations, and using a few external indicators such as numbers of files or reports.
For legacy applications with existing source code there are several sizing methods available. The oldest is to use code-counting tools that generate counts of physical lines of code, logical code statements, or both.
In the 1970s, A. J. Albrecht and his colleagues at IBM developed a method called “backfiring” that converts counts of logical source code statements into function points. Backfiring is fast, but not very accurate due to differences in individual programming styles.
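As a simple illustration of the arithmetic, here is a minimal backfiring sketch in Python, assuming the approximate ratio of about 53 logical Java statements per function point given in the language table of Appendix 1 (real ratios vary considerably with individual programming style):

```python
# Minimal backfiring sketch: convert logical code statements into
# approximate IFPUG function points using language-specific ratios.
# The ratios below are illustrative averages from the Appendix 1 table;
# individual programming styles can shift the true ratio considerably.
STATEMENTS_PER_FP = {"Java": 53.33, "C": 128.00, "COBOL": 106.67}

def backfire(logical_statements: int, language: str) -> float:
    """Approximate function points from a logical-statement count."""
    return logical_statements / STATEMENTS_PER_FP[language]

print(round(backfire(160_000, "Java")))  # about 3,000 function points
```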
Newer tools are even more sophisticated: they analyze the source code to extract hidden business rules and then use proprietary algorithms to generate function point counts that come fairly close to the precision of manual counts by certified function point counters. However, this method only works on software that already exists; it does not work for future software that has not yet been developed.
Sizing by analogy is useful, but few companies have enough data for the method to be widely used. Monte Carlo simulations are basically sophisticated guesses. Function point analysis from requirements and specifications is quite accurate, but it requires written requirements and design documents, so it cannot be performed early. Also, because of its slow speed and high costs, function point analysis is seldom performed for applications much larger than 10,000 function points.
Application Sizing Using Pattern Matching
The pattern matching approach is based on the same methodology as the well-known Kelley Blue Book that is used to ascertain the approximate costs of used automobiles. With the Kelley Blue Book, customers use checklists of manufacturers, models, and equipment such as satellite radios to form a pattern, and then look up regional sales prices for automobiles that match that pattern.
For software sizing via pattern matching a standard multiple-choice questionnaire is used (Appendix 1) that includes these topics:
Table 1: Patterns for Application Sizing and Risk Analysis
- Local average team salary and burden rates
- Planned start date for the project
- Desired delivery date for the project
- Development methodologies that will be used (Agile, RUP, TSP, etc.) *
- CMMI level of the development group *
- Programming language(s) that will be used (C#, C++, Java, SQL, etc.) *
- Nature of the project (new, enhancement, etc.) *
- Scope of the project (subprogram, program, departmental system, etc.) *
- Class of the project (internal use, open-source, commercial, etc.) *
- Type of the project (embedded, web application, client-server, etc.) *
- Problem complexity ranging from very low to very high *
- Code complexity ranging from very low to very high *
- Data complexity ranging from very low to very high *
Note: Asterisks “*” indicate factors used for pattern analysis.
All of these topics are usually known well before requirements. All of the questions are multiple-choice except for the start date and the compensation and burden rates. Default cost values are provided for situations where such cost information is not known or is proprietary; this might occur if multiple contractors are bidding on a project and they all have different cost structures.
The answers to the multiple-choice questions form a “pattern” that is then compared against a knowledge base of more than 13,000 software projects. As with the Kelley Blue Book and automobiles, software projects that have identical patterns usually have about the same size and similar results in terms of schedules, staffing, risks, and effort.
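To make the idea concrete, here is a minimal sketch in Python of a pattern lookup, assuming a hypothetical, greatly simplified knowledge base keyed by taxonomy answers; the actual SRM knowledge base of more than 13,000 projects and its matching algorithms are proprietary and far richer:

```python
from statistics import median

# Hypothetical miniature knowledge base: a taxonomy pattern maps to the
# sizes (in IFPUG function points) of historical projects sharing it.
# Pattern fields: (nature, scope, class, type, problem, code, data complexity)
KNOWLEDGE_BASE = {
    ("new", "departmental system", "internal", "client-server", 5, 5, 5):
        [9_400, 9_900, 10_200, 10_800],
}

def size_by_pattern(pattern):
    """Median size of the historical projects that match the pattern."""
    return median(KNOWLEDGE_BASE[pattern])

p = ("new", "departmental system", "internal", "client-server", 5, 5, 5)
print(size_by_pattern(p))  # 10050.0, i.e. roughly a 10,000 FP project
```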
Sizing via pattern matching can be used prior to requirements and therefore perhaps six months earlier than most other sizing methods. The method is also very quick and usually takes less than 5 minutes per project. With experience, the time required can drop down to less than 2 minutes per project.
The pattern matching approach is very useful for large applications > 10,000 function points, where manual sizing might take weeks or even months. With pattern matching the actual size of the application does not affect the speed of the result, and even massive applications in excess of 100,000 function points can be sized in a few minutes or less.
This method of sizing by pattern matching is covered by a U.S. utility patent application submitted to the Patent Office in January of 2012. The algorithms for sizing by pattern matching are included in the author’s tool Software Risk Master™ (SRM).
Additional patent applications include rapid and early defect and defect removal predictions, rapid and early defect removal cost predictions, and rapid and early schedule, effort, and cost predictions for software development. All of these methods can be used prior to requirements.
The method of sizing by pattern matching is metric-neutral and does not depend upon any specific metric. However, because the majority of the author’s clients use function point metrics as defined by the International Function Point Users Group (IFPUG), the primary metric supported is IFPUG function points, counting rules version 4.2. There are, of course, more projects measured with IFPUG function points than with any other metric.
Many additional metrics can also be based on sizing via pattern matching including but not limited to:
Table 2: Metrics Supported by Pattern Matching
- IFPUG function points
- Non-functional “function points” based on SNAP rules
- COSMIC function points
- FISMA function points
- NESMA function points
- Simple function points
- Mark II function points
- Unadjusted function points
- Function points “light”
- Engineering function points
- Feature points
- Use-case points
- Story points
- Lines of code (logical statements)
- Lines of code (physical lines)
- RICE objects
- Other metrics as needed
As a general observation, the software industry has too many metrics and too little empirical data. All of these variations on function points make it difficult to do serious economic studies, because large samples of data usually include multiple function point styles. Even lines of code have multiple counting methods: for specific languages, counts of physical lines and counts of logical code statements can differ by more than 500%.
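The physical-versus-logical gap is easy to demonstrate. Here is a small Python sketch that uses the standard ast module to count logical statements, applied to a deliberately reformatted fragment:

```python
import ast

# One logical statement deliberately spread across five physical lines.
snippet = """total = (1 +
          2 +
          3 +
          4 +
          5)
"""

physical = len(snippet.splitlines())  # 5 physical lines
logical = sum(isinstance(node, ast.stmt)
              for node in ast.walk(ast.parse(snippet)))  # 1 logical statement
print(physical, logical)  # 5 1: a 500% difference for this fragment
```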
The pattern matching approach depends upon the availability of thousands of existing projects to be effective. However now that function point metrics have been in use for more than 35 years there are thousands of projects available.
One additional feature of pattern matching is that it can provide size data on requirements creep and on deferred functions. Thus the pattern-matching method predicts size at the end of the requirements phase, creeping requirements, size at delivery, and also the probable number of function points that might have to be deferred to achieve a desired delivery date.
In fact the pattern matching approach does not stop at delivery, but can continue to predict application growth year by year for up to 10 years after deployment.
Because the pattern matching approach uses external characteristics rather than internal requirements, it can size applications such as open-source applications, commercial software, or proprietary software applications where actual requirements might not be available. A few samples of sizing by pattern matching include:
Table 3: Examples of Software Size via Pattern Matching Using Software Risk Master™
(Application size in IFPUG function points)
- Oracle 229,434
- Windows 7 (all features) 202,150
- Microsoft Windows XP 66,238
- Google Docs 47,668
- Microsoft Office 2003 33,736
- F15 avionics/weapons 23,109
- VA medical records 19,819
- Apple iPhone 19,366
- IBM IMS database 18,558
- Google search engine 18,640
- Linux 17,505
- ITT System 12 switching 17,002
- Denver Airport luggage (original) 16,661
- Child Support Payments (state) 12,546
- Facebook 8,404
- MapQuest 3,793
- Microsoft Project 1,963
- Android OS (original version) 1,858
- Microsoft Excel 1,578
- Garmin GPS navigation (hand held) 1,518
- Microsoft Word 1,431
- Mozilla Firefox 1,342
- Laser printer driver (HP) 1,248
- Sun Java compiler 1,185
- Wikipedia 1,142
- Cochlear implant (embedded) 1,041
- Microsoft DOS circa 1998 1,022
- Nintendo Gameboy DS 1,002
- Casio atomic watch 933
- Computer BIOS 857
- SPR KnowledgePlan 883
- Function Point Workbench 714
- Norton anti-virus 700
- SPR SPQR/20 699
- Golf handicap analysis 662
- Google Gmail 590
- Twitter (original circa 2009) 541
- Freecell computer solitaire 102
- Software Risk Master™ prototype 38
- ILOVEYOU computer worm 22
The ability to size open-source and commercial applications or even classified weapons systems is a unique feature of sizing via pattern matching.
Early Risk Analysis
The main purpose of early sizing is to be able to identify software risks early enough to plan and deploy effective solutions. If risks are not identified until after the requirements are complete, it is usually too late to make changes in development methods.
There are more than 200 software risks in total when technical risks, financial risks, and sociological risks are all considered. Not all of these risks are directly related to application size. The major risks where application size has been proven to be a major factor in application costs, schedules, and quality include but are not limited to:
Table 4: Software Risks Related to Application Size
- Project cancellations
- Project cost overruns
- Project schedule delays
- Creeping requirements (> 1% per month)
- Deferred requirements (>20% of planned features)
- High defect potentials
- Low defect removal efficiency
- Error-prone modules in applications
- Odds of litigation for contract projects
- Low customer satisfaction levels
- Expensive and slow installation of application
- Long learning curves by clients and users
- Frequent user errors when learning new systems
- High cost of learning (COL)
- High cost of quality (COQ)
- High maintenance costs
- High warranty costs
- Excessive quantities of rework
- Difficult enhancement projects
- High total cost of ownership (TCO)
All 20 of these software risks are proportional to application size, so early sizing is a useful precursor for risk avoidance and risk mitigation.
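The table-driven nature of size-based risk can be sketched in a few lines of Python. The 35% cancellation figure for systems above 10,000 function points and the 75% overrun figure for large applications come from this paper; the remaining values are placeholders to show the shape of the lookup, not calibrated SRM outputs:

```python
# Illustrative size-band risk lookup (NOT calibrated SRM data).
# Each band is (upper bound in function points, approximate risk odds).
RISK_BANDS = [
    (100,          {"cancellation": 0.02, "schedule_or_cost_overrun": 0.05}),
    (1_000,        {"cancellation": 0.07, "schedule_or_cost_overrun": 0.20}),
    (10_000,       {"cancellation": 0.20, "schedule_or_cost_overrun": 0.50}),
    (float("inf"), {"cancellation": 0.35, "schedule_or_cost_overrun": 0.75}),
]

def risk_profile(size_fp):
    """Return the risk odds for the first band covering the given size."""
    for upper_bound, risks in RISK_BANDS:
        if size_fp <= upper_bound:
            return risks

print(risk_profile(15_000))
# {'cancellation': 0.35, 'schedule_or_cost_overrun': 0.75}
```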
Early Risk Prevention and Elimination
The main purpose of early sizing is risk detection followed by risk prevention and risk elimination. There are two primary risks that are major causes of many other risks:
- Poor quality control.
- Unexpected creeping requirements.
Let us consider a large project in the nominal size range of 10,000 function points at the end of the requirements phase. Let us assume a planned development schedule of 36 calendar months.
Worst-Case Scenario for 10,000 Function Points
Let us start the worst-case scenario by assuming CMMI level 1 and waterfall development. In this worst case, creeping requirements will add about another 25% of unplanned features and bring the total function points at delivery up to about 12,500. These unplanned requirements will add about six calendar months to the schedule, stretching development from 36 to 42 calendar months. Of course it might be possible to defer 40% of the planned features, but that would lower the usefulness of the initial release to marginal levels.
There is yet another and bigger problem!
These large applications have defect potentials of about 6.00 defects per function point or 60,000 possible defects that need to be eliminated from requirements, design, code, and documents. Unless pre-test inspections and static analysis are used prior to testing, there will still be at least 30,000 latent defects present when testing begins. Test defect removal efficiency will be only about 80% or less.
The nominal test schedule for 10,000 function point applications is about 12 calendar months, but due to the huge number of latent defects the test schedule will probably double and extend to 24 months. (Normal test stages for 10,000 function points include unit test, function test, regression test, security test, performance test, usability test, component test, system test, and acceptance or Beta test.)
Even with unpaid overtime the unanticipated defects will add about 10 calendar months to the testing schedule, so the final schedule for the application would be close to 52 calendar months instead of the desired schedule of 36 calendar months.
The combination of unplanned requirements and inadequate quality control will probably cause such huge cost overruns and such long schedule delays that the return on investment (ROI) switches from positive to negative at about month 42. If the ROI turns strongly negative, the entire project is likely to be terminated without completion.
However, if the project goes to completion and is delivered after 52 months, the total number of latent bugs at delivery might be 6,000, of which 1,200 will be of such high severity as to interfere with operation or produce incorrect results. This means that successful use of the system may be delayed by another 12 months after installation.
If the application were created under contract or as an outsource project, the combination of delays, cost overruns, and poor quality leads to more than a 75% chance of litigation by the disgruntled client.
Of course the problems just described don’t have to occur. There are known technologies that can eliminate them. Let us now consider a best-case scenario.
Best-Case Scenario for 10,000 Function Points
For the best case we can assume CMMI level 3. Assume that the same project of a nominal 10,000 function points utilized joint application design (JAD) for requirements analysis and gathering. Assume small prototypes of key features. Assume that requirements inspections were used. Assume the Team Software Process (TSP) was the primary development method. Under such assumptions defect potentials would drop from 6.00 to about 5.00 defects per function point.
Assume that design inspections were used to eliminate design problems. Assume that static analysis was used for code defect detection, combined with code inspections for critical modules. Assume that test cases were designed using mathematical methodologies. Under these assumptions pre-test defect removal efficiency would be about 90%.
Under these assumptions requirements creep would be less than 1,000 function points and would add about 2 months. Due to defect prevention and pre-test defect removal the number of bugs present when testing starts would be less than 5,000 instead of 30,000. As a result the test schedule would shrink from 12 calendar months to 9 calendar months. Testing defect removal efficiency would be about 90%. The project would be delivered with a total elapsed schedule of only 35 calendar months instead of the planned schedule of 36 calendar months.
Less than 500 defects would be present at delivery of which about 60 might be serious. The serious defects would be found in the first 90 days of usage.
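The defect arithmetic behind both scenarios can be made explicit. Here is a minimal sketch using the figures quoted above; the worst-case parameters reproduce the text’s numbers exactly, while the best-case output lands slightly above the text’s rounded figures because the scenario assumes somewhat stronger defect prevention than these round parameters:

```python
def defect_flow(size_fp, potential_per_fp, pretest_eff, test_eff):
    """Flow defects through pre-test removal and then through testing."""
    potential = size_fp * potential_per_fp
    entering_test = potential * (1 - pretest_eff)
    delivered = entering_test * (1 - test_eff)
    return round(potential), round(entering_test), round(delivered)

# Worst case: 10,000 FP, 6.00 defects/FP, ~50% pre-test removal, 80% testing.
print(defect_flow(10_000, 6.00, 0.50, 0.80))
# (60000, 30000, 6000): 6,000 delivered defects, of which ~1,200 (20%)
# are high severity per the worst-case text.

# Best case: ~11,000 FP after modest creep, 5.00 defects/FP,
# 90% pre-test removal and 90% test removal efficiency.
print(defect_flow(11_000, 5.00, 0.90, 0.90))
# (55000, 5500, 550): close to the text's "<5,000 entering test" and
# "<500 delivered" once stronger defect prevention is assumed.
```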
These two scenarios indicate why early sizing followed by risk analysis and risk prevention are urgently needed by the software industry. All of the topics in the two scenarios are predicted by Software Risk Master™: schedules, defect potentials, pre-test and test defect removal efficiency, delivered defects, high-severity defects, and security flaws. SRM also predicts project staffing, project costs, total cost of ownership (TCO), cost of quality (COQ), and cost of learning (COL).
For contract and outsource projects SRM will predict the odds of litigation, the probable month the litigation might be filed, and the probable expenses for both the plaintiff and the defendant.
The purpose of the Software Risk Master™ (SRM) tool is to combine very early sizing with risk analysis and risk prevention. By running the SRM tool several times using alternative scenarios it is easily possible to see how requirements creep and grow, how various kinds of inspection and static analysis combine with testing, and how many bugs or defects are likely to be present when the software is released to customers.
Lifetime Sizing with Software Risk Master™
Although this article concentrates on quality and the initial release of a software application, the Software Risk Master™ sizing algorithms actually create 15 size predictions. The initial prediction is for the nominal size at the end of requirements. SRM also predicts requirements creep and deferred functions for the initial release. After the first release SRM predicts application growth for a 10-year period. To illustrate the full set of SRM size predictions, Table 5 shows a sample application with a nominal starting size of 10,000 function points. All of the values are in round numbers to make the patterns of growth clear:
Table 5: Software Risk Master™ Multi-Year Sizing
(Copyright © 2011 by Capers Jones & Associates LLC; patent application 61434091, February 2011)

Nominal application size in IFPUG function points: 10,000

                                          Function Points
 1   Size at end of requirements                  10,000
 2   Size of requirements creep                    2,000
 3   Size of planned delivery                     12,000
 4   Size of deferred functions                   -4,800
 5   Size of actual delivery                       7,200
 6   Year 1                                       12,000
 7   Year 2                                       13,000
 8   Year 3                                       14,000
 9   Year 4                                       17,000
10   Year 5                                       18,000
11   Year 6                                       19,000
12   Year 7                                       20,000
13   Year 8                                       23,000
14   Year 9                                       24,000
15   Year 10                                      25,000
As can be seen from Table 5, software applications do not have a single fixed size, but continue to grow and change for as long as they are being used by customers or clients.
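The first five rows of Table 5 are simple arithmetic on the creep and deferral percentages implied by the round numbers; here is a minimal sketch that reproduces them (the 20% creep and 40% deferral rates are inferred from the table itself):

```python
def release_sizes(nominal_fp, creep_pct=0.20, deferred_pct=0.40):
    """Reproduce rows 1-6 of Table 5 from the nominal requirements size."""
    creep = round(nominal_fp * creep_pct)       # row 2:  2,000
    planned = nominal_fp + creep                # row 3: 12,000
    deferred = round(planned * deferred_pct)    # row 4:  4,800
    actual = planned - deferred                 # row 5:  7,200
    year_one = planned                          # row 6: Year 1 matches planned
    return creep, planned, deferred, actual, year_one   # delivery once deferred
                                                        # functions ship
print(release_sizes(10_000))  # (2000, 12000, 4800, 7200, 12000)
```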
Economic Modeling with Software Risk Master
Because Software Risk Master can predict the results of any methodology used for any size and kind of software project, it is in fact a general economic model that can show the total cost of ownership (TCO) and the cost of quality (COQ) for a variety of software development methods and practices.
For example SRM can show immediate results in less than one minute for any or all of the following software development methods, either alone or combined:
- Agile development
- Anti-patterns (harmful practices performed repeatedly)
- Capability Maturity Model Integrated (CMMI)™ – all 5 levels
- Clean-room development
- Consortium of IT Software Quality (CISQ) standards
- Cowboy development (unstructured; no formal process)
- Custom user-defined development methods
- Crystal development
- Data State Design (DSD)
- Department of Defense DOD 2167A software standard
- Dynamic systems development method (DSDM)
- Evolutionary Development (EVO)
- Extreme programming (XP)
- FDA Software validation
- Feature-driven development (FDD)
- Flow-based programming
- Formal methods and proofs of correctness
- Formal inspections (combined with other methods)
- Hybrid development (features from several methods)
- Iconix
- IEEE/EIA 12207 software lifecycle
- Information Engineering (IE)
- ISO 9001 quality standards
- ISO/IEC 9126 quality standard
- ISO/IEC 12207 development process standards
- Iterative development
- Jackson Development Method
- Kanban, Kaizen, Poka-Yoke, quality circles
- Lean software development (alone or in combination)
- Mashup software development
- Merise
- Microsoft Solutions Framework
- Mil-Std 498 military software standard
- Model-driven development
- Object-oriented development (OO)
- Object-Oriented Software Process (OOSP)
- OlivaNova model-based development
- Open-source development models
- Pair programming
- Pattern-Based Development
- Peer reviews (combined with other methods)
- Personal software process (PSP)
- Prince2
- Rapid application development (RAD)
- Rational unified process (RUP)
- Reusable components and artifacts (various levels of reuse)
- SEMAT – Software Engineering Methods and Theory
- Six Sigma (lean, software six sigma, etc.)
- Structured Analysis and Design Technique (SADT)
- SCRUM (alone or with other methods)
- Spiral development
- Structured Systems Analysis and Design Method (SSADM)
- Team software process (TSP)
- Test driven development (TDD)
- T-VEC requirements modeling
- V-model development
- Waterfall development
It takes less than one minute to switch Software Risk Master from one methodology to another, so it is possible to examine and evaluate 15 to 30 alternative methods in less than half an hour.
A useful feature of Software Risk Master is a side-by-side comparison mode that allows any two methodologies to be compared and their differences highlighted. Table 6 illustrates side-by-side risk analyses for two versions of a software application of 1,000 function points. One version uses the Team Software Process (TSP) and the other uses Agile with Scrum; the sketch after the table reproduces the bottom-row averages:
Table 6: Side-by-Side Risk Comparisons of Two Development Methodologies

SRM II Risk Predictions                    Team Software    Agile with    Differences in
                                           Process (TSP)    Scrum         Risk Profiles
Odds of schedule delay                          10.00%         15.00%          -5.00%
Odds of cost overrun                             7.00%         12.00%          -5.00%
Odds of cancellation                             2.00%         13.00%         -11.00%
Odds of poor quality                             4.00%         20.00%         -16.00%
Odds of poor reliability                         3.00%         20.00%         -17.00%
Odds of poor security protection                12.00%         35.00%         -23.00%
Odds of post-release cyber attack                5.00%          9.00%          -4.00%
Odds of poor user satisfaction                   4.00%         22.00%         -18.00%
Odds of difficult learning curve                 6.00%         10.00%          -4.00%
Odds of poor stakeholder satisfaction            7.00%         18.00%         -11.00%
Odds of litigation – breach of contract          1.00%         12.00%         -11.00%
Odds of litigation – patents                     3.00%          3.00%           0.00%
Odds of high warranty repairs                   10.00%         10.00%           0.00%
Odds of excessive service calls                 10.00%         20.00%         -10.00%
Odds of slow maintenance turnaround              4.00%         25.00%         -21.00%
Odds of competitive software                     5.00%          5.00%           0.00%

Average of all risks                             5.81%         15.56%          -9.75%
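The bottom row of Table 6 is simply the arithmetic mean of the sixteen individual risks. A small Python sketch of the side-by-side format, using the odds from the table:

```python
# Risk odds from Table 6 as (TSP, Agile with Scrum) fractions.
RISKS = {
    "schedule delay": (0.10, 0.15),        "cost overrun": (0.07, 0.12),
    "cancellation": (0.02, 0.13),          "poor quality": (0.04, 0.20),
    "poor reliability": (0.03, 0.20),      "poor security": (0.12, 0.35),
    "post-release cyber attack": (0.05, 0.09),
    "poor user satisfaction": (0.04, 0.22),
    "difficult learning curve": (0.06, 0.10),
    "poor stakeholder satisfaction": (0.07, 0.18),
    "litigation - contract": (0.01, 0.12), "litigation - patents": (0.03, 0.03),
    "high warranty repairs": (0.10, 0.10), "excessive service calls": (0.10, 0.20),
    "slow maintenance turnaround": (0.04, 0.25),
    "competitive software": (0.05, 0.05),
}

tsp = sum(t for t, _ in RISKS.values()) / len(RISKS)
agile = sum(a for _, a in RISKS.values()) / len(RISKS)
print(f"TSP {tsp:.2%}  Agile {agile:.2%}  difference {tsp - agile:+.2%}")
# TSP 5.81%  Agile 15.56%  difference -9.75%, matching the table's bottom row
```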
Software Risk Master can also model new, custom, proprietary, and hybrid methodologies, although in these cases it may be necessary to utilize special tuning parameters. These tuning parameters take several minutes to adjust, but they allow Software Risk Master to match historical data with a degree of precision that approaches 1%.
Software Risk Master can also model any level of development team experience, management experience, tester experience, and even client experience.
Software Risk Master can also show the results of any programming language or combination of programming languages for more than 100 languages such as ABAP, Ada, APL, Basic, C, C#, C++, CHILL, COBOL, Eiffel, Forth, Fortran, HTML, Java, JavaScript, Objective C, Perl, PHP, PL/I, Python, Ruby, Smalltalk, SQL, and Visual Basic. In theory Software Risk Master could support all 2,500 programming languages, but there is very little empirical data available for many of them.
To add clarity to the outputs, Software Risk Master can show identical data for every case, such as showing a sample application of 1000 function points and then changing methods, programming languages, CMMI levels, and team experience levels. Using the same data and data formats allows side-by-side comparisons of different methods and practices.
This allows clients to judge the long-range economic advantages of various approaches for both development and total cost of ownership (TCO).
Software Outsource Contracts and Litigation for Contract Failures
One reason for early sizing and early risk analysis is to create workable and effective contracts between outsource vendors and their clients. From the author’s work as an expert witness in a number of breach of contract lawsuits, the same four problems tend to occur in almost every case:
- The initial estimates were excessively optimistic for schedules and quality
- Quality control was inadequate and bypassed pre-test inspections and static analysis
- Unplanned requirements changes were > 2% per calendar month
- Monthly status tracking was poor and did not reveal problems until too late
Software Risk Master ™ is a useful tool for outsource contract analysis because it predicts both software defects and software requirements changes. Thus both clients and outsource vendors will have an early understanding of the combined impact of rapid requirements growth and software defect removal methods on project schedules, costs, and success rates.
It is an unfortunate fact that about 5% of outsource contracts for large systems in the 10,000 function point size range end up in court for breach of contract. Either the projects fail and are not completed, or they do not work when delivered, or their schedule and cost overruns are so large that litigation is filed by the disgruntled client.
The published data that goes into Software Risk Master has been used in breach of contract litigation since the 1990s. Some outputs from Software Risk Master are in current use in both arbitration and litigation.
Although Software Risk Master can be used to provide data for litigation, it would be a much better situation to use the same kind of data during early contract negotiations and thereby minimize or eliminate the chance of litigation later on.
Summary and Conclusions
Large software projects are among the most risky business ventures in history. The failure rate of large systems is higher than that of other kinds of manufactured products. Cost overruns and schedule delays are endemic and occur on more than 75% of large applications.
Early sizing via pattern matching combined with early risk analysis can improve the success rates of large software applications by alerting managers and software teams to potential hazards while there is still time to take corrective action, before significant funds have been expended.
Sizing via pattern matching using the Software Risk Master™ demonstration web site
For additional information, a demonstration version of the Software Risk Master™ sizing method can be found at the following URL:
http://www.namcook.com
The web site contains a variety of reports and information that can be downloaded. To use the Software Risk Master™ sizing tool a password is needed. Contact the authors to arrange a demonstration if that would be of interest.
For more information about early risk analysis and early risk avoidance contact Capers Jones, president of Capers Jones & Associates LLC or Ted Maroney, president of Namcook Consulting LLC:
- Email: Capers.Jones3@Gmail.com
- Email: Ted.Maroney@Comcast.net
- Web: www.Namcook.com
APPENDIX 1
SOFTWARE RISK MASTER™ INPUT QUESTIONNAIRE
Copyright © 2010-2011 by Capers Jones & Associates LLC. All rights reserved
Draft 10.0 August 11, 2011
SECTION I: SOFTWARE TAXONOMY AND PROCESS ASSESSMENT
Security Level _____________________________________________
Project Name _____________________________________________
Project Description (optional) _____________________________________________
Industry or NAIC code (optional) _____________________________________________
Organization (optional) _____________________________________________
Location (optional) _____________________________________________
Manager (optional) _____________________________________________
Data provided by (optional) _____________________________________________
Current Date (MM/DD/YY) _____________________________________________
Project Start (MM/DD/YY) _____________________________________________
Planned Delivery (MM/DD/YY) _____________________________________________
Actual Delivery (MM/DD/YY, if known) _____________________________________________
PROJECT COST STRUCTURES Default
Development monthly labor costs per staff member: $7,500 ______
Burden rate percentage: 50% ______
Average monthly loaded labor cost: $11,250 ______
Maintenance monthly labor costs per staff member: $7,000 ______
Burden rate percentage: 50% ______
Average monthly loaded labor cost: $10,500 ______
User monthly labor costs per staff member: $6,000 ______
Burden rate percentage: 50% ______
Average monthly loaded labor cost: $9,000 ______
Additional costs, fees, if any: _______
(COTS acquisition, legal fees, consulting,
function point analysis, patents, etc.)
Default effective work hours per staff month: 132
Project effective work hours per staff month: _______
PROJECT GOALS: _______
1. Smallest staff with schedule delays
2. Smaller than average staff
3. Average staff; average schedule (default)
4. Larger staff, with schedule reduction
5. Largest staff; shortest schedule
CURRENT CMMI LEVEL (or equivalent): _______
1. Initial
2. Managed
3. Defined
4. Quantitatively Managed
5. Optimizing
DEVELOPMENT METHODOLOGY: ________
1. Mashup
2. Hybrid
3. OlivaNova
4. TSP/PSP
5. RUP
6. XP
7. Agile/Scrum
8. Data state design
9. T-VEC
10. Information engineering (IE)
12. Object Oriented
13. RAD
14. EVO
15. Jackson
16. SADT
17. Spiral
18. SSADM
19. Iterative
20. Flow based
21. V-Model
22. Prince2
23. Merise
24. DSDM
25. Clean room
26. ISO/IEC
27. Waterfall
28. Pair programming
29. DoD
30. Proofs of correctness
31. Cowboy
32. None
SIZING NEW APPLICATIONS AND ENHANCEMENTS
PROGRAMMING LANGUAGE(S): __________________
PROGRAMMING LANGUAGE(S) LEVEL: __________________
(Note: there are currently more than 2,500 programming languages in total, and most applications use multiple languages. For multiple or mixed languages an “average” level would be 8.5, although calculating a level based on the specific languages is more accurate; see the sketch following the table below.)
Software Risk Master™ Logical Source Code Sizing

Language    Programming Language      Logical
Level       (Alphabetic)              LOC/FP
 4.00       ABAP                        80.00
 6.50       Ada 95                      49.23
 3.00       Algol                      106.67
10.00       APL                         32.00
19.00       APS                         16.84
13.00       ASP.NET                     24.62
 5.00       Basic (interpreted)         64.00
 1.00       Basic Assembly             320.00
 3.00       Bliss                      106.67
 2.50       C                          128.00
 6.25       C#                          51.20
 6.00       C++                         53.33
 3.00       CHILL                      106.67
 7.00       CICS                        45.71
 3.00       COBOL                      106.67
 3.00       Coral                      106.67
 8.00       DB2                         40.00
11.00       Delphi                      29.09
 7.00       DTABL                       45.71
14.00       Eiffel                      22.86
 0.10       English text             3,200.00
 4.50       ESPL/I                      71.11
50.00       Excel                        6.40
18.00       Forte                       17.78
 5.00       Forth                       64.00
 3.00       Fortran                    106.67
 3.25       GW Basic                    98.46
 8.50       Haskell                     37.65
 2.00       HTML                       160.00
16.00       IBM ADF                     20.00
 6.00       Java                        53.33
 4.50       JavaScript                  71.11
 1.45       JCL                        220.69
 3.00       Jovial                     106.67
 5.00       Lisp                        64.00
 0.50       Machine language           640.00
 1.50       Macro Assembly             213.33
 8.50       Mixed Languages             37.65
 4.00       Modula                      80.00
17.00       MUMPS                       18.82
12.00       Objective C                 26.67
 8.00       Oracle                      40.00
 3.50       Pascal                      91.43
 9.00       Perl                        35.56
 6.00       PHP                         53.33
 4.00       PL/I                        80.00
 3.50       PL/S                        91.43
 5.00       Prolog                      64.00
 6.00       Python                      53.33
25.00       QBE                         12.80
 5.25       Quick Basic                 60.95
 6.75       RPG III                     47.41
 7.00       Ruby                        45.71
 7.00       Simula                      45.71
15.00       Smalltalk                   21.33
 9.00       Speakeasy                   35.56
 6.00       Spring 2.0                  53.33
25.00       SQL                         12.80
20.00       TELON                       16.00
12.00       Visual Basic                26.67
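For mixed-language projects, the note above suggests computing a blended level from the specific languages rather than using the 8.5 default. Here is a minimal sketch, using the convention visible in the table that logical LOC per function point equals 320 divided by the language level; the share weighting is one plausible approach, and SRM’s exact calculation may differ:

```python
def blended_level(components):
    """Weighted-average language level from (level, share-of-code) pairs."""
    return sum(level * share for level, share in components)

def loc_per_fp(level):
    """Table convention: logical LOC per function point = 320 / level."""
    return 320.0 / level

# Hypothetical mix: 60% Java (level 6.00) and 40% SQL (level 25.00).
mix = [(6.00, 0.60), (25.00, 0.40)]
level = blended_level(mix)                 # 13.6
print(level, round(loc_per_fp(level), 2))  # 13.6 and 23.53 LOC per FP
```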
LEGACY APPLICATION SIZING (Standard method): _________
1. < 10 KLOC or < 200 function points
2. 10 to 100 KLOC; 200 to 2,000 function points
3. 100 to 1,000 KLOC; 2,000 to 20,000 function points
4. 1,000 to 100,000 KLOC; 20,000 to 200,000 function points
5. > 100,000 KLOC; > 200,000 function points
SIZING LEGACY APPLICATIONS (Advanced method)
LEGACY APPLICATION SIZE IN LOGICAL CODE STATEMENTS: _________
PRIMARY (LANGUAGE 1): _________
PRIMARY LANGUAGE LEVEL: _________
PERCENT OF LEGACY CODE IN PRIMARY LANGUAGE: _________
SECONDARY LANGUAGE (LANGUAGE 2): _________
SECONDARY LANGUAGE LEVEL: _________
PERCENT OF LEGACY CODE IN SECONDARY LANGUAGE: _________
PERCENT OF APPLICATION CHANGED IN THIS RELEASE: _________
(If only portions of the legacy application are being changed
please specify the percentage. If the total application will be changed
the percent would be 100%.)
PROJECT NATURE: __
- New software application development
- Minor enhancement (small change to current application)
- Major enhancement (Large change to current application)
- Minor package customization
- Major package customization
- Maintenance or defect repairs *
- Conversion or adaptation (migration to new hardware) *
- Conversion or adaptation (migration to new software) *
- Reengineering (re-implementing a legacy application) *
- Package installation with no customization *
- Package installation, data migration, and customization *
Note: Nature entries marked with asterisks (*) are not supported in the SRM prototype.
PROJECT SCOPE: __
- Algorithm
- Subroutine
- Module
- Reusable module
- Disposable prototype or very small enhancement
- Evolutionary prototype or small enhancement
- Subprogram or medium enhancement
- Standalone program or large enhancement
- Multi-component program or major enhancement
- New component of a system with new features
- Release of a system with multiple changes
- New departmental system (initial release)
- New corporate system (initial release)
- New enterprise system (initial release)
- New national system (initial release)
- New global system (initial release)
PROJECT CLASS: __
- Personal program, for private use
- Personal program, to be used by others
- Academic program
- Internal program, for use at a single location
- Internal program, for use at a multiple locations
- Internal program, for use on an intranet
- Internal program, developed by contractor
- Internal program, with functions used via web
- Internal program, using military specifications
- External program, to be put in public domain
- External program to be placed on the Internet
- External program, leased to users
- External program, bundled with hardware
- External program, unbundled and marketed commercially
- External program, developed under commercial contract
- External program, developed under government contract
- External program, developed under military contract
PROJECT TYPE: __
- Nonprocedural (generated, query, spreadsheet)
- Batch application
- Web application
- Interactive application
- Interactive GUI applications program
- Batch database applications program
- Interactive database applications program
- Client/server applications program
- Computer game
- Scientific or mathematical program
- Expert system
- Systems or support program; “middleware”
- Service-oriented architecture (SOA)
- Communications or telecommunications program
- Process-control program
- Trusted system
- Embedded or real-time program
- Graphics, animation, or image program
- Multimedia program
- Robotics, or automation program
- Artificial intelligence program
- Neural net program
- Hybrid project (multiple types)
Primary type: ____________
Secondary type: ____________
PROBLEM COMPLEXITY: ________
- No calculations or only simple algorithms
- Majority of simple algorithms and calculations
- Majority of simple algorithms; some complex
- Algorithms and calculations of simple and average complexity
- Algorithms and calculations of average complexity (default)
- A few difficult algorithms mixed with average and simple
- More difficult algorithms than average or simple
- A large majority of difficult and complex algorithms
- Difficult algorithms and some extremely complex
- All algorithms and calculations are extremely complex
CODE COMPLEXITY: _________
- Most “programming” done with pull down controls
- Simple nonprocedural code (queries, spreadsheet)
- Simple plus average nonprocedural code
- Built with program skeletons and reusable modules
- Average structure with small modules and simple paths (default)
- Well structured, but some complex paths or modules
- Some complex modules, paths, and links between segments
- Above average complexity, paths, and links between segments
- Majority of paths and modules are large and complex
- Extremely complex structure, large modules, and many calls
DATA COMPLEXITY: _________
- No permanent data or files required by application
- Only one simple file required, with few data interactions
- One or two files, simple data, and little complexity
- Several data elements, but simple data relationships
- Multiple files and data interactions of average complexity (default)
- Multiple files with some complex data elements
- Multiple files, complex data elements and interactions
- Multiple files, majority of complex data and interactions
- Multiple files, complex data elements, many data interactions
- Majority of complex files, data elements, and interactions
EXPERIENCE INPUTS FOR SOFTWARE RISK MASTER™ II
Note: These inputs were not used in the original SRM I prototype.
CLIENT EXPERIENCE WITH SOFTWARE PROJECTS: _______
- Very experienced clients
- Fairly experienced clients
- Average experienced clients
- Fairly inexperienced clients
- Very inexperienced clients
PROJECT MANAGEMENT EXPERIENCE: _______
- Very experienced management
- Fairly experienced management
- Average experienced management
- Fairly inexperienced management
- Very inexperienced management
DEVELOPMENT TEAM EXPERIENCE: _______
- All experts
- Majority of experts
- Even mix of experts and novices
- Majority of novices
- All novices
TEST TEAM EXPERIENCE: _______
- All experts
- Majority of experts
- Even mix of experts and novices
- Majority of novices
- All novices
QUALITY ASSURANCE TEAM EXPERIENCE: _______
- All experts
- Majority of experts
- Even mix of experts and novices
- Majority of novices
- All novices
CUSTOMER SUPPORT TEAM EXPERIENCE: _______
- All experts
- Majority of experts
- Even mix of experts and novices
- Majority of novices
- All novices
MAINTENANCE TEAM EXPERIENCE: _______
- All experts
- Majority of experts
- Even mix of experts and novices
- Majority of novices
- All novices
METHODOLOGY EXPERIENCE _______
- All experts
- Majority of experts
- Even mix of experts and novices
- Majority of novices
- All novices
PROJECT VALUE
NOTE: Value data is optional and can be supplied by the user if known. If value data is supplied, Software Risk Master will use it to predict ROI.
If value data is not supplied, Software Risk Master will calculate the minimum value needed to recover the total cost of ownership (TCO).
Direct revenues _______________________
Indirect revenues _______________________
Cost reductions _______________________
TOTAL VALUE _______________________
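A minimal sketch of the value arithmetic described above, assuming ROI is expressed as total value returned per dollar of total cost of ownership (the exact SRM formula is not published here):

```python
def roi(total_value, tco):
    """Value returned per dollar of total cost of ownership (assumed form)."""
    return total_value / tco

def breakeven_value(tco):
    """Minimum value needed to recover the total cost of ownership."""
    return tco

# Hypothetical figures: $4M TCO against $10M of revenues and cost reductions.
print(roi(10_000_000, 4_000_000))   # 2.5, i.e. a 2.5 : 1 return
print(breakeven_value(4_000_000))   # value must at least equal TCO
```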
Send comments about questionnaire features to:
Capers Jones & Associates LLC
Email: Capers.Jones3@gmail.com