#13 – DO SOFTWARE – READ ABOUT THE UNIVERSAL SOFTWARE METRIC – (C) Capers Jones


Function point metrics are the most accurate and effective metrics yet developed for software sizing and also for studying software productivity, quality, costs, risks, and economic value.

Unlike the older “lines of code” metric, function points can be used to study requirements, design, and in fact all software activities from development through maintenance.

In the future function point metrics can easily become a universal metric used for all software applications and for all software contracts in all countries.  The government of Brazil already requires function points for all software contracts, and South Korea and Italy may soon follow.

However, there are some logistical problems with function point metrics that need to be understood and overcome in order for function point metrics to become the primary metric for software economic analysis.

Manual function point counting is too slow and costly to be used on large software projects above 10,000 function points in size.  Also, application size is not constant but grows at about 2% per calendar month during development and 8% or more per calendar year for as long as software is in active use.
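As an illustration, the growth rates above can be expressed as simple compound-growth arithmetic. This is a hypothetical sketch of the stated rates, not the patented SRM algorithm:

```python
def size_during_development(initial_fp, months, monthly_growth=0.02):
    """Compound requirements creep at roughly 2% per calendar month."""
    return initial_fp * (1 + monthly_growth) ** months

def size_after_release(delivered_fp, years, annual_growth=0.08):
    """Compound post-release growth at roughly 8% per calendar year."""
    return delivered_fp * (1 + annual_growth) ** years
```

For example, a 1,000 function point application grows to about 1,080 function points after one year of active use at the 8% rate.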

This paper discusses a method of high-speed function point counting that can size any application in less than two minutes, and that can predict application growth during development and for five years after release.  This new method is based on pattern matching and is covered by a U.S. utility patent application; hence it is patent pending.

Introduction
Function point metrics were invented by A.J. Albrecht and colleagues at IBM’s White Plains development center circa 1975.  Function point metrics were placed in the public domain by IBM in 1978.  Responsibility for function point counting rules soon transferred to the International Function Point Users Group (IFPUG).  Their web site is www.IFPUG.org.

Function point metrics were developed by IBM due to serious mathematical and economic problems associated with the older “lines of code” metric or LOC.  The LOC metric penalizes high-level programming languages and also cannot be used to evaluate requirements, design, business analysis, user documentation, or any other non-coding activities.

In the current era circa 2013 function point metrics are the major metric for software economic and productivity studies.  At least 50,000 software projects have been measured using IFPUG function point metrics, including more than 5,000 projects that are publicly available from the International Software Benchmarking Standards Group (ISBSG).  Their web site is www.ISBSG.org.

The Strengths of Function Point Metrics

  1. IFPUG function point metrics have more measured projects than all other metrics combined.
  2. IFPUG function point metrics are endorsed by ISO/IEC standard 20926:2009.
  3. Formal training and certification examinations are available for IFPUG function point counting.
  4. Hundreds of certified IFPUG function point counters are available in most countries.
  5. Counts of function points by certified counters usually are within 5% of each other.
  6. IFPUG function point metrics are standard features of most parametric estimating tools such as KnowledgePlan, SEER, and Software Risk Master.
  7. Function points are increasingly used for software contracts.  The government of Brazil requires function points for all software contracts.

The Weaknesses of Function Point Metrics

  1. Function point analysis is slow.  Counting speeds for function points average perhaps 500 function points per day.
  2. Due to the slow speed of function point analysis, function points are almost never used on large systems > 10,000 function points in size.
  3. Function point analysis is expensive.  Assuming a daily counting speed of 500 function points and a daily consulting fee of $1,500 counting an application of 10,000 function points would require 20 days and cost $30,000.  This is equal to a cost of $3.00 for every function point counted.
  4. Application size is not constant.  During development applications grow at perhaps 2% per calendar month.  After development applications continue to grow at perhaps 8% per calendar year.  Current counting rules do not include continuous growth.
  5. More than a dozen function point counting variations exist circa 2013, including COSMIC function points, NESMA function points, FISMA function points, fast function points, backfired function points, and a number of others.  These variations produce function point totals that differ from IFPUG function points by perhaps ±15%.
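The cost arithmetic in point 3 can be verified in a few lines, using the counting speed and daily fee stated above:

```python
fp_total = 10_000      # application size in function points
fp_per_day = 500       # manual counting speed
daily_fee = 1_500      # daily consulting fee in dollars

days = fp_total / fp_per_day           # 20 days of counting
total_cost = days * daily_fee          # $30,000
cost_per_fp = total_cost / fp_total    # $3.00 per function point counted
```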

A Patented Method for High-Speed Function Point Analysis

In order to make function point metrics faster and easier to use, the author filed U.S. utility patent application No. 13/352,434 on January 18, 2012.

The high-speed sizing method is embedded in the Software Risk Master ™ (SRM) sizing and estimating tool under development by Namcook Analytics LLC.  A working version is available on the Namcook Analytics web site, www.Namcook.com.  This version requires a password obtained from within the site.

The Namcook Analytics high-speed method includes these features:

  1. From more than 200 trials sizing speed averages about 1.87 minutes per application.  This speed is more or less constant between applications as small as 10 function points or as large as 300,000 function points.
  2. The sizing method often comes within 5% of manual counts by certified counters.  The closest match was an SRM predicted size of 1,802 function points for an application sized manually at 1,800 function points.
  3. The sizing method can also be used prior to full requirements, which is the earliest of any known software sizing method.
  4. The patent-pending method is based on external pattern matching rather than internal attributes.  So long as an application can be placed on the SRM taxonomy, the application can be sized.
  5. The method can size all types of software including operating systems, ERP packages, telephone switching systems, medical device software, web applications, smart-phone applications, and normal information systems applications.
  6. The sizing method is metric neutral and predicts application size in a total of 15 metrics, including IFPUG function points, the new SNAP metric for non-functional attributes, COSMIC function points, story points, use-case points, logical code statements, and many others.
  7. The sizing method predicts application growth during development and for five years of post-release usage.

A Short Summary of Pattern Matching
Today in 2013 very few applications are truly new.  Most are replacements for older legacy applications or enhancements to older legacy applications.  Pattern matching uses the size, cost, schedules, and other factors from legacy applications to generate similar values for new applications.

Software pattern matching as described here is based on a proprietary taxonomy developed by the author, Capers Jones.  The taxonomy uses multiple-choice questions to identify the key attributes of software projects.  The taxonomy is used to collect historical benchmark data and also as a basis for estimating future projects.  The taxonomy is also used for sizing applications.

For sizing, the taxonomy includes project nature, scope, class, type, problem complexity, code complexity, and data complexity. For estimating, additional parameters such as CMMI level, methodology, and team experience are also used.

The proprietary Namcook taxonomy used for pattern matching contains 122 factors.  With 122 total elements, the permutations of the full taxonomy total 214,200,000 possible patterns.  Needless to say, more than half of these patterns have never occurred and will never occur.

For the software industry in 2013 the total number of patterns that occur with relatively high frequency is much smaller:  about 20,000.

The Software Risk Master tool uses the taxonomy to select similar projects from its knowledge base of around 15,000 projects.  Mathematical algorithms are used to derive results for patterns that do not have a perfect match.

However, a great majority of software projects do have matches, because similar projects have been built many times.  For example, all banks perform similar transactions for customers and therefore have similar software packages.  Telephone switches have also been built many times and all have similar features.

Pattern matching with a good taxonomy to guide the search is a very cost-effective way for dealing with application size.

Pattern matching is new for software sizing but common elsewhere.  Two examples of pattern matching are the Zillow database of real-estate costs and the Kelley Blue Book of used automobile costs.  Both use taxonomies to narrow down choices, and then show clients the end results of those choices.
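As a toy illustration of the idea, a benchmark repository can be keyed by taxonomy answers, with a new project inheriting the median size of its matching pattern.  The keys and values below are hypothetical, not the SRM taxonomy or its data:

```python
from statistics import median

# Toy benchmark repository keyed by taxonomy answers (nature, scope, class).
# Both the keys and the sizes are invented for illustration only.
BENCHMARKS = {
    ("new project", "application", "internal IT"): [9_500, 10_200, 10_800],
    ("enhancement", "component", "embedded"):      [950, 1_100],
}

def size_by_pattern(taxonomy_key):
    """Return the median size of historical projects matching the pattern."""
    return median(BENCHMARKS[taxonomy_key])
```

A real implementation would also need the interpolation step mentioned above for patterns with no exact match.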

Increasing Executive Awareness of Function Points for Economic Studies
Because of the slow speed of function point analysis and the lack of data from large applications, function points remain a niche metric below the interest level of most CEOs, especially CEOs of Fortune 500 companies with large portfolios and many large systems, including ERP packages.

In order for function point metrics to become a priority for C level executives and a standard method for all software contracts, some improvements are needed:

  1. Function point size must be available in a few minutes for large systems, not after weeks of counting.
  2. The cost per function point counted must be lower than $0.05 per function point rather than today’s costs of more than $3.00 per function point counted.
  3. Function point metrics must be able to size applications ranging from a low of 1 function point to a high of more than 300,000 function points.
  4. Sizing of applications must also deal with the measured rates of requirements creep during development and the measured rates of post-release growth for perhaps 10 years after the initial release.
  5. Function points must also be applied to maintenance, enhancements, and total costs of ownership (TCO).
  6. Individual changes in requirements should be sized in real-time as they occur.  If a client wants a new feature that may be 10 function points in size, this fact should be established within a few minutes.
  7. Function points should be used for large-scale economic analysis of methods, industries, and even countries.

Examples of Software Risk Master ™ (SRM) Sizing and Estimating
Following are some examples of the features of the patent-pending sizing method embedded in Software Risk Master ™:

Sizing Application Growth during Development and After Release

Table 1 shows an example of the way SRM predicts requirements growth and post-release changes:

Table 1: Software Risk Master™ Multi-Year Sizing
(Copyright © 2011-2013 by Capers Jones.  Patent application 13/352,434.)

Nominal application size in IFPUG function points: 10,000

                                     Function Points
  1   Size at end of requirements         10,000
  2   Size of requirement creep            2,000
  3   Size of planned delivery            12,000
  4   Size of deferred functions          -4,800
  5   Size of actual delivery              7,200
  6   Year 1                              12,000
  7   Year 2                              13,000
  8   Year 3                              14,000
  9   Year 4*                             17,000
 10   Year 5                              18,000
 11   Year 6                              19,000
 12   Year 7                              20,000
 13   Year 8*                             23,000
 14   Year 9                              24,000
 15   Year 10                             25,000

Note that years 4 and 8 (marked with asterisks) show a phenomenon called “mid-life kickers”: major new features added about every four years to commercial software applications.

Multi-year sizing is based on empirical data from a number of major companies such as IBM where applications have been in service for more than 10 years.

Predicting Application Size in Multiple Metrics
There are so many metrics in use in 2013 that, as a professional courtesy to users and other metrics groups, SRM predicts size in the metrics shown in Table 2.  Assume that the application being sized is known to be 10,000 function points using IFPUG version 4.2 counting rules:

Table 2:  Metrics Supported by SRM Pattern Matching

Alternate Metrics                  Size     % of IFPUG
Backfired function points        10,000      100.00%
COSMIC function points           11,429      114.29%
Fast function points              9,700       97.00%
Feature points                   10,000      100.00%
FISMA function points            10,200      102.00%
Full function points             11,700      117.00%
Function points light             9,650       96.50%
Mark II function points          10,600      106.00%
NESMA function points            10,400      104.00%
RICE objects                     47,143      471.43%
SCCQI “function points”          30,286      302.86%
SNAP non-functional metrics       1,818       18.18%
Story points                      5,556       55.56%
Unadjusted function points        8,900       89.00%
Use case points                   3,333       33.33%

Because SRM is metric neutral, additional metrics could be added to the list of supported metrics if new metrics become available in the future.
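A minimal sketch of ratio-based conversion, using the percentages from Table 2 (the function name and dictionary are illustrative, not SRM's implementation):

```python
# Conversion ratios from Table 2, expressed as fractions of IFPUG size.
RATIO_VS_IFPUG = {
    "cosmic": 1.1429,
    "nesma": 1.04,
    "story_points": 0.5556,
    "use_case_points": 0.3333,
}

def convert_from_ifpug(ifpug_fp, metric):
    """Approximate an alternate-metric size from an IFPUG count."""
    return round(ifpug_fp * RATIO_VS_IFPUG[metric])
```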

SRM also predicts application size in terms of logical code statements or “LOC.”  However, with more than 2,500 programming languages in existence and the majority of projects using several languages, code sizing requires that users inform the SRM tool as to which language(s) will be used.  This is done by specifying a percentage for each language from an SRM pull-down menu that lists the languages supported.  Currently SRM supports about 180 languages for sizing, but this is just an arbitrary number that can easily be expanded.
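The language-mix approach can be sketched as a weighted sum over a conversion table.  The statements-per-function-point values below are hypothetical placeholders; SRM's actual language tables are proprietary:

```python
# Hypothetical logical source statements per function point.
LOC_PER_FP = {"java": 53, "c": 128, "sql": 13}

def predict_loc(size_fp, language_mix):
    """language_mix maps language name -> fraction of the application."""
    assert abs(sum(language_mix.values()) - 1.0) < 1e-9
    return sum(size_fp * share * LOC_PER_FP[lang]
               for lang, share in language_mix.items())
```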

Sizing All Known Types of Software Application
One of the advantages of sizing by means of external pattern matching rather than by internal attributes is that any known application can be sized.  Table 3 shows 40 samples of applications sized by the Software Risk Master ™ patent-pending method:

Table 3:  Examples of Software Size via Pattern Matching

                Using Software Risk Master ™

Application Size in IFPUG Function Points

  1. Oracle 229,434
  2. Windows 7 (all features) 202,150
  3. Microsoft Windows XP   66,238
  4. Google docs   47,668
  5. Microsoft Office 2003   33,736
  6. F15 avionics/weapons   23,109
  7. VA medical records   19,819
  8. Apple iPhone   19,366
  9. IBM IMS data base   18,558
  10. Google search engine   18,640
  11. Linux   17,505
  12. ITT System 12 switching   17,002
  13. Denver Airport luggage (original)   16,661
  14. Child Support Payments (state)   12,546
  15. Facebook     8,404
  16. MapQuest     3,793
  17. Microsoft Project     1,963
  18. Android OS (original version)     1,858
  19. Microsoft Excel     1,578
  20. Garmin GPS navigation (hand held)     1,518
  21. Microsoft Word     1,431
  22. Mozilla Firefox     1,342
  23. Laser printer driver (HP)     1,248
  24. Sun Java compiler     1,185
  25. Wikipedia     1,142
  26. Cochlear implant (embedded)     1,041
  27. Microsoft DOS circa 1998     1,022
  28. Nintendo Gameboy DS     1,002
  29. Casio atomic watch        933
  30. Computer BIOS        857
  31. SPR KnowledgePlan        883
  32. Function Point Workbench        714
  33. Norton anti-virus        700
  34. SPR SPQR/20        699
  35. Golf handicap analysis        662
  36. Google Gmail        590
  37. Twitter (original circa 2009)        541
  38. Freecell computer solitaire        102
  39. Software Risk Master™ prototype          38
  40. ILOVEYOU computer worm          22

This list of 40 applications was sized by the author in about 75 minutes, which is a rate of 1.875 minutes per application sized.  The cost per function point sized is less than $0.001.  As of 2013 SRM sizing is the fastest and least expensive method of sizing yet developed.  This makes SRM useful for Agile projects where normal function point analysis is seldom used.

Function Points for Early Analysis of Software Risks
Software projects are susceptible to more than 200 risks in all, of which about 50 can be analyzed using function point metrics.  As application size goes up when measured with function point metrics, software risks also go up.

Table 4 shows the comparative risk profiles of four sample projects of 100, 1000, 10,000, and 100,000 function points.  All four are “average” projects using iterative development.  All four are assumed to be at CMMI level 1.

Table 4: Average Risks for IT Projects by Size
(Predictions by Software Risk Master ™)

Risk                  100 FP    1,000 FP   10,000 FP   100,000 FP
Cancellation           8.36%     13.78%      26.03%      53.76%
Negative ROI          10.59%     17.46%      32.97%      68.09%
Cost overrun           9.19%     15.16%      28.63%      59.13%
Schedule slip         11.14%     18.38%      34.70%      71.68%
Unhappy customers     36.00%     36.00%      36.00%      36.00%
Litigation             3.68%      6.06%      11.45%      23.65%
Average Risks         13.16%     17.81%      28.29%      52.05%
Financial Risk        15.43%     25.44%      48.04%      99.24%

All of the data in Table 4 are standard risk predictions from Software Risk Master ™.  Risks would go down with higher CMMI levels, more experienced teams, and robust methodologies such as RUP or TSP.

Small projects below 1000 function points are usually completed without too much difficulty.  But large systems above 10,000 function points are among the most hazardous of all manufactured objects in human history.

It is an interesting phenomenon that, with a single exception, every software breach-of-contract lawsuit in which the author has worked as an expert witness involved a project of 10,000 function points or larger.

Activity-Based Sizing and Cost Estimating
In order to be useful for software economic analysis, function point metrics need to be applied to individual software development activities.  Corporate executives at the CEO level want to know all cost elements, and not just “design, code, and unit test” or DCUT as it is commonly called.

SRM has a variable focus that allows it to show data ranging from full projects to 40 activities.
Table 5 shows the complete set of 40 activities for an application of 10,000 function points in size:

Table 5:  Function Points for Activity-Based Cost Analysis

                                   Work Hours    Burdened Cost
      Development Activities      per Funct. Pt.  per Funct. Pt.
  1   Business analysis                0.02           $1.33
  2   Risk analysis/sizing             0.00           $0.29
  3   Risk solution planning           0.01           $0.67
  4   Requirements                     0.38          $28.57
  5   Requirements inspection          0.22          $16.67
  6   Prototyping                      0.33          $25.00
  7   Architecture                     0.05           $4.00
  8   Architecture inspection          0.04           $3.33
  9   Project plans/estimates          0.03           $2.00
 10   Initial design                   0.75          $57.14
 11   Detail design                    0.75          $57.14
 12   Design inspections               0.53          $40.00
 13   Coding                           4.00         $303.03
 14   Code inspections                 3.30         $250.00
 15   Reuse acquisition                0.01           $1.00
 16   Static analysis                  0.02           $1.33
 17   COTS package purchase            0.01           $1.00
 18   Open-source acquisition          0.01           $1.00
 19   Code security audit              0.04           $2.86
 20   Ind. verif. & valid.             0.07           $5.00
 21   Configuration control            0.04           $2.86
 22   Integration                      0.04           $2.86
 23   User documentation               0.29          $22.22
 24   Unit testing                     0.88          $66.67
 25   Function testing                 0.75          $57.14
 26   Regression testing               0.53          $40.00
 27   Integration testing              0.44          $33.33
 28   Performance testing              0.33          $25.00
 29   Security testing                 0.26          $20.00
 30   Usability testing                0.22          $16.67
 31   System testing                   0.88          $66.67
 32   Cloud testing                    0.13          $10.00
 33   Field (Beta) testing             0.18          $13.33
 34   Acceptance testing               0.05           $4.00
 35   Independent testing              0.07           $5.00
 36   Quality assurance                0.18          $13.33
 37   Installation/training            0.04           $2.86
 38   Project measurement              0.01           $1.00
 39   Project office                   0.18          $13.33
 40   Project management               4.40         $333.33

      Cumulative Results             20.44       $1,548.68

SRM uses this level of detail for collecting benchmark data from large applications.  In predictive mode prior to requirements this much detail is not needed, so a smaller chart of accounts is used.
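The activity-based arithmetic scales linearly with application size.  A minimal sketch using three of the activities from Table 5 (per-function-point values copied from the table):

```python
# (work hours per FP, burdened cost per FP) for three Table 5 activities.
ACTIVITIES = {
    "coding":             (4.00, 303.03),
    "code inspections":   (3.30, 250.00),
    "project management": (4.40, 333.33),
}

def activity_totals(size_fp):
    """Scale per-function-point rates up to a project of size_fp."""
    hours = sum(h * size_fp for h, _ in ACTIVITIES.values())
    cost = sum(c * size_fp for _, c in ACTIVITIES.values())
    return hours, cost
```

For the 10,000 function point example, these three activities alone account for roughly 117,000 work hours.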

Function Points and Methodology Analysis
One topic of considerable interest to C level executives, academics, and software engineers alike is how various methodologies compare.  Software Risk Master ™ includes empirical results from more than 30 different software development methodologies, more than any other benchmark or estimation tool.

Table 6 shows the approximate development schedules noted for 30 different software development methods.  The rankings run from slowest at the top of the table to fastest at the bottom of the table:

Table 6:  Application Schedules in Calendar Months
(Application size in IFPUG 4.2 function points)

      Methods       10      100    1,000   10,000  100,000
  1   Proofs       2.60     6.76    17.58    45.71   118.85
  2   DoD          2.57     6.61    16.98    43.65   112.20
  3   Cowboy       2.51     6.31    15.85    39.81   100.00
  4   Waterfall    2.48     6.17    15.31    38.02    94.41
  5   ISO/IEC      2.47     6.11    15.10    37.33    92.26
  6   Pairs        2.45     6.03    14.79    36.31    89.13
  7   Prince2      2.44     5.94    14.49    35.32    86.10
  8   Merise       2.44     5.94    14.49    35.32    86.10
  9   DSDM         2.43     5.92    14.39    34.99    85.11
 10   Models       2.43     5.89    14.29    34.67    84.14
 11   Clean rm.    2.42     5.86    14.19    34.36    83.18
 12   T-VEC        2.42     5.86    14.19    34.36    83.18
 13   V-Model      2.42     5.83    14.09    34.04    82.22
 14   Iterative    2.41     5.81    14.00    33.73    81.28
 15   SSADM        2.40     5.78    13.90    33.42    80.35
 16   Spiral       2.40     5.75    13.80    33.11    79.43
 17   SADT         2.39     5.73    13.71    32.81    78.52
 18   Jackson      2.39     5.73    13.71    32.81    78.52
 19   EVO          2.39     5.70    13.61    32.51    77.62
 20   IE           2.39     5.70    13.61    32.51    77.62
 21   OO           2.38     5.68    13.52    32.21    76.74
 22   DSD          2.38     5.65    13.43    31.92    75.86
 23   RUP          2.37     5.62    13.34    31.62    74.99
 24   PSP/TSP      2.36     5.56    13.11    30.90    72.86
 25   FDD          2.36     5.55    13.06    30.76    72.44
 26   RAD          2.35     5.52    12.97    30.48    71.61
 27   Agile        2.34     5.50    12.88    30.20    70.79
 28   XP           2.34     5.47    12.79    29.92    69.98
 29   Hybrid       2.32     5.40    12.53    29.11    67.61
 30   Mashup       2.24     5.01    11.22    25.12    56.23

      Average      2.41     5.81    14.03    33.90    81.98

Software Risk Master ™ also predicts staffing, effort in months and hours, costs, quality, and five years of post-release maintenance and enhancement.  Table 6 shows only schedules, since that topic is of considerable interest to CEOs as well as other C level executives.

Note that table 6 assumes close to a zero value for certified reusable components.  Software reuse can shorten schedules compared to those shown in table 6.

Table 6 also assumes an average team and no use of the capability maturity model.  Expert teams and projects in organizations at CMMI levels 3 or higher will have shorter schedules than those shown in table 6.

SRM itself handles adjustments in team skills, CMMI levels, methodologies, programming languages, and volumes of reuse.

Function Points for Software Quality Analysis
Function points are the best metric for software quality analysis.  The older “cost per defect” metric penalizes quality and also violates standard economic assumptions.  Quality economics are much better analyzed with function point metrics than with any other metric.  Table 7 shows a sample of the quality predictions from SRM for an application of 1000 function points.  The table shows a “best case” example:

Table 7: SRM Quality Estimate Output Results

Defect Potentials
Requirements defect potential          134
Design defect potential                561
Code defect potential                  887
Document defect potential              135
Total defect potential               1,717
  Per function point                  1.72
  Per KLOC                           32.20

Defect Prevention     Efficiency   Remainder   Bad Fixes      Costs
JAD                       27%         1,262         5        $28,052
QFD                       30%           888         4        $39,633
Prototype                 20%           713         2        $17,045
Models                    68%           229         5        $42,684
Subtotal                  86%           234        15       $127,415

Pre-Test Removal      Efficiency   Remainder   Bad Fixes      Costs
Desk check                27%           171         2        $13,225
Static analysis           55%            78         1         $7,823
Inspections               93%             5         0        $73,791
Subtotal                  98%             6         3        $94,839

Test Removal          Efficiency   Remainder   Bad Fixes      Costs
Unit                      32%             4         0        $22,390
Function                  35%             2         0        $39,835
Regression                14%             2         0        $51,578
Component                 32%             1         0        $57,704
Performance               14%             1         0        $33,366
System                    36%             1         0        $63,747
Acceptance                17%             1         0        $15,225
Subtotal                  87%             1         0       $283,845

Cost Summary                            Defects   Bad Fixes      Costs
Pre-release (defect removal)              1,734         3      $506,099
Post-release repairs (technical debt)         1         0          $658
Maintenance overhead                                            $46,545
Cost of quality (COQ)                                          $553,302

Delivered Defects
Defects delivered                          1
High severity                              0
Security flaws                             0
High severity %                       11.58%
Delivered per FP                       0.001
High severity per FP                   0.000
Security flaws per FP                  0.000
Delivered per KLOC                     0.014
High severity per KLOC                 0.002
Security flaws per KLOC                0.001
Cumulative removal efficiency         99.96%

Function points are able to quantify requirements and design defects, which outnumber coding defects for large applications.  This is not possible using LOC metrics.  Function points are also superior to “cost per defect” for measuring technical debt and cost of quality (COQ).  Both technical debt and COQ are standard SRM outputs.

Table 7 is only an example.  SRM can also model various ISO standards, certification of test personnel, team experience levels, CMMI levels, and in fact a total of about 200 specific quality factors.

Function Points and Software Maintenance, Enhancements, and Total Cost of Ownership
Software costs do not end when the software is delivered.  Nor does delivery put an end to the need to monitor both costs and quality.  Some applications have useful lives that can span 20 years or more.  These applications are not fixed, but add new features on an annual basis.  Therefore function point metrics need to continue to be applied to software projects after release.

Post-release costs are more complex than development costs because they need to integrate enhancements (adding new features), maintenance (fixing bugs), and customer support (helping clients when they call or contact a company about a specific application).

Keeping records for applications that grow continuously means that data normalization must be cognizant of the current size of the application.  The method used by Software Risk Master ™ is to normalize results for both enhancements and maintenance at the end of every calendar year; i.e., application size is recorded as of December 31.  The pre-release size is based on the size of the application on the day it was first delivered to clients.  The size of requirements creep during development is also recorded.

Table 8 shows the approximate rate of growth and the maintenance and enhancement effort for five years for an application of a nominal 1000 function points when first delivered:

Table 8:  Five Years of Software Maintenance and Enhancement for 1000 Function Points
(MAINTENANCE + ENHANCEMENT)

                                  Year 1      Year 2      Year 3      Year 4      Year 5      5-Year
                                   2013        2014        2015        2016        2017       Totals
Annual enhancement (FP)              80          86          93         101         109         469
Application growth in FP          1,080       1,166       1,260       1,360       1,469       1,469
Application growth in LOC        57,600      62,208      67,185      67,185      78,364      78,364
Cyclomatic complexity increase    11.09       11.54       12.00       12.48       12.98       12.98

Enhancement staff                  0.81        0.88        0.96        1.05        1.15        0.97
Maintenance staff                  5.68        5.72        5.85        6.36        7.28        6.18
Total staff                        6.49        6.61        6.81        7.41        8.43        7.15

Enhancement effort (months)        9.72       10.61       11.58       12.64       13.80       58.34
Maintenance effort (months)       68.19       68.70       70.20       76.31       87.34      370.74
Total effort (months)             77.91       79.30       81.78       88.95      101.14      429.08
Total effort (hours)          10,283.53   10,467.77   10,794.70   11,741.94   13,350.37   56,638.31

Enhancement effort %              12.47%      13.37%      14.16%      14.21%      13.64%      13.60%
Maintenance effort %              87.53%      86.63%      85.84%      85.79%      86.36%      86.40%
Total effort %                   100.00%     100.00%     100.00%     100.00%     100.00%     100.00%

Enhancement cost                 $77,733     $84,845     $92,617    $101,114    $110,403    $466,712
Maintenance cost                $331,052    $316,674    $304,368    $315,546    $347,348  $1,614,988
Total cost                      $408,785    $401,518    $396,985    $416,660    $457,751  $2,081,700

Enhancement cost %                19.02%      21.13%      23.33%      24.27%      24.12%      22.42%
Maintenance cost %                80.98%      78.87%      76.67%      75.73%      75.88%      77.58%
Total cost %                     100.00%     100.00%     100.00%     100.00%     100.00%     100.00%

The original development cost for the application was $1,027,348.  The cost for five years of maintenance and enhancement was $2,081,700, more than twice the original development cost.  The total cost of ownership is the sum of development and the five-year M&E period; in this example the TCO is $3,109,048.

CEOs and other C level executives want to know the “total cost of ownership” (TCO) of software and not just the initial development costs.

Five-year maintenance and enhancement predictions are standard outputs from Software Risk Master ™ (SRM).
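The TCO arithmetic from the example above, spelled out:

```python
development_cost = 1_027_348      # original development cost from the text
five_year_me_cost = 2_081_700     # five-year maintenance + enhancement (Table 8)

tco = development_cost + five_year_me_cost  # total cost of ownership
me_ratio = five_year_me_cost / development_cost  # M&E exceeds 2x development
```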

Function Points and Forensic Analysis of Canceled Projects
About 35% of large systems > 10,000 function points are canceled and never delivered to end users or clients.  These canceled projects are seldom studied but forensic analysis of failures can lead to important insights.

Because such projects are terminated before completion, their size at the point of termination and their accumulated cost and resource data may not be available.  But Software Risk Master ™ can provide both values.

Tables 9.1 through 9.3 are taken from Chapter 7 of the author’s book The Economics of Software Quality, Addison Wesley, 2011.

Table 9.1: Odds of Cancellation by Size and Quality Level
(Includes negative ROI, poor quality, and change in business need)

Function       Low        Average      High
Points         Quality    Quality      Quality
10               2.00%      0.00%        0.00%
100              7.00%      3.00%        2.00%
1,000           20.00%     10.00%        5.00%
10,000          45.00%     15.00%        7.00%
100,000         65.00%     35.00%       12.00%
Average         27.80%     12.60%        5.20%

Table 9.2: Probable Month of Cancellation from Start of Project
(Elapsed months from start of project)

Function       Low        Average      High
Points         Quality    Quality      Quality
10               1.4        None         None
100              5.9         5.2          3.8
1,000           16.0        13.8          9.3
10,000          38.2        32.1         16.1
100,000         82.3        70.1         25.2
Average         45.5        30.3         16.9
Percent       150.10%     100.00%       55.68%

Table 9.3: Probable Effort from Project Start to Point of Cancellation
(Effort in terms of person-months)

Function       Low          Average       High
Points         Quality      Quality       Quality
10               0.8          None          None
100             10.0           7.9           5.4
1,000          120.5          92.0          57.0
10,000       2,866.5       2,110.1         913.2
100,000     61,194.7      45,545.5      13,745.5
Average     21,393.9      15,915.9       4,905.2
Percent       134.42%      100.00%        30.82%

When high-quality projects are canceled it is usually because of business reasons.  For example the author was working on an application when his company bought a competitor that already had the same kind of application up and running.  The company did not need two identical applications, so the version under development was canceled.   This was a rational business decision and not due to poor quality or negative ROI.

When low-quality projects are canceled it is usually because they are so late and so much over budget that their return on investment (ROI) turned from positive to strongly negative.  The delays and cost overruns explain why low-quality canceled projects are much more expensive than successful projects of the same size and type.  Function point metrics are the best choice for forensic analysis of canceled projects.

Portfolio Analysis with Function Point Metrics
To be useful and interesting to CEOs and other C-level executives, function points should be able to quantify not just individual projects but also large collections of related projects, such as the full portfolio of a Fortune 500 company.

Table 10 is an example of the special SRM predictions for corporate portfolios.  This prediction shows the size, in applications, function points, and lines of code, of the portfolio of a Fortune 500 manufacturing company:

Table 10:  Portfolio Analysis of a Fortune 500 Manufacturing Company

     Corporate Functions                 Applications    Function       Lines of
                                                 Used      Points           Code

  1  Accounts payable                           18        26,674      1,467,081
  2  Accounts receivable                        22        33,581      1,846,945
  3  Advertising                                32        47,434      2,134,537
  4  Advisory boards – technical                 6         8,435        463,932
  5  Banking relationships                      38       131,543      7,234,870
  6  Board of directors                          4         6,325        347,900
  7  Building maintenance                        2         3,557        195,638
  8  Business intelligence                      21        73,972      4,068,466
  9  Business partnerships                      18        44,457      2,445,134
 10  Competitive analysis                       30        74,635      4,104,901
 11  Consultant management                       3         4,609        253,486
 12  Contract management                        32        94,868      5,217,758
 13  Customer resource management               56       140,585      7,732,193
 14  Customer support                           45        67,003      3,685,140
 15  Divestitures                               10        15,000        825,000
 16  Education – customers                       7        11,248        618,663
 17  Education – staff                           4         6,325        347,900
 18  Embedded software                          84       252,419     21,455,576
 19  Energy consumption monitoring               4         6,325        347,900
 20  Energy acquisition                          5         7,097        390,350
 21  Engineering                                79       276,699     20,752,447
 22  ERP – Corporate                            63       252,982     17,708,755
 23  Finances (corporate)                       84       210,349     11,569,183
 24  Finances (divisional)                      63       157,739      8,675,663
 25  Governance                                 10        25,000      1,375,000
 26  Government certification (if any)          24        35,571      1,956,383
 27  Government regulations (if any)            13        20,003      1,100,155
 28  Human resources                             7        11,248        618,663
 29  Insurance                                   6         8,935        491,421
 30  Inventory management                       45        67,003      3,685,140
 31  Legal department                           24        35,571      1,956,383
 32  Litigation                                 32        47,434      2,608,879
 33  Long-range planning                         7        18,747      1,031,105
 34  Maintenance – product                      75       112,484      6,186,627
 35  Maintenance – buildings                     6         8,435        632,634
 36  Manufacturing                             178       311,199     23,339,917
 37  Market research                            38        56,376      3,100,659
 38  Marketing                                  27        39,911      2,195,098
 39  Measures – customer satisfaction            4         6,325        347,900
 40  Measures – financial                       24        35,571      1,956,383
 41  Measures – market share                     8        12,621        694,151
 42  Measures – performance                      9        14,161        778,850
 43  Measures – quality                         10        15,000        825,000
 44  Measures – ROI and profitability           32        47,434      2,608,879
 45  Mergers and acquisitions                   24        59,284      3,260,639
 46  Office suites                               8        29,449      1,619,686
 47  Open-source tools – general                67       100,252      5,513,837
 48  Order entry                                27        39,911      2,195,098
 49  Outside services – manufacturing           24        35,571      1,956,383
 50  Outside services – legal                   27        66,518      3,658,497
 51  Outside services – marketing               15        22,444      1,234,394
 52  Outside services – sales                   17        25,182      1,385,013
 53  Outside services – terminations             9        11,141        612,735
 54  Outsource management                       32        47,434      2,608,879
 55  Patents and inventions                     19        28,255      1,554,010
 56  Payrolls                                   21        52,837      2,906,047
 57  Planning – manufacturing                   42        63,254      3,478,996
 58  Planning – products                        10        15,000        825,000
 59  Process management                         12        17,828        980,514
 60  Product design                             56       140,585      7,732,193
 61  Product nationalization                    13        30,004      1,650,233
 62  Product testing                            38        56,376      3,100,659
 63  Project offices                            32        55,340      3,043,692
 64  Project management                         10        27,500      1,512,500
 65  Purchasing                                 30        44,781      2,462,941
 66  Quality control                            13        20,003      1,100,155
 67  Real estate                                 8        12,621        694,151
 68  Research and development                  106       370,739     20,390,634
 69  Sales                                      45        67,003      3,685,140
 70  Sales support                              15        22,444      1,234,394
 71  Security – buildings                       21        31,702      1,743,628
 72  Security – computing and software          32       110,680      6,087,384
 73  Shareholder relationships                   8        29,449      1,619,686
 74  Shipping/receiving products                27        66,518      3,658,497
 75  Software development                       79       238,298     13,106,416
 76  Standards compliance                       13        20,003      1,100,155
 77  Stocks and bonds                           21        73,972      4,068,466
 78  Supply chain management                    47        70,973      3,903,498
 79  Taxes                                      42        84,339      4,638,662
 80  Travel                                     10        25,000      1,375,000
 81  Unbudgeted costs – cyber attacks           32        86,963      4,782,945
 82  Warranty support                            7        10,025        551,384

     Portfolio Totals                        2,366     5,192,567    308,410,789

It is obvious that sizing a full portfolio with more than 2,300 applications and more than 5,000,000 function points cannot be accomplished by manual function point counting.  At an average counting rate of 500 function points per day, counting this portfolio would take about 10,385 days.  At a cost of $1,500 per day, the expense would be roughly $15.6 million.
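
The counting arithmetic is simple enough to sketch directly.  The rates below are the ones stated in the text; the variable names are illustrative.

```python
# Manual-counting cost arithmetic for the Table 10 portfolio,
# using the counting rate and daily cost stated in the text.
portfolio_fp = 5_192_567   # total function points in the portfolio
count_rate = 500           # function points counted per day
daily_cost = 1_500         # counting cost in dollars per day

days = portfolio_fp / count_rate
cost = days * daily_cost

print(f"{days:,.0f} days, ${cost:,.0f}")  # → 10,385 days, $15,577,701
```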

This fact alone explains why faster and cheaper function point analysis is a critical step toward generating interest in function points among chief executive officers (CEOs) and other C-level executives.

Industry Studies using Function Point Metrics
One topic of high interest to CEOs and other C-level executives is how their companies compare to others in the same business sector, and how their business sector compares to other sectors.  Function point metrics are the best choice for these industry studies.

Table 11 shows approximate productivity and quality results for 68 U.S. industries using function points as the basis of analysis:

Table 11:  Approximate Industry Productivity and Quality Using Function Point Metrics

     Industry                              Software      Defect      Removal     Delivered
                                           Productivity  Potentials  Efficiency  Defects
                                           2013          2013        2013        2013

  1  Government – intelligence              7.20          5.95        99.50%      0.03
  2  Manufacturing – medical devices        7.75          5.20        98.50%      0.08
  3  Manufacturing – aircraft               7.25          5.75        98.00%      0.12
  4  Telecommunications operations          9.75          5.00        97.50%      0.13
  5  Manufacturing – electronics            8.25          5.25        97.00%      0.16
  6  Manufacturing – telecommunications     9.75          5.50        96.50%      0.19
  7  Manufacturing – defense                6.85          6.00        96.25%      0.23
  8  Government – military                  6.75          6.40        96.00%      0.26
  9  Entertainment – films                 13.00          4.00        96.00%      0.16
 10  Manufacturing – pharmaceuticals        8.90          4.55        95.50%      0.20
 11  Smartphone/tablet applications        15.25          3.30        95.00%      0.17
 12  Transportation – airlines              8.75          5.00        94.50%      0.28
 13  Software (commercial)                 15.00          3.50        94.00%      0.21
 14  Manufacturing – automotive             7.75          4.90        94.00%      0.29
 15  Transportation – bus                   8.00          5.10        94.00%      0.31
 16  Manufacturing – chemicals              8.00          4.80        94.00%      0.29
 17  Banks – investment                    11.50          4.60        93.75%      0.29
 18  Open source development               13.75          4.40        93.50%      0.29
 19  Banks – commercial                    11.50          4.50        93.50%      0.29
 20  Credit unions                         11.20          4.50        93.50%      0.29
 21  Professional support – medicine        8.55          4.80        93.50%      0.31
 22  Government – police                    8.50          5.20        93.50%      0.34
 23  Entertainment – television            12.25          4.60        93.00%      0.32
 24  Manufacturing – appliances             7.60          4.30        93.00%      0.30
 25  Software (outsourcing)                14.00          4.65        92.75%      0.34
 26  Manufacturing – nautical               8.00          4.60        92.50%      0.35
 27  Process control                        9.00          4.90        92.50%      0.37
 28  Stock/commodity brokerage             10.00          5.15        92.50%      0.39
 29  Professional support – law             8.50          4.75        92.00%      0.38
 30  Games – computer                      15.75          3.00        91.00%      0.27
 31  Social networks                       14.90          4.90        91.00%      0.44
 32  Insurance – Life                      10.00          5.00        91.00%      0.45
 33  Insurance – medical                   10.50          5.25        91.00%      0.47
 34  Public utilities – electricity         7.00          4.80        90.50%      0.46
 35  Education – University                 8.60          4.50        90.00%      0.45
 36  Automotive sales                       8.00          4.75        90.00%      0.48
 37  Hospitals                              8.00          4.80        90.00%      0.48
 38  Insurance – property and casualty      9.80          5.00        90.00%      0.50
 39  Oil extraction                         8.75          5.00        90.00%      0.50
 40  Consulting                            12.70          4.00        89.00%      0.44
 41  Public utilities – water               7.25          4.40        89.00%      0.48
 42  Publishing (books/journals)            8.60          4.50        89.00%      0.50
 43  Transportation – ship                  8.00          4.90        88.00%      0.59
 44  Natural gas generation                 6.75          5.00        87.50%      0.63
 45  Education – secondary                  7.60          4.35        87.00%      0.57
 46  Construction                           7.10          4.70        87.00%      0.61
 47  Real estate – commercial               7.25          5.00        87.00%      0.65
 48  Agriculture                            7.75          5.50        87.00%      0.72
 49  Entertainment – music                 11.00          4.00        86.50%      0.54
 50  Education – primary                    7.50          4.30        86.50%      0.58
 51  Transportation – truck                 8.00          5.00        86.50%      0.68
 52  Government – state                     6.50          5.65        86.50%      0.76
 53  Manufacturing – apparel                7.00          3.00        86.00%      0.42
 54  Games – traditional                    7.50          4.00        86.00%      0.56
 55  Manufacturing – general                8.25          5.20        86.00%      0.73
 56  Retail                                 8.00          5.40        85.50%      0.78
 57  Hotels                                 8.75          4.40        85.00%      0.66
 58  Real estate – residential              7.25          4.80        85.00%      0.72
 59  Mining – metals                        7.00          4.90        85.00%      0.74
 60  Automotive repairs                     7.50          5.00        85.00%      0.75
 61  Wholesale                              8.25          5.20        85.00%      0.78
 62  Government – federal civilian          6.50          6.00        84.75%      0.92
 63  Waste management                       7.00          4.60        84.50%      0.71
 64  Transportation – trains                8.00          4.70        84.50%      0.73
 65  Food – restaurants                     7.00          4.80        84.50%      0.74
 66  Mining – coal                          7.00          5.00        84.50%      0.78
 67  Government – county                    6.50          5.55        84.50%      0.86
 68  Government – municipal                 7.00          5.50        84.00%      0.88

     TOTAL/AVERAGES                         8.95          4.82        90.39%      0.46
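
The columns of Table 11 are internally consistent: delivered defects are, to within rounding, the defect potential multiplied by the fraction of defects that escape removal.  A quick spot check, with three rows transcribed from the table:

```python
# Delivered defects ≈ defect potential × (1 − defect removal efficiency).
# Spot check against three rows transcribed from Table 11.
rows = [
    # (industry, defect potential per FP, removal efficiency, delivered per FP)
    ("Government – intelligence", 5.95, 0.9950, 0.03),
    ("Software (commercial)",     3.50, 0.9400, 0.21),
    ("Government – municipal",    5.50, 0.8400, 0.88),
]

for name, potential, dre, delivered in rows:
    computed = potential * (1.0 - dre)
    # Agreement to within rounding of the published two-decimal values
    assert abs(computed - delivered) < 0.005, name
    print(f"{name}: {computed:.2f} delivered defects per function point")
```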

Global Studies Using Function Point Analysis
In today's world, software development is a global business.  About 60% of Indian companies and more than 80% of Indian outsource companies use function points in order to attract outsource business, with considerable success.  As already mentioned, Brazil now requires function points for all government outsource contracts.

Clearly, global competition is a topic of critical interest to all C-level executives, including CEOs, CFOs, CTOs, CIOs, CROs, and others.

Function point metrics are the only metric effective for very large-scale global studies of software productivity and quality.  Table 12 shows approximate results for 66 countries.

Table 12: Approximate Global Productivity and Quality in Function Points

     Country             Approximate      Approximate     Approximate   Approximate
                         Software         Defect          Defect        Delivered
                         Productivity     Potentials      Removal       Defects
                         (FP per Month)   in 2013         Efficiency    in 2013
                                          (Defects/FP)                  (Defects/FP)

  1  Japan                   9.15             4.50           93.50%        0.29
  2  India                  11.30             4.90           93.00%        0.34
  3  Denmark                 9.45             4.80           92.00%        0.38
  4  Canada                  8.85             4.75           91.75%        0.39
  5  South Korea             8.75             4.90           92.00%        0.39
  6  Switzerland             9.35             5.00           92.00%        0.40
  7  United Kingdom          8.85             4.75           91.50%        0.40
  8  Israel                  9.10             5.10           92.00%        0.41
  9  Sweden                  9.25             4.75           91.00%        0.43
 10  Norway                  9.15             4.75           91.00%        0.43
 11  Netherlands             9.30             4.80           91.00%        0.43
 12  Hungary                 9.00             4.60           90.50%        0.44
 13  Ireland                 9.20             4.85           90.50%        0.46
 14  United States           8.95             4.82           90.15%        0.47
 15  Brazil                  9.40             4.75           90.00%        0.48
 16  France                  8.60             4.85           90.00%        0.49
 17  Australia               8.88             4.85           90.00%        0.49
 18  Austria                 8.95             4.75           89.50%        0.50
 19  Belgium                 9.10             4.70           89.15%        0.51
 20  Finland                 9.00             4.70           89.00%        0.52
 21  Hong Kong               9.50             4.75           89.00%        0.52
 22  Mexico                  8.65             4.85           88.00%        0.58
 23  Germany                 8.85             4.95           88.00%        0.59
 24  Philippines            10.75             5.00           88.00%        0.60
 25  New Zealand             9.05             4.85           87.50%        0.61
 26  Taiwan                  9.00             4.90           87.50%        0.61
 27  Italy                   8.60             4.95           87.50%        0.62
 28  Jordan                  7.85             5.00           87.50%        0.63
 29  Malaysia                8.40             4.65           86.25%        0.64
 30  Thailand                7.90             4.95           87.00%        0.64
 31  Spain                   8.50             4.90           86.50%        0.66
 32  Portugal                8.45             4.85           86.20%        0.67
 33  Singapore               9.40             4.80           86.00%        0.67
 34  Russia                  8.65             5.15           86.50%        0.70
 35  Argentina               8.30             4.80           85.50%        0.70
 36  China                   9.15             5.20           86.50%        0.70
 37  South Africa            8.35             4.90           85.50%        0.71
 38  Iceland                 8.70             4.75           85.00%        0.71
 39  Poland                  8.45             4.80           85.00%        0.72
 40  Costa Rica              8.00             4.70           84.50%        0.73
 41  Bahrain                 7.85             4.75           84.50%        0.74
 42  Ukraine                 9.10             4.95           85.00%        0.74
 43  Turkey                  8.60             4.90           84.50%        0.76
 44  Viet Nam                8.65             4.90           84.50%        0.76
 45  Kuwait                  8.80             4.80           84.00%        0.77
 46  Colombia                8.00             4.75           83.50%        0.78
 47  Peru                    8.75             4.90           84.00%        0.78
 48  Greece                  7.85             4.80           83.50%        0.79
 49  Syria                   7.60             4.95           84.00%        0.79
 50  Tunisia                 8.20             4.75           83.00%        0.81
 51  Saudi Arabia            8.85             5.05           84.00%        0.81
 52  Cuba                    7.85             4.75           82.50%        0.83
 53  Panama                  7.95             4.75           82.50%        0.83
 54  Egypt                   8.55             4.90           82.75%        0.85
 55  Libya                   7.80             4.85           82.50%        0.85
 56  Lebanon                 7.75             4.75           82.00%        0.86
 57  Iran                    7.25             5.25           83.50%        0.87
 58  Venezuela               7.50             4.70           81.50%        0.87
 59  Iraq                    7.95             5.05           82.50%        0.88
 60  Pakistan                7.40             5.05           82.00%        0.91
 61  Algeria                 8.10             4.85           81.00%        0.92
 62  Indonesia               8.90             4.90           80.50%        0.96
 63  North Korea             7.65             5.10           81.00%        0.97
 64  Nigeria                 7.00             4.75           78.00%        1.05
 65  Bangladesh              7.50             4.75           77.00%        1.09
 66  Burma                   7.40             4.80           77.00%        1.10

     AVERAGE/TOTAL           8.59             4.85           86.27%        0.67

Some of the data in Table 12 is provisional and is included primarily to encourage more studies of productivity and quality in countries that lack effective benchmarks circa 2013.  For example, China and Russia are major producers of software but seem to lag behind India, Brazil, the Netherlands, Finland, and the United States in adopting modern metrics and function points.

Summary and Conclusions
Function point metrics are the most powerful metrics yet developed for studies of software economics, productivity, risks, and quality.  They are much better than older metrics such as "lines of code" and "cost per defect," and better than alternative metrics such as "story points" and "use-case points."

However, the slow speed and high costs of manual function point analysis have caused function points to be viewed by top executives such as CEOs as a minor niche metric.  To be useful to C-level executives, function point metrics need:

  1. Faster counting by more than an order of magnitude from today’s averages
  2. Lower costs down below $0.05 per function point counted
  3. Methodology benchmarks for all known methods
  4. Quality benchmarks for defect prevention, pre-test removal and testing
  5. Maintenance, enhancement, and total cost of ownership (TCO) benchmarks
  6. Portfolio benchmarks for many companies and government groups
  7. Industry benchmarks for all software-intensive industries
  8. Global benchmarks for all countries that produce software in large volumes

This paper discusses a patent-pending method of sizing software projects in less than 2 minutes, with function points as one of the default metrics produced.  Software Risk Master™ (SRM) produces development estimates in about 3 minutes, quality estimates in 4 minutes, and maintenance estimates in 4 minutes.

The Software Risk Master™ tool can do more than size.  SRM can also predict the results and risks of any methodology, any level of team experience, any CMMI level, any programming language (or combination of languages), and any volume of reusable materials.

References and Readings

Jones, Capers; “A Short History of Lines of Code Metrics”; Namcook Analytics Technical Report; Narragansett, RI; 2012.

This report provides a mathematical proof that “lines of code” metrics violate standard economic assumptions.  LOC metrics make requirements and design invisible.  Worse, LOC metrics penalize high-level languages.  The report asserts that LOC should be deemed professional malpractice if used to compare results between different programming languages.  There are other legitimate purposes for LOC, such as merely measuring coding speed.

Jones, Capers; “A Short History of the Cost Per Defect Metrics”; Namcook Analytics Technical Report; Narragansett, RI 2012.

This report provides a mathematical proof that “cost per defect” penalizes quality and achieves its lowest values for the buggiest software applications.  It also points out that the urban legend that “cost per defect after release is 100 times larger than early elimination” is not true.  The reason for expansion of cost per defect for down-stream defect repairs is due to ignoring fixed costs.   The cost per defect metric also ignores many economic topics such as the fact that high quality leads to shorter schedules.
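
The fixed-cost effect is easy to demonstrate with a toy model (the dollar figures here are illustrative assumptions, not data from the report): when the cost of preparing and executing a test suite is fixed regardless of how many defects it finds, cost per defect rises as quality improves, even though total cost of quality falls.

```python
# Toy model of why "cost per defect" penalizes quality: a fixed cost for
# writing and running the test suite is spread over fewer and fewer defects.
# Both dollar figures are illustrative assumptions, not data from the report.
FIXED_COST = 10_000    # preparing and executing the tests
REPAIR_COST = 250      # variable repair cost per defect found

for defects_found in (100, 10, 1):
    total = FIXED_COST + REPAIR_COST * defects_found
    per_defect = total / defects_found
    print(f"{defects_found:3d} defects: total ${total:,}, ${per_defect:,.0f} per defect")
```

Total cost falls from $35,000 to $10,250 as quality improves, yet cost per defect rises from $350 to $10,250, which is exactly the distortion the report describes.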

Jones, Capers: “Sizing Up Software;” Scientific American Magazine, Volume 279, No. 6, December 1998; pages 104-111.

Jones, Capers; “Early Sizing and Early Risk Analysis”; Capers Jones & Associates LLC;

Narragansett, RI; July 2011.

Jones, Capers and Bonsignour, Olivier; The Economics of Software Quality; Addison Wesley Longman, Boston, MA; ISBN-10: 0-13-258220-1; 2011; 585 pages.

Jones, Capers; Software Engineering Best Practices; McGraw Hill, New York, NY; ISBN 978-0-07-162161-8; 2010; 660 pages.

Jones, Capers; Applied Software Measurement; McGraw Hill, New York, NY; ISBN 978-0-07-150244-3; 2008; 662 pages.

Jones, Capers; Estimating Software Costs; McGraw Hill, New York, NY; 2007; ISBN-13: 978-0-07-148300-1.

Jones, Capers; Software Assessments, Benchmarks, and Best Practices;  Addison Wesley Longman, Boston, MA; ISBN 0-201-48542-7; 2000; 657 pages.

Jones, Capers;  Conflict and Litigation Between Software Clients and Developers; Software Productivity Research, Inc.; Burlington, MA; September 2007; 53 pages; (SPR technical report).

The literature on function point metrics is quite extensive.  Following are some of the more useful books:

Abran, Alain and Dumke, Reiner R; Innovations in Software Measurement; Shaker-Verlag, Aachen, DE; ISBN 3-8322-4405-0; 2005; 456 pages.

Abran, Alain; Bundschuh, Manfred; Dumke, Reiner; Ebert; Christof; and Zuse, Horst; Software Measurement News; Vol. 13, No. 2, Oct. 2008 (periodical).

Bundschuh, Manfred and Dekkers, Carol; The IT Measurement Compendium; Springer-Verlag, Berlin, DE; ISBN 978-3-540-68187-8; 2008; 642 pages.

Chidamber, S.R. & Kemerer, C.F.; “A Metrics Suite for Object-Oriented Design”; IEEE Trans. On Software Engineering; Vol. SE20, No. 6; June 1994; pp. 476-493.

Dumke, Reiner; Braungarten, Rene; Büren, Günter; Abran, Alain; Cuadrado-Gallego, Juan J; (editors); Software Process and Product Measurement; Springer-Verlag, Berlin; ISBN 10: 3-540-89402-0; 2008; 361 pages.

Ebert, Christof and Dumke, Reiner; Software Measurement: Establish, Extract, Evaluate, Execute; Springer-Verlag, Berlin, DE; ISBN 978-3-540-71648-8; 2007; 561 pages.

Garmus, David & Herron, David; Measuring the Software Process:  A Practical Guide to Functional Measurement;  Prentice Hall, Englewood Cliffs, NJ; 1995.

Garmus, David and Herron, David; Function Point Analysis – Measurement Practices for Successful Software Projects; Addison Wesley Longman, Boston, MA; 2001; ISBN 0-201-69944-3; 363 pages.

International Function Point Users Group (IFPUG); IT Measurement – Practical Advice from the Experts; Addison Wesley Longman, Boston, MA; 2002; ISBN 0-201-74158-X; 759 pages.

Kemerer, C.F.; “Reliability of Function Point Measurement – A Field Experiment”; Communications of the ACM; Vol. 36; pp 85-97; 1993.

Parthasarathy, M.A.; Practical Software Estimation – Function Point Metrics for Insourced and Outsourced Projects; Infosys Press, Addison Wesley, Upper Saddle River, NJ; 2007; ISBN 0-321-43910-4.

Putnam, Lawrence H.; Measures for Excellence — Reliable Software On Time, Within Budget; Yourdon Press – Prentice Hall, Englewood Cliffs, NJ; ISBN 0-13-567694-0; 1992; 336 pages.

Putnam, Lawrence H and Myers, Ware.;  Industrial Strength Software – Effective Management Using Measurement; IEEE Press, Los Alamitos, CA; ISBN 0-8186-7532-2; 1997; 320 pages.

Stein, Timothy R; The Computer System Risk Management Book and Validation Life Cycle; Paton Press, Chico, CA; 2006; ISBN 10: 1-9328-09-5; 576 pages.

Stutzke, Richard D; Estimating Software-Intensive Systems; Addison Wesley, Upper Saddle River, NJ; 2005; ISBN 0-201-70312-2; 918 pages.
