Function Points

A more thorough discussion of function point analysis is provided in Appendix A.

The inherent problems of LOC as a metric for estimation and productivity created the need for a better software metric. In 1979, Allan Albrecht of IBM proposed the idea of function points at a conference hosted by IBM in Monterey, California (Albrecht 1979). Function points are a synthetic metric, like the ones we use every day: hours, kilos, tons, nautical miles, degrees Celsius, and so on. Function points, however, focus on the functionality and complexity of an application system or a particular module. For example, just as a 20 degree Celsius day is warmer than a 10 degree Celsius day, a 1,000 function point application is larger and more complex than a 500 function point application.

An important advantage of function points is that they are independent of the technology. More specifically, functionality and technology are kept separate, so we can compare applications even when they use different programming languages or technology platforms; for example, we can compare one application written in COBOL with another developed in Java. Moreover, function point analysis is reliable: two people who are skilled and experienced in function point analysis will obtain the same count, within an acceptable margin of error.

Counting function points is fairly straightforward; however, the rules can be complex for the novice. It is recommended that anyone serious about learning function point analysis become certified. Although several function point organizations exist, the two main ones are the International Function Point Users Group (IFPUG) and the United Kingdom Function Point Users Group (UFPUG). Both of these nonprofit organizations oversee the rules, guidelines, standards, and certifications for function point analysis. In addition, there are resources at the end of the chapter if you are interested in learning more about function points.

The key to counting function points is having a good understanding of the user’s requirements. Early in the project, a function point analysis can be conducted based on the project’s scope, and a more detailed count can follow once the user’s requirements are refined during the analysis and design phases. In fact, function point analysis can and should be conducted at various stages of the project life cycle. For example, a count based on the project’s scope definition can be used for estimation and for developing the project’s plan. During the analysis and design phases, function points can be used to manage and report progress and to monitor scope creep. In addition, a count conducted during or after the project’s implementation can be useful for determining whether all of the functionality was delivered. By capturing this information in a repository or database, it can be combined with other metrics useful for benchmarking, estimating future projects, and understanding the impact of new methods, tools, technologies, and best practices that were introduced.

Function point analysis is based on an evaluation of five data and transactional types that define the application boundary as illustrated in Figure 6.5.


Figure 6.5 The Application Boundary for Function Point Analysis

* Internal logical file (ILF)—An ILF is a logical file that stores data within the application boundary. For example, each entity in an Entity-Relationship Diagram (ERD) would be considered an ILF. The complexity of an ILF can be classified as low, average, or high based on the number of data elements and subgroups of data elements maintained by the ILF. An example of a subgroup would be new customers for an entity called customer. Examples of data elements would be customer number, name, address, phone number, and so forth. In short, ILFs with fewer data elements and subgroups will be less complex than ILFs with more data elements and subgroups (a short sketch after this list illustrates the rating).
* External interface file (EIF)—An EIF is similar to an ILF; however, an EIF is a file maintained by another application system. The complexity of an EIF is determined using the same criteria used for an ILF.
* External input (EI)—An EI refers to a process or transactional data that originates outside the application and crosses the application boundary from outside to inside. The data generally are added, deleted, or updated in one or more files internal to the application (i.e., internal logical files). A common example of an EI is a screen that allows the user to input information using a keyboard and a mouse. Data can, however, also pass through the application boundary from other applications. For example, a sales system may need a customer’s current balance from an accounts receivable system. Each EI is classified as low, average, or high based on its complexity in terms of the number of internal files referenced, the number of data elements (i.e., fields) included, and other human factors.
* External output (EO)—Similarly, an EO is a process or transaction that allows data to exit the application boundary. Examples of EOs include reports, confirmation messages, derived or calculated totals, and graphs or charts. This data could go to screens, printers, or other applications. After the EOs are counted, each is rated based on its complexity, like the external inputs.
* External inquiry (EQ)—An EQ is a process or transaction that includes a combination of inputs and outputs for retrieving data from internal files or from files external to the application. EQs do not update or change any data stored in a file; they only read this information. Queries with different processing logic or a different input or output format are counted as separate EQs. Once the EQs are identified, each is classified as low, average, or high based on the number of files referenced and the number of data elements included in the query.
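
To make the rating of files concrete, here is a minimal Python sketch of the complexity matrix commonly published in IFPUG materials for ILFs and EIFs (data element types, or DETs, versus record element types, or RETs, i.e., subgroups). The thresholds below are an assumption drawn from those materials; the IFPUG manual remains the authoritative source.

```python
def file_complexity(data_elements: int, subgroups: int) -> str:
    """Rate an ILF or EIF as low, average, or high using the
    DET-versus-RET matrix commonly published in IFPUG materials.
    Thresholds are illustrative, not authoritative."""
    # Column: number of subgroups (RETs): 1, 2-5, or 6+.
    col = 0 if subgroups == 1 else (1 if subgroups <= 5 else 2)
    # Row: number of data elements (DETs): 1-19, 20-50, or 51+.
    row = 0 if data_elements <= 19 else (1 if data_elements <= 50 else 2)
    matrix = [
        ["low",     "low",     "average"],  # 1-19 DETs
        ["low",     "average", "high"],     # 20-50 DETs
        ["average", "high",    "high"],     # 51+ DETs
    ]
    return matrix[row][col]

# A customer ILF with 12 data elements (number, name, address,
# phone, ...) and 2 subgroups (e.g., new and existing customers):
print(file_complexity(12, 2))  # -> "low"
```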

Once all of the ILFs, EIFs, EIs, EOs, and EQs are counted and their relative complexities rated, an unadjusted function point (UAF) count is determined. For example, let’s say that after reviewing an application system, the following was determined:

* ILF: 3 Low, 2 Average, 1 High
* EIF: 2 Average
* EI: 3 Low, 5 Average, 4 High
* EO: 4 Low, 2 Average, 1 High
* EQ: 2 Low, 5 Average, 3 High

Using Table 6.1, the UAF value is calculated:

Table 6.1 Computing UAF

| Component | Low | Average | High | Total |
|---|---|---|---|---|
| Internal logical files (ILF) | 3 × 7 = 21 | 2 × 10 = 20 | 1 × 15 = 15 | 56 |
| External interface files (EIF) | _ × 5 = _ | 2 × 7 = 14 | _ × 10 = _ | 14 |
| External inputs (EI) | 3 × 3 = 9 | 5 × 4 = 20 | 4 × 6 = 24 | 53 |
| External outputs (EO) | 4 × 4 = 16 | 2 × 5 = 10 | 1 × 7 = 7 | 33 |
| External inquiries (EQ) | 2 × 3 = 6 | 5 × 4 = 20 | 3 × 6 = 18 | 44 |
| Total unadjusted function points (UAF) | | | | 200 |
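
The arithmetic in Table 6.1 is a straightforward weighted sum, as the following Python sketch shows. The weights are the standard low/average/high values from the table; the variable names are illustrative.

```python
# Counts of (low, average, high) components from the example above.
counts = {
    "ILF": (3, 2, 1),
    "EIF": (0, 2, 0),
    "EI":  (3, 5, 4),
    "EO":  (4, 2, 1),
    "EQ":  (2, 5, 3),
}

# Standard weights per component type, as used in Table 6.1.
weights = {
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
}

# Multiply each count by its weight and sum across all components.
uaf = sum(n * w for t in counts for n, w in zip(counts[t], weights[t]))
print(uaf)  # -> 200, matching the UAF in Table 6.1
```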

The next step in function point analysis is to compute the Value Adjustment Factor (VAF). The VAF is based on the total Degrees of Influence (DI), often called the Processing Complexity Adjustment (PCA), which is derived from the fourteen General System Characteristics (GSCs) shown in Table 6.2. To determine the total DI, each GSC is rated on the following scale from 0 to 5:

* 0 = not present or no influence
* 1 = incidental influence
* 2 = moderate influence
* 3 = average influence
* 4 = significant influence
* 5 = strong influence

Table 6.2 GSC and Total Adjusted Function Points

| General System Characteristic | Degree of Influence |
|---|---|
| Data communications | 3 |
| Distributed data processing | 2 |
| Performance | 4 |
| Heavily used configuration | 3 |
| Transaction rate | 3 |
| Online data entry | 4 |
| End-user efficiency | 4 |
| Online update | 3 |
| Complex processing | 3 |
| Reusability | 2 |
| Installation ease | 3 |
| Operational ease | 3 |
| Multiple sites | 1 |
| Facilitate change | 2 |
| Total degrees of influence (TDI) | 40 |

Value adjustment factor: VAF = (TDI × 0.01) + 0.65

VAF = (40 × 0.01) + 0.65 = 1.05

Total adjusted function points: FP = UAF × VAF

FP = 200 × 1.05 = 210
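
Continuing the example, the VAF and the adjusted function point count follow directly from the ratings in Table 6.2. A minimal sketch (variable names are illustrative):

```python
# Ratings (0-5) for the fourteen GSCs, in the order of Table 6.2.
gsc_ratings = [3, 2, 4, 3, 3, 4, 4, 3, 3, 2, 3, 3, 1, 2]

tdi = sum(gsc_ratings)       # total degrees of influence -> 40
vaf = tdi * 0.01 + 0.65      # value adjustment factor -> 1.05
fp = 200 * vaf               # UAF from Table 6.1 -> 210.0 adjusted FPs
print(tdi, vaf, fp)          # 40 1.05 210.0
```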


Related Links:

Function Point Manual
