by: Grant Rule
A Comparison of the Mark II and IFPUG Variants of Function Point Analysis
This document aims to show the similarities between the two most popular variants of function point analysis while also pointing out the main differences.
It is hoped that this will assist practitioners in understanding the common principles and objectives that underpin these techniques. In particular, it aims to help those on the point of selecting a functional size measurement technique for use in a newly established software metrics programme. It should also enable interested practitioners to use both techniques in parallel with minimum effort.
It does not aim to make a value judgement regarding which is the "better" technique; that is left as an exercise for the reader.
The two variants compared are those documented in the IFPUG FPA Counting Practices Manual Release 4.0 and the UFPUG Mark II FPA Counting Practices Manual Version 1.0.
The usual approach for interpreting between multiple natural languages, such as the various national languages of the countries of the European Union, is to choose a "base" language, and to map each of the subject languages to that. In this way comparison necessitates, for N languages, only N-1 mappings, rather than N x (N-1). Hence, in order to establish a mapping between IFPUG (Albrecht) function points and Mark II function points, it is useful to map both to the ISO draft standard for Functional Size Measurement ISO94. This facilitates subsequent comparison with other variants.
This comparison of the two most popular techniques of function point analysis (FPA) therefore uses the terminology introduced by the ISO draft standard.
The ISO draft standard requires that any technique for functional size measurement (FSM) must measure the functional requirements of the application under consideration.
The term functional requirement is not defined in the standard, but we take it to mean "something that the application must do". This is in contrast to the qualitative or non-functional requirements of the application; that is, how well the application must perform in terms of a wide range of quality and resource consumption characteristics. A hierarchy of qualitative requirements is the subject of a separate ISO standard ISO9126.
FPA techniques attempt to quantify all of the functional and qualitative requirements. However, in both the techniques discussed here, this is achieved by initial determination of the functional size, then application of an adjustment to this size value, to cater for the additional effort involved in fulfilling the qualitative requirements. The main differences between IFPUG and Mark II FPA are to be found in the way in which the functional size is determined, so we will consider that first.
The ISO draft standard requires that any technique for functional size measurement (FSM) must measure the functional requirements of an application in terms of one or more types of base logical components (BLC).
A base logical component is defined as "An elemental unit of functional user requirements defined by and used by an FSM Method for measurement purposes".
Each type of base logical component must have an explicit relationship with the application boundary, where the boundary is defined as "The conceptual interface between the software under study and its users".
Any FSM technique must provide rules for establishing the identity of instances of the various base logical component types instantiated by the software under consideration. Rules must also be provided to govern how a numeric value, representing the functional size, may be assigned to each BLC instance. Once identified, the functional size of the application is determined by simple summation of the size of each of the instances of the base logical component types.
The ISO draft standard thus provides us with a structure on which we may base our comparison of Mark II and IFPUG function points.
Expression of Functional Requirements
Both the Mark II UFPUG94 and IFPUG FPA IFPUG94 techniques express the functional requirements in terms of base logical components.
Table 1: Base Logical Components
Mark II uses a single type of BLC and expresses all functional requirements as a catalogue of logical transactions.
IFPUG uses five BLC types, as shown in Table 1, but does not use the catalogue concept; no relationship between external inputs and external outputs is made explicit.
Some other relationships are made clear (see below).
An ILF is maintained by one or more EIs, although an EI containing control information "may or may not maintain an ILF".
An EQ "is an elementary process made up of an input-output combination that results in data retrieval" from one or more ILFs or EIFs (my italics). "No ILF is maintained during processing".
However, there are some surprises. For example, an EO is defined as "An elementary process that generates data or control information sent outside the application boundary." This seems to permit output messages to contain data that originates (either by direct retrieval or calculation) from neither an ILF nor an EIF! This appears to be an error of omission; one assumes that there is an implied relationship between an EO and one or more ILFs and/or EIFs from which it extracts information, whether or not that data is manipulated, edited and formatted.
The ISO standard requires every FSM to define the relationships, if any, between the BLC types. This is trivial in Mark II FPA, as there is only one BLC type. However, in IFPUG FPA, the set of defined relationships seems incomplete (as illustrated above).
Constituent Parts of Base Logical Components
The ISO standard requires an FSM to assign appropriate numeric values to each BLC. However, it does not specify how the FSM is to derive such values. Both IFPUG and Mark II FPA fulfil this requirement. In both cases, numeric values are assigned after identifying and evaluating the constituent parts from which the BLC types are composed.
The base logical component types are composed of various parts. Each of these parts is visible within the functional requirements.
Tables 2 and 3 show how each constituent part is realised in the external world for Mark II and IFPUG FPA respectively.
Table 2: Constituent Parts of Mark II Base Logical Components
Table 3: Constituent Parts of IFPUG Base Logical Components
The logical expression of business data is common to both techniques. Regardless of the terminology used, what is always of concern is the set of business requirements.
The IFPUG Counting Practices Manual indicates that external inputs and external outputs may consist of "data or control information". The use of the term "control information" refers to data that is not physically input by the human user, but rather is originated by some automatic aspect of the application. For example, system dates and times taken from the system clock, or automatic "notification that an employee has completed 12 months on a job assignment and that a performance review is required". Control information is defined as "data used by the application to ensure compliance with business function requirements specified by the user". Hence, this control information would still need to exist in the absence of a computerised solution; it is equivalent to logical data in a logical message.
The logical data model, expressed as a set of relational tables (i.e. entity types), is one of the most widely supported techniques used in the information technology (IT) industry. It is also one of the few software engineering techniques with a solid mathematical underpinning, in the form of Dr. Edgar Codd's relational algebra.
Hence, it is logical to describe user requirements for retained business data in terms of a logical data model. Often, it is presented graphically, as an entity relationship diagram. Such a model is comprised of data tables in normalised form, usually third normal form (3NF). This gives the optimum logical data structure, ensuring each data attribute occurs the minimum necessary number of times.
Figure 1 illustrates the various components recognised by the two FPA techniques.
It must be emphasised however, that the diagram is not the model! Nor is the problem dependent on the techniques used to solve it. The entity types and the relationships between them exist in the external world, whether or not we choose to describe them, and regardless of which techniques or notations we use to discuss and communicate about the situation.
In Mark II FPA, each entity type is treated as independent and references to entity types are counted per logical transaction.
In IFPUG FPA, entity types are grouped to form internal logical files (ILF), if within the application boundary, or external interface files (EIF) if outwith the application boundary. References to entity types then are counted as file type references (FTR) per external input (EI), external output (EO) or external query (EQ).
The IFPUG CPM v.4.0 specifically states that there is not necessarily a one to one relationship between third normal form (3NF) entity types and Internal Logical Files and External Interface Files (page 513). However, the mapping of ILFs & EIFs to 3NF entity types remains somewhat unclear.
For instance, where a many to many relationship exists between two entity types, 3NF requires this to be split into two one to many relationships supported by means of an associative entity (sometimes called a link entity). The key of such an associative entity consists of the unique identifiers of the "parent" entity types, concatenated together. Sometimes such an associative entity contains nothing but the key, but in other situations it contains data attributes that describe or qualify the relationship. Typically, the many to many relationship modelled by the associative entity is optional; that is, a "parent" entity on one side of the relationship may be related to no instances of the "parent" entity on the other side of the relationship.
From the IFPUG rules for ILF & EIF recognition, it is clear that the associative entity should contribute to the count in some way, as it defines a user-recognisable relationship.
However, IFPUG CASE STUDY 1 IFPUG94-2 contains the following (on page 815).
For a relational table:
This implies that, where an associative entity consists of only the key attributes, it is treated as an additional Record Element Type of the "parent" entities (whether of one or both is not clear). However, where the associative entity contains non-key data, it is treated as a distinct Internal Logical File! This distinction seems unnecessarily perverse, and makes accurate identification of the ILFs & EIFs early in the project lifecycle almost impossible, as detailed knowledge of the data contents of each table is required before it can be classified.
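The case-study rule can be sketched as follows. This is a minimal illustration (the table and attribute names are hypothetical), assuming the reading of the rule given above: a key-only associative table contributes a Record Element Type to a parent file, while one carrying non-key data becomes a separate Internal Logical File.

```python
# Illustrative sketch (hypothetical names) of the IFPUG CASE STUDY 1 rule:
# a key-only associative table counts as an extra Record Element Type (RET)
# of a "parent" ILF, whereas one holding non-key attributes counts as a
# distinct Internal Logical File.

def classify_associative_table(key_attributes, non_key_attributes):
    """Return how an associative entity contributes to the IFPUG count."""
    if non_key_attributes:
        # The non-key data qualifies the relationship in its own right.
        return "separate ILF"
    # Nothing but the concatenated parent keys.
    return "RET of a parent ILF"

# A key-only Order/Product link table versus one that also records a quantity:
print(classify_associative_table(["order_id", "product_id"], []))
print(classify_associative_table(["order_id", "product_id"], ["quantity"]))
```

Note that the classification flips on a single non-key attribute, which is exactly why detailed knowledge of each table's contents is needed before counting.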
Relationship Between the BLC Types and the Boundary
Both techniques, as required by the standard, indicate that there are clear relationships between the types of base logical component and the boundary of the application under consideration. These relationships are tabulated in Tables 4 and 5.
Table 4: Mark II Base Logical Components
Table 5: IFPUG Base Logical Components
Deriving the Base Counts
Both techniques couch their rules for evaluating the size of their respective base logical components in terms of specified base counts. These are counts of specification objects from which the constituent parts of the BLC types are composed. In both techniques, counts are made of the number of data attribute types in the message parts of the BLCs and the number of references to data retained within the application or related peer applications.
Although the techniques use different terminology, the definitions used are similar. Tables 6 and 7 paraphrase the definitions used in the counting practices manuals for the two techniques.
Table 6: Definitions of Counted Elements in Mark II FPA
Table 7: Definitions of Counted Elements in IFPUG FPA
Note that, whereas in Mark II FPA counts result from the transactional inputs and outputs and the necessary references to retained data made during the course of those transactions, in IFPUG FPA internal logical files and external interface files are identified and their constituent parts (i.e. data element types per record element type) expressly counted as contributing to delivered functionality in their own right.
Also, in Mark II, one entity reference is counted for each entity type accessed during the course of a logical transaction. In IFPUG FPA, one file type reference is counted for each ILF or EIF accessed during the course of an external input, output or query. As both ILFs and EIFs are groups of logically dependent entity types, this practice results in lower values being credited by IFPUG FPA for the contribution made to the size of a transaction by those parts of it that access the data retained in the application. (Of course, this bias against data accesses is balanced by separate counts for the ILFs and EIFs).
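The difference in counting granularity can be shown with a small sketch (the entity and file names are hypothetical). Assume a transaction reads three 3NF entity types that IFPUG has grouped into a single internal logical file:

```python
# Hypothetical transaction reading three 3NF entity types, all grouped by
# IFPUG into one Internal Logical File. Mark II credits one entity reference
# per entity type accessed; IFPUG credits one File Type Reference (FTR) per
# ILF/EIF touched, however many entity types that file contains.

entities_accessed = {"Order", "OrderLine", "Product"}
ilf_grouping = {"Order": "OrderFile",
                "OrderLine": "OrderFile",
                "Product": "OrderFile"}

mark2_entity_refs = len(entities_accessed)                      # one per entity type
ifpug_ftrs = len({ilf_grouping[e] for e in entities_accessed})  # one per distinct file

print(mark2_entity_refs)  # 3 entity references in Mark II
print(ifpug_ftrs)         # 1 FTR in IFPUG
```

This is the lower per-transaction credit described above; IFPUG recovers the difference by counting the ILFs and EIFs themselves.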
Note that both techniques treat sub-types of an entity type in much the same way.
In Mark II FPA "Some transactions may differentiate between sub-types of a primary entity and perform different operations on them". "If, within a single transaction, a sub-entity is required to be handled differently from other sub-entities of the same entity" each separately accessed sub-entity is counted as a distinct entity reference.
In IFPUG FPA, every "subgroup of data elements within an Internal Logical File or an External Interface File" is classified as either a mandatory subgroup or an optional subgroup. This seems to be equivalent to entity subtypes that have complete coverage and those that have partial coverage, respectively. In either case, "Count a Record Element Type for each optional or mandatory subgroup of the Internal Logical File or the External Interface File."
To all intents and purposes, and unremarkably, the specification objects that must be identified are the same for both techniques. They are as follows.
Input messages that enter the application boundary, and the data attribute types of which they are composed.
Output messages that leave the application boundary, and the data attribute types of which they are composed.
Error messages that leave the application boundary, and the data attribute types of which they are composed.
Entity types in third normal form.
Additionally, for IFPUG FPA, the data attribute types stored in those entity types.
The main differences between the two techniques arise from how the base counts are constructed, not from what is counted.
To determine the base counts, the practitioner must perform the following steps.
IFPUG FPA practitioners must perform these additional steps.
There are some further minor variations. For instance, Mark II uses the concept of the "system entity". This distinguishes data tables that contain only implementation-dependent information from those that contain business information that would exist even in the absence of a computer system. Mark II then limits the count of references to the "system entity" to zero or one per logical transaction. IFPUG FPA does not count implementation-dependent data at all.
Tables 8 and 9 illustrate the relevant base counts for Mark II and IFPUG FPA respectively.
In Mark II, these base counts are used directly in the calculation of the functional size index, expressed in Mark II unadjusted function points.
In IFPUG FPA, these base counts are used to determine the magnitude of each base logical component. Using tables provided by the IFPUG Counting Practices Manual, the magnitude of each BLC is assessed as small, average or large, based on the respective values of the base counts of data attributes and references to entity types, or the number of record element types in the case of ILFs and EIFs.
In fact, the IFPUG Counting Practices Manual uses the term complexity, with ratings of low, medium and high, rather than the term magnitude. However, this is confusing: the assessment has nothing to do with the simplicity, intricacy or processing complications of the BLC (which would introduce implementation issues into the evaluation); it is made simply on the relative size of the numbers involved. Hence, the term magnitude is preferred here, to avoid confusion with the Mark II understanding of the term technical complexity.
Table 8: Mark II Base Counts
Table 9: IFPUG Base Counts
Weighting - Correcting for the Different Kinds of Contribution
Input messages, output messages and references to retained data each make their own contributions to an application. Each is a necessary part of a system and requires the existence of the other parts. However, the contribution each makes is of a different kind.
Input messages must acquire and validate incoming data. Output messages must format and write data across the boundary. References to retained data require operational reads, or create, update or delete operations.
Hence, the base counts are counts of unlike things. In order to combine them, to derive a single numeric value for the functional size index, we must contrive to normalise the base counts to use a single unit. This is achieved by using a suitable system of relative weights.
Note that the term "weight" is used in the IFPUG document "Function Points as an Asset" IFPUG92 page 8, in the definition of the Work Product and Work Effort Metrics.
In Mark II FPA, three weights are used: one for Input Types (Wi), one for Output Types (Wo) and one for Entity References (We). The values for these weights are usually referred to as the industry average weights (see Table 10) and are chosen to add up to 2.5. This contrivance is intended to maintain correspondence with IFPUG function points. The weighting values have been calculated, and are periodically validated, from historic data of numerous projects in a number of large development organisations. The data from which the average weights are calculated is published and the values may be calibrated.
Table 10: Weights used in the Mark II Counting Practices Manual v. 1.0
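The Mark II calculation can be sketched as a simple weighted sum. The weight values below are the published industry averages (Wi = 0.58, We = 1.66, Wo = 0.26, summing to 2.5); the counts in the example are hypothetical:

```python
# Mark II functional size index: the sum, over the catalogue of logical
# transactions, of the weighted counts of input data attribute types,
# entity references and output data attribute types.

W_I, W_E, W_O = 0.58, 1.66, 0.26   # industry-average weights, total 2.5

def mark2_size(input_dets, entity_refs, output_dets):
    """Unadjusted Mark II function points from the three base counts."""
    return W_I * input_dets + W_E * entity_refs + W_O * output_dets

# e.g. a transaction catalogue totalling 120 input attribute types,
# 45 entity references and 200 output attribute types:
print(round(mark2_size(120, 45, 200), 1))
```

Because the weights were fitted to project data, they may be recalibrated locally without changing the structure of the calculation.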
In IFPUG FPA, the weighting system is more complicated due to the larger number of base logical component types. Each base logical component is assigned a weighted number of function points, depending upon its BLC type and upon its magnitude (small, average or large). For instance, a small external output is assigned 4 IFPUG function points, while average and large external outputs are assigned 5 and 7 IFPUG function points respectively.
Table 11 provides the full list of 15 weights used in IFPUG FPA.
Table 11: Weights used in the IFPUG Counting Practices Manual v. 4.0
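The 15 weights can be expressed as a lookup table. The external output row (4/5/7) is confirmed in the text above; the remaining values are the standard CPM 4.0 weights, and "small/average/large" here stands in for the manual's own complexity ratings:

```python
# IFPUG weighting: each BLC instance contributes a fixed number of function
# points according to its type and magnitude (5 types x 3 magnitudes = 15
# weights in all).

IFPUG_WEIGHTS = {
    "EI":  {"small": 3, "average": 4,  "large": 6},
    "EO":  {"small": 4, "average": 5,  "large": 7},
    "EQ":  {"small": 3, "average": 4,  "large": 6},
    "ILF": {"small": 7, "average": 10, "large": 15},
    "EIF": {"small": 5, "average": 7,  "large": 10},
}

def weighted_count(blcs):
    """Sum the weights of a list of (type, magnitude) BLC instances."""
    return sum(IFPUG_WEIGHTS[t][m] for t, m in blcs)

# Two small external inputs, one large external output and an average ILF:
print(weighted_count([("EI", "small"), ("EI", "small"),
                      ("EO", "large"), ("ILF", "average")]))  # 3+3+7+10 = 23
```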
Interestingly, the IFPUG Counting Practices Manual gives no justification for the values used in this weighting system.
In both the IFPUG and Mark II techniques, the summed totals of the respective weighted counts give the functional size of the application, expressed in function points. This value is termed the functional size index.
Adjusting for the Qualitative Requirements
The techniques use similar approaches to account for the qualitative requirements.
In both cases, a list of non-functional system characteristics is evaluated on a scale of zero to five. A value of zero means "no influence" and a value of five means "strong influence throughout" every stage of the application development.
The techniques differ in the number of system characteristics evaluated: IFPUG uses 14; Mark II uses the same 14 and adds another five. Mark II also permits practitioners to add further system characteristics to the list (but few people, if any, do so).
Once the degree of influence of the chosen set of system characteristics has been evaluated, they are added to give the total degree of influence of the entire set. This value is then used in a calculation that adjusts the functional size index to give a new value that supposedly represents the total size of the functional and qualitative requirements combined.
The calculation differs between IFPUG and Mark II FPA. In IFPUG, the adjustment may change the functional size index up or down by a maximum of 35%. In Mark II FPA the adjustment may increase the functional size index by a maximum of 12.5% and may decrease it by a maximum of 35%.
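Both adjustments have the form (0.65 + C x total degree of influence) applied to the functional size index. The coefficients below are a sketch that reproduces the ranges stated above: with 14 characteristics at 0.01 each, IFPUG spans 0.65 to 1.35 (plus or minus 35%); with 19 characteristics at 0.005 each, Mark II spans 0.65 to 1.125 (minus 35% to plus 12.5%):

```python
# Adjustment factors applied to the functional size index. Each system
# characteristic is scored 0..5, and the scores are summed into a total
# degree of influence (TDI).

def ifpug_adjustment(tdi):
    """IFPUG: 14 characteristics, so TDI ranges over 0..70."""
    return 0.65 + 0.01 * tdi

def mark2_adjustment(tdi):
    """Mark II: 19 characteristics, so TDI ranges over 0..95."""
    return 0.65 + 0.005 * tdi

# Extremes of each scale: 0.65 at TDI = 0 (a 35% decrease in both cases);
# 1.35 for IFPUG at TDI = 70, but only 1.125 for Mark II at TDI = 95.
print(round(ifpug_adjustment(70), 3))
print(round(mark2_adjustment(95), 3))
```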
In both cases, the method of adjustment for the qualitative requirements is largely discredited as being unrealistic. Many practitioners ignore the adjustment and work using the functional size index alone. In this case, some other technique is used to account for the qualitative requirements.
The ISO Draft Standard for Functional Size Measurement Methods seems to deliberately ignore the qualitative requirements as a contributor to the "functional size" of an application. A separate standard, ISO 9126 mentioned previously, does advocate measurable specification and evaluation of qualitative requirements. This seems to endorse the practice of treating the "functional size" and the "non-functional qualities" of an application as distinct, separately specifiable and deliverable attributes.
GIFPA Ltd. 2016