A Comparison of the Mark II and IFPUG Variants of Function Point Analysis

by Grant Rule

Introduction

This document aims to show the similarities between the two most popular variants of function point analysis while also pointing out the main differences.

It is hoped that this will assist practitioners to understand the common principles and objectives that underpin these techniques. In particular, it aims to help those on the point of selecting a functional size measurement technique for use in a newly established software metrics programme. It should also enable interested practitioners to use both techniques in parallel with minimal effort.

It does not aim to make a value judgement regarding which is the "better" technique; that is left as an exercise for the reader.

The two variants compared are those documented in the IFPUG FPA Counting Practices Manual Release 4.0 and the UFPUG Mark II FPA Counting Practices Manual Version 1.0.

 

Mapping Approach

The usual approach for interpreting between multiple natural languages, such as the various national languages of the countries of the European Union, is to choose a "base" language and to map each of the subject languages to that. In this way comparison necessitates, for N languages, only N-1 mappings, rather than N x (N-1); for example, five languages need only four mappings instead of twenty. Hence, in order to establish a mapping between IFPUG (Albrecht) function points and Mark II function points, it is useful to map both to the ISO draft standard for Functional Size Measurement ISO94. This facilitates subsequent comparison with other variants.

This comparison of the two most popular techniques of function point analysis (FPA) therefore uses the terminology introduced by the ISO draft standard.

The ISO draft standard requires that any technique for functional size measurement (FSM) must measure the functional requirements of the application under consideration.

The term functional requirement is not defined in the standard, but we take it to mean "something that the application must do". This is in contrast to the qualitative or non-functional requirements of the application; that is, how well the application must perform in terms of a wide range of quality and resource consumption characteristics. A hierarchy of qualitative requirements is the subject of a separate ISO standard ISO9126.

FPA techniques attempt to quantify all of the functional and qualitative requirements. However, in both the techniques discussed here, this is achieved by initial determination of the functional size, then application of an adjustment to this size value, to cater for the additional effort involved in fulfilling the qualitative requirements. The main differences between IFPUG and Mark II FPA are to be found in the way in which the functional size is determined, so we will consider that first.

The ISO draft standard requires that any technique for functional size measurement (FSM) must measure the functional requirements of an application in terms of one or more types of base logical components (BLC).

A base logical component is defined as "An elemental unit of functional user requirements defined by and used by an FSM Method for measurement purposes".

Each type of base logical component must have an explicit relationship with the application boundary, where the boundary is defined as "The conceptual interface between the software under study and its users".

Any FSM technique must provide rules for establishing the identity of instances of the various base logical component types instantiated by the software under consideration. Rules must also be provided to govern how a numeric value, representing the functional size, may be assigned to each BLC instance. Once identified, the functional size of the application is determined by simple summation of the size of each of the instances of the base logical component types.
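
In symbols (a sketch only; the notation below is our own, not taken from the draft standard):

```latex
\mathrm{FunctionalSize}(app) \;=\; \sum_{T \,\in\, \text{BLC types}} \;\; \sum_{b \,\in\, \mathrm{instances}(T)} \mathrm{size}(b)
```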

The ISO draft standard thus provides us with a structure on which we may base our comparison of Mark II and IFPUG function points.

 

Expression of Functional Requirements

Both the Mark II UFPUG94 and IFPUG FPA IFPUG94 techniques express the functional requirements in terms of base logical components.

Table 1: Base Logical Components

| Mark II             | IFPUG v 4.0                   |
|---------------------|-------------------------------|
| Logical Transaction | External Input (EI)           |
|                     | External Output (EO)          |
|                     | External Query (EQ)           |
|                     | Internal Logical File (ILF)   |
|                     | External Interface File (EIF) |

Mark II uses a single type of BLC and expresses all functional requirements as a catalogue of logical transactions.

IFPUG uses five BLC types, as shown in Table 1, but does not use the catalogue concept; no relationship between external inputs and external outputs is made explicit.

Some other relationships are, however, made clear:

    An ILF is maintained by one or more EIs, although an EI containing control information "may or may not maintain an ILF".

    An EQ "is an elementary process made up of an input-output combination that results in data retrieval" from one or more ILFs or EIFs (my italics). "No ILF is maintained during processing".

However, there are some surprises. For example, an EO is defined as "An elementary process that generates data or control information sent outside the application boundary." This seems to permit output messages to contain data that originates (either by direct retrieval or calculation) from neither an ILF nor an EIF! This appears to be an error of omission; one assumes that there is an implied relationship between an EO and one or more ILFs and/or EIFs from which it extracts information, whether or not that data is manipulated, edited and formatted.

The ISO standard requires every FSM to define the relationships, if any, between the BLC types. This is trivial in Mark II FPA, as there is only one BLC type. However, in IFPUG FPA, the set of defined relationships seems incomplete (as illustrated above).

 

Constituent Parts of Base Logical Components

The ISO standard requires an FSM to assign appropriate numeric values to each BLC. However, it does not specify how the FSM is to derive such values. Both IFPUG and Mark II FPA fulfil this requirement. In both cases, numeric values are assigned after identifying and evaluating the constituent parts from which the BLC types are composed.

The base logical component types are composed of various parts. Each of these parts is visible within the functional requirements.

Tables 2 and 3 show how each constituent part is realised in the external world for Mark II and IFPUG FPA respectively.

Table 2: Constituent Parts of Mark II Base Logical Components

| Base Logical Component | Constituent Part | Functional Requirement |
|------------------------|------------------|------------------------|
| Logical Transaction    | Input Message    | an Event, User Query or Timed Trigger |
|                        | Output Message   | a Report, Display or Response |
|                        | Process Part     | references to retained data, expressed logically as Entity Types in third normal form |

Table 3: Constituent Parts of IFPUG Base Logical Components

| Base Logical Component  | Constituent Part | Functional Requirement |
|-------------------------|------------------|------------------------|
| External Input          | Input Message    | an Event or Timed Trigger, plus references to retained data expressed logically |
| External Output         | Output Message   | a Report or Display, plus references to retained data expressed logically |
| External Query          | an Input/Output Pair | a User Query and its corresponding Response, plus references to retained data expressed logically |
| Internal Logical File   | Retained Data maintained by the application | related groups of relational tables (Entity Types) contained within the Logical Data Model of the application |
| External Interface File | Retained Data maintained by some other application | related groups of relational tables (Entity Types) contained within the Logical Data Model of the other application |

The logical expression of business data is common to both techniques. Regardless of the terminology used, what is always of concern is the set of business requirements.

The IFPUG Counting Practices Manual indicates that external inputs and external outputs may consist of "data or control information". The term "control information" refers to data that is not physically input by the human user, but rather originates from some automatic aspect of the application: for example, system dates and times taken from the system clock, or an automatic "notification that an employee has completed 12 months on a job assignment and that a performance review is required". Control information is defined as "data used by the application to ensure compliance with business function requirements specified by the user". Hence, this control information would still need to exist in the absence of a computerised solution; it is equivalent to logical data in a logical message.

A logical data model, expressed as a set of relational tables (i.e. entity types), is one of the most widely supported techniques used in the information technology (IT) industry. It is also one of the few software engineering techniques with a solid mathematical underpinning, in the form of Dr. Edgar Codd's relational algebra.

Hence, it is logical to describe user requirements for retained business data in terms of a logical data model. Often it is presented graphically, as an entity relationship diagram. Such a model comprises data tables in normalised form, usually third normal form (3NF). This gives the optimum logical data structure, ensuring that each data attribute occurs the minimum necessary number of times.

Figure 1 illustrates the various components recognised by the two FPA techniques.

Figure 1: Graphical Comparison of the IFPUG and Mark II Paradigms

It must be emphasised, however, that the diagram is not the model! Nor is the problem dependent on the techniques used to solve it. The entity types and the relationships between them exist in the external world, whether or not we choose to describe them, and regardless of which techniques or notations we use to discuss and communicate about the situation.

In Mark II FPA, each entity type is treated as independent and references to entity types are counted per logical transaction.

In IFPUG FPA, entity types are grouped to form internal logical files (ILF), if within the application boundary, or external interface files (EIF) if outwith the application boundary. References to entity types then are counted as file type references (FTR) per external input (EI), external output (EO) or external query (EQ).

 

The IFPUG CPM v.4.0 specifically states that there is not necessarily a one-to-one relationship between third normal form (3NF) entity types and Internal Logical Files and External Interface Files (page 513). However, the mapping of ILFs & EIFs to 3NF entity types remains somewhat unclear.

For instance, where a many-to-many relationship exists between two entity types, 3NF requires this to be split into two one-to-many relationships supported by means of an associative entity (sometimes called a link entity). The key of such an associative entity consists of the unique identifiers of the "parent" entity types, concatenated together. Sometimes such an associative entity contains nothing but the key, but in other situations it contains data attributes that describe or qualify the relationship. Typically, the many-to-many relationship modelled by the associative entity is optional; that is, a "parent" entity on one side of the relationship may be related to no instances of the "parent" entity on the other side of the relationship.

From the IFPUG rules for ILF & EIF recognition, it is clear that the associative entity should contribute to the count in some way, as it defines a user-recognisable relationship.

However, IFPUG CASE STUDY 1 IFPUG94-2 contains the following (on page 815).

For a relational table:

    Does it have non-key attributes? (Must be YES to count as an ILF.)

    Is the table there for user needs, not implementation decisions?

This implies that, where an associative entity consists of only the key attributes, it is treated as an additional Record Element Type of the "parent" entities (whether of one or both is not clear). However, where the associative entity contains non-key data, it is treated as a distinct Internal Logical File! This distinction seems unnecessarily perverse, and makes accurate identification of the ILFs & EIFs early in the project lifecycle almost impossible, as detailed knowledge of the data content of each table is required before it can be classified.
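
To make the distinction concrete, consider the following sketch. The rule it encodes is the case-study rule quoted above; the entity and attribute names are hypothetical, invented purely for illustration.

```python
# Classify a 3NF associative (link) entity under the IFPUG case-study rule
# quoted above. Entity and attribute names here are hypothetical.

def classify_associative_entity(non_key_attributes: list) -> str:
    if non_key_attributes:
        # Non-key data present: the table is counted as a distinct ILF.
        return "distinct ILF"
    # Key-only: treated merely as an additional Record Element Type (RET)
    # of the "parent" entity's ILF.
    return "additional RET of parent ILF"

# A key-only link table between, say, Employee and Project:
print(classify_associative_entity([]))                      # additional RET of parent ILF
# The same relationship, qualified by descriptive attributes:
print(classify_associative_entity(["role", "start_date"]))  # distinct ILF
```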

 

Relationship Between the BLC Types and the Boundary

Both techniques, as required by the standard, indicate that there are clear relationships between the types of base logical component and the boundary of the application under consideration. These relationships are tabulated in Tables 4 and 5.

Table 4: Mark II Base Logical Components

| Base Logical Component | Constituent Part | Relationship with Boundary |
|------------------------|------------------|----------------------------|
| Logical Transaction    | Input Message    | must cross the boundary, incoming |
|                        | Output Message   | must cross the boundary, outgoing |
|                        | Process Part     | must be wholly retained within the boundary |

Table 5: IFPUG Base Logical Components

| Base Logical Component  | Constituent Part | Relationship with Boundary |
|-------------------------|------------------|----------------------------|
| External Input          | Input Message    | must cross the boundary, incoming |
| External Output         | Output Message   | must cross the boundary, outgoing |
| External Query          | an Input/Output Pair | the input part must cross the boundary, incoming; the output part must cross the boundary, outgoing |
| Internal Logical File   | Retained Data maintained by the application | must be wholly retained within the boundary |
| External Interface File | Retained Data maintained by some other application | must be wholly retained within the boundary of the owning application |

 

Deriving the Base Counts

Both techniques couch their rules for evaluating the size of their respective base logical components in terms of specified base counts. These are counts of specification objects from which the constituent parts of the BLC types are composed. In both techniques, counts are made of the number of data attribute types in the message parts of the BLCs and the number of references to data retained within the application or related peer applications.

Although the techniques use different terminology, the definitions used are similar. Tables 6 and 7 paraphrase the definitions used in the counting practices manuals of the two techniques.

Table 6: Definitions of Counted Elements in Mark II FPA

| Counted Element  | Definition |
|------------------|------------|
| Data Item        | "an item of business information that is indivisible for the purposes of the transaction being sized and that is associated with an input or output data flow. For the purposes of FPA a data item is synonymous with a field type or a data element type." |
| Entity Reference | a reference, that is, a create, read, update or delete access, made to an entity type, where an entity type is "anything in the real world about which the system is required to store information" and is "the subject of a relation in third normal form" |

 

Table 7: Definitions of Counted Elements in IFPUG FPA

| Counted Element     | Definition |
|---------------------|------------|
| Data Element Type   | "a unique user recognisable, non-recursive field" |
| File Type Reference | "an internal logical file read or maintained" or "an external interface file read" |
| Record Element Type | "a user recognisable subgroup of data elements within an Internal Logical File or an External Interface File" |

Note that in Mark II FPA all counts result from the transactional inputs and outputs, and from the necessary references to retained data made during the course of those transactions. In IFPUG FPA, by contrast, internal logical files and external interface files are identified, and their constituent parts (i.e. data element types per record element type) are expressly counted as contributing delivered functionality in their own right.

Also, in Mark II, one entity reference is counted for each entity type accessed during the course of a logical transaction. In IFPUG FPA, one file type reference is counted for each ILF or EIF accessed during the course of an external input, output or query. As both ILFs and EIFs are groups of logically dependent entity types, this practice results in lower values being credited by IFPUG FPA for the contribution made to the size of a transaction by those parts of it that access the data retained in the application. (Of course, this bias against data accesses is balanced by separate counts for the ILFs and EIFs).
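
For example (a hypothetical transaction; the entity names and file grouping are invented for illustration):

```python
# A transaction reads entity types A, B and C, where the IFPUG grouping
# places A and B in one internal logical file and C in another.
entities_referenced = ["A", "B", "C"]
file_of = {"A": "ILF-1", "B": "ILF-1", "C": "ILF-2"}

mk2_entity_references = len(entities_referenced)                   # Mark II counts 3
ifpug_file_type_refs = len({file_of[e] for e in entities_referenced})  # IFPUG counts 2

print(mk2_entity_references, ifpug_file_type_refs)  # 3 2
```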

Note that both techniques treat sub-types of an entity type in much the same way.

In Mark II FPA "Some transactions may differentiate between sub-types of a primary entity and perform different operations on them". "If, within a single transaction, a sub-entity is required to be handled differently from other sub-entities of the same entity" each separately accessed sub-entity is counted as a distinct entity reference.

In IFPUG FPA, every "subgroup of data elements within an Internal Logical File or an External Interface File" is classified as either a mandatory subgroup or an optional subgroup. This seems to be equivalent to entity subtypes that have complete coverage and those that have partial coverage, respectively. In either case, "Count a Record Element Type for each optional or mandatory subgroup of the Internal Logical File or the External Interface File."

To all intents and purposes, and unremarkably, the specification objects that must be identified are the same for both techniques. They are as follows.

    Input messages that enter the application boundary, and the data attribute types of which they are composed.

    Output messages that leave the application boundary, and the data attribute types of which they are composed.

    Error messages that leave the application boundary, and the data attribute types of which they are composed.

    Entity types in third normal form.

    Additionally, for IFPUG FPA, the data attribute types stored on the entity types.

The main differences between the two techniques arise from how the base counts are constructed, not from what is counted.

To determine the base counts, the practitioner must perform the following steps.

  1. Identify the respective base logical components.
  2. Count the numbers of data attribute types (i.e. data items or data element types) in the associated messages. Error messages contribute to the data attribute counts but are treated differently by the two techniques. In IFPUG FPA, error message attributes are treated as attributes of the input message; in Mark II FPA, they are treated as additional attributes of the output message.
  3. Count the number of accesses to the retained data. The two techniques again result in different values. In IFPUG FPA, one is counted for each reference to a group of entity types that form an ILF or an EIF. Conversely, in Mark II FPA, one is counted for every entity type referenced.

IFPUG FPA practitioners must perform these additional steps (a sketch follows the list).

  1. Count the number of entity types (i.e. the record element types) in the groups of inter-dependent entity types that form each internal logical file and each external interface file.
  2. Count the number of data attribute types stored on the entity types (i.e. the record element types).
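
A minimal sketch of these additional steps, using a hypothetical internal logical file composed of two entity types (the names and attributes are invented for illustration):

```python
# A hypothetical ILF: a group of inter-dependent 3NF entity types, each
# mapped to a record element type (RET) with its stored attributes (DETs).
ilf = {
    "Employee":      ["employee_id", "name", "grade"],
    "JobAssignment": ["employee_id", "project_id", "start_date"],
}

record_element_types = len(ilf)                                 # RETs: 2
data_element_types = sum(len(attrs) for attrs in ilf.values())  # DETs: 6

print(record_element_types, data_element_types)
```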

There are some further minor variations. For instance, Mark II uses the concept of the "system entity". This distinguishes data tables that contain only implementation-dependent information from those that contain business information that would exist even in the absence of a computer system. Mark II then limits the count of references to the "system entity" to zero or one per logical transaction. IFPUG FPA does not count implementation-dependent data at all.

Tables 8 and 9 illustrate the relevant base counts for Mark II and IFPUG FPA respectively.

In Mark II, these base counts are used directly in the calculation of the functional size index, expressed in Mark II unadjusted function points.

In IFPUG FPA, these base counts are used to determine the magnitude of each base logical component. Using tables provided by the IFPUG Counting Practices Manual, the magnitude of each BLC is assessed as small, medium or large, based on the respective values of the base counts of data attributes and references to entity types, or the number of record element types in the case of ILFs and EIFs.

In fact, the IFPUG Counting Practices Manual uses the term complexity, with ratings of low, medium and high, rather than the term magnitude. However, this is confusing: the assessment has nothing to do with the simplicity, intricacy or processing complications of the BLC (which would introduce implementation issues into the evaluation); it is made simply on the relative size of the numbers involved. Hence, the term magnitude is preferred here, to avoid confusion with the Mark II understanding of the term technical complexity.
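
The lookup itself is a simple banded table. The sketch below follows the style of the CPM matrices for an external output, but note that the band thresholds shown are an assumption (recalled from the widely published CPM tables, not quoted from this document):

```python
# Magnitude lookup for an external output. The DET/FTR band thresholds
# below are an assumption, not taken from this document.
def eo_magnitude(data_element_types: int, file_type_references: int) -> str:
    det_band = 0 if data_element_types <= 5 else (1 if data_element_types <= 19 else 2)
    ftr_band = 0 if file_type_references <= 1 else (1 if file_type_references <= 3 else 2)
    matrix = [                      # rows: FTR band; columns: DET band
        ["small",  "small",  "medium"],
        ["small",  "medium", "large"],
        ["medium", "large",  "large"],
    ]
    return matrix[ftr_band][det_band]

print(eo_magnitude(12, 2))  # medium
```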

 

Table 8: Mark II Base Counts

| Base Logical Component | Constituent Part | Base Count |
|------------------------|------------------|------------|
| Logical Transaction    | Input Message    | Count of the data attribute types in the message |
|                        | Output Message   | Count of the data attribute types in the message |
|                        | Process Part     | Count of the entity references made |

Table 9: IFPUG Base Counts

| Base Logical Component  | Constituent Part | Base Count |
|-------------------------|------------------|------------|
| External Input          | Input Message    | Count of the data attribute types in the message, and a separate count of the file type references made |
| External Output         | Output Message   | Count of the data attribute types in the message, and a separate count of the file type references made |
| External Query          | an Input/Output Pair | Separate counts of the data attribute types in the pair of messages, and separate counts of the file type references made |
| Internal Logical File   | Retained Data maintained by the application | Count of the record element types (entity types), plus a separate count of the number of data element types in each record |
| External Interface File | Retained Data maintained by some other application | Count of the record element types (entity types), plus a separate count of the number of data element types in each record |

 

Weighting - Correcting for the Different Kinds of Contribution

Input messages, output messages and references to retained data each make their own contributions to an application. Each is a necessary part of a system and requires the existence of the other parts. However, the contribution each makes is of a different kind.

Input messages must acquire and validate incoming data. Output messages must format and write data across the boundary. References to retained data require operational reads, or create, update or delete operations.

Hence, the base counts are counts of unlike things. In order to combine them, to derive a single numeric value for the functional size index, we must contrive to normalise the base counts to use a single unit. This is achieved by using a suitable system of relative weights.

Note that the term "weight" is used in the IFPUG document "Function Points as an Asset" IFPUG92 page 8, in the definition of the Work Product and Work Effort Metrics.

In Mark II FPA, three weights are used: one for Input Types (Wi), one for Output Types (Wo) and one for Entity References (We). The values for these weights are usually referred to as the industry-average weights (see Table 10) and are chosen to add up to 2.5. This contrivance is intended to maintain correspondence with IFPUG function points. The weighting values have been calculated, and are periodically validated, from historic data of numerous projects in a number of large development organisations. The data from which the average weights are calculated is published, and the values may be calibrated.

 

Table 10: Weights used in the Mark II Counting Practices Manual v. 1.0

| Base Logical Component | Base Count                  | Weight in Mark II Function Points |
|------------------------|-----------------------------|-----------------------------------|
| Logical Transaction    | #IT Input (attribute) Type  | 0.58 |
|                        | #OT Output (attribute) Type | 0.26 |
|                        | #ER Entity Reference        | 1.66 |
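
The functional size index then follows directly: for each logical transaction, multiply its three base counts by the corresponding weights and sum over the whole catalogue. A minimal sketch using the industry-average weights from Table 10 (the transaction counts are hypothetical):

```python
# Mark II functional size index: weighted sum over the transaction catalogue.
W_INPUT, W_OUTPUT, W_ENTITY = 0.58, 0.26, 1.66   # industry-average weights

# Hypothetical catalogue: (input attribute types, output attribute types,
# entity references) for each logical transaction.
transactions = [(5, 10, 3), (2, 8, 1)]

size = sum(W_INPUT * i + W_OUTPUT * o + W_ENTITY * e
           for i, o, e in transactions)
print(f"{size:.2f} Mark II unadjusted function points")  # 15.38
```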

In IFPUG FPA, the weighting system is more complicated due to the larger number of base logical component types. Each type of base logical component is assigned a weighted number of function points, depending upon its BLC Type and upon its magnitude (small, medium or large). For instance, a small external output is assigned 4 IFPUG function points, while medium and large external outputs are assigned 5 and 7 IFPUG function points respectively.

Table 11 provides the full list of 15 weights used in IFPUG FPA.

 

Table 11: Weights used in the IFPUG Counting Practices Manual v. 4.0

| Base Logical Component  | Magnitude | Weight in IFPUG Function Points |
|-------------------------|-----------|---------------------------------|
| External Input          | small     | 3  |
|                         | medium    | 4  |
|                         | large     | 6  |
| External Output         | small     | 4  |
|                         | medium    | 5  |
|                         | large     | 7  |
| External Query          | small     | 3  |
|                         | medium    | 4  |
|                         | large     | 6  |
| Internal Logical File   | small     | 7  |
|                         | medium    | 10 |
|                         | large     | 15 |
| External Interface File | small     | 5  |
|                         | medium    | 7  |
|                         | large     | 10 |

Interestingly, the IFPUG Counting Practices Manual gives no justification for the values used in this weighting system.

In both the IFPUG and Mark II techniques, the summed total of the respective weighted counts gives the functional size of the application, expressed in function points. This value is termed the functional size index.
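
For the IFPUG case, the summation is a table lookup followed by a weighted sum. A minimal sketch using the weights from Table 11 (the tally of BLC instances is hypothetical):

```python
# IFPUG functional size index: weights from Table 11, indexed by
# BLC type and magnitude.
WEIGHTS = {
    "EI":  {"small": 3, "medium": 4,  "large": 6},
    "EO":  {"small": 4, "medium": 5,  "large": 7},
    "EQ":  {"small": 3, "medium": 4,  "large": 6},
    "ILF": {"small": 7, "medium": 10, "large": 15},
    "EIF": {"small": 5, "medium": 7,  "large": 10},
}

# Hypothetical application: number of BLC instances by type and magnitude.
counts = {("EI", "small"): 4, ("EO", "medium"): 2, ("ILF", "small"): 3}

fsi = sum(WEIGHTS[blc][mag] * n for (blc, mag), n in counts.items())
print(fsi, "IFPUG unadjusted function points")  # 43
```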

 

Adjusting for the Qualitative Requirements

The techniques use similar approaches to account for the qualitative requirements.

In both cases, a list of non-functional system characteristics are evaluated on a scale of zero to five. A value of zero means "no influence" and a value of five means "strong influence throughout" every stage of the application development.

The techniques differ in the number of system characteristics evaluated: IFPUG uses 14; Mark II uses the same 14 but adds another five. Mark II also permits practitioners to add further system characteristics to the list (but few people, if any, do so).

Once the degree of influence of the chosen set of system characteristics has been evaluated, they are added to give the total degree of influence of the entire set. This value is then used in a calculation that adjusts the functional size index to give a new value that supposedly represents the total size of the functional and qualitative requirements combined.

The calculation differs between IFPUG and Mark II FPA. In IFPUG, the adjustment may change the functional size index up or down by a maximum of 35%. In Mark II FPA the adjustment may increase the functional size index by a maximum of 12.5% and may decrease it by a maximum of 35%.
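
Both limits follow from the published adjustment formulas, sketched below. The formulas are the standard ones (IFPUG's value adjustment factor and Mark II's technical complexity adjustment); the degree-of-influence totals passed in are hypothetical.

```python
# IFPUG: 14 characteristics rated 0-5, so the total degree of influence
# (TDI) ranges 0..70 and the factor ranges 0.65..1.35, i.e. -35%..+35%.
def ifpug_vaf(tdi: int) -> float:
    return 0.65 + 0.01 * tdi

# Mark II: 19 characteristics rated 0-5, so TDI ranges 0..95 and the
# factor ranges 0.65..1.125, i.e. -35%..+12.5%.
def mk2_tca(tdi: int) -> float:
    return 0.65 + 0.005 * tdi

print(ifpug_vaf(70), mk2_tca(95))  # 1.35 1.125
```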

In both cases, the method of adjustment for the qualitative requirements is largely discredited as being unrealistic. Many practitioners ignore the adjustment and work using the functional size index alone. In this case, some other technique is used to account for the qualitative requirements.

The ISO Draft Standard for Functional Size Measurement Methods seems deliberately to ignore the qualitative requirements as a contributor to the "functional size" of an application. A separate standard, the ISO 9126 mentioned previously, does advocate measurable specification and evaluation of qualitative requirements. This seems to endorse the practice of treating the "functional size" and the "non-functional qualities" of an application as distinct, separately specifiable and deliverable attributes.

 
