The Importance of the Size of Software Requirements

by: Grant Rule

Presented at the NASSCOM Conference, Hotel Oberoi Towers, Mumbai, India, 7th-10th February 2001

Abstract: Size measurement methods have played a key role in helping to solve real-world problems of estimating, supplier/customer disputes, performance improvement and the management of outsourced contracts. This paper discusses the history of functional size measurement and the status of the COSMIC-FFP method.

Keywords: Software economics, functional size, estimating, requirements, project management, contract management, outsourcing, scope control, testing

Introduction

The predictable acquisition and management of software projects is of considerable economic importance

Efficient and effective management of software projects is of tremendous economic importance to all organisations, whether customers or suppliers, and whether software development and maintenance is in-house or outsourced. Without the predictable management of projects, it becomes impossible to manage long-term contracts and partnerships. Customers, in order to manage their own businesses, need software acquisition routes that are reliable, producing predictable results within budget, on time and of good quality. Supplier organisations must be able to react flexibly to changing demands, while retaining the ability to account for the impact of change so as to justify the necessary recompense.

But many projects are doomed before the team starts work

Unfortunately, the software community is dogged by a history of poorly managed projects. But on closer examination, many projects perceived as failures were never viable; the teams concerned were always on a 'death march'. The failure is not one of software development, but rather a collective failure by the customer and supplier to agree a set of requirements that can be delivered with the available resources, on schedule, for acceptable cost.

Management control is achieved only via feedback

Estimating and managing a project's effort, staffing, schedule, cost, risks, quality and other factors is crucial. Yet all these are measures of input. Process management is enabled by feedback loops, so it is also necessary to measure the output from the software process. This starts with measurement of the size of the requirements. To solve a problem, it is necessary to measure its size, in order to assess the various solution options, calculate the relative costs and compare the benefits, before finally committing to one preferred approach.

A Process Maturity Profile of the Software Community

Poor project management is a cause of low maturity

Every few months the Software Engineering Institute (SEI) publishes a profile of the global software community, based on process appraisals reported to their database. Results over the past few years have consistently identified project management as a prime opportunity for improvement (see Table 1).
 

While Software Project Planning has recently dropped off the ‘least frequently satisfied’ list, Integrated Software Management continues to appear as a challenge for many organisations.

The SEI Profiles report only on those organisations that are sufficiently self-aware to bother with capability maturity appraisals. However, it seems likely, based on practical experience and anecdotal evidence, that there are many more organisations whose management of software projects leaves much to be desired.

 

May 1998
• Software Quality Assurance and Software Project Planning are the least frequently satisfied KPAs among organizations assessed at ML1
• Integrated Software Management, Organization Process Definition and Training Program are the least frequently satisfied KPAs among organizations assessed at ML2

August 1999
• Software Quality Assurance is the least frequently satisfied ML2 KPA among organizations assessed at ML1
• Integrated Software Management, Training Program and Organization Process Definition are the least frequently satisfied ML3 KPAs among organizations assessed at ML2

March 2000
• Software Quality Assurance is the least frequently satisfied ML2 KPA among organizations assessed at ML1
• Integrated Software Management, Training Program and Organization Process Definition are the least frequently satisfied ML3 KPAs among organizations assessed at ML2

August 2000
• Software Quality Assurance is the least frequently satisfied ML2 KPA among organizations assessed at ML1
• Integrated Software Management, Training Program and Organization Process Definition are the least frequently satisfied ML3 KPAs among organizations assessed at ML2


Table 1: Results from SEI Process Maturity Profile of the Software Community from May’98 to August’00

For instance, the Standish Group's CHAOS Study found that only 16% of projects completed within budget and on time, while 53% were significantly over budget and late. The remaining 31% of projects were cancelled before completion… which is possibly better than letting the 'death march' continue, but is still an indicator of poor management and an economic failure.

The Standish Group also found that the average 'growth' of the actual costs compared to the predicted costs was over 89%… that is, the estimates predicted only just over half of the actual costs!
 

The Software Productivity Centre of Canada reported a recent market study of 1,000 projects that identified "inadequate planning" as a key risk to those projects. They noted the top five planning weaknesses that threaten projects with derailment or delay as:

  • Inability to scope the project to meet schedule constraints
  • Over-estimating team productivity
  • Lack of continuous planning in response to scope changes
  • Poor contingency preparation
  • Resource and skill availability

Subcontract Management is also an area of concern

Additionally, the SEI points out that "Software Subcontract Management (SSM) is not applicable/not rated in many assessments" and suggests that this be taken into account in any interpretation of the results. In fact, SSM is rated for only about 38% of ML1 organisations and, of those, it seems only about 6% achieve a 'Fully Satisfied' rating.

This concurs with GIFPA's own experience in practice. We are often called in as an independent third party to help organisations resolve contract management problems. Typically these arise from the customer's over-ambitious objectives, combined with the supplier's over-eagerness to win business and lack of understanding of the capability of its own software processes, all compounded by both sides employing contract negotiators who may know a lot about contract law and accounting, but little about software estimation, development performance, requirements and project management, or process improvement.

To our amazement, we often find that the people with the best understanding of project performance (e.g. members of the Process Quality & Management Group, Software Engineering Process Group, Software Project/Programme Office and measurement specialists) are explicitly excluded from contributing to the negotiation and establishment of a contract. To me, this is almost a definition of the term 'shooting yourself in the foot'!

When software acquisition is contracted out, poor project management affects both customer and supplier

Even where an organisation has achieved higher maturity levels and has resolved some of these issues, it is often the case that the organisations with which they form partnerships are at comparatively lower maturity levels. After all, high maturity organisations continue to be a minority (see Figure 1).

In the situation where software acquisition is contracted out, there is a better than even chance that one or both parties to the contract will be a low-maturity organisation, in which case both parties are liable to suffer from the problems of poor project management, as the sins of the one are visited upon the other.

 


Figure 1: Profile of the Software Community August 2000

The SEI's regular reports seem to imply that offshore organisations serving the USA are gradually pulling ahead of US organisations in the process maturity stakes.

However, it remains likely that, where high-maturity organisations in India and elsewhere provide software acquisition services to customers in the USA (and elsewhere), the maturity of the customer may prove lower than that of the supplier. This places on the supplier the onus of managing the contractual relationship and related projects in a well-controlled and measured way.

This is commercial self-defence. Experience suggests that where a high-maturity and a low-maturity organisation work together, the lower-maturity organisation tends to 'drag down' the capability of the higher-maturity organisation, unless explicit steps are taken to sustain that high maturity.

The CMM identifies 'size' as crucial to project management

The introduction to the Software Project Planning key process area in the 'Key Practices of the Capability Maturity Model, Version 1.1' (CMM) states that the 'software planning process includes steps to estimate the size of the software work products and the resources needed'. Similarly, the introduction to the Integrated Software Management key process area states that 'The management of the software project's size, effort, cost, schedule, staffing, and other resources is tied to the tasks of the project's defined software process.' (see Table 2). Note: the emphases are mine.

If we understand the CMM to represent some form of 'best practice', then it is clear that it is critical for organisations to understand the 'size' of the problems they wish to solve and of the software products that are the subject of so much intellectual effort.

 

Software Project Planning

The software planning begins with a statement of the work to be performed and other constraints and goals… established by the practices of the Requirements Management key process area. The software planning process includes steps to estimate the size of the software work products and the resources needed, produce a schedule, identify and assess software risks, and negotiate commitments…

Integrated Software Management

Integrated Software Management involves developing the project's defined software process and managing the software project… [so that the] plan… describes how the activities of the project's defined software process will be implemented and managed. The management of the software project's size, effort, cost, schedule, staffing, and other resources is tied to the tasks of the project's defined software process.


Table 2: The SPP and ISM key process areas of the CMM v1.1 emphasise the relationship between size and project activity

The CMMI continues to emphasise the need for a quantified approach to projects

Even in the latest version, the integrated CMM (CMMI), the introduction to the new Integrated Project Management key process area says that 'Managing the project's effort, cost, schedule, staffing, risks, and other factors is tied to the tasks… described in the project plan…' (see Table 3).

 

Integrated Project Management

The purpose of Integrated Project Management is to establish and manage the project and the involvement of the relevant stakeholders according to an integrated and defined process… the establishment of a shared vision for the project and a team structure… [to] carry out the objectives of the project…

Ensuring that the relevant stakeholders associated with the project coordinate their efforts in a timely manner: (1) to address product and product component requirements, plans, objectives, issues, and risks; (2) to make their commitments; and (3) to identify, track, and resolve issues…

Managing the project’s effort, cost, schedule, staffing, risks, and other factors is tied to the tasks… described in the project plan…


Table 3: The IPM key process area of the CMMI also emphasises the relationship between the requirements, plans and tasks

Software size is significant to the global economy

Failed projects have an enormous economic impact

The development and maintenance of software now accounts for perhaps one percent of the world's economy. If reports that only 16% of software projects complete on time and within budget are accurate, the economic impact of the remaining 84% of 'failed' projects is enormous.

The software community often does not measure its outputs

Yet there are no commonly accepted methods of measuring the output of this industry. We may measure how many people are employed in the software community and the costs expended, but not how much they produce. It's as if the car industry knew the cost of the materials and energy used in the cars it produced, but couldn't actually count the number of cars.
Users pay to have their requirements fulfilled; the code is of little concern

Of course, the software industry could measure the number of lines of program code it produces. But as there are hundreds of programming languages, each with its own expressive power, we have no common industry standard for such a unit of measure. Especially when, increasingly, a single software application employs several different languages and scripts.

Also of course, no software user regards mere code as the product for which they pay so much; to a user, the deliverable is the satisfaction of their business requirements.

Measures of functional size express the quantity of information processing delivered

What the software community ideally needs is a measure of the quantity of information processing functionality the customer requires of the software, independent of the technology used and the people who produce it. This is what ISO/IEC 14143 calls a measure of 'functional size'. Such measures are of crucial importance, not only to project management, but as an enabler of a wide variety of critical economic decisions.

‘Size’ is relevant from requirements definition to final delivery

 

Software suppliers are usually under pressure to estimate development effort (and costs) early in the life of a project. Often there is a need to make commitments based on a partial and inadequate understanding of the customer's requirements. Any technique that helps to remove ambiguity, improves requirements definition and enables customers and suppliers to agree terms and control contract scope and progress ought to be welcomed by the software community (see Figure 2).

Functional size measurement methods are such a family of techniques. The family includes Albrecht's 'Function Point Analysis' (FPA), Symons' 'MkII Function Point Analysis', Boeing's '3D Function Points' (3DFP) and, the latest and most widely applicable, 'COSMIC Full Function Points' (COSMIC-FFP).

 


Figure 2: The various uses of size measurement – a mind map

Improve Requirements Definition

These techniques help engineers to produce software requirements that are measurable and can be sized. They reduce ambiguity and improve the rigour with which requirements are documented. Thus they can be used as the basis for agreement between customer and supplier.
  The Use Case technique is one of the most popular modern methods of documenting requirements. However, in practice there seems to be much disparity in how different people understand use cases. Various practitioners, including IBM, report finding up to 32 different interpretations.
  Furthermore, GIFPA has observed that there is a tendency for developers to define the ‘easy’ use cases first and in great detail, while documenting more ‘difficult’ use cases very briefly, putting them aside ‘until later’.

 

Figure 3: Analysis quality of Use Cases cf. relative Functional Size

The result, in a number of projects we have observed, is that 80% of the project budget and schedule is expended on 'easy' use cases (which may represent only 20% of the functionality). Then, as the project deadline approaches, the project team realises that the 20% of use cases that remain, the 'difficult' ones, represent maybe 80% of the functional size (see Figure 3). But by the time this realisation dawns, there is insufficient budget and time remaining to complete the required work. Hence the project is late and over budget, the customer is dissatisfied, and someone has to find the unbudgeted funds.
 

During 2000, GIFPA performed a Functional Sizing & Estimating Study for one supplier that had been working on Phase-1 of a multi-phase project for some 12 months. Phase-1 had already overrun by several months, making management wonder about the commitment represented by subsequent phases. The study showed that the project was some five times larger than the supplier had originally understood. After discussions with the customer, the supplier withdrew from the project at a cost to them of £5m (US$7.5m), leaving the customer with nothing to show for nearly 18 months of work.

Early measurement of the requirements would have exposed the real size of the project and enabled better decision making. Simply applying measurement to the requirements identified various issues of ambiguity and highlighted the amount of uncontrolled scope-creep that was being experienced.

Estimate project effort, schedule and costs based on the requirements

When a customer wants a new software application developed by a supplier, the customer needs an estimate of the development cost as the requirements evolve, to ensure the optimum cost/benefit investment decision. In order to determine a price, the supplier has to estimate the development effort, staffing and hence the resulting costs, starting only from the size of the requirements.

Such estimates are needed early. But clearly, the earlier the estimate is made, the more uncertain it is. Hence estimation needs to be repeated, improving the precision as understanding of the requirements becomes more detailed.

For example, a supplier organisation performing a bespoke project in Denmark called upon GIFPA to assist with sizing and estimating the requirements. The supplier had just delivered Phase-1 of the project and completed User Acceptance Testing of the product with their customer. However, this success was achieved only at the expense of many late nights and the provision of additional resources to the project team.

The results of GIFPA's initial study of Phase-1 indicated that the supplier was being asked to deliver in some 7 months functionality that had previously taken between 18 and 24 months to deliver; that is, about 3 times faster than previous projects of similar size performed by the customer's in-house staff. Yet the supplier was being castigated by the customer for their apparent "inflexibility" and "poor productivity".

Both the supplier and customer organisations recognised from their experience of this first phase that better understanding and control of the requirements were needed. Subsequent phases were to apply a systematic estimation process, and the plans and commitments were to be based on those requirements.
 

The requirements of each phase were documented as Use Cases (and were found to suffer from the common quality failings noted above). These use cases were analysed and sized using functional size analysis. The size information, combined with productivity figures from earlier phases and data from other projects using similar technology, was used to produce an early estimate of effort, schedule and cost, along with a staffing profile for the project.
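To make the arithmetic concrete, here is a minimal sketch in Python of this kind of size-based estimate. All figures used (the delivery rate in work-hours per cfsu, the hourly rate, the hours per person-month and the team size) are illustrative placeholders, not data from the case described.

```python
# Sketch: derive effort, cost and elapsed schedule from functional size,
# given a historical Project Delivery Rate (PDR). All numbers illustrative.

def estimate_phase(size_cfsu: float,
                   pdr_wh_per_cfsu: float,
                   hourly_rate_usd: float,
                   team_size: int,
                   hours_per_person_month: float = 130.0):
    effort_wh = size_cfsu * pdr_wh_per_cfsu            # total work-hours
    cost_usd = effort_wh * hourly_rate_usd             # loaded cost
    person_months = effort_wh / hours_per_person_month
    schedule_months = person_months / team_size        # naive: ignores ramp-up
    return {"effort_wh": round(effort_wh),
            "cost_usd": round(cost_usd),
            "schedule_months": round(schedule_months, 1)}

# Example: 1000 cfsu at a PDR of 10 wh/cfsu, US$60/hour, team of 10
print(estimate_phase(1000, 10.0, 60.0, 10))
# -> {'effort_wh': 10000, 'cost_usd': 600000, 'schedule_months': 7.7}
```

In practice the PDR and rates would come from the organisation's own completed projects, and the estimate would be repeated at the three stages shown in Table 4 as the requirements firm up.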

The sizing and estimating procedure is conducted three times during each phase…

 

When: 1. As early as possible, using tried and tested heuristics to estimate the functional size (the traditional 'back of the envelope' estimate).
Why: At this stage this provides not only the data essential to test feasibility and for project planning; it also identifies ambiguous use cases and those defined in insufficient detail.
Precision: usually around ±30-40%.

When: 2. After the information obtained during early functional size analysis has been used to resolve ambiguities and normalise the detail to a consistent level of granularity.
Why: This produces the 'main' estimate for the phase and establishes the 'size baseline' against which all subsequent Requests For Change can be tracked.
Precision: often better than ±15%.

When: 3. Finally, just as the developed software is ready for User Acceptance Testing (UAT).
Why: This final measurement of the functional size is used as a check to ensure that what was required has actually been developed; it also enables refinement of the plans for the UAT and implementation steps.
Precision: often better than ±5%.

Table 4: Make estimates at three stages in each project phase

In addition to three estimates based on the size of the bulk of the requirements, each distinct Request For Change is sized separately. This information is used in an analysis of the likely impact and cost/benefit, enabling the Change Control Board to decide whether to accept or reject the Request For Change.

Inter-counter consistency is very good for experienced analysts

During the study for Phase-1, three independent measures of functional size were made, representing a 'blind test' (see Table 5). Two were detailed measures, which agreed to within 2%, while the third, made using a 'fast estimating' heuristic expected to be accurate only to (say) ±25%, actually agreed with the detailed measures to within 11-13%.

Another 'blind test', conducted on a different, smaller job for a client in the Netherlands, produced two separate functional size measures: one made by the client's staff and one by a GIFPA consultant. The results agreed to within 1%.

Table 5: Inter-Counter Consistency can be excellent

In general, GIFPA finds that experienced software measurement specialists produce far more consistent results than software engineers whose main task is software development, but who have a little measurement training 'on the side'. The inter-counter consistency of specialists is better than ±5%; that of recently trained project staff is typically around ±23%.

    Evaluate and manage a project’s feasibility

    An organisation with a limited set of resources can achieve only so much in a given time… and resources are always limited, in terms of funds, skills, availability, time or some other dimension. What can be achieved is determined both by the quantity of resources and the capability of the organisation’s software process. Obviously then, the feasibility of a project is constrained by the capability, in terms of the quantity of output that can be produced for a given expenditure in a given time.

As we have noted earlier, it is typical to find that a customer organisation tends to be over-optimistic with respect to the requirements that can be satisfied with given resources. Suppliers, unless they want to gain a reputation for failed projects, are wise to test the feasibility of each project before making any commitment as to cost and duration.

Some suppliers use an approach where, based on past capability and performance, contractual limits are set on the size of project they are willing to attempt for a specific customer. Let us suppose that the customer and supplier agree that 1000 COSMIC functional size units (cfsu) of software will be produced for a given price, calculated via a US$/cfsu rate of, say, US$350/cfsu. That is, the price is US$350k. This agreement can be made at the outset, before the requirements are considered in detail.

    It will take a team of around 10 people about an elapsed year (12 calendar months) to complete a project of around 1000 cfsu.

      However, there is bound to be some uncertainty. So perhaps the supplier agrees a range with upper and lower limits for the software size they are prepared to develop for the agreed price. Let’s suppose the upper limit is set at 1100 cfsu and the lower one at 900 cfsu. The customer and supplier agree to adjust the price if the size of the requirements strays outside these limits.
If, when the requirements are examined in detail, they are found to be (say) 1200 cfsu, the supplier has a basis for negotiating the price upwards, to US$420k. However, not only will this project now require more resources (maybe 13 or 14 people), it is also likely to require more time, as larger teams communicate less efficiently. Attempting to compress the natural rhythm and schedule of a 1200 cfsu project into the duration of a 1000 cfsu project is liable to incur a large productivity penalty, as effort and duration are not tradeable in proportion.
      Therefore a better approach is for the supplier to regard the upper limit as a threshold to trigger re-negotiation and re-scoping of the project. By reducing the functional requirements back to the originally agreed size range, rejecting or postponing those with the lowest customer priority, the feasibility of the project is maintained.
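A minimal sketch of this threshold agreement in Python. The figures (a 1000 cfsu baseline, 900-1100 cfsu limits, US$350/cfsu) come from the worked example above; the function and its return format are our own illustration.

```python
# Sketch: decide the contractual action implied by a detailed size measurement,
# using the size-threshold agreement described in the text.

def check_scope(measured_cfsu: float,
                baseline_cfsu: float = 1000,
                lower: float = 900,
                upper: float = 1100,
                unit_price_usd: float = 350):
    if measured_cfsu > upper:
        # Preferred response: re-scope back into range by deferring the
        # lowest-priority requirements, rather than simply re-pricing.
        return ("re-scope or re-negotiate",
                f"defer {measured_cfsu - upper:.0f} cfsu, or re-price at "
                f"US${measured_cfsu * unit_price_usd:,.0f}")
    if measured_cfsu < lower:
        return ("re-negotiate",
                f"re-price at US${measured_cfsu * unit_price_usd:,.0f}")
    return ("proceed", f"price stands at US${baseline_cfsu * unit_price_usd:,.0f}")

print(check_scope(1200))  # detailed measurement came in at 1200 cfsu
# -> ('re-scope or re-negotiate', 'defer 200 cfsu, or re-price at US$420,000')
```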

    Control Project Scope Creep

    A similar approach can be applied to manage changes to the requirements that are proposed after development has commenced.

    If or rather when, after the price is agreed, the customer changes the requirements, the supplier needs to measure and control the effects of the Requests For Change. Measuring the size of each incremental change enables the supplier to determine whether the change can be accommodated within the existing budget, whether the change should be rejected, or whether there is a need to re-negotiate the agreed price and schedule. The use of size ‘thresholds’ is again recommended.

Unfortunately, Requests For Change (RFC) made late in a project's lifecycle tend to impose larger amounts of rework than earlier RFCs. Therefore it makes sense to dissuade people from proposing such late RFCs.

Some suppliers do this by imposing a surcharge on late RFCs, based on a combination of their functional size (US$/cfsu) and the time of submission, e.g. (Month_Number/10)^1.74 (for an example, see Table 6 and the sketch that follows it).

     


Month   Month/10   (Month/10)^1.74   Price Increment   New Unit Price   Multiple of Orig. Price
  1       0.1           0.018              $6               $356               1.02
  2       0.2           0.061             $21               $378               1.08
  3       0.3           0.123             $43               $421               1.20
  4       0.4           0.203             $71               $492               1.41
  5       0.5           0.299            $105               $597               1.70
  6       0.6           0.411            $144               $740               2.12
  7       0.7           0.538            $188               $929               2.65
  8       0.8           0.678            $237             $1,166               3.33
  9       0.9           0.832            $291             $1,457               4.16
 10       1.0           1.000            $350             $1,807               5.16
 11       1.1           1.180            $413             $2,221               6.34
 12       1.2           1.373            $481             $2,701               7.72

    Table 6: Surcharging Requests For Change submitted late in a project
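For the curious, the following short Python sketch reproduces Table 6. It assumes, as the table's figures imply, that the surcharge compounds: each month's increment of base_price × (month/10)^1.74 is added to the previous month's unit price. The US$350 base price is carried over from the earlier example.

```python
# Sketch: reproduce the late-RFC surcharge schedule of Table 6.
base_price = 350.0          # original unit price, US$/cfsu
price = base_price
print(f"{'Month':>5} {'(m/10)^1.74':>12} {'Increment':>10} {'New price':>10} {'Multiple':>9}")
for month in range(1, 13):
    factor = (month / 10) ** 1.74          # the surcharge weighting
    increment = base_price * factor        # this month's price increment
    price += increment                     # compounds month on month
    print(f"{month:>5} {factor:>12.3f} {increment:>10.0f} {price:>10.0f} "
          f"{price / base_price:>9.2f}")
```

Running this prints the same figures as Table 6 (to rounding): for example, month 12 ends at a unit price of about US$2,701, 7.72 times the original.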

Manage Long-term Outsourcing Contracts

In a long-term outsourcing relationship, the customer would like to benefit from measurably improving the price/performance of its software supplier over the life of the contract. Measures must be independent of the technology used, as this will surely evolve during the life of the contract.

In addition, the partners must establish a well-defined baseline as a reference against which 'improvements' can be evaluated, and put into place an infrastructure to collect, record and analyse the performance data, along with the necessary resources. Software engineering staff must be trained to use the techniques, and provision must be made in both project and organisational plans for the time and funding to enable the performance measurement to take place.
     

    Exactly which party provides the resources to facilitate institutionalised measurement and performance-tracking practices is open to negotiation between the customer and supplier. GIFPA can cite examples of both approaches.

    However, it is common to find that the first party to the contract wants assurance that the second party’s staff are applying and reporting the measurement practices correctly and that the results are valid and verifiable. Hence, GIFPA often finds itself playing the role of ‘independent third party’ – we compare this to the English concept of the ‘quantity surveyor’ in the construction industry – assessing and auditing the counting practices used.

      The functional size measurement techniques we advocate are public domain methods, with Design Authorities that are independent of any one vendor or customer. They provide standards that set out the rules and procedures to be applied to ensure correct, consistent and repeatable measurement. Actual practices can be audited with respect to these published standards. Similarly, the validity of the results can be assessed by sampling a proportion of the completed measures and then having a different analyst repeat the measurement.
     

    In one example, GIFPA has for the past three years provided such Functional Size Audit services to a large UK retail bank.

    This client has outsourced all its software development and maintenance activities to a supplier via a contract valid for seven years. Each year, the supplier is obliged by the contract to measure and report the functional size of all software requirements fulfilled during the year, whether new requirements or enhancements (additions, changes and deletions) made to existing requirements.

    The supplier has several hundred software engineers involved in this contract, with one or two specialist staff dedicated to managing the collection, quality assurance, analysis and reporting of the data across the entire outsourced organisation. Around 30 staff have been trained to use the functional sizing technique (in this case, MkII Function Point Analysis) and the functional size measurements are made by trained staff from one project being seconded for short periods to the project for which measurements are needed.

      The supplier is very conscientious and the measurement staff use, as well as the standard public domain references, local counting practices supported by ‘case histories’ for unusual situations. They operate well defined documentation and quality control procedures to ensure that the measurements are consistent and repeatable by different individuals. The supplier’s Quality Assurance group performs reviews of the practices, procedures and results, reporting issues and tracking them to resolution.
Nevertheless, regardless of all these efforts by the supplier, because the measurement data is used to determine the price of the services supplied, the customer needs assurance that the procedures and results are reasonable and truthful. Hence, the customer employs GIFPA as an independent third-party 'auditor' to verify and validate the supplier's practices.

The results over three years have shown a mean size error between the un-audited and the audited results of between ±2% and ±2.5% for each individual project. Over the entire annual workload, this reduces to an error of less than ±1% (due to the 'swings and roundabouts' effect).
      Clearly this gives very good confidence to both customer and supplier that these measurements form a stable basis for the contractual arrangements.
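A minimal simulation sketch of why this happens, assuming independent, roughly zero-mean errors on each project's count; the portfolio data is invented for illustration and is not the client's.

```python
# Sketch: per-project count errors of ~±2.5% largely cancel when summed
# over a whole annual workload (the 'swings and roundabouts' effect).
import random

random.seed(1)
n_projects = 100
sizes = [random.uniform(100, 1000) for _ in range(n_projects)]   # cfsu
errors = [random.gauss(0, 0.025) for _ in range(n_projects)]     # ±2.5% s.d.

true_total = sum(sizes)
reported_total = sum(s * (1 + e) for s, e in zip(sizes, errors))
print(f"aggregate error: {abs(reported_total / true_total - 1):.2%}")
# typically well under 1%, since independent errors shrink roughly as 1/sqrt(n)
```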
     

Effort, project duration, staffing numbers, defect and failure data, etc. are also collected. The size data is used with other measurement data to evaluate the supplier's performance. Again, there are contractual commitments to improve productivity, time-to-market and product quality over the duration of the contract.

    The benefits to the customer are a huge improvement in the predictability of its costs, demonstrable value-for-money and continuous improvement of the productivity and time-to-market achieved.

    The supplier benefits by building a trusted and close relationship with its partner, increasing the longevity of the contract, while being able to showcase strong and improving performance to other potential customers. Projects are better estimated and more predictable, so staff utilisation rates are better planned and more efficient.

    Improve application development and maintenance & support performance

Functional size analysis methods are also important for the management of software maintenance and support.

    The typical scenario is one where there is a large portfolio of existing software applications that are more or less critical to the smooth operation of business. These applications therefore must be kept operational. If failures are experienced by the users, the problems must be reported, their criticality assessed, the problem prioritised and assigned for resolution in an appropriate time. Each problem report should be tracked to resolution. Over time, it is to be hoped that the incidence of failures will decrease and user satisfaction will improve, although this is far from being ‘guaranteed’.

     

    But how is the development organisation to know whether its maintenance & support group is working efficiently and effectively? How do you determine how much effort should be expended to keep an application operational? How can different applications, of varying size, be compared to determine which is of the better quality?

    The answer is, by understanding the functional size of each application in the portfolio.

Once the relative size of each application (and possibly each component sub-system) is known, it is possible to obtain metrics such as those below…

Metric               Definition                                                                    Unit
Unit Support Cost    Total Size of the Software Portfolio ÷ Total Cost of M&S Group                cfsu/US$
Support Capability   Functional Size Supported ÷ Full Time Equivalent persons                      cfsu/FTE
Defect Density       No. of Defects Detected during period ÷ Functional Size of the Application    defects/cfsu
These metrics and others, such as time and effort to fix, Mean Time To Fail, etc., together with the accompanying trend information, enable management and improvement of the software support function, and help make the case for investment in defect prevention activities.
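By way of illustration, a minimal Python sketch computing these metrics for a hypothetical two-application portfolio; the definitions follow the table above, while all the data values are invented.

```python
# Sketch: portfolio-level maintenance & support metrics from functional size.

portfolio = [
    # name, size (cfsu), annual M&S cost (US$), support FTEs, defects/year
    ("Billing",   4200, 310_000, 3.0,  95),
    ("Logistics", 1800, 220_000, 2.0, 130),
]

total_size = sum(size for _, size, _, _, _ in portfolio)
total_cost = sum(cost for _, _, cost, _, _ in portfolio)
total_fte  = sum(fte  for _, _, _, fte, _ in portfolio)

print(f"Unit Support Cost:  {total_size / total_cost:.4f} cfsu/US$")
print(f"Support Capability: {total_size / total_fte:.0f} cfsu/FTE")
for name, size, _, _, defects in portfolio:
    print(f"Defect Density ({name}): {defects / size:.3f} defects/cfsu")
```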
      For example, in another contract, a multi-national car manufacturer has outsourced all software maintenance & support activities to a large supplier. The supplier once again is contractually obliged to measure and report both the functional size of the software requirements enhanced (added, changed or deleted) each month, and also to keep track of the size of each application and the amount and rate of growth in application size.
      The contract price is based on quantified expectations of the supplier’s performance, but thresholds have been established to trigger corrective action if the performance falls below or above the expected range.
Each application area is responsible for applying the measurement techniques and for performing quality controls. A Project Office at each site collates, records, analyses and reports the data. The Project Office coordinates training in the relevant methods, performs a QA role and also arranges periodic 'health-checks' (provided by GIFPA) to ensure that the counting practices and procedures used by the application groups remain correct and consistent.

Use for Asset Valuation

In recent years, some organisations have started to value their software assets in a uniform way. This allows software to be included on the balance sheet at replacement cost, independently of any technology that might be used for replacement. The Australian Government is understood to be the most recent major convert to this approach. (This problem is part of the more general question of how to value the assets of companies whose property is predominantly intellectual rather than physical.)

    A short history of Functional Size Measurement

    1977 Allan Albrecht’s brilliant insight

    The first method of measuring the functional size of business software, Function Point Analysis (FPA) was developed over twenty years ago by Allan Albrecht at IBM.

    His insight was to measure the size of the functional requirements, rather than to count the number of lines of code, thus moving away from a dependence on specific technology, people and development methods.

      At the time this was a great breakthrough, permitting comparisons across projects, organisations and time. Later work by Albrecht, in conjunction with John Gaffney, stabilised and popularised the technique, leading in 1984 to the formation of the International Function Point Users Group (IFPUG) in the USA.


     

     
    Figure 4: The Evolution of Functional Size Measurement

Function Point Analysis is growing stale…

More than 20 years on, however, Albrecht's technique is regarded as insufficiently accurate for many demands such as those described above. Also, expressed as it is in the terminology of the 1970s, it is increasingly difficult to apply to modern business methods and ways of specifying software requirements.

…but has evolved and is still widely used

In spite of these difficulties, Albrecht's method is routinely used by some large software producers, such as IBM and EDS, for estimating and scope control. It is used in some of the world's largest outsourcing contracts to measure and control supplier performance, e.g. the contracts between Rank Xerox and EDS and between JP Morgan and the Pinnacle Alliance (Andersen Consulting, CSC & IBM).

1984: Charles Symons introduces MkII FPA

A more modern derivative, MkII Function Point Analysis (MkII FPA), brought functional size measurement into the database-dominated world of the 1980s. This technique is widely used in the UK, especially in the finance and insurance domains, and also for estimating and contract management. UK Government Departments, for example, use MkII FPA in their long-term outsourcing contracts with EDS, Andersen Consulting and the FI Group.

Function Point Analysis was not designed for highly-constrained software

A significant problem with these techniques, however, is that they were not designed for highly-constrained software (e.g. software that must respond in 'real time'). This sort of software is found in operating systems, telecommunications, process control, embedded systems, avionics and suchlike (see Figure 5).

     


    Figure 5: COSMIC-FFP accounts for more software functionality than do earlier techniques

As computers increase in power and ubiquity, the demands placed on software become more and more constraining. Although batch applications and simple information retrieval systems are still required, much software nowadays contains at least some elements that are constrained in terms of response time, throughput, computing resources, etc. Lightly-constrained software is less common, while highly-constrained, multi-layer software is becoming the norm. The software community needs measures suitable for all these highly-constrained domains.
      The UK Ministry of Defence, for example, would like a functional size measure to help control the value-for-money from suppliers, who develop major weapons and avionics systems and maintain them throughout their life. The life of these products can be 25-30 years and the total value of such contracts is very large.

    The COmmon Software Measurement International Consortium

    1999/2000 COSMIC Field Trials

    Two years ago, a group of software metrics experts decided to tackle the problem.

This group has over 150 full-time-equivalent years of experience in using earlier functional sizing techniques, and includes the inventors of two of those techniques.

    Operating under the name of the Common Software Measurement International Consortium (‘COSMIC’), they developed a software functional size measurement technique that works equally well for business and for real-time software.

    The method is in fact a rather precise approach to requirements determination, where the size measurement of the requirements is obtained almost as a by-product of the analysis.

    During 1999/2000, the ‘COSMIC FFP’ method, as it is known, successfully completed a series of field trials in various commercial organisations in Australia, Canada and a number of European countries. These trials covered several application domains, including avionics, banking, energy, telecommunications, defence and small business systems.

    The method has been taught in India, Japan and the USA, and has already been adopted onto the ISO work programme for eventual international standardisation.

      The COSMIC team now has representatives from seven nations, under the joint leadership of Professor Alain Abran of the University of Quebec at Montreal, Canada and Charles Symons, of Software Measurement Services of the UK. The team continues its work to refine, extend, teach and promote these measurement methods.

    Characteristics of the COSMIC-FFP technique

    The COSMIC FFP method draws on long experience with existing functional sizing methods, but it has been designed on sound theory and incorporates new ideas. It is firmly focused on the ‘user functional view’.

    COSMIC can be applied at any time during the software product life cycle. When requirements are poorly understood and uncertain, estimating heuristics can be used; as more detail becomes available, estimating heuristics give way to true measurement.

Size expressed in COSMIC functional size units (cfsu) is derived without reference to the effort expended during development, the methods and tools used by the developers, or the physical implementation and technical environment of the software.

    It is the first software functional sizing method that:

     
    • Is expressed in terms that can be interpreted easily by software engineers working with MIS and/or real-time software – There is one simple model for both lightly- and highly-constrained software, in any layer or tier of a multi-tier architecture.
    • Has been extensively piloted with major corporations around the world
    • Has been designed to conform to the ISO standard on the principles of functional sizing prior to gaining acceptance as an ISO standard
    • Has been designed by an international team of software metrics practitioners and academics
    • Is completely in the public domain – see www.cosmicon.com and www.lrgl.uqam.ca
      COSMIC does not attempt to measure non-functional requirements, such as speed-of-response, transaction throughput, etc. Nor does it pretend to account for algorithmic complexity.
      Results from the Field Trials and from early commercial use of the technique are encouraging. They suggest that measurements are comparable with those from earlier methods for lightly-constrained software (which is an advantage for those organisations wishing to migrate to COSMIC from an older technique). Also, the new technique does account for much of the highly-constrained functionality that was ignored by earlier approaches (which is exactly what the designers intended).
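Although this paper does not set out the measurement rules in detail, the core counting idea of COSMIC-FFP can be sketched briefly: each functional process is decomposed into data movements of four types (Entry, Exit, Read, Write), and each data movement contributes one cfsu. The example functional process below is invented; only the one-cfsu-per-movement principle reflects the published method.

```python
# Sketch: sizing one functional process by counting its data movements.
from collections import Counter

# Data movements identified for a hypothetical process "record an order":
movements = [
    ("order details", "Entry"),   # data crosses the boundary into the software
    ("customer",      "Read"),    # customer record retrieved from storage
    ("order",         "Write"),   # order persisted to storage
    ("confirmation",  "Exit"),    # confirmation sent back across the boundary
]

size_cfsu = len(movements)        # one cfsu per data movement
by_type = Counter(kind for _, kind in movements)
print(f"functional size: {size_cfsu} cfsu  {dict(by_type)}")
# -> functional size: 4 cfsu  {'Entry': 1, 'Read': 1, 'Write': 1, 'Exit': 1}
```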

    Conclusion

    The software community can’t go on like this

Business users and customers for software have long put up with an immature software community that does not deliver projects on time or within budget. While some 'cutting-edge applications' make the crowds gasp with wonderment, much business-oriented software has a reputation for poor product quality. The industry is so notorious for delivering incomplete products that customers 'expect' to perform testing that other industries would be ashamed to impose upon their customers, or would be legally prevented from imposing. But customers have learned from the Y2K experience.

The economic impact of the amazing number of late and/or cancelled projects is enormous, at each of the personal, project, organisational, national and global levels.

    This cannot go on.

    There are no more excuses

    High-maturity organisations and others following models such as the SEI’s Capability Maturity Model have demonstrated that it is possible to apply quantitative management techniques to the software process.

But for a long time, the difficulty of measuring the size of the output from the software process has limited the extent to which process control could be applied. Feedback is needed in any control loop, and in the absence of a measure of the output from the software process, it has proved difficult for many organisations to establish such feedback. Hence, the software process has run away, uncontrolled.

    The advent of COSMIC removes this excuse.

    Been there, done that

    As the cases cited have illustrated, functional size analysis helps to produce requirements that are agreed, correct, complete, consistent, testable and traceable. COSMIC measurement produces quantified information that assists at almost every step of the software development process and in each step of project planning, tracking and management. It is crucial to the evaluation of improvements to processes, methods, tools and technology.

    If they can do it, you can do it. So do it.

    A Challenge

For some years now, there has been a very wide spread in the relative performance of different projects (see Figure 6). The range in Project Delivery Rate (wh/fsu) covers nearly three orders of magnitude, from 0.2 work hours per functional size unit to some 80 work hours per functional size unit. That is, productivity figures ranging from 5 fsu per work hour of effort to barely 0.0125 fsu per work hour.

     


    Figure 6: Project performance is distributed over a very wide range

     

The best performing projects run at around 0.2 work hours per functional size unit, i.e. 5 fsu/wh (see Figure 7). But not many such projects are reported.

    Do your projects fall into this category?

     


    Figure 7: The highest performing projects in the ISBSG database

     

    So here is the challenge for the next few years.

    Can the software community break the ‘performance barrier’ and move the bulk of projects to the point where every organisation can expect Project Delivery Rates to be between 0.1 and 1 work hours per functional size unit?

    Can the very highest performing, most mature organisations achieve projects with Productivity Rates in excess of 10 fsu/wh? Who will be first?

    Will it be you?

    Further information

      For further information, please contact…
         
     

    Software Measurement Services Ltd

    124 High Street
    Edenbridge
    Kent TN8 5DQ
    United Kingdom

    T: +44 (0) 1732 863 760

    F: +44 (0) 1732 864 996

    E: PG_Rule@compuserve.com

    W: www.gifpa.com

         
     

    AmitySoft Technologies Private Ltd

    18/5 Velachery Road
    Little Mount - Saidapet
    Chennai - 600 015
    India

    T: +91.44.230.1891

    E: jayakumar@amitysoft.com

W: www.amitysoft.com

         
     

    ResourcesOnNet Ltd

    Fountain Court 2 Victoria Square
    Victoria Street
    St. Albans
    Hertfordshire. AL1 3TF
    United Kingdom

    T: +44 (0) 1727 884 755

    F: +44 (0) 1727 884 810

    E: sharif.choudhury@resourcesonnet.com
    E: unnati@giasdl01.vsnl.net.in

    W: www.resourcesonnet.com

     

    The slides are available in the PDF version of this paper (275Kb / 31 slides)

     

