by: Grant Rule
Many approaches fashionable with technically-oriented practitioners clearly fail to satisfy the need for clarity of requirements. Some even trade short-term acceleration for long-term maintenance & support costs. What is missing? What can be done to ensure that new technologies help rather than hinder? This paper suggests some simple process improvements that might have made all the difference in the cases described below.
Business organisations need, above all else, predictability.
In order to manage their budgets and their commitments to stakeholders, business organisations rely on software projects completing on schedule, at predictable cost. If they fail in this, the organisation may be absorbed or destroyed altogether.
However, the business and technical environment in which commercial organisations operate is forever changing. Organisations are subject to pressures from customers, competitors, legislators, predators, shareholders and staff and the pace of change seems to be increasing. Hence, there is contention between the need for predictability and the dynamic nature of the environment.
Some fashionable approaches to software development have been derived to resolve this conflict.
Arguably, structured and object-oriented methods, including the Unified Modeling Language (UML), have been produced to reduce the variability of the technical processes of software development and maintenance. Similarly, project management methods, from the waterfall and spiral models, through Rapid and Joint Application Development and now eXtreme Programming, have evolved in reaction to changing requirements.
Through all this, the ability to understand, and to communicate about, the requirements allocated to software remains a necessity. Without a clear understanding of the requirements, and of changes to the requirements, it must remain difficult to deliver a predictable software process.
2.1 It is important to understand the granularity of the requirements
The Use Case technique is one of the most popular modern methods of documenting requirements. However, in practice there seems to be much disparity in how different people understand use cases. Various practitioners, including IBM, report finding up to 32 different interpretations. As different individuals apply different rules, even within one project, the results can be ambiguous.
Over the past few years, GIFPA consultants have observed a number of projects using the use case technique, in a variety of application domains, and have identified a number of common issues.
The main concern is that there is a tendency for developers to define the easy use cases first and in great detail, while documenting more difficult use cases very briefly, putting them aside until later. At the extreme, we have found use cases that say the equivalent of 'do stuff' or 'browse content', wholly failing to describe the required interaction between the actor and the application.
The result in a number of projects is that around 80% of the project budget and schedule is expended on easy use cases (which may represent only 20% of the functionality). Then, as the project deadline approaches, the project team realises that the 20% of use cases that remain, the ones that are difficult or complex, represent maybe 80% of the functional size (see Figure 1). By the time this realisation dawns, there is insufficient budget and time remaining to complete the required work. Hence the project is late and over budget, the customer is dissatisfied, and someone has to find the unbudgeted funds or make do with inadequate functionality.
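A simple progress-tracking sketch makes the pattern visible before the deadline: compare the percentage of budget consumed against the percentage of functional size actually delivered. All figures below are hypothetical, purely to illustrate the 80/20 pattern described above.

```python
def progress_check(budget, spent, total_size, delivered_size):
    """Compare % of budget consumed with % of functional size delivered.

    A large positive gap means spending is running ahead of delivery,
    typically because the difficult use cases have been deferred.
    """
    pct_spent = 100 * spent / budget
    pct_delivered = 100 * delivered_size / total_size
    return pct_spent, pct_delivered, pct_spent - pct_delivered

# Hypothetical project: 80% of the budget gone, but only 20% of the
# functional size delivered.
spent, delivered, gap = progress_check(budget=1000, spent=800,
                                       total_size=500, delivered_size=100)
# gap == 60.0 percentage points: a clear early warning
```

Tracking this gap period by period requires that the use cases have been sized at a consistent granularity, which is the subject of the sections that follow.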
2.2 It is important to analyse the problem, rather than someone's interpretation
Another factor affecting the productivity of some projects is a result of the way the elicitation and description of requirements is organised. Often the customer has a specialist group, let's call it Business Planning, that is deemed to be responsible for defining the customer's business requirements. This group interacts with the end users and managers to determine the customer organisation's needs. They also have the responsibility to convey the customer requirements to the supplier. On the supply side, there may be a systems analysis group, responsible for determining the customer requirements. However, in practice they communicate to and through the customer's Business Planning group. They are not permitted to talk directly to the end users (that would interfere with the normal business processes and production). All communication between the supplier's systems analysts and the end users is filtered through Business Planning. This is a recipe for much misunderstanding.
We have seen several projects that involve a chain of activity (see Figure 2) where
Business Planning documents business requirements in textual form, then passes the resulting documents to the supplier's Systems Analysts. The Systems Analysts take the textual requirements and document a set of high-level use cases, often cutting & pasting the text directly from one document into another. The added value, if any, is limited to a use case diagram of doubtful worth. Subsequently, there is a further activity by a system designer to analyse the high-level use cases and to produce one or more detailed use cases. Again, often the text is directly copied and little value is added, although the format may change.
This process wastes much time, effort and budget, adds little value and fails to describe requirements such that they can be understood unambiguously and implemented efficiently. The last two steps are self-referential and cannot possibly improve on the initial statements, because they only refer to an interpretation of the problem, rather than to the problem itself.
In order to derive additional detail and remove ambiguity, the supplier's systems analysts must be given access to the end users and managers and allowed to analyse the requirements themselves. The customer's business planning group may help establish the scope and determine the feasibility and benefits from the customer's viewpoint, but they must not be allowed to filter (i.e. act as a barrier to) communication between the supplier and the owners of the requirements.
2.3 Measuring requirements enforces rigour and highlights ambiguity early
Software measurement techniques help engineers to produce software requirements that are unambiguous and can be sized. They improve the rigour with which requirements are documented. Thus they can be used as the basis for agreement between customer and supplier.
During 2000, GIFPA performed a Functional Sizing & Estimating Study for one supplier that had been working on Phase-1 of a multi-phase project for some 12 months. Phase-1 had already overrun by several months, making the management wonder about the commitment represented by subsequent phases. The study showed that the project was some five times larger than the supplier had originally understood. After protracted discussions with the customer, the supplier withdrew from the project at a cost to them of £5m (US$7.5m), leaving the customer with nothing to show for nearly 18 months of work.
An example of the use case template used by this project is presented in Table 1.
Illustrations of the way this template was used are given in Figure 3 & Table 2.
Early measurement of the requirements would have exposed the real size of the product and enabled better cost estimation and decision making by both customer and supplier. Simply applying measurement to the requirements identified various issues of ambiguity and highlighted the amount of uncontrolled scope-creep that was being experienced.
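To see why early measurement matters, consider a top-down estimate derived from functional size and an assumed delivery rate. The specific sizes and rates below are invented; only the five-fold ratio reflects the case described above.

```python
def estimate_effort(functional_size, delivery_rate):
    """Top-down effort estimate: functional size (function points)
    divided by an assumed delivery rate (fp per person-day).
    All figures used here are illustrative, not from the case study."""
    return functional_size / delivery_rate

# Hypothetical figures reflecting the five-fold size error in this case:
believed = estimate_effort(500, 0.5)    # what the supplier thought it had committed to
measured = estimate_effort(2500, 0.5)   # what early measurement would have revealed
# measured / believed == 5.0 -- the commitment was five times larger
```

Even a rough size measurement at the requirements stage would have surfaced this discrepancy before the budget was committed, rather than 12 months into the project.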
In practice, as a means of describing the requirements allocated to software, use case descriptions seem to suffer from the following problems:
These faults can be counter-balanced by incorporating the MkII Function Point Analysis (FPA) concept of the logical transaction into the use case description.
In fact, the functionality described by the use case in Table 2 consisted of that shown in Table 3.
This maps very well to the MkII FPA logical transaction, which is defined as
the lowest level business process supported by a software application. It comprises three elements: input across an application boundary, some related processing, and output across the application boundary. Each logical transaction is triggered by a unique event of interest in the external world, or a request for information and, when wholly complete, leaves the application in a self-consistent state in relation to the unique event.
Hence, the Primary Course and Alternate Course parts of the use case template can be replaced (or at least supplemented) by a table (see Table 4) that decomposes the Actor/Application interaction into logical transactions. This can be done very early in the product lifecycle, can be refined later as necessary, enforces a consistent level of granularity, highlights ambiguity where it exists and is inherently measurable.
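As a sketch of what "inherently measurable" means in practice: the MkII FPA size of a logical transaction is a weighted sum of its input data element types, entity references and output data element types, using the standard industry weights from Symons' MkII FPA. The transaction counts below are invented purely for illustration.

```python
# Standard MkII FPA industry weights (Symons): input DETs,
# entity references, output DETs.
W_INPUT, W_ENTITY, W_OUTPUT = 0.58, 1.66, 0.26

def transaction_size(n_input, n_entities, n_output):
    """Functional size of one logical transaction, in MkII function points."""
    return W_INPUT * n_input + W_ENTITY * n_entities + W_OUTPUT * n_output

def use_case_size(transactions):
    """Size of a use case decomposed into (input, entity, output) triples,
    one per logical transaction, as in a table like Table 4."""
    return sum(transaction_size(*t) for t in transactions)

# A hypothetical use case decomposed into three logical transactions:
size = use_case_size([(5, 2, 3), (12, 4, 20), (2, 1, 15)])
# size is approximately 32.52 MkII function points
```

Because every row of such a table must name its input, processing and output explicitly, a row that cannot be counted is a row whose requirement is still ambiguous, which is exactly the early warning the technique provides.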
Late in 1999, a supplier organisation performing a bespoke project in Denmark called upon GIFPA to assist with sizing and estimating requirements. The supplier had just made delivery of Phase-1 of the project and completed User Acceptance Testing of the product with their customer. However, this success was only achieved at the expense of many late nights and the provision of additional resources to the project team. Furthermore, the supplier was being castigated by the customer because of their apparent inflexibility and poor productivity, which was lower than that previously achieved by the customer's in-house software staff.
The project used use cases to document requirements. These requirements were initially documented by a business planning group which handled all interactions with the supplier's systems analysts. The end users were located at over 200 different sites. The architecture was three-tier client/server.
The results of GIFPA's initial study of Phase-1 indicated that the supplier was being asked to deliver, in some 7 months, functionality that had previously taken between 18 and 24 months to deliver. That is, about 3 times faster than previous projects of similar size performed by the in-house staff.
Both the supplier and customer organisations recognised from their experience of this first phase that better understanding and control of the requirements were needed. Subsequent phases applied a systematic estimation process and the plans and commitments were based on those requirements.
Many organisations already have a wealth of software. Their concern is not so much the development of new applications, but the maintenance and support of their existing software portfolio. This is especially a problem when the market in which the organisation operates is highly volatile and competitive, the size of the software applications is large, the volume of data that must be processed is high and the quality of the service provided to customers must be best in class. All the above criteria apply to the world of mobile telephony.
Telecommunications software bridges the exceedingly vague borders between highly constrained signal-routing software operating close to real-time and the less constrained information processing applications used for customer billing, management information, etc. However, as the tariff to be applied to a mobile telephone call, and indeed whether there is sufficient credit available to make the call, has to be calculated before the call is connected, even the less constrained applications have to perform well.
Telephony also is a highly competitive business. Organisations must rapidly take advantage of each new technological advance in order to present new services to their customers, or risk losing market share to competing companies. Consumer-oriented business is cyclical in nature, driven by seasonal sales peaks, such as the Christmas period, that dominate annual sales of, for example, pay-as-you-go mobile phones. Due to the lead time necessary to prepare marketing and sales campaigns, it is often necessary for organisations to prepare and run advertising campaigns for new services before the software capability to support those services is available. In such an environment, the software enhancement strategy is based around many relatively small, monthly releases.
The management task is complicated even further when business critical applications are outsourced to specialist suppliers over whose staff the organisation has no direct control.
In the case in question, the Programme Office has identified the application areas that absorb around 80% of the maintenance & support budget. Each of these areas is supported by a dedicated team that handles many Requests-For-Change (RFC) during each year. Each RFC is prioritised from the business perspective and the impact of that change is assessed from the perspective of the software engineers, both in the application area immediately concerned and across other application areas that may be impacted (see Figure 4). Often there is a need to co-ordinate and synchronise multiple RFCs in more than one application area in order to deliver a new business requirement.
The RFCs scheduled for completion each period are sized using MkII FPA, and this information, along with records of effort expended, duration and cost, is reported to senior management quarterly. This information is used to help inform decisions about the estimates and feasibility of future work, and also to evaluate the productivity and performance of in-house groups and external suppliers.
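The core of that quarterly comparison is a simple ratio of functional size delivered to effort expended. A minimal sketch, with invented team names and figures:

```python
def delivery_productivity(functional_size, effort_days):
    """Productivity as MkII function points delivered per person-day."""
    return functional_size / effort_days

# Quarterly comparison across application areas; the team names and
# all figures are hypothetical, purely for illustration.
teams = {
    "billing (in-house)":    delivery_productivity(240, 400),  # 0.6 fp/day
    "provisioning (vendor)": delivery_productivity(300, 600),  # 0.5 fp/day
}
```

Reported consistently each quarter, such figures let management compare in-house groups with external suppliers on a common, size-normalised basis rather than on raw effort or cost alone.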
An external agency, GIFPA, is used to collect functional size figures for each period, while effort figures are collected via a purpose-built Time Recording System, supplemented by project management records made by individual project managers. Data relating to operational failures originating from software defects are collected via the help desk systems and an independent testing group. This measurement regime is currently exercised to varying degrees by the different application areas. It is a continual struggle to improve the quality of the project management data when so much effort is focused on delivering new services on time. The improvements likely to make most impact in the short term are:
For the past three years GIFPA has provided a Functional Size Audit service to a large UK retail bank. This client has outsourced all its software development and maintenance activities to a supplier via a contract valid for seven years. Each year, the supplier is obliged by the contract to measure and report the functional size of all software requirements fulfilled during the year, whether new requirements or enhancements (additions, changes and deletions) made to existing requirements.
The supplier has several hundred software engineers involved in this contract, with one or two specialist staff dedicated to managing the collection, quality assurance, analysis and reporting of the data across the entire outsourced organisation. Around 30 staff have been trained to use the functional sizing technique (in this case, MkII Function Point Analysis) and the functional size measurements are made by trained staff from one project being seconded for short periods to the project for which measurements are needed.
The supplier is very conscientious and the measurement staff use, as well as the standard public domain references, local counting practices supported by case histories for unusual situations. They operate well-defined documentation and quality control procedures to ensure that the measurements are consistent and repeatable by different individuals. The supplier's Quality Assurance group performs reviews of the practices, procedures and results, reporting issues and tracking them to resolution.
Nonetheless, regardless of all these efforts by the supplier, because the measurement data is used to determine the price of the services supplied to the customer, it is of such importance that the customer needs assurance that the procedures and results are reasonable and truthful. Hence, the customer employs GIFPA as an independent third-party auditor to verify and validate the supplier's practices.
The results over three years have shown that there is a mean size error between the un-audited and the audited results of between ±2% and ±2.5% for each individual project. Over the entire annual workload, this reduces to an error of less than ±1% (due to the 'swings and roundabouts' effect).
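The reason the aggregate error is smaller than the per-project errors is that individual errors of either sign partially cancel when the portfolio is summed. A small sketch, with an invented portfolio of (audited, un-audited) sizes:

```python
def aggregate_error_pct(projects):
    """Signed size error over a portfolio of (audited, unaudited) sizes.

    Per-project errors of either sign largely cancel in the total --
    the 'swings and roundabouts' effect described above.
    """
    audited = sum(a for a, _ in projects)
    unaudited = sum(u for _, u in projects)
    return 100 * (unaudited - audited) / audited

# Invented portfolio with individual errors of +2%, -2%, +2% and -0.8%:
portfolio = [(100, 102), (200, 196), (150, 153), (250, 248)]
err = aggregate_error_pct(portfolio)
# the aggregate error is about -0.14%, well under 1%
```

This is why an annual-workload figure can be trusted for pricing even when any single project's un-audited size may be a couple of percent out.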
Clearly this gives very good confidence to both customer and supplier that these measurements form a stable basis for the contractual arrangements.
These cases show that organisations are using measurement techniques to improve the correctness, completeness, consistency, testability and traceability of functional requirements. Very simple techniques can easily be used to improve the quality of common requirements approaches. These techniques enable the requirements to be quantified, and both the measurement procedures employed and the results produced can be subjected to audit and quality assurance. Hence they can form a solid foundation for pricing software, negotiating contracts and controlling scope creep.
Understanding the product size is crucial to understanding the software process and for managing project constraints such as duration, time-to-market and productivity, along with other factors that affect customer satisfaction.
Measurement techniques such as these are a necessary first step in implementing process improvements in a systematic way.
© GIFPA Ltd. 2016