Software creation: how can you evaluate your design and be more productive?
Evaluating and costing software, including the size, effort, cost and time involved, is often the source of lively discussion among estimators. Project managers are generally responsible for this activity.
Software development brings together a number of different activities calling on specialized knowledge: requirements gathering, analysis and management; software design, coding and independent verification and validation (IV&V); implementation, deployment, installation and commissioning. Each of these activities is carried out by qualified people, using a variety of tools of varying degrees of sophistication.
What is productivity?
Productivity is defined as the rate of production for given inputs, and is expressed as "so many units of production per day" or "so many units of production per hour". Equivalently, it is the ratio of output to input.
In the context of this article, productivity refers to the rate at which units of output are produced from a set of inputs over a given duration.
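Written as a formula (the symbols here are chosen purely for illustration):

```latex
P = \frac{\text{output produced}}{\text{input consumed}},
\qquad\text{e.g.}\quad
P = \frac{Q}{T}\ \text{units per person-hour}
```

where $Q$ is the number of output units produced and $T$ the person-hours consumed.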
Problems in assessing software size
In today's IT industry, there are several units of measurement for software size: function points, use case points (UCP), object points, feature points, Internet points, test points, function point analysis (FPA), lines of code (LOC) and so on. There is no single accepted standard among them, nor an established way of converting a measurement in one unit into another.
Strangely enough, in these measures, software size is adjusted (increased or decreased) depending on factors such as complexity. Yet size is an immutable characteristic. For example, a kilo of cheese does not become heavier or lighter because the person weighing it is more or less experienced, or because the scale is mechanical rather than electronic. Let's take another example: a kilometer remains a kilometer whether it is a young person or an elderly person walking the distance, and whether the distance is measured on a freeway or on a busy street.
However, the speed at which the results are obtained changes. If we take the above examples, the older person will certainly cover the kilometer more slowly than the younger person. On the other hand, a kilometer is covered more quickly on the freeway than in town.
What's more, there is no agreement on how to count LOC. Should we count logical statements or physical lines? And how should inline documentation (comments) be treated: counted or not?
These are just a few of the main issues involved in assessing software size.
Productivity concerns
The software industry is obsessed with the possibility of formulating a single, empirical, all-encompassing rate of productivity.
"Defining productivity" here means assigning a figure for the effort, expressed in person-hours, required to develop one unit of software size, so that size (in function points) can be converted into development effort (in person-hours). Attempts have been made to define productivity as 10 person-hours per function point, even though the actual figure can vary from 2 to 135 person-hours per function point depending on product size, team size and other factors. Sometimes intervals are chosen instead, for example fifteen to thirty hours per use case point (UCP). At other times, empirical formulas are built from a set of factors, as in the Constructive Cost Model (COCOMO).
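To make the COCOMO reference concrete, here is a minimal sketch of the basic COCOMO 81 effort equation, effort = a × KLOC^b person-months. The (a, b) coefficients are the published values for Boehm's three project classes; the function name and the example figures are ours:

```python
# Basic COCOMO 81: effort (person-months) = a * KLOC ** b
# (a, b) pairs are the published coefficients for the three project classes.
COEFFICIENTS = {
    "organic":       (2.4, 1.05),  # small teams, well-understood problems
    "semi-detached": (3.0, 1.12),  # intermediate size and experience
    "embedded":      (3.6, 1.20),  # tight hardware/operational constraints
}

def cocomo_effort(kloc: float, mode: str = "organic") -> float:
    """Estimated development effort in person-months."""
    a, b = COEFFICIENTS[mode]
    return a * kloc ** b

print(f"{cocomo_effort(32.0):.1f} person-months")  # ~91.3 for a 32 KLOC organic project
```

Even this well-known model illustrates the point made above: a single empirical formula hides enormous variation between projects.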
The problem with these productivity measures is that they lump all activities (requirements analysis, design, review, testing and so on) into a single figure. Yet the skills required for these tasks differ, as do the tools used and the inputs and outputs. Grouping them under the single heading of "software development" and quoting one productivity figure can only ever yield a very rough estimate, never an accurate one.
Designing software better and faster: how to become more productive?
Software development involves the following activities:
- project preparation activities, including feasibility studies, financial budgeting and project validation (financial and technical approval, and "project launch")
- project launch activities, such as identifying the project manager, creating a project team and setting up the development environment; project planning; setting up various protocols, such as service level agreements and progress reporting procedures; project training
- software engineering activities, including user needs analysis; software requirements analysis; software design; coding and unit testing; integration, functional, negative, system and acceptance testing; preparation of documentation
- deployment activities, including hardware and system software installation; database creation; application software installation; pilot testing; user training; parallel running and actual deployment
- project closure activities, including documentation of good and bad practices; end-of-project analysis; archiving of project records; release of resources; release of the project manager from the project; and initiation of software maintenance.
When the industry quotes "ground rule" productivity figures (accepted, common-sense values), it is hard to determine which of these activities are included in the quoted rate, and nobody can vouch for what such a rule actually measures.
Let's take a look at the nature of these activities:
- Requirements analysis: understanding and documenting what the user needs, wants and expects, so that software designers fully understand and can design a system in strict compliance with the stated requirements. Dependence on external factors is high.
- Software design: consider the various options available for hardware, system software and development platform; arrive at the optimum choice for each; design an architecture that meets stated requirements and customer expectations. The architecture must be compatible with current technologies, and the design documented in such a way that the programmers understand and deliver a product that conforms to the user's original specifications. There are many alternatives, and since software design is a major, strategic activity, mistakes can have serious consequences.
- Coding: develop software code that conforms to the design and contains as few errors as possible (it's so easy to unintentionally leave "bugs" in).
- Code review: study code written by another programmer, decipher its functionality and try to predict any errors the customer might encounter when using the software.
- Testing: trying to discover any defects left in the software. Finding every defect is impossible, however, and testing the software exhaustively is impractical.
As the nature of these activities is so different, it's obvious that productivity for each of them is not uniform (and therefore cannot be described by the same figure). The pace of work differs for each of these activities.
These rates do not depend on the quantity of code produced, but on other factors, such as:
- requirements, which depend on the efficiency and clarity of their source (users or documentation)
- design, which depends on the complexity of processing, the alternatives available and the constraints under which functionality must be designed
- code review, which depends on coding style
- testing, which depends on how the code is written (the more errors there are, the longer it takes to test and retest)
- coding itself, which depends on the quality of the design.
As a result, we need to establish different productivity figures for each of these activities.
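As a minimal sketch of what this implies for estimation, total effort becomes the sum of per-activity efforts, each computed with its own rate. Every rate below is an invented placeholder, not a recommendation:

```python
# Hypothetical per-activity rates, in person-hours per function point.
# The figures are placeholders for illustration only.
RATES_PH_PER_FP = {
    "requirements analysis": 1.5,
    "design":                2.0,
    "coding":                3.0,
    "code review":           0.5,
    "testing":               2.5,
}

def estimate_effort(size_fp: float) -> dict:
    """Per-activity and total effort for a size given in function points."""
    efforts = {activity: size_fp * rate for activity, rate in RATES_PH_PER_FP.items()}
    efforts["total"] = sum(efforts.values())
    return efforts

for activity, hours in estimate_effort(100).items():
    print(f"{activity:22s} {hours:7.1f} person-hours")
```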
Let's draw a parallel with manufacturing industry, taking sheet-metal punching as an example. The activities to be carried out are: 1) setting up the machine; 2) setting up the tools; 3) loading the job; 4) punching the hole; 5) deburring the hole; 6) cleaning; 7) delivering the sheet for the next operation.
If several holes are punched, the time "per hole" decreases, as setup activities are one-off activities.
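In symbols, if $s$ is the one-off setup time (steps 1 and 2), $c$ the cycle time per hole and $n$ the number of holes punched (symbols introduced here for illustration only):

```latex
t_{\text{per hole}}(n) \;=\; \frac{s + n\,c}{n} \;=\; \frac{s}{n} + c
```

so the average time per hole falls toward the pure cycle time $c$ as $n$ grows.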
Consequently, if we consider the coding of a program unit, for example, the activities to be carried out could be: 1) receive the instructions; 2) study the design document; 3) code the unit; 4) test and debug the unit for the specified functionality; 5) test and debug the unit for any application; 6) remove unnecessary code from the unit; 7) regression-test the unit; 8) hand the unit over to the next step.
Similarly, we can propose micro-activities for each software development phase.
Productivity figures: empirical or based on a methodical study?
Each of the above activities proceeds at a different rate. Standard times for each of them need to be established; once that is done, work-study techniques such as synthesis or analytical estimation can be used to estimate the total duration of the job.
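For example, analytical estimation applied to the unit-coding micro-activities listed above could look like the following sketch. Every standard time here is a made-up placeholder; establishing real ones is exactly the work-study exercise the text describes:

```python
# Analytical estimation: sum standard times over micro-activities.
# Standard times (person-hours per program unit) are illustrative placeholders.
STANDARD_TIMES = {
    "receive instructions":                    0.25,
    "study design document":                   1.00,
    "code the unit":                           4.00,
    "test/debug the specified functionality":  2.00,
    "test/debug for any application":          1.50,
    "remove unnecessary code":                 0.50,
    "regression-test the unit":                1.00,
    "hand the unit over to the next step":     0.25,
}

def unit_coding_estimate(units: int) -> float:
    """Total person-hours to code a number of comparable program units."""
    return units * sum(STANDARD_TIMES.values())

print(f"{unit_coding_estimate(10):.1f} person-hours for 10 units")  # 105.0
```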
Whether time-study techniques are used to carry out productivity studies or empirical data is gathered instead, one fact remains: software development is neither totally mechanical nor totally creative. It is unrealistic to set standard times for activities with a strong creative component, and work-study methods must take this aspect of software development into account. Much research is under way on "executive productivity", and methods for timing software development work may become available in the future. At present, empirical data seems to be the solution of choice.
Where do we get empirical data? The first option is time studies using industrial engineering techniques. The other, easier and more reliable way is to rely on historical data drawn from timesheets.
Most timesheet software used in the industry is payroll- and billing-oriented and does not collect data at the level of detail needed to establish productivity trends. Most of these programs record two or three levels of data in addition to date and time: the project is always recorded at the first level, and the second and third levels may hold a module and a component, a component and an activity, or a similar combination. To support productivity measurement, timesheets should record five levels of data in addition to the employee's dates and hours: the project, the module, the component, the development phase and the task performed. With such data, productivity measures could be established empirically and realistically.
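As a sketch of what such a record could look like, and of how per-micro-activity figures would fall out of it, consider the following; the field names and the aggregation are our own invention, not a reference to any existing timesheet product:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TimesheetEntry:
    # The five levels the text calls for, plus the hours booked.
    project: str
    module: str
    component: str
    phase: str      # development phase, e.g. "design", "coding"
    task: str       # micro-activity, e.g. "code unit", "regression test"
    hours: float

def hours_by_micro_activity(entries: list) -> dict:
    """Total booked hours per (phase, task) pair."""
    totals = defaultdict(float)
    for e in entries:
        totals[(e.phase, e.task)] += e.hours
    return dict(totals)

entries = [
    TimesheetEntry("CRM", "billing", "invoice", "coding", "code unit", 6.0),
    TimesheetEntry("CRM", "billing", "invoice", "coding", "regression test", 1.5),
]
print(hours_by_micro_activity(entries))
```

Dividing such totals by the output produced (units coded, pages reviewed, test cases run) yields exactly the empirical, per-micro-activity productivity figures argued for here.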
At present, software development activities focus on macro-productivity. This trend needs to change, and we need to move from macro to micro-productivity. To do this, we need to change our timesheet software and the depth of the data we collect.
Studying productivity at the micro-level has the following advantages:
- Better predictability of software development
- Better quality estimates to improve pricing during project development and finalization phases
- More precise targets when allocating tasks, thus increasing the confidence of software developers
- More accurate cost estimates
Conclusion
It's important to understand the difference between the terms productivity and capacity. Productivity is the rate of accomplishment of a micro-activity; capacity is the rate of accomplishment of a whole installation (a factory, an organization, etc.), and many activities are subsumed in a capacity figure. For software estimation purposes, the focus must shift from macro-productivity (capacity) to micro-productivity. Collecting empirical data is the preferred way to obtain productivity measures for the various software development activities, since time and work study techniques cannot give satisfactory results where the work has a high degree of creative content, as software development does. Collecting that empirical data, in turn, requires improving our timesheet software. We recommend this procedure for deriving productivity figures at all micro-levels.
About the authors
Murali Chemuturi is an industrial engineering expert at the Indian Institution of Industrial Engineering. He has spent over thirty years in professional organizations, including ECIL, TCS, Metamor and Satyam. He first worked in manufacturing, then in IT. He currently heads Chemuturi Consultants, and has a particular interest in software for the software development industry. He has conducted several corporate training programs for software project management and software evaluation.
Sarada Kaligotla has completed her Master's degree in Computer Applications and holds Project Management Professional (PMP) certification from the Project Management Institute (PMI), as well as being a Certified Software Quality Analyst (CSQA) from the Quality Assurance Institute (QAI). She currently works for Blue Cross Blue Shield in Massachusetts. Sarada has six years' experience in the software industry, as well as in project and development management.