Tuesday, July 21, 2009

Immaturity in Using Metrics to Support Investment in Software Testing

In the current economic environment, one would think that we would be using every means to secure investment in software testing. However, at a recent industry forum it became apparent to me that our maturity in using metrics is somewhat limited.

At a recent ACS Testing SIG meeting on metrics, we discussed what metrics could be used to support the business case for investment in testing. The meeting was largely an open forum discussion seeking opinions and contributions rather than a presentation.

Of the participants, many admitted to using defect metrics, but few used other metrics such as time and budget allocation, effort, coverage, and test outputs. It is disappointing that we measure our own work so poorly. It leaves us vulnerable to cutbacks, as other managers who are more verbally gifted may take away our budget. Metrics provide a useful way of demonstrating our value.

The meeting went on to review the kinds of metrics that testers use. The following list provides some suggested metrics:
  • Defects
    • Type
    • Cost
    • Age
    • Status
    • Proportion found per stage
  • Coverage
    • Automated vs manual
    • Planned vs executed
    • Requirements
    • Code
  • Effort, time, cost, schedule
  • Test outputs
    • Tests designed
    • Tests executed
  • Test inputs
    • Size
    • Complexity
    • Developer hours
    • Project budget
  • Risk
Some comments were made that metrics can be broadly grouped into two categories:
  • Efficiency - can we do more testing with less effort?
  • Effectiveness - can we achieve a better outcome from our testing effort?
While the discussion focused on supporting the business case for software testing in tighter economic times, it is important to note different uses of metrics. Metrics are used by test managers for the following:
  • Managing Progress - producing estimates, how complete is testing, how much more to go, ...
  • Quality Decisions - is the product good, bad or indifferent, are we ready for release, ...
  • Process Improvement - what are the areas that future improvement should target, how do we know our process has improved, ...
  • Business Value - what benefit has testing produced for the business in terms of reduced costs, increased revenue, reduced risk, ...
The choice of which metrics to use can be daunting. It is not a good idea to collect everything, as you become overwhelmed by which data to use when making a management recommendation. It is worth looking at the Goal-Question-Metric (GQM) approach promoted by Victor Basili to help choose appropriate metrics.
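The GQM idea can be sketched as a simple data structure: start from a goal, derive the questions that would answer it, and collect only the metrics those questions need. The goal, questions, and metric names below are invented examples for illustration, not part of the GQM method itself.

```python
# A minimal sketch of a Goal-Question-Metric (GQM) breakdown.
# All goal/question/metric names are illustrative assumptions.
gqm = {
    "goal": "Demonstrate the business value of system testing",
    "questions": [
        {
            "question": "How much rework cost does testing avoid?",
            "metrics": ["defects found per phase", "average cost to fix per phase"],
        },
        {
            "question": "How efficiently is the test budget being used?",
            "metrics": ["tests executed per person-day", "test effort vs project effort"],
        },
    ],
}

def list_metrics(gqm):
    """Collect only the metrics the goal actually requires."""
    return [m for q in gqm["questions"] for m in q["metrics"]]

print(list_metrics(gqm))
```

The point of working top-down like this is that every metric you gather can be traced back to a management question, which keeps the collection effort small and the reporting relevant.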

There was a lot of discussion about the danger of metrics that assess productivity. My personal view is that we should use metrics to influence productivity, but that the following points need to be kept in mind:
  • Productivity or performance measures do change behaviour. People will start aligning behaviour towards meeting the metric target. A poorly chosen metric or target could create behaviour that you never intended.
  • One metric won't tell the whole story. Measuring productivity in terms of tests completed per hour may encourage people to run poor tests that are simply quick to run. You may need several metrics to get a better perspective, for instance collecting information on defect yields, defect removal ratios from earlier phases, and so on.
  • Given that metrics will change behaviour, you may want to change your metrics from time to time to place emphasis on improving or changing other parts of your process or performance.
  • Metrics should be used by the manager to ask more questions, not as a hard and fast rule for making a decision. A metric may lead you to make deeper enquiries with individual testers or developers.
  • Managers need to build trust with the team in how they use metrics. If the team don't trust how you will use the metrics, they will likely subvert the metrics process.
When discussing metrics to justify the business case for testing, it is very easy to get caught up in the technical metrics that matter to the test manager. However, when talking with other business stakeholders, you need to speak their language. You may need to explain a metric in terms of what it means for cost savings or increased revenue for it to have impact. Don't explain it as how many more test cases are executed per hour; instead, explain it as dollars per hour saved. Other stakeholders may need it explained in other terms, such as risk reduction or satisfying compliance.
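One common way to restate a testing metric in cost terms is to value each defect caught in test at the difference between its test-phase and production fix costs. The sketch below shows that arithmetic; the fix costs and defect counts are invented for illustration, not benchmarks.

```python
# Sketch: translating a defect-detection metric into dollars saved.
# All figures (fix costs, defect counts) are invented assumptions.

def savings_from_early_detection(defects_found_in_test,
                                 cost_to_fix_in_test,
                                 cost_to_fix_in_production):
    """Each defect caught in test avoids the (higher) production fix cost."""
    return defects_found_in_test * (cost_to_fix_in_production - cost_to_fix_in_test)

# e.g. 50 defects caught pre-release, $500 to fix in test vs $5,000 in production
saved = savings_from_early_detection(50, 500, 5000)
print(f"Estimated cost avoided: ${saved:,}")  # Estimated cost avoided: $225,000
```

A figure like "testing avoided an estimated $225,000 in rework" lands with a business stakeholder in a way that "120 test cases per hour" never will.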

It appears that we still have a way to go with software testing metrics. As a profession we need some clarity about how, as test managers, we can use metrics to gain support and provide evidence of our benefit. Let's hope that as we mature we will start stepping up to sell our return on investment more effectively.


  1. Very interesting article. You must also look up http://blog.stagsoftware.com

  2. Nice article dude...


  4. Have you looked at the website http://www.psmsc.com or read the book Practical Software Measurement? You can download version 4 from the website if you are a member; version 4 is a far more complete edition than the book. I also like the Systems Engineering Leading Indicators Guide found on the white papers page.

  5. Very good, concise article!

    However, I have the following points:

    Throughout the software testing industry, various metrics are created, captured, and analyzed at the end of each test cycle. However, we seldom get time to implement those learnings in the next test cycle of the product release. I would like to highlight this as a real RISK or PROBLEM AREA in software testing. If the learnings from metrics analysis were mandatorily implemented, testers' lives could be improved by leaps and bounds. Simultaneously, the quality of the product would also improve significantly.