Wednesday, November 28, 2012

How to measure the effectiveness of your testers - some metrics suggestions


A long time ago I studied for an MBA, and in one of the Management Accounting papers we were given a lot of data about some sales teams.  We were supposed to analyse the information and say which sales staff were performing well.  It was a bit of a trick question really - drawing the exam candidate into working out which sales people made the most sales and announcing that they were the ones most worthy of a pay rise or bonus.  Of course, some of the sales staff had peachy sales regions to cover and it would have been hard NOT to sell stuff - others were harvesting stony ground, and any sales they achieved should have been well rewarded.

That's the problem with metrics - we often measure what is easy to measure and then use it to motivate our staff.  This can be a very bad thing, driving behaviour that is not optimal for the organisation.

These days I work in IT, and in software testing in particular, and through my software testing consultancy business I have come across organisations looking to collate metrics for their software testing effort.

I hear people jump to metrics like the number of bugs raised per day - I have even read blogs suggesting such metrics.  This is an interesting point, I believe, as many crowdsourced software testing organisations use a pay-per-bug model - perhaps they are measuring (and rewarding) the wrong thing simply because it is easy to measure?

Clearly this is not a good metric - which hopefully the sales team example illustrates.

It isn't easy to come up with good ones, but I'll give it a go for people to shoot down, or maybe build on - maybe between us we can come up with some good ones!

So what is the behaviour we want to reward, and therefore what metrics might we measure to encourage that behaviour?

Well, suppose we want to find bugs earlier in the SDLC and avoid serious bugs in production.  In that case, how about these metrics?

  • For a given software module, measure the number and seriousness of bugs raised in the live environment.  The lower this number, the better the testers and developers who worked on that module during development and QA phases.  This sort of metric is measurable using bug tracking systems such as Bugzilla, Jira and others.  However, be careful - if the testers are finding bugs right up to the release date, maybe you are releasing your software too soon and haven't given the testers a long enough QA phase.
  • Further to the above, the bugs found in the live environment can be analysed - for example, should they have been easy, moderate or hard to find during the testing phase?  Clearly, if the testers who tested a module are letting 'easy to find' bugs through to production, something needs to happen - look at the way they test, consider some training, or ask whether they were given enough time.
  • For a QA phase, measure the number of bugs raised over time.  The earlier the bugs are raised, the better the tester - we want to reward this behaviour, as quicker feedback results in quicker and cheaper fixes.  (There's a rough sketch of calculating this, and the first metric, just after this list.)
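
To make the first and third metrics a little more concrete, here is a rough sketch (in Python) of how they might be pulled out of a CSV export from your bug tracker.  All of the column names, severity weights and the QA phase start date are assumptions for illustration - a real Jira or Bugzilla export would need mapping onto whatever fields your project actually uses.

    import csv
    from collections import defaultdict
    from datetime import date

    # Assumed CSV export from the bug tracker with these hypothetical columns:
    # module, severity, environment, date_raised, raised_by.
    # Severity weights are illustrative only - pick ones your team agrees on.
    SEVERITY_WEIGHT = {"critical": 10, "major": 5, "minor": 2, "trivial": 1}
    QA_PHASE_START = date(2012, 10, 1)  # example QA phase start date

    escaped_score = defaultdict(int)   # per-module weighted count of live bugs
    days_to_raise = defaultdict(list)  # per-tester: days into QA when each bug was raised

    with open("bugs_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["environment"] == "live":
                # Metric 1: severity-weighted escaped defects per module (lower is better)
                escaped_score[row["module"]] += SEVERITY_WEIGHT.get(row["severity"], 1)
            else:
                # Metric 3: how early in the QA phase each tester raises bugs (earlier is better)
                raised = date.fromisoformat(row["date_raised"])
                days_to_raise[row["raised_by"]].append((raised - QA_PHASE_START).days)

    print("Escaped defects (weighted) by module:")
    for module, score in sorted(escaped_score.items(), key=lambda kv: kv[1]):
        print(f"  {module}: {score}")

    print("Average days into QA phase before a bug is raised, by tester:")
    for tester, days in days_to_raise.items():
        print(f"  {tester}: {sum(days) / len(days):.1f}")

The output is just two lists - modules ranked by weighted escaped defects, and testers ranked by how early in the QA phase they raise their bugs - which you would then interpret with all the caveats above about release dates and time pressure.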

I'm not totally happy with these metrics, but when you start to think about this topic, you will find it is decidedly tricky!

Please share your ideas below!

Cheers,
Tony.
