Thursday, April 25, 2013

When CI-triggered tests are failing - this might be the issue...

I had an issue which had been bugging me for some time, and I have finally found a fix.

I had some TestComplete GUI tests which ran just fine when I ran them manually from Jenkins, but when the build was triggered by an upstream project or by an SVN check-in, the tests failed.

I was getting an 'External Error' in a pop-up - which was misleading, to me anyway.

It turned out the solution was to open an RDP session from another server onto the Jenkins slave as the Jenkins user - this provides a desktop session for the Jenkins user, which then enables the GUI tests to run successfully.

Hope this saves someone some head scratching.

Thursday, April 4, 2013

Automation Engineers are from Mars, Manual Testers are from Venus...


I can't stand the 'them and us' attitude that seems pretty prevalent around automation engineers versus manual testers.  You hear things like:

  • automation engineers are hopeless testers - give me a manual tester any time
  • manual testers of course can't possibly code - give me a programmer

Of course, there are some people who perhaps can only do one of those things well, but I believe many can do both well with practice.  I like to think of automation co-existing with manual testing, each augmenting the other.

The belief that automation only ever finds the same bugs overlooks the automation process itself.  Yes, I agree, if you just run the scripts, then the same code paths will be exercised and the same checks will be done, but the process of writing automation scripts isn't as simple as that.

The fact is, as the System Under Test (SUT) changes and new releases are put into the test environment, the automation scripts run and, due to changes in the GUI and the workflow, the tests will often 'break'.  This may be because a bug has been found - great - and we have found it fast.  Often, however, it won't be a bug in the SUT - it will be a 'bug' in the automation script, or to be more precise, the automation script isn't flexible enough to cope with the change to the SUT.  So what happens?

Well, the automation engineer will start to investigate.  They will probably look at the SUT manually and/or step through the automation script, and so will start to focus on the part of the SUT which has changed (and therefore an area where a real bug may have been introduced).

Another area where automation engineers find bugs is in the process of writing a script for the first time:

Often the automation engineer won't be overly familiar with the SUT - this is a good thing, as a fresh pair of eyes can often find unexpected bugs.  The new pair of eyes doesn't exercise the SUT along the expected, well-trodden path - they can act as a 'monkey' test.  Just last week (using TestComplete), I was automating a new test on some software that was new to me and found a bug.  The automation script found the bug, but it was my manual 'testing' - really just playing with the system to try and understand how to stop a process through the GUI - that had broken the SUT.  The web app had to be restarted to 'fix' it - quite a serious bug, and one that hadn't been found to date by manual testing.

And finally, the key thing for me is that test automation takes a massive burden off the shoulders of the manual testers, who no longer have to plough through boring manual scripts making lots of error-prone checks.  The automation can do that, and they are freed up to do higher-value exploratory testing.

Wednesday, December 19, 2012

Proud to be a Java Code Geek!

Hi All,

Just to say I'm proud to be part of Java Code Geeks - if you are interested in all things Java, go take a look and learn how to code! I'm taking a look at Android Apps over Christmas!

Friday, December 7, 2012

Software testing is common sense - right?

I'm talking here about functional testing, not performance testing, automated testing or penetration testing where you may need specialist technical skills. I may be playing devil's advocate - I'll let you decide.

I read and hear zealots advocating various tools and techniques, plying their trade. Consultancies all have their own 'new', bespoke way of doing things. It's special and better than the rest - but it isn't cheap.

I sometimes wonder, does it all need to be this complicated? Do I need to learn these techniques to know what to do and how to behave?

I don't think so - there I have said it - what do you think?

I think there are a handful of things a tester needs to have at the forefront of their mind when testing:

  • what are the business (or otherwise) goals of the software?
  • what problem is the software trying to solve?
  • what risks are there - how likely are they and how serious are they?
  • have empathy with the customer/user - get inside their heads - behave like them

So, do testers need the latest new-fangled jargon and terminology? Well, I think you know where I stand. I think testers need:

  • Intelligence
  • Business acumen (when testing business software)
  • Common sense
  • Good communication skills
  • People skills
  • Attention to detail
  • (you can't make a silk purse out of a sow's ear no matter what framework you use)

Testers should free themselves from the processes sometimes imposed on them and focus on the problem at hand - the software and the business goals of that software.

Tuesday, December 4, 2012

Is your website mission critical?

For some businesses, their website isn't just a static or even dynamic information-giving tool. For some, it is more than just an advert providing information on how to get in touch. For some, it is the absolute life-blood of their business.

A few years back I worked for an airline. I can't remember the exact figures, but the website turned over approximately £300M per annum and that was approximately, for the sake of argument, 80% of the business.

If the website went down for any length of time, it was very serious indeed. In a very competitive marketplace where margins are tight, it could spell the end. Not only is there the immediate cost of lost sales and the added cost of having to increase the head count in the call centre, but the damage to the brand is considerable. If customers have a bad experience, they can be quite unforgiving and go elsewhere.  The cost of trying to win back those customers is very high indeed, and it may even be impossible if your competitors look after them well. Also, disgruntled customers tend to be more vocal than happy ones - so you can be sure they will be telling anyone who will listen not to bother going to XYZ Airlines!

So what can you do?

Well, I have blogged before about software testing and whether you need it or not. As the owner of a company which advocates and provides specialist software testing services, I would say that, wouldn't I!?

So software testing can go a long way towards making sure that your website, the life-blood of your company, keeps running smoothly.

However, it isn't the be all and end all. Sometimes even the most robust software, well written and thoroughly tested, can go down. Why? There are lots of reasons. Perhaps it relies on a flaky third-party service. Perhaps there is a power cut and the UPS only has three hours of juice in it. Maybe there is a hardware failure.

In these situations, day or night, you need to know about it, so you can fix it before your customers even notice, if at all possible.

With its expertise in test automation, my company can provide a bespoke system to monitor your e-commerce website 24x7. In the case of an airline, for example, we could drive bookings through the website, checking the data returned during the booking process - did the seat assignment work, did the insurance booking work, did the car hire booking work, did the hotel booking work and, of course, did the flight booking work! If any of these fail, an alert email and/or text can be sent to the support desk and they can investigate. Get in touch if this is of interest to your business.
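
To give a flavour of what even the simplest automated check might look like, here is a minimal Java sketch - the URL and the alert mechanism are made up for illustration, and a real monitor would drive a complete booking flow rather than a single page request:

import java.net.HttpURLConnection;
import java.net.URL;

// A very simple sketch of an automated site check. A real monitoring system
// would drive a complete booking through the website and check the data
// returned at each step; this just confirms a key page is up and responding.
public class BookingSiteCheck {

    // Hypothetical URL - substitute the page you care about
    private static final String BOOKING_URL = "https://www.example-airline.com/booking";

    public static void main(String[] args) {
        try {
            HttpURLConnection connection =
                    (HttpURLConnection) new URL(BOOKING_URL).openConnection();
            connection.setConnectTimeout(10000);
            connection.setReadTimeout(10000);
            int status = connection.getResponseCode();
            if (status != 200) {
                alertSupportDesk("Booking page returned HTTP " + status);
            }
        } catch (Exception e) {
            alertSupportDesk("Booking page unreachable: " + e.getMessage());
        }
    }

    private static void alertSupportDesk(String message) {
        // Placeholder - in a real system this would send an email and/or a text
        System.err.println("ALERT: " + message);
    }
}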


Monday, December 3, 2012

Twitter Scheduler Application

I'm quite new to Twitter, but believe it is a powerful tool for small businesses such as mine.  I wanted to make it even more powerful for my software testing consultancy business, without finding it taking up too much of my time.

The functionality I wanted was to be able to set up my planned tweets first thing in the morning, before the 'real' work began - and then leave some sort of application or robot working in the background to actually send the tweets when I wanted them sent.

This enables me to write a few tweets and get them sent at key points during the day (or night) for my target markets - for example, when the US West Coast is finishing the working day (I'm UK based).

There are commercial tools which have this functionality, but they are paid for and, I am told, quite fiddly to set up.  Plus, I wanted to write my own!

It turns out it is very easy....

I wrote the application in Java using the twitter4j library.  I needed only a handful of classes:

I had a Tweet class which had fields such as message, dateTimeToSend and hasBeenSent.
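
As a rough illustration, the Tweet class might look something like the sketch below - the constructor and accessor names are my assumptions, chosen to match the run method shown later:

import java.util.Date;

// A minimal sketch of the Tweet class described above - a simple holder for
// the message, when to send it, and whether it has gone out yet.
public class Tweet {

    private String message;
    private Date dateTimeToSend;
    private boolean hasBeenSent;

    public Tweet(String message, Date dateTimeToSend) {
        this.message = message;
        this.dateTimeToSend = dateTimeToSend;
        this.hasBeenSent = false;
    }

    public String getMessage() {
        return message;
    }

    public Date getDateTimeToSend() {
        return dateTimeToSend;
    }

    public boolean isHasBeenSent() {
        return hasBeenSent;
    }

    public void setHasBeenSent(boolean hasBeenSent) {
        this.hasBeenSent = hasBeenSent;
    }
}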

I had a TweetScheduleReader which utilised the Java CSV library.  This read in my planned tweets from a CSV file - reading in a list of messages, the time to send them and a flag as to whether they had been sent yet.  This gets converted into a List of Tweets held in memory.
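
For illustration, here is a simplified sketch of what TweetScheduleReader might look like. The real class used the Java CSV library; this version splits each line by hand just to show the shape of the class, and the file name and column layout (message, date/time, sent flag) are my assumptions:

import java.io.BufferedReader;
import java.io.FileReader;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.List;

// A simplified sketch of the TweetScheduleReader.
// Assumed CSV layout: message,yyyy-MM-dd HH:mm,sentFlag
public class TweetScheduleReader {

    private static final String CSV_FILE = "tweets.csv"; // assumed file name

    private List<Tweet> tweets;

    public List<Tweet> getTweets() {
        return tweets;
    }

    public void setTweets(List<Tweet> tweets) {
        this.tweets = tweets;
    }

    public void readTweetsFromCSV() {
        List<Tweet> loaded = new ArrayList<Tweet>();
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm");
        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader(CSV_FILE));
            String line;
            while ((line = reader.readLine()) != null) {
                // Naive split - assumes the message itself contains no commas
                String[] fields = line.split(",");
                Tweet tweet = new Tweet(fields[0], format.parse(fields[1]));
                tweet.setHasBeenSent(Boolean.parseBoolean(fields[2]));
                loaded.add(tweet);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (reader != null) {
                try {
                    reader.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
        tweets = loaded;
    }
}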

Then there is a class with a main method which simply schedules the TimerTask, like this:

timer.schedule(timerTask, new Date(), DELAY);

So, starting from the moment this line of code is called, the TimerTask's run method is executed every DELAY milliseconds (in my case, one minute).  So what gets run?  See below.
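
Putting the scheduling together, the class with the main method might look something like this - TweetScheduler and TweetSenderTask are names I've assumed for illustration, the latter being the TimerTask subclass whose run method is shown below:

import java.util.Date;
import java.util.Timer;
import java.util.TimerTask;

// A sketch of the class with the main method. TweetSenderTask is an assumed
// name for the TimerTask subclass whose run method follows.
public class TweetScheduler {

    private static final long DELAY = 60 * 1000; // one minute, in milliseconds

    public static void main(String[] args) {
        TimerTask timerTask = new TweetSenderTask();
        Timer timer = new Timer();
        // Run the task now, then repeat it every DELAY milliseconds
        timer.schedule(timerTask, new Date(), DELAY);
    }
}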

I had a class which extended TimerTask, and its run method was this:


public void run() {

    // Log in to Twitter
    Twitter twitter = new TwitterFactory().getInstance();
    twitter.setOAuthConsumer(CONSUMER_KEY, CONSUMER_KEY_SECRET);
    String accessToken = getSavedAccessToken();
    String accessTokenSecret = getSavedAccessTokenSecret();
    AccessToken oauthAccessToken = new AccessToken(accessToken, accessTokenSecret);
    twitter.setOAuthAccessToken(oauthAccessToken);

    // Read the list of tweets from CSV (only if they haven't been loaded yet)
    List<Tweet> tweetsToSend = scheduleReader.getTweets();
    if (tweetsToSend == null || tweetsToSend.size() == 0) {
        scheduleReader.readTweetsFromCSV();
        tweetsToSend = scheduleReader.getTweets();
    }

    Date currentTime = new Date();
    List<Tweet> updatedTweetsToSend = new ArrayList<Tweet>();
    for (Tweet tweet : tweetsToSend) {
        if (currentTime.after(tweet.getDateTimeToSend())) {
            if (!tweet.isHasBeenSent()) {
                // Send it!
                try {
                    // I appended the date to make the tweet unique - else it gets rejected by Twitter
                    twitter.updateStatus(tweet.getMessage() + " " + new Date());
                } catch (TwitterException e) {
                    e.printStackTrace();
                }
                // Mark it as sent
                tweet.setHasBeenSent(true);
            }
        }
        updatedTweetsToSend.add(tweet);
    }

    // Update the tweets in memory, marking them as sent if appropriate
    scheduleReader.setTweets(updatedTweetsToSend);
}

I found this tutorial very useful regarding the authorisation: http://www.javacodegeeks.com/2011/10/java-twitter-client-with-twitter4j.html

Wednesday, November 28, 2012

How to measure the effectiveness of your testers - some metrics suggestions


A long time ago I studied for an MBA, and in one of the Management Accounting papers we were provided with a lot of data about some sales teams.  We were supposed to analyse the information and state which sales staff were performing well.  It was a bit of a trick question really - drawing the exam candidate into seeing which salespeople made the most sales and therefore announcing that they were the ones most worthy of a pay rise/bonus.  Of course, some of the sales staff had peachy sales regions to cover and it would have been hard NOT to sell stuff - others were harvesting stony ground and any sales they achieved should have been well rewarded.

That's the problem with metrics - often we measure what is easy to measure and use it to motivate our staff - this can be a very bad thing and drive behaviour which is not optimal for the organisation.

Now I work in IT, and software testing in particular, and through my software testing consultancy business I have come across organisations looking to collate metrics for their software testing effort.

I hear people jump to metrics like the number of bugs raised per day - I have even read blogs suggesting such metrics.  This is an interesting point, I believe, as many crowdsourced software testing organisations use a pay-per-bug model - perhaps they are measuring (and rewarding) the wrong thing because it is easy to measure?

Clearly this is not a good metric to measure - which hopefully the sales team example illustrates.

It isn't easy to come up with good ones, but I'll give it a go for people to shoot down, or maybe build on - maybe between us we can come up with some good ones!

So what is the behaviour we want to reward, and therefore what metrics might we measure to encourage that behaviour?

Well, how about we want to find bugs earlier in the SDLC and avoid serious bugs in production?  Therefore how about these metrics?

  • For a given software module, measure the number and seriousness of bugs raised in the live environment.  The lower this number, the better the testers and developers who worked on that module during development and QA phases.  This sort of metric is measurable using bug tracking systems such as Bugzilla, Jira and others.  However, be careful - if the testers are finding bugs right up to the release date, maybe you are releasing your software too soon and haven't given the testers a long enough QA phase.
  • Further to the above metric, bugs found in the live environment can be analysed as to whether they should have been easy, moderate or hard to find in the testing phase.  Clearly, if testers who tested a module are letting 'easy to find' bugs through to production, something needs to happen - look at the way they test, provide some training, or perhaps give them more time?
  • For a QA phase, measure the number of bugs raised over time.  The earlier the bugs are raised the better the tester - we want to reward this behaviour as quicker feedback results in quicker and cheaper fixes.

I'm not totally happy with these metrics, but when you start to think about this topic, you will find it is decidedly tricky!

Please share your ideas below!

Cheers,
Tony.