
Why is test automation so cumbersome?

by Colin Armitage on 21st September 2018

“The biggest barrier to test automation remains the level of maintenance required to sustain it.”

Software test automation has been available for over a quarter of a century, but the practice has a mixed track record and often falls into disuse because of the effort required to maintain the scripts it produces. Achieving even a moderate level of automation coverage requires a considerable investment in budget and resources. As rising software development complexity collides with the business and IT drive for agility, traditional test automation has become too cumbersome for many organisations to contemplate or sustain.

But why is test automation so cumbersome?

Traditional test automation systems originated in a world that moved at a much slower pace, where waterfall developments were the only game in town and no-one attempted to tackle fast-moving, mission-critical applications – they knew that the technology simply couldn’t keep up.

These products all get their capabilities from powerful scripting languages; something that sounds good in a presentation but has proved a horror in the real world, requiring highly skilled and expensive test automation engineers to build and maintain the test automation framework and assets.

Other ‘benefits’ of a coded approach were rapidly found to be of little practical use. The theory was that a coded automation script could be developed in parallel with code development. The truth was somewhat different as these test tools required knowledge of how the developers were naming the visual components – something that was neither consistent nor predictable.
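To make this concrete, here is a minimal illustrative sketch (Python with Selenium, not taken from the white paper; all element IDs are hypothetical) of why a coded script is hostage to whatever names the developers give their visual components:

```python
# Minimal illustrative sketch (Python + Selenium). The element IDs below are
# hypothetical: the script only runs if the developers used exactly these
# names and never changed them.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# Each locator hard-codes an internal component name chosen by a developer.
driver.find_element(By.ID, "txtUserName_01").send_keys("test.user")
driver.find_element(By.ID, "txtPassword_01").send_keys("s3cret")
driver.find_element(By.ID, "btnLogin_v2").click()

driver.quit()
```

If a developer renames btnLogin_v2 during a refactor, the script fails before it has tested anything.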

“With all the benefits of a more fluid, flexible process come challenges in how to assure the quality and governance of these ever-changing applications.”

Because of this, the code-based tools reverted to a ‘record’ mode to establish the initial script, which made them usable only once the application was complete.

This was more practical, but now the automation coding effort couldn’t even commence until sections of the code were complete and stable.

It got worse. Most of each script that needed to be coded had nothing to do with testing the application. The engineers had to overcome many challenges before they could even get that far – handling unpredictable response times, retrieving displayed data needed for validation, and establishing checkpoints to signify when the application had completed a logical step.
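As an illustration only (again Python with Selenium; the element IDs, URL and expected value are hypothetical), here is the kind of plumbing a script needs before it gets anywhere near the behaviour actually being tested:

```python
# Illustrative sketch of automation plumbing that has nothing to do with the
# business logic under test: waiting out unpredictable response times,
# defining a checkpoint for when a step has really finished, and scraping
# displayed data so it can be validated.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/orders")
wait = WebDriverWait(driver, 30)

# 1. Handle an unpredictable response time: poll until the spinner disappears.
wait.until(EC.invisibility_of_element_located((By.ID, "loadingSpinner")))

# 2. Checkpoint: the step only counts as complete once the summary panel shows.
wait.until(EC.visibility_of_element_located((By.ID, "orderSummary")))

# 3. Retrieve the displayed data needed for validation.
displayed_total = driver.find_element(By.ID, "orderTotal").text
assert displayed_total == "42.00", f"Unexpected total: {displayed_total}"

driver.quit()
```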

But the death knell was what happened when the application to be tested changed. Suddenly all these laboriously created ‘assets’ were worth nothing and would not execute until the entire process had been repeated.

What happened next ranged from the sane to the almost comical. The sane organizations did what came naturally and gave up. Others were not to be defeated and threw even more expensive resources at the problem, some hiding the failure by outsourcing the entire test burden – often to companies who did most of the testing manually. All this for an initiative that was meant to reduce the need for resources, save time, and improve quality!

To put this in perspective, industry analysts state that the high-water mark in automation success is when 20% of an application has been automated. This is the high-water mark, mind you, not the average: 20% is the peak of what you can expect after a financial investment measured in hundreds of thousands of dollars and an effort investment measured in many man-years.

It doesn’t have to be like that. See what a fast and painless route to test automation looks like.

This blog is an extract from a white paper “Throwaway Test Automation”. You can read the whole paper here.

