
Software Quality Matters Blog

As Retailers Boost IT Spending, Be Mindful of the Minefields

16 April 2014

By George Wilson

Research out this week has earmarked the retail sector for soaring IT investment in 2014, with websites, mobile and IT system replacement at the top of retailers’ wish lists. Law firm TLT found that two thirds of the UK’s top 60 retailers expect their firms to grow this year, and 80 per cent are convinced that IT will be instrumental in driving sales.

Mobile is the technology that retailers are getting most excited about, with two thirds planning to invest this year. And more than half of retailers plan to invest in their e-commerce platforms in 2014, in order to help them keep up to date with the rising tide of online shopping.

The fact that retailers are embracing technology and ensuring that it is pivotal to their business growth strategies is to be applauded. But there is definitely a note of caution that retailers must heed if they are to plough more investment into IT.

For a start, both mobile and e-commerce are rapidly evolving, so retailers need to ensure their approach to delivering these apps enables them to make changes to content and functionality frequently and rapidly. This will allow them to experiment with offerings and customer experiences, make changes according to what works and what doesn’t, as well as respond quickly to competitor innovation.

But the caution is also about disaster prevention. In the last year, there have been some big IT disasters for retailers. Back in October, US retail behemoth Walmart fell victim to data discrepancies on its website, angering thousands of customers, causing a PR disaster and damaging its share price. A data glitch on the e-commerce site saw expensive electronic items, usually valued at over $500, on sale for $39.99. Bargain hunters swarmed to the Walmart site to snap up cheap products, but on realizing its mistake, Walmart cancelled the purchases. And it wasn’t the first time for Walmart: weeks before, the company had problems with its food stamp systems, enabling shoppers to load up their online shopping carts with hundreds of dollars of free items.

UK-based Screwfix.com also encountered problems earlier this year when all of the items on its website – including expensive power tools and sit-on mowers – were mistakenly lowered to £34.99. Screwfix.com responded by cancelling orders, angering customers.

The lesson from this type of IT malfunction is that a proper and thorough approach to quality assurance and testing – the part of the IT process that ensures systems, applications and websites are fit for purpose – is vital. Investing heavily in software and whizzy new applications might seem the obvious priority, and retailers might be tempted to cut corners elsewhere. But QA and testing – particularly validation testing, where anomalies caused by changes to other systems and applications are picked up – is not the place to slash the budget.

Tech disasters can cost retailers dearly. Customer loyalty, brand value, reputation and share price can all take a hit when a technology catastrophe strikes. So you can’t put a price on the time spent thinking these new technologies through and ensuring that all bases are covered when the button is pressed and the applications go live. Retailers, tread carefully.



Ensuring Quality in New Insurance Products

18 March 2014

Dog Illness Premiums? Young Driver Policies? Will They Work?

The insurance market is one of the most fiercely competitive in the world. The last few years have seen new entrants alongside the more traditional insurers, with big retail and banking brands getting in on the act. This makes issues such as new product launches really important. A new insurance concept needs to be deployed quickly, before competitors get wind of what’s happening. Given the disloyal nature of insurance customers, being able to deliver what they want, when they want it, is crucial.

Marketing teams and underwriters within insurers are constantly looking to push the envelope, trying to identify new markets and to develop insurance propositions that will help them steal a march on competitors. That might be a new pet insurance proposition, personal items cover or a break-through product for young drivers. The underwriters then work to structure a realistic and profitable solution, checking that all boxes are ticked and the new product stacks up.

But a big problem facing the insurance industry is the complexity of the distribution network for these new products, and ensuring that the product is presented accurately across all these channels. An insurer might be selling the product directly through its own website or call centres, through comparison websites, white-labeling it for other companies to sell, and distributing it to the broker community through the established software houses. And ensuring the product is compliant with myriad regulatory requirements in all these environments adds another layer of complexity.

Product development and deployment become a quality problem, not just because of the variety of places where customers can buy the product, but also in validating all the related data and technology used to support it. This combined quality challenge is shared by IT and the business – testing isn’t just about the systems and technological aspects of launching a new product; it is just as relevant to the data sets, processes, people and compliance wrapped up in it. The complexity means the potential for things to go wrong is sky high. For example, a new product might be launched with the wrong rates displayed, as a number of insurers have found out to their chagrin. Almost every insurer has a skeleton in its closet: a product that was not thoroughly tested and caused substantial financial damage, in some cases resulting in large fines.

This brings another matter to the fore – testing is often conducted in a staging environment rather than a live environment. But all that really matters is what is on the live site. If the data isn’t tested in the live environment, defects can crop up – the wrong pricing data might appear, for example.
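
To make that concrete, here is a minimal sketch of the kind of live cross-channel check that can catch it. The product codes, premiums and channel names are invented; a real insurer would feed in figures from its rating engine and its live quote journeys.

```python
# Sketch: cross-channel premium check (hypothetical names and figures).
# For each distribution channel, compare the premium actually shown to the
# customer against the figure the rating engine says it should be.

EXPECTED = {"PET-STD-2014": 112.50}   # product code -> expected monthly premium

live_quotes = [
    # (channel, product code, premium displayed to the customer)
    ("direct-website", "PET-STD-2014", 112.50),
    ("aggregator-A",   "PET-STD-2014", 112.50),
    ("white-label-B",  "PET-STD-2014", 89.99),   # stale rate table, perhaps
]

def check_quotes(quotes, expected, tolerance=0.01):
    """Return the quotes that differ from the expected premium."""
    mismatches = []
    for channel, product, premium in quotes:
        target = expected.get(product)
        if target is None or abs(premium - target) > tolerance:
            mismatches.append((channel, product, premium, target))
    return mismatches

for channel, product, premium, target in check_quotes(live_quotes, EXPECTED):
    print(f"{channel}: {product} quoted at {premium}, expected {target}")
```

Run against the live channels on a schedule, a check like this flags the white-label discrepancy minutes after a rate change goes out, rather than weeks later when the complaints arrive.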

Ultimately it comes down to balance for insurers – the need to get new products to market quickly has to be weighed carefully against the need to launch a good, compliant and accurate product. But what is clear is that as insurance environments get more and more complicated, organizations need a clear run at thorough and transparent QA and testing.



Testing Center of Excellence – What is it for?

07 March 2014

Center of excellence. What a phrase. The NASA Space Center has truly been a center of excellence, making its reputation not only from its successes but also from the manner in which it reacted to failure. It learned the lessons from each failure and then strove to ensure not only that such failures were not repeated but that the lessons were applied to future successful missions.

Taking such an attitude to the more humble discipline of application quality creates an enticing prospect. Let’s gather our finest minds to define best practice, create new working methods and implement a set of tools that delivers that best practice in the most efficient manner, generating the greatest and fastest ROI on the resources employed.

It is therefore hard to argue with the creation of Testing Centers of Excellence. Or is it? Let’s consider three reasons driving the creation of these centers.

Economies of scale. The global economy may be recovering but few businesses believe that the land of milk and honey is once more upon us. So costs, and particularly head count, remain tightly controlled, yet the business is equally hungry for new IT systems to hone its competitive edge and underpin its recovery. Doing more with the same finite resources is a challenge, and by centralizing testing skills a more efficient allocation of those resources may be achieved.

Focus. Newer development methodologies are replacing or co-existing with traditional waterfall developments. Agile, Kanban and the like have radically altered the relationship between development, quality assurance and user acceptance testing, so perhaps it is wise to absorb the quality challenge into a dedicated group who can figure out the best way forward.

Skills. And besides, it is complex stuff, with the tools themselves often ill-suited to the challenge. What chance does a regular QA team have of successfully executing an agile development using legacy tools from the likes of HP, IBM or Borland? Perhaps a specialized team in a ‘center of excellence’ can make these tools work even where they have historically failed.

The reality is different.

Good waterfall, agile and any other developments are based on excellent communication between everyone involved. Agile teaches us that ideally developers, testers and end users should all be permanently in the same room to ensure perfect alignment between need and delivered application. Quite how this can be achieved when one group is mentally, physically or organizationally partitioned away is anyone’s guess.

If the tool is hard to use or ill-suited to the task in hand it is simply the wrong tool. Man up. Tools developed for the challenges of the late 20th century are by definition unlikely to solve the problems we face twenty years later. If you’re spending a chunk of your time developing the tool rather than focusing on the quality task at hand, it is by definition unfit for purpose.

So are TCOEs a bad thing? It all depends on why they have been created.

If the goal is to get the quality leaders together and to continually evolve best practice, then there is real benefit.

If the goal is to select the most suitable testing technology and map it to the agreed best practice, then this will form the basis of enhanced communication and productivity across all projects.

If the goal is to disseminate knowledge by placing a quality leader in each development project to train their cohorts and to communicate the lessons learned, then this will enhance quality while keeping developments aligned to the business need.

However, if the reality is that TCOEs are the result of throwing labor at QA in compensation for outdated testing technology, the result will be a growing gap in the ability of the business to meet the fast evolving demands of its customers.

A gap that nimbler competitors will fill.

You can read more about HP replacement here.

 



HP QC & QTP Annual Fees – good money after bad?

28 February 2014

Since HP acquired Mercury Interactive in 2006 there has been considerable disquiet in the market as support costs for Quality Center and QTP have steadily risen. Many users have been able to negotiate discounts on their annual fees or to gain more flexibility in how their licenses can be deployed. HP’s licensing model is known for its complexity, and some users have fallen foul of its restrictions and found themselves with an unbudgeted additional cost at the end of the year.

“I’m a growing pain, right?”

But this focus on direct costs entirely misses the point. Maintenance costs are an issue but it is not HP’s annual fee that represents the bulk of the pain. That pain is felt in the effort it takes to build automation in QTP and to maintain it as the applications under test change.

Successful automation with QTP takes well-trained, costly staff and is a slow process. Even when the automation suite is complete, it is very fragile as the target application is amended and enhanced. This is where the bulk of the expense lies, and it only offers a very low ROI.
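
As a rough illustration of where that fragility comes from – this is generic scripted-automation pseudocode in Python, not actual QTP code, and the field names are invented – consider a check that hard-codes the names of on-screen controls. It keeps working only until a developer renames one of them.

```python
# Illustration only: a script-style check with hard-coded control names, the
# kind of thing that breaks when the application under test changes.

order_screen_v1 = {"txtNet": 100.0, "txtVat": 20.0, "txtTotal": 120.0}
order_screen_v2 = {"netAmount": 100.0, "vatAmount": 20.0, "grandTotal": 120.0}  # after a UI refresh

def total_check(screen):
    # Hard-coded identifiers: fine until a developer renames a control.
    return screen["txtNet"] + screen["txtVat"] == screen["txtTotal"]

print(total_check(order_screen_v1))   # True: the check passes against the old screen
try:
    print(total_check(order_screen_v2))
except KeyError as renamed_field:
    # The application still works; it is the script that now needs fixing.
    print(f"Script broken by renamed control: {renamed_field}")
```

Multiply that by thousands of scripts and every application change becomes a maintenance project in its own right – which is where the real cost sits, not in the annual fee.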

Today’s successful companies demand agility and speed, not fragility and slothfulness. So take a fresh look at your investment in HP, or the equivalent tools from IBM or any of the legacy vendors. The original investment will have been written off by now, so why continue to throw money, time and resources at a tool which you know is such a poor fit for your business needs?

Editors’ Notes:

About Original Software: Original Software enables organizations to meet their objectives more rapidly by delivering enterprise application functionality frequently and efficiently. Knowledge workers and IT professionals use our technology to streamline user acceptance testing, conference room pilots, manual testing and automated testing, project management, and regulatory audit of applications. The software provides the fastest way to capture and share business processes, validate application functionality, and manage projects in real-time. Customers report massive increases in productivity, enabling them to keep up with changing business needs while reducing cost. More than 400 organizations of all sizes and industries, operating in over 30 countries, count on Original Software every day.



Toyota – software glitch leads to global product recall

17 February 2014

By George Wilson

Toyota, the Japanese car giant, suffered a massive blow this week when it was forced to recall almost two million of its top-selling Prius hybrid vehicles. A glitch in the cars’ software could set off warning lights and put the car into failsafe mode, causing the vehicle to stop suddenly. The biggest hit to Toyota will be in Japan and the USA.

Failsafe mode means STOP NOW!

Of course, the real risk in this scenario is to drivers’ safety. But the corporate challenge for Toyota is not insignificant. The reputational damage to the brand is considerable – many environmentally minded car buyers might think twice before buying a Prius. This isn’t the first time this has happened to the Prius – only weeks ago, US Prius models were recalled for faulty seat heaters. And in 2009, millions of Toyota models worldwide were recalled due to acceleration issues, which hammered Toyota’s share price.

So how was something as fundamental as a software issue to blame this time? And how were millions of Prius models released with this software glitch?

Of course, it’s only conjecture at this stage, but it might have been that the requirements for the software were not properly defined, or the integration between the different modules was not properly defined or tested.

People in the know might blame the testing – how was this software released with such a fundamental flaw? But testing is always based on the requirements, and if those were wrong, or something was missed in the design, then testing will be examining the wrong parameters.

Following Toyota’s acceleration issue five years ago, a number of court cases sprang up that found its electronic throttle system was flawed. The company had performed a “stack analysis” but, in the words of the ruling, had completely botched it, meaning software defects were the cause of a number of accidents.

Obviously, in that case, software defects actually cost lives. And in the automotive world, the risk is ultimately to people’s safety. In reputational terms, it also costs the car manufacturers dearly. So the message, again, is clear and simple – technology processes have to be clearly defined, properly executed and tested, tested and tested again.



When you drive your car, do you write code?

11 February 2014

By George Wilson

When you drive your car, you are providing input and instructions, and internal code is making the car do many of the things you want it to do (OK, maybe not the steering, thankfully!). So, I suppose, in that analogy you are coding your car to do what you want. You understand the language – turn the key to start it. Select a gear. Press the accelerator pedal. I suppose you could say that is programming the car with instructions, but I don’t think most people would consider it that way.

“Clutch… mouse click… brakes!”

When you use MS Word, you input data, press buttons and use keys to achieve what you want. You are providing instructions. Programming? Code runs, but not code you wrote.

When you use Original Software’s TestDrive automation solution, you provide instructions for what you want it to do. You do not need to know any programming language because you are not writing code. Quite a lot of our users are business users and functional testers. They are not writing code, they don’t know how and they don’t want to.

We can put code and functions into TestDrive; the classic example would be to check that two values taken from the screen, added together, equal another value held elsewhere – on the screen, in the database or in a spreadsheet. This has to be expressed in a code-like way, such as: IF A + B <> C THEN raise error “Value is wrong”.

But there is no code to get the values A, B or C, to navigate the application under test (AUT) to the places where these are captured, to provide the input to drive the application, to get the content and properties of any of the data or controls, or to deal with the fact that things may be displayed in a different order.
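
For illustration only – this is not TestDrive’s actual syntax – the business rule above is the one piece that has to be expressed in a code-like form. Everything around it, capturing A, B and C and navigating to where they live, is what the tool handles, so in the sketch below they simply arrive as values.

```python
# Rough sketch only, not TestDrive syntax: the cross-check rule is the sole
# code-like artifact; A, B and C are assumed to have been captured by the tool
# from the screen, the database or a spreadsheet.

def rule_a_plus_b_equals_c(a, b, c):
    """IF A + B <> C THEN raise error 'Value is wrong'."""
    if a + b != c:
        raise ValueError("Value is wrong")

# Values as a codeless tool might hand them over after capture
captured = {"A": 250.00, "B": 50.00, "C": 300.00}
rule_a_plus_b_equals_c(captured["A"], captured["B"], captured["C"])
print("Cross-check passed")
```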

But, the main point about all of this is productivity. No code to learn means a wider audience and applicability. It means no code to debug or fix. It means no code to maintain when the application changes, which means that automated testing can carry on without waiting for someone to fix scripts. It is just a much more modern and productive approach. It will become the norm.



Data Glitches – how Screwfix.com got it wrong

31 January 2014

By George Wilson

The front page of the Telegraph this week carried a story on DIY online retailer Screwfix.com. Shoppers couldn’t believe their luck when the retailer – selling everything from sheds to pricey power tools – cut all its prices to £34.99. Word of mouth meant people piled on to the site, eager to snap up a bargain. One customer couldn’t believe his luck as he bought a ride-on mower, usually priced at £1,600.

Some customers who had arranged to pick up their purchases first thing on Friday were lucky, but others found their purchases had been cancelled and were reimbursed, as Screwfix and its parent company, Kingfisher PLC, which also owns B&Q, realized the mistake.

It involves a bit of guesswork to figure out why this happened, but the overwhelming likelihood is that it was a data validation error. No doubt there will be an intensive investigation to identify the cause, but these things are not always IT problems.

Website validation can be a real problem for retailers and their e-commerce sites. Changes to a website can cause all manner of problems and can skew the data that is visible on the site. For example, a software upgrade or patch can cause anomalies within a website, and not necessarily in the section that has been changed. One change of code, or even data messed up in a product manager’s spreadsheet, could have repercussions in seemingly unaffected areas of the site. Walmart had a similar issue back in October.

So how realistic is it for retailers to validate every part of their site every time a change happens? IT teams often make a call on how extensive regression testing should be – but resources dictate that it’s impossible for everything to be tested. Once a system is live, the emphasis shifts to the business users who are responsible for the data – but they usually won’t have access to the automated testing solutions their technical colleagues use.

There are strategies that can help e-commerce providers like Screwfix.com. Automated testing and validation solutions aimed at maintaining ‘business as usual’ can run thorough content checking after every update, flagging up any detected glitches immediately – this means that when retailers press the button on changes, patches or upgrades, they can go live with more confidence. And validation shouldn’t only be carried out before the site goes live – it should be an integral and ongoing part of running any e-commerce website.
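
A minimal sketch of that kind of post-update content check might look like the following. The products, prices and threshold are made up; the idea is simply to compare the live catalogue against the last known-good snapshot and flag anything that has moved suspiciously.

```python
# Hypothetical sketch of a post-release content check: compare today's product
# prices with yesterday's snapshot and flag anything that dropped sharply.

previous = {"ride-on-mower": 1599.99, "cordless-drill": 189.99, "shed-8x6": 429.00}
current  = {"ride-on-mower": 34.99,   "cordless-drill": 34.99,  "shed-8x6": 34.99}

def price_anomalies(before, after, max_drop=0.5):
    """Flag items whose price fell by more than max_drop (50% by default)."""
    flagged = []
    for sku, new_price in after.items():
        old_price = before.get(sku)
        if old_price and new_price < old_price * (1 - max_drop):
            flagged.append((sku, old_price, new_price))
    return flagged

for sku, old, new in price_anomalies(previous, current):
    print(f"Check {sku}: was {old}, now {new}")
```

A check this simple, run automatically after every release or data load, would have flagged an entire catalogue sitting at £34.99 before the bargain hunters did.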

When problems like this occur, the fallout isn’t just some bad high-profile publicity and disgruntled customers. Investors often get spooked by IT failures and bad business practice, and it can have a negative impact on a company’s share price. Making sure they have good governance and sound quality assurance measures in place bodes well for online retailers.



Banks aren’t spending enough on IT

16 January 2014

As the news broke before Christmas of yet another banking systems failure, which prevented customers from accessing their money and paying for goods, so did the argument that the main reason behind this proliferation of banking tech disasters is years of severe underinvestment in IT.

“Sorry sir, the computer says no!”

RBS boss Ross McEwan came out and said that the problems the bank has been experiencing have been down to underinvestment in underlying technology, which it is now trying to turn around. RBS knows that these issues are seriously inconveniencing its customers, who will go elsewhere if they don’t get a better service. But RBS is no different from any other retail bank. All have suffered IT issues that have caused disruption to services. And it happens in the banking industry more often than in most.

Apparently gross underinvestment in IT infrastructure is endemic in banking. Ovum research from 2012 found that 75 per cent of European banks are using outdated core systems, with respondents complaining that a lack of skills and resources makes core systems really difficult to replace. This is partly due to what has happened in the banking industry over the last thirty years. Banking tended to be very regional back in the 70s, and in the 80s and 90s the industry became very acquisitive, with a handful of big high street players emerging. As a result, rather than having a single streamlined infrastructure, banks are generally made up of multiple legacy systems on which the operational running of the bank hinges, making the environments massively complex and difficult to maintain.

This means that more things are likely to go wrong. For example, if a bank implements a software upgrade, the software has to be updated across multiple legacy systems, many of which are interdependent. This increases the likelihood of a lapse in quality assurance and therefore the risk of defects.
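
As a simple sketch of why this matters for regression scope – the system names and dependencies below are invented – you can think of the estate as a graph: upgrade one system and everything downstream of it is back in play for testing.

```python
# Sketch under assumed data: given a map of which systems feed which others,
# work out everything that needs regression testing when one system is upgraded.

from collections import deque

# "X": [systems that consume X's output] -- invented example topology
DEPENDENTS = {
    "core-ledger":    ["payments", "statements"],
    "payments":       ["mobile-app", "atm-network"],
    "statements":     ["online-banking"],
    "mobile-app":     [],
    "atm-network":    [],
    "online-banking": [],
}

def regression_scope(changed, dependents):
    """Breadth-first walk of everything downstream of the changed system."""
    scope, queue = set(), deque([changed])
    while queue:
        system = queue.popleft()
        for downstream in dependents.get(system, []):
            if downstream not in scope:
                scope.add(downstream)
                queue.append(downstream)
    return scope

print(sorted(regression_scope("core-ledger", DEPENDENTS)))
# ['atm-network', 'mobile-app', 'online-banking', 'payments', 'statements']
```

Even in this toy example, touching the core ledger drags every customer-facing channel into the regression scope – which is exactly why a quality lapse anywhere in the chain surfaces as a very public outage.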

The pace of technology adoption has also put massive pressure on the CIOs of banks. Customer hunger for banking services on new devices is driving the need to implement mobile banking apps, digital wallets, new payment systems and so on. And this focus on new technologies means there is less time and resource to devote to core systems.

But the fact remains that banks can’t go on operating in this way. They need to have the right technology in place, and the right quality assurance strategy, to protect themselves and their customers from tech disasters. Failure to do so will see them lose market share to more efficient operators.



C-Suite Blog Series: the CFO and the technology hot potato

09 January 2014

By George Wilson

Before Christmas RBS suffered the latest in a long line of technical defects to hit the banking industry. But this one was a particular headache for the RBS CFO when he saw his company’s share price plummet 12 per cent on the news that RBS customers were experiencing considerable customer service disruption.

This only serves to underline the fact that technology, when it goes wrong, isn’t just a problem for the CIO and the IT department. The fallout can be huge. IT disasters can turn into PR crises of monumental proportions. Disgruntled customers take to Twitter and Facebook en masse, causing indelible damage to reputations. This can spook investors, who lose confidence in the operational running of a business, worry about the impact on the company’s market position and start to offload their shares – a CFO’s worst nightmare.

But the risks for CFOs where technology is concerned don’t begin and end with IT disasters. An enterprise software upgrade might not instill terror in a finance director the way it does in a CIO, but for finance heads and their departments a major upgrade – particularly of an enterprise application – can be a nightmare.

CFOs who are coming up to an Oracle EBS or SAP upgrade might well be feeling the heat. Such a major upgrade will affect the finance function more than any other business division, so it’s vital that CFOs going through, or about to go through, one are aware of the challenges.

The main risk of upgrades is the possibility of a defect – or multiple defects – not being detected before applications go live. Once on the loose, these defects can cause all sorts of problems. Sometimes the impact of the errors is immediate and apparent – those are the ‘good bugs’, because immediate action enables damage limitation. But especially for finance teams, the impact might be latent, perhaps exposed later in the invoicing module, an interface or subsequent reporting, causing glitches in the P&L. These are the ‘bad bugs’. Like rain accumulating in the attic from a missing roof tile, the problem builds up quietly and becomes expensive to fix once the ceiling collapses and the furniture is ruined. Because finance and accounting is a business-critical function, anything that causes problems for the F&A department poses significant risks for the business as a whole.
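
One way finance teams guard against those latent ‘bad bugs’ is routine reconciliation after go-live. The sketch below is illustrative only, with made-up invoice references and amounts, comparing what was invoiced with what actually reached the ledger.

```python
# Illustrative only: a post-upgrade reconciliation a finance team might run,
# comparing invoice totals with what was posted to the ledger.

invoices        = {"INV-1001": 1200.00, "INV-1002": 560.50, "INV-1003": 89.99}
ledger_postings = {"INV-1001": 1200.00, "INV-1002": 650.50, "INV-1003": 89.99}

def reconcile(invoiced, posted, tolerance=0.005):
    """Return invoices whose posted amount differs from the invoiced amount."""
    breaks = []
    for ref, amount in invoiced.items():
        booked = posted.get(ref, 0.0)
        if abs(booked - amount) > tolerance:
            breaks.append((ref, amount, booked))
    return breaks

for ref, amount, booked in reconcile(invoices, ledger_postings):
    print(f"{ref}: invoiced {amount}, posted {booked}")   # catches the 'bad bug' early
```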

And one of the main bugbears for CFOs – and an issue that can be highly detrimental to the productivity of their division – is the amount of time their team members need to spend testing to ensure the system is fit for purpose. For every 100 members of staff involved in validating a system, a business can expect to spend 5,000 man-days on testing alone – roughly 50 working days, or around ten weeks, per person. And they still need to do the day job.

For CFOs, technology can be a minefield. Anything that goes wrong on the technology front can cause operational problems and can reflect badly on the company to internal and external audiences, like investors. As more business owners like CFOs, rather than the CIO and IT department, become responsible for technology initiatives, and as upgrades and patches become increasingly prevalent, becoming more savvy about the risks will be a smart move.



Are HP QTP & Quality Center the emperor’s new clothes?

17 December 2013

It may be a tale from our childhood, but I can think of no better analogy for the current state of the test management and test automation tools market: The Emperor’s New Clothes. Now, for those of you who cannot recall the story, let’s have a quick recap.

Look at what you need and you compare it with what you've got

The Emperor’s New Clothes

The Emperor was by definition a powerful chap, and one who wanted acclamation and praise from his court. So when a couple of con-men pitched up at court promising His Highness the ultimate in designer fashion, they found an eager audience in the main man. Neither con-man knew much about tailoring, so instead they convinced the Emperor that a non-existent figment of their imagination, held in their arms, was in fact the finest suit ever made, unequalled in all the kingdom. The Emperor fell for their pitch hook, line and sinker. So convinced was he of its beauty that he paraded himself before his court. Sadly the court, so used to saying only the things he wanted to hear, was cowed, and no one had the gumption to speak up.

Things did change when the Emperor decided to parade himself through the city, but by then the con-men were long gone.

So what’s this got to do with testing?

Let’s consider what we want from our tools. What does the ultimate ready-to-wear, waterproof, uncrushable and debonair tool-set look like?

1. Every project starts with a plan, and that plan will be the backbone of the project. But every project is different, and a good tool must adapt to every approach utilized. Waterfall hasn’t gone away, and for some companies it never will. The converse, however, is not true: pretty much every company we know has embraced agile methodologies to a greater or lesser extent. And the agile world is fluid rather than static. Teams look to refine their agile approach based on their experiences and evolving industry best practice. So your application quality management platform needs to support multiple concurrent methodologies, with the ability to consolidate common data. Now ask yourself if your current tool can do that. If you are seeing a proliferation of multiple tools, each with the same objective, then you already know the answer.

2. Much of the testing will be manual. Much of the manual testing will be done by power users from the line of business who can ill afford the time you demand. Shouldn’t a tool set make manual testing fast, to minimize the impact on everyone involved and to capture the business knowledge to lessen the burden in future projects?

3. And when it comes to test automation, be brave, take a deep breath and very quietly repeat “faster, better, cheaper”. Go on, try it again. Now be really brave and ask yourself whether the automation tools you use are delivering on that mantra. Slow, costly, fragile and ill-suited to agile developments are phrases that may come to mind instead.

So there you go. If you look at what you need and you compare it with what you’ve got I think you’ll find you’re as naked as the day you were born.

You can read more about an alternative to HP QTP or QC here.


