Software Quality Matters Blog
Center of excellence. What a phrase. The NASA Space Center has truly been a center of excellence, building its reputation not only on its successes but on the manner in which it reacted to failure. It learned the lessons of each failure and then strove to ensure not only that such failures were not repeated, but that the lessons were applied to future successful missions.
Taking such an attitude to the humbler discipline of application quality creates an enticing prospect. Let’s gather our finest minds to define best practice, create new working methods and implement a set of tools that delivers that best practice in the most efficient manner, yielding the greatest and fastest ROI on the resources employed.
It is therefore hard to argue with the creation of Testing Centers of Excellence. Or is it? Let’s consider three reasons driving the creation of these centers.
Economies of scale. The global economy may be recovering, but few businesses believe the land of milk and honey is upon us once more. So costs, and particularly head count, remain tightly controlled, yet the business is equally hungry for new IT systems to hone its competitive edge and underpin its recovery. Doing more with the same finite resources is a challenge, and by centralizing testing skills a more efficient allocation of those resources may be achieved.
Focus. Newer development methodologies are replacing or co-existing with traditional waterfall developments. Agile, Kanban and the like have radically altered the relationship between development, quality assurance and user acceptance testing, so perhaps it is wise to absorb the quality challenge into a dedicated group who can figure out the best way forward.
Skills. And besides, it is complex stuff, with the tools themselves often ill-suited to the challenge. What chance does a regular QA team have of successfully executing an agile development using legacy tools from the likes of HP, IBM or Borland? Perhaps a specialized team in a ‘center of excellence’ can make these tools work even where they have historically failed.
The reality is different.
Good waterfall, agile and any other developments are based on excellent communication between everyone involved. Agile teaches us that ideally developers, testers and end users should all be permanently in the same room to ensure perfect alignment between need and delivered application. Quite how this can be achieved when one group is mentally, physically or organizationally partitioned away is anyone’s guess.
If the tool is hard to use or ill-suited to the task at hand, it is simply the wrong tool. Man up. Tools developed for the challenges of the late 20th century are unlikely to solve the problems we face twenty years later. If you’re spending a chunk of your time developing the tool rather than focusing on the quality task at hand, it is by definition unfit for purpose.
So are TCOEs a bad thing? It all depends on why they have been created.
If the goal is to get the quality leaders together and to continually evolve best practice then there is real benefit.
If the goal is to select the most suitable testing technology and map it to the agreed best practice then this will form the basis of enhanced communication and productivity across all projects.
If the goal is to disseminate knowledge by placing a quality leader in each development project to train their cohorts and to communicate the lessons learned, then this will enhance quality while keeping developments aligned to the business need.
However, if the reality is that TCOEs are the result of throwing labor at QA in compensation for outdated testing technology, the result will be a growing gap in the ability of the business to meet the fast evolving demands of its customers.
A gap that nimbler competitors will fill.
You can read more about HP replacement here.
Testing Center of Excellence – What is it for?
Since HP acquired Mercury Interactive in 2006 there has been considerable disquiet in the market as support costs for Quality Center and QTP have steadily risen. Many users have been able to negotiate discounts on their annual fees or to gain more flexibility in how their licenses can be deployed. HP’s licensing model is known for its complexity, and some users have fallen foul of its restrictions and found themselves with an unbudgeted additional cost at the end of the year.
“I’m a growing pain, right?”
But this focus on direct costs entirely misses the point. Maintenance costs are an issue but it is not HP’s annual fee that represents the bulk of the pain. That pain is felt in the effort it takes to build automation in QTP and to maintain it as the applications under test change.
Successful automation with QTP takes well-trained, costly staff and is a slow process. Even when the automation suite is complete, it is very fragile as the target application is amended and enhanced. This is where the bulk of the expense lies, and it offers only a very low ROI.
Today’s successful companies demand agility and speed, not fragility and slothfulness. So take a fresh look at your investment in HP, or the equivalent tools from IBM or any of the legacy vendors. The original investment will have been written off by now, so why continue to throw money, time and resources at a tool which you know is such a poor fit for your business needs?
About Original Software: Original Software enables organizations to meet their objectives more rapidly by delivering enterprise application functionality frequently and efficiently. Knowledge workers and IT professionals use our technology to streamline user acceptance testing, conference room pilots, manual testing and automated testing, project management, and regulatory audit of applications. The software provides the fastest way to capture and share business processes, validate application functionality, and manage projects in real-time. Customers report massive increases in productivity, enabling them to keep up with changing business needs while reducing cost. More than 400 organizations, of all sizes and industries and operating in over 30 countries count on Original Software every day.
HP QC & QTP Annual Fees – good money after bad?
By George Wilson
Toyota, the Japanese car giant, suffered a massive blow this week when it was forced to recall almost two million of its top-selling Prius hybrid vehicles. A glitch in the cars’ software could set off the warning lights, causing the car to enter failsafe mode and stop suddenly. The biggest hit to Toyota will be in Japan and the USA.
Failsafe mode means STOP NOW!
Of course, the real risk in this scenario is to drivers’ safety. But the corporate challenge for Toyota is not insignificant. The reputational damage to the brand is considerable – many environmentally minded car buyers might think twice before purchasing a Prius. Nor is this the first such incident for the Prius – only weeks ago, US Prius models were recalled for faulty seat heaters. And in 2009, millions of Toyota models worldwide were recalled over acceleration issues, which hammered Toyota’s share price.
So how was something as fundamental as a software issue to blame this time? And how were millions of Prius models released with this software glitch?
Of course, it’s only conjecture at this stage, but it may be that the requirements for the software were not properly defined, or that the integration between the different modules was not properly specified or tested.
People in the know might blame the testing – how was this software released with such a fundamental flaw? But testing is always based around testing the requirements. And if they got those wrong, or missed something in the design, then testing will be examining the wrong parameters.
Following Toyota’s acceleration issue five years ago, a number of court cases sprang up that found its electronic throttle system was flawed. The company had performed a “stack analysis” but had, in the words of the ruling, completely botched it, meaning software defects were the cause of a number of accidents.
Obviously, in this case, software defects actually cost lives. And in the automotive world, the risk is ultimately to people’s safety. In reputational terms for the car manufacturers, it also costs them dearly. So the message, again, is one that is clear and simple – technology processes have to be clearly defined, properly executed and tested, tested and tested again.
Toyota – software glitch leads to global product recall
By George Wilson
You are providing input and instructions and internal code is making the car do many of the things you want it to do (ok, maybe not the steering thankfully!). So, I suppose in your analogy you are coding your car to do what you want. You understand the language – Turn the key to start it. Select a gear. Press the accelerator pedal. I suppose you could say that is programming the car with instructions, but I don’t think most people would consider it that way.
When you use MS Word, you input data, press buttons and use keys to achieve what you want. You are providing instructions. Programming? Code runs, but not code you wrote.
When you use Original Software’s TestDrive automation solution, you provide instructions for what you want it to do. You do not need to know any programming language because you are not writing code. Quite a lot of our users are business users and functional testers. They are not writing code; they don’t know how, and they don’t want to.
We can put code and functions into TestDrive; the classic example would be checking that two values taken from the screen, added together, equal another value somewhere else on the screen, in the database or in a spreadsheet. This has to be expressed in a code-like way, such as: IF A + B <> C THEN raise error “Value is wrong”.
But there is no code to get the values A, B, or C, to navigate the AUT to the places where these are captured, to provide the input to drive the application, to get the content and properties of any of the data or controls or to deal with the fact that things may be displayed in a different order.
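To make the distinction concrete, here is a minimal sketch in Python of what that one code-like rule amounts to. This is purely illustrative – TestDrive users express the rule inside the tool rather than writing a script, and the function name here is invented:

```python
def check_sum(a: float, b: float, c: float) -> None:
    """Raise an error when two captured values do not add up to a third.

    Mirrors the rule: IF A + B <> C THEN raise error "Value is wrong".
    """
    if a + b != c:
        raise ValueError("Value is wrong")

# Values as they might be captured from the screen, a database or a spreadsheet
check_sum(120.50, 79.50, 200.00)  # passes: 120.50 + 79.50 == 200.00
```

The point of the surrounding paragraphs stands: the comparison is the only code-like fragment, and everything around it – capturing A, B and C, driving the application, handling reordered output – involves no code at all.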
But, the main point about all of this is productivity. No code to learn means a wider audience and applicability. It means no code to debug or fix. It means no code to maintain when the application changes, which means that automated testing can carry on without waiting for someone to fix scripts. It is just a much more modern and productive approach. It will become the norm.
When you drive your car, do you write code?
By George Wilson
The front page of the Telegraph this week carried a story on DIY online retailer Screwfix.com. Shoppers couldn’t believe their luck when the retailer – selling everything from sheds to pricey power tools – cut all its prices to £34.99. Word of mouth sent people piling onto the site, eager to snap up a bargain. One customer couldn’t believe his luck as he bought a ride-on mower usually priced at £1,600.
Some customers who had arranged to pick up their purchases first thing on Friday were lucky, but others found their purchases had been cancelled and were reimbursed, as Screwfix and its parent company, Kingfisher PLC, which also owns B&Q, realized the mistake.
It takes a bit of guesswork to figure out why this happened, but the likeliest explanation is a data validation error. No doubt there will be an intensive investigation to identify the cause, but these things are not always IT problems.
Website validation can be a real problem for retailers and their e-commerce sites. Changes to a website can cause all manner of problems and can skew the data that is visible on the site. For example, a software upgrade or patch to a system can cause anomalies within a website, and not necessarily in the section that has been changed. One change of code, or even data messed up in a product manager’s spreadsheet, could have repercussions in seemingly unaffected areas of the site. Walmart had a similar issue back in October.
So how realistic is it for retailers to validate every part of their site every time a change happens? IT teams often make a call on how extensive regression testing should be – but resources dictate that it’s impossible for everything to be tested. Once a system is live, the emphasis shifts to the business users who are responsible for the data – but they usually won’t have access to the automated testing solutions their technical colleagues use.
There are strategies that can help e-commerce providers like Screwfix.com. Automated testing and validation solutions aimed at maintaining ‘business as usual’ can run thorough content checking after every update flagging up any detected glitches immediately – this means that when retailers press the button on changes, patches, or upgrades, they can go live with more confidence. And validation isn’t just carried out before the site goes live – it should be an integral and ongoing part of any e-commerce website.
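One such content check can be sketched in Python, purely as an illustration of the idea – the product data, field names and 50 per cent threshold are all invented. After every update, live prices are compared against the last known good catalogue:

```python
def find_price_anomalies(live, reference, tolerance=0.5):
    """Return (sku, expected, actual) for prices that moved more than
    `tolerance` (as a fraction) from the last known good value."""
    anomalies = []
    for sku, price in live.items():
        expected = reference.get(sku)
        if expected and abs(price - expected) / expected > tolerance:
            anomalies.append((sku, expected, price))
    return anomalies

# A £1,600 mower suddenly listed at £34.99 would be flagged immediately
reference = {"ride-on-mower": 1600.00, "cordless-drill": 89.99}
live = {"ride-on-mower": 34.99, "cordless-drill": 89.99}
print(find_price_anomalies(live, reference))
# → [('ride-on-mower', 1600.0, 34.99)]
```

Running a check of this shape automatically after each change is what lets a retailer press the button on an update with confidence rather than hope.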
When problems like this occur, the fallout isn’t just high-profile bad publicity and disgruntled customers. Investors often get spooked by IT failures and bad business practice, and it can have a negative impact on a company’s share price. Good governance and sound quality assurance measures bode well for online retailers.
Data Glitches – how Screwfix.com got it wrong
As news broke before Christmas of yet another banking systems failure – one that prevented customers from accessing their money and paying for goods – so did the argument that the main reason behind this proliferation of banking tech disasters is years of severe underinvestment in IT.
“Sorry sir, the computer says no!”
RBS boss Ross McEwan came out and said that the problems they have been experiencing are down to underinvestment in underlying technology, which the bank is now trying to turn around. They know that these issues are seriously inconveniencing their customers, who will go elsewhere if they don’t get a better service. But RBS is no different to any other retail bank. All have suffered IT issues that have caused disruption to services. And it happens in the banking industry more often than most.
Apparently, gross underinvestment in IT infrastructure is endemic in banking. Ovum research from 2012 found that 75 per cent of European banks are using outdated core systems. Respondents complained that a lack of skills and resources makes core systems really difficult to replace. This is partly due to what has happened in the banking industry over the last thirty years. Banking tended to be very regional back in the 70s, and in the 80s and 90s the industry became very acquisitive, with a handful of big high street players emerging. As a result, rather than having a single streamlined infrastructure, banks are generally made up of multiple interdependent legacy systems on which their operations all hinge, making the environments massively complex and difficult to maintain.
This means that more things are likely to go wrong. For example, if a bank implements a software upgrade, the software has to be updated across multiple legacy systems, many of which are interdependent. This increases the likelihood of a lapse in quality assurance and therefore the risk of defects.
The pace of technology adoption has also added massive pressure to the CIOs of banks. Customer hunger for receiving banking services on new devices is driving the need to implement mobile banking apps, digital wallets, new payment systems etc. And this focus on new technologies means there is less time and resource to focus on core systems.
But the fact remains that banks can’t go on operating in this way. They need to have the right technology in place, the right quality assurance strategy to protect themselves and their customers from tech disasters. Failure to do so will see them lose market share to more efficient operators.
Banks aren’t spending enough on IT
By George Wilson
Before Christmas RBS suffered the latest in a long line of technical defects to hit the banking industry. But this one was a particular headache for the RBS CFO when he saw his company’s share price plummet 12 per cent on the news that RBS customers were experiencing considerable customer service disruption.
This only serves to underline the fact that technology, when it goes wrong, isn’t just a problem for the CIO and the IT department. The fallout can be huge. IT disasters can turn into PR crises of monumental proportions. Disgruntled customers take to Twitter and Facebook en masse, causing indelible damage to reputations. This can spook investors, who can lose confidence in the operational running of a business, worry about the impact on the company’s market position and start to offload their shares – a CFO’s worst nightmare.
But the risks for CFOs where technology is concerned don’t begin and end with IT disasters. An enterprise software upgrade might not instill terror in a finance director the way it does in a CIO, but for finance heads and their departments an upgrade, particularly of an enterprise application, can be a nightmare.
CFOs who are coming up to an Oracle EBS or SAP upgrade might well be feeling the heat. This major upgrade will affect the finance function more than any other business division. So it’s vital that CFOs who are going through or about to go through this are aware of the challenges.
The main risk of upgrades is the possibility of a defect – or multiple defects – not being detected before applications go live. Once on the loose, these defects can cause all sorts of problems. Sometimes the impact of the errors is immediate and apparent – those are the ‘good bugs’. Immediate action enables damage limitation. But especially for finance teams, the impact might be latent, perhaps exposed in the invoicing module, an interface or later reporting. They could cause glitches in the P&L. These are the ‘bad bugs’. The problem builds up like rain accumulating in the attic from a missing roof tile, and it gets expensive to fix as the ceiling later collapses and the furniture is ruined. As a business-critical function, anything that causes problems for the F&A department poses significant risks for the business as a whole.
And one of the main bugbears for CFOs – an issue that can be highly detrimental to the productivity of their division – is the amount of time their team members need to spend testing to ensure the system is fit for purpose. For every 100 members of staff involved in validating a system, a business can expect to spend 5,000 man-days on testing alone. And they still need to do the day job.
For CFOs, technology can be a minefield. Anything that goes wrong on the technology front can cause operational problems and can reflect badly on the company to internal and external audiences, like investors. As more business owners, like CFOs, become responsible for technology initiatives, rather than the CIO and IT department, and as upgrades and patches become increasingly prevalent, becoming more savvy about the risks will be a smart move.
C-Suite Blog Series: the CFO and the technology hot potato
It may be a tale from our childhood but I can think of no better analogy for the current state of test management and test automation tools market: The Emperor’s New Clothes. Now for those of you who cannot recall the story let’s have a quick recap.
The Emperor’s New Clothes
The Emperor was by definition a powerful chap, and one who wanted acclamation and praise from his court. So when a couple of con-men pitched up at court promising His Highness the ultimate in designer fashion, they found an eager audience in the main man. Neither con-man knew much about tailoring, so instead they convinced the Emperor that a non-existent figment of their imagination, which they held in their arms, was in fact the finest suit ever made, unequalled in all the kingdom. The Emperor fell for their pitch hook, line and sinker. So convinced was he of its beauty that he paraded himself before his court. Sadly the court, so used to saying only the things he wanted to hear, was cowed, and no one had the gumption to speak up.
Things did change when the Emperor decided to parade himself through the city, but by then the con-men were long gone.
So what’s this got to do with testing?
Let’s consider what we want from our tools. What does the ultimate ready to wear, waterproof, uncrushable and debonair tool-set look like?
1. Every project starts with a plan and that plan will be the backbone of the project. But every project is different and a good tool can adapt to every approach utilized. Waterfall hasn’t gone away and for some companies it never will. However the converse is not true. Pretty much every company we know has embraced agile methodologies to a greater or lesser extent. And the agile world is fluid rather than static. Teams look to refine their agile approach based on their experiences and evolving industry best practice. So your application quality management platform needs to support multiple concurrent methodologies with the ability to consolidate common data. Now ask yourself if your current tool can do that. If you are starting to see the proliferation of multiple tools each with the same objective then you already know the answer.
2. Much of the testing will be manual. Much of the manual testing will be done by power users from the line of business who can ill afford the time you demand. Shouldn’t a tool set make manual testing fast, to minimize the impact on everyone involved and to capture the business knowledge to lessen the burden in future projects?
3. And when it comes to test automation, be brave, take a deep breath and very quietly repeat “faster, better, cheaper”. Go on, try it again. Now be really brave and ask yourself whether the automation tools you use are delivering on that mantra. Slow, costly, fragile and ill-suited to agile developments are phrases that may come to mind instead.
So there you go. If you look at what you need and you compare it with what you’ve got I think you’ll find you’re as naked as the day you were born.
You can read more about an alternative to HP QTP or QC here.
Are HP QTP & Quality Center the emperor’s new clothes?
‘Mission critical’ is something of a buzzword, but if ever it could be applied to a business application it would be to ERP. A flaky ERP system is the corporate equivalent of a bad heart. Operational efficiency, workforce productivity – all these elements hinge on ERP. So it’s no wonder that IT professionals and business owners recoil with horror at the thought of an upgrade. Projects typically cost many millions and are very risky and disruptive to undertake.
This is a hot issue at the moment for organizations looking down the barrel of an Oracle E-Business Suite upgrade. The life-support plug will soon be pulled on EBS 11i, and corporations are weighing the cost of upgrading against the risk of sticking with their current version. It is no wonder businesses are reluctant. To start with, the EBS upgrade involves a new set of core financial modules, meaning significant disruption for finance and accountancy departments.
But whether it’s an Oracle EBS upgrade, an SAP upgrade, or any other application upgrade, there are hidden costs that organizations face.
The first is the cost to the business. ERP upgrades were once the domain of the IT department – not any more. The business has a massive contribution in terms of verifying and testing all of the changes that are made on their business processes. What this means is that business users can spend an unbelievable number of man days testing to ensure the system is fit for purpose and learning the new ropes. Look at this for a statistic – for every 100 members of staff involved in validating a system, a business can expect to spend 5,000 man-days on testing alone. Yes that’s right – 5000 days! The time-drag on the business – and the productivity of the department that’s involved in the upgrade – is huge.
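The arithmetic behind that statistic is worth spelling out. A rough model (the 220 working days per year is an assumption for illustration, not a figure from the article):

```python
staff = 100
total_man_days = 5000
days_each = total_man_days / staff          # 50 working days per person
working_days_per_year = 220                 # assumed, for illustration
share_of_year = days_each / working_days_per_year
print(days_each, round(share_of_year, 2))   # 50.0 0.23
```

On those assumptions, each of the 100 staff loses roughly a quarter of a working year to upgrade validation – on top of the day job.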
The second cost relates to human capital. This validation testing work is very laborious, with business users having to repeat these tests time and time again. Boring, right? It’s no wonder that upgrades have been shown to have a negative impact on employee satisfaction and can increase churn rates.
The third cost relates to the margin of error and the risk of defects going live when manual testing could let them slip through the net. Defects can cause all types of problems, for example, causing serious issues for the finance department, where invoices don’t get logged and gremlins affect the P&L. For the sales department, data may get corrupted or sales records disappear.
The hidden costs of upgrades can never be completely eradicated, but there are strategies organizations can deploy. Providing application users with simple technology to streamline testing should be part of that strategy, as it will reduce the time demand on the business. It can also improve defect detection, meaning the organization can go live with confidence.
Upgrades aren’t going away. If anything, they’re becoming more frequent. On the bright side, organizations will become much better at dealing with them. So having the right procedures in place, the right tools and the right attitude should help corporates stay ahead in the upgrade and patch lifecycle.
C-Suite Blog Series: The Hidden Costs of Upgrades – an Insight for CEOs
Read more about alternatives to HP
Like the brick cell phone, HP testing tools have had their day
Ensuring that your business applications are fit for purpose might not be sexy but it is fundamental to the success of your organization. The bedrock upon which this quality is built is the testing performed by IT professionals and business users throughout the development process.
But testing can be labor-intensive and for business users it is a painful and unwelcome distraction from their contribution to the line of business. It was therefore natural that many companies looked to implement technology to ease the burden of testing, seeking to speed the testing cycles, increase quality and lower their testing costs.
An entire testing tools industry developed, offering simple waterfall test management and coded automation of application UIs. The coding language may differ, but in essence the likes of IBM, Borland and the most successful vendor, HP (through its acquisition of Mercury Interactive, the maker of Quality Center and QTP), are all offering the same value proposition.
Struggling to keep up
These tools have now proven usable only by specialist automation engineers. They extend development timescales and are limited in capability, which contrasts sharply with the original goals of speed and quality. As for cost, the requisite skills alone make a hole in any budget, while continual hikes to maintenance charges and often-chargeable upgrades have brought the very concept of ROI into disrepute.
This failure to achieve speed, quality and cost savings is unfortunate, but there is a more fundamental problem. To survive, businesses now need to be agile (whether or not they are agile in their developments), and these legacy tool sets cannot keep up. Some of you will already have accepted that truth and reverted to a manual approach to testing as the burden of creating or maintaining coded automation became untenable. Others will have started to implement additional technology to try to address the yawning holes left by legacy tools.
Perhaps you are looking for a quality solution to augment your existing investment and address the high value, fast moving areas of your business. Or maybe you are ready to replace your existing tools with an enterprise-wide solution that supports multiple development methodologies and offers rapid ROI. Either way, Original Software stands alone as an alternative that has earned its battle honours in the environments that matter.
The problem with HP