Why the 2016 Test Automation Magic Quadrant Gartner Report is Wrong by Joe Colantonio.
Perhaps not in the way Joe thinks. This is my response to his interesting piece.
Joe, I think you raise some interesting points about the Gartner report and its implied conclusions.
You point out that Gartner are “wrong”, or at least slow, but I think their bigger error is separating automation from the analysis of testing suites. This is the first MQ that focusses purely on functional test automation, whereas prior to this Gartner took a more holistic view, reviewing all the software elements that can support a complete approach to quality. However, they did expand the specific functional requirements to include packaged applications, browsers and mobile. We should all be encouraging organisations to look at the end goal: successfully implementing software that meets the needs of the business as quickly and as cost-effectively as possible.
That wider view includes ensuring the needs are understood, the process is well managed, appropriate testing is built in, and the whole team’s focus is on delivering quality. That means everyone involved. Selenium may be a perfectly suitable solution for those with coding skills and a bent in that direction, but it only addresses the needs of a small part of the team. It actively excludes the majority of people involved in testing, people who are immediately resigned to manual testing alone if the only automation solution available to them is Selenium (or any other code-based approach). Efficiency is lost, costs rise, quality falls.
A further error in this more focused analysis is the horizontal-axis position of the big players like IBM and HP. Their size cannot be disputed, but their innovation? Some readers might wonder what product developments they have brought to automation that justify their rightward position.
It is odd that Selenium is omitted from the Gartner report, but I disagree with your proposition that it is the future and the solution for now. I disagree for the reasons above, and because, as you point out in your report, whilst you don’t like “code-free” solutions, they do seem to work very well according to the people who use them. If you think about it, writing code to test code does not sound like a very futuristic approach.
I think a perspective that you and Gartner probably share is one based on software development. But that drives only a small percentage of the testing carried out now, and will drive even less in the future. Thus you are both missing the big picture of software quality and what it means to the business, and the user. Take SAP, HANA, Oracle EBS or Salesforce, for example; substitute any meaningful off-the-shelf solution or cloud platform you like. Who does most of the testing? Certainly, the vendors do a considerable amount, but the man-hours they put in pale into insignificance compared to the accumulated effort of customers, business users and external QA teams trying to ensure their businesses will be improved by an upgrade or a change. And what of the future of these applications? Growing, of course. So users and people who understand the business, who do not have, nor want to have, Selenium coding skills, will become more and more involved in software testing, documenting change and enabling training to meet that fundamental goal of successfully implementing improved software.
Thankfully, there are code-free solutions that take that bigger view, addressing all the various needs along the way: process, manual testing, automated testing, test data management and documentation.