Running a Regression Test

Let's Go

You've built your data-driven regression test pack, complete with a logic-driven Play List and Scripts covering a complete end-to-end test. What you need now is to know whether anything whatsoever has changed since your selected baseline.

Results

While you can sit and watch your test execute, possibly on multiple physical or virtual devices to reflect different configurations, in reality, the test will probably be scheduled to run as part of a code promotion or simply overnight.

Once complete you will have a set of results for the completed execution that will highlight failed Quality Checks, performance data and much more.

[Image: tree before and after change]

But Did Anything Change?

TestDrive uniquely and automatically captures every important attribute of every element on every screen or page. By selecting a baseline for comparison, TestDrive will highlight every difference between the two executions, subject of course to any exceptions, such as dates, that you define.
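To make the comparison step concrete, here is a minimal Python sketch of diffing two captured executions with an exception rule. Everything in it (the data model, names and the exception rule) is an assumption for illustration only, not TestDrive's actual internals:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical model: a capture maps (screen, element, attribute)
# to the value recorded during one execution.
Capture = dict[tuple[str, str, str], str]

@dataclass
class Difference:
    screen: str
    element: str
    attribute: str
    baseline_value: str | None
    current_value: str | None

def compare_to_baseline(
    baseline: Capture,
    current: Capture,
    is_exception: Callable[[str, str, str], bool],
) -> list[Difference]:
    """List every attribute that differs between two executions,
    skipping anything the exception rule matches (dates, run IDs, ...)."""
    diffs = []
    for key in sorted(baseline.keys() | current.keys()):
        screen, element, attribute = key
        if is_exception(screen, element, attribute):
            continue  # e.g. a date field you have defined as an exception
        before, after = baseline.get(key), current.get(key)
        if before != after:
            diffs.append(Difference(screen, element, attribute, before, after))
    return diffs

# The footer shows today's date, so it is declared an exception;
# only the genuine change to the title text is reported.
baseline = {("Login", "title", "text"): "Welcome",
            ("Login", "footer", "text"): "01/01/2024"}
current = {("Login", "title", "text"): "Welcome back",
           ("Login", "footer", "text"): "01/06/2024"}
for d in compare_to_baseline(baseline, current,
                             is_exception=lambda s, e, a: e == "footer"):
    print(d)
```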

The Carousel

The process of handling these differences is as important as the differences themselves. Think of each reported difference as a bag arriving on an airport carousel. It is almost certain that some differences were expected, but are there others: the collateral damage that a regression test is designed to find?

Collecting Your Bags

A great process is to have the Business Analysts, or whoever originated the intended changes, come and 'collect' their bags. When viewing a test result, you can simply select a reported difference and mark it as expected.

TestDrive automatically creates a full audit trail of these decisions, so you can easily see who collected each bag and why.
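As a rough sketch of what 'collecting a bag' could look like under the hood, with hypothetical names rather than TestDrive's real API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    difference_id: int
    collected_by: str
    reason: str
    collected_at: datetime

@dataclass
class Carousel:
    # id -> short description of the reported difference
    open_differences: dict[int, str]
    audit_trail: list[AuditEntry] = field(default_factory=list)

    def mark_expected(self, difference_id: int, analyst: str, reason: str) -> None:
        """An analyst 'collects a bag': the difference is closed and the
        who, why and when are recorded for the audit trail."""
        self.open_differences.pop(difference_id)
        self.audit_trail.append(AuditEntry(
            difference_id, analyst, reason, datetime.now(timezone.utc)))

carousel = Carousel(open_differences={
    1: "Login title changed to 'Welcome back'",
    2: "Checkout button colour changed",
})
carousel.mark_expected(1, analyst="ba.jones", reason="Copy change in ticket PRJ-142")
print(carousel.open_differences)  # bag 2 is still going round
print(carousel.audit_trail)
```

Capturing the who, the why and the when at the moment of triage is what makes the audit trail cheap to produce and easy to trust later.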

A Perfect Ending?

Collecting the bags shouldn't take long. If everything has gone as intended, all the bags will have been collected, no collateral damage will have occurred, and you can get on with your next development cycle or sprint.

The Lonely Bag(s)

But what if the carousel isn't empty? We've all seen unclaimed bags circling at the airport and wondered about their fate.

In a regression test, unclaimed bags are the red flags: differences between the two executions that were not expected. They need to be understood and resolved.
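One natural way to enforce this, continuing the hypothetical sketches above, is to treat any unclaimed difference as a hard failure that blocks the code promotion:

```python
import sys

def gate_on_unclaimed(unclaimed: dict[int, str]) -> None:
    """Fail the run if any bag is still on the carousel."""
    if unclaimed:
        for diff_id, description in unclaimed.items():
            print(f"UNEXPECTED DIFFERENCE #{diff_id}: {description}")
        sys.exit(1)  # red flag: block promotion until each one is resolved
    print("Carousel empty: every difference was expected.")

gate_on_unclaimed({2: "Checkout button colour changed"})
```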

The safety net has done its job: your regression test just proved its worth!

What else?

Want to know how to build a regression test pack, or learn more about test automation in general?