The automation paradox of graphical interface tests
Most teams test a feature before releasing it. The usual reason for automating graphical interface tests is a regression problem: a single change to one feature can break something else in the software, often in an unpredictable place. That unpredictability means teams must retest everything before release, which is slow and expensive. The hope behind GUI test automation is to find those bugs faster, allowing quicker releases. This is what the quality legend W. Edwards Deming called "mass inspection". His goal for the automotive industry of the 1960s and '70s was to change the work process so that mass inspection became unnecessary. While largely unknown to US companies at the time, Deming's approach is part of what made the Japanese automotive revolution possible.
Software teams pursuing automation because of a quality problem are really seeking mass inspection, and lots of it, all the time. This leads to a strange process where the team first injects defects, then uses tests to find and remove them as quickly as possible, especially when someone else does that work. Belief in a magical, mythical tool that will find the problems only adds to the problem.
Unless the defect injection rate decreases, the best the test tools can do is find most of the same problems earlier. There will be rework, and another attempt. Companies that pursue GUI test automation without examining the rigor of their software engineering practices will see mixed results from their efforts. Combined with the other classic mistakes below, this problem can paralyze a test automation project.
Mistake: Eating the whole elephant
When Deming spoke of mass inspection, he meant every part of an automobile. With software, there is an effectively infinite number of combinations of inputs, states, and timing. Add the interactions between components, and the range of open possibilities is staggering. Dr. Cem Kaner, a leader in software testing and professor of software engineering at Florida Tech, calls this the impossibility of complete testing. In his Black Box Software Testing course, Dr. Kaner suggests that a major challenge of testing is selecting the few most powerful tests to run.
Teams pursuing automation should have a concept of coverage, along with rules for understanding what the automated tests will cover. Another problem is trying to automate all the tests at once, from the beginning, as a project. Not only are these projects expensive, but as the software changes underneath the test writers, the new tests add inertia that can slow delivery down instead of speeding it up.
Regarding coverage, three common approaches are to test every feature exhaustively, to create "epics" that are complete user journeys, or to create small, easy-to-debug snippets that test basic functionality. In our experience, the first two approaches are prone to failure. The first, testing every scenario, simply takes too much time and too many resources. As the user interface changes, the tests will need maintenance, creating additional work.
The second approach, creating "epics", explores the user's journey. These are usually complete end-to-end scenarios: logging in, searching, adding to the cart, and checking out, all in one test. These tests do many things, just as real users do, and can exercise complex behavior... and they will be fragile. Debugging and tracing a problem will be harder, since more setup may be needed to reproduce the defects that need to be resolved. And as more time passes, new failures appear, requiring debugging, fixing, and re-running tests in an endless cycle.
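The contrast between an "epic" and a small focused test can be sketched in code. The `Storefront` class below is a hypothetical stand-in for a real application driven through a GUI tool; the point is only the test structure, not the driver API.

```python
class Storefront:
    """Hypothetical application under test (a stand-in for a real GUI-driven app)."""

    def __init__(self):
        self.cart = []
        self.logged_in = False

    def log_in(self, user, password):
        # Hard-coded credentials, purely for illustration.
        self.logged_in = (user == "demo" and password == "secret")
        return self.logged_in

    def search(self, term):
        catalog = {"book": 12.50, "pen": 1.25}
        return {name: price for name, price in catalog.items() if term in name}

    def add_to_cart(self, item, price):
        self.cart.append((item, price))

    def checkout(self):
        if not self.logged_in or not self.cart:
            raise RuntimeError("cannot check out")
        return sum(price for _, price in self.cart)


# Epic style: one long journey. If this test fails, the bug could be in
# login, search, the cart, or checkout; everything must be re-run to find out.
def test_epic_purchase_journey():
    app = Storefront()
    assert app.log_in("demo", "secret")
    results = app.search("book")
    app.add_to_cart("book", results["book"])
    assert app.checkout() == 12.50


# Focused style: each test sets up only what it needs, so a failure
# points directly at the broken behavior.
def test_search_finds_book():
    assert "book" in Storefront().search("book")


def test_checkout_requires_login():
    app = Storefront()
    app.add_to_cart("pen", 1.25)
    try:
        app.checkout()
        assert False, "expected checkout to fail when logged out"
    except RuntimeError:
        pass
```

When the epic test breaks, more configuration and more re-runs are needed to locate the defect; when a focused test breaks, its name already says what stopped working.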