
The use of artificial intelligence (AI) in test automation is the latest trend in quality assurance. Testing in general, and test automation in particular, seem to have caught the “everything's better with AI” bug.
Since AI, machine learning, and neural networks are the hottest things in the industry right now, it was perhaps inevitable that AI would find its way into test automation somehow.
I recall a time several years ago, back when test automation was still new, when one of my quality assurance teams was working on a project for a large customer. It was a mobile application with millions of users and monthly release cycles. The QA team was usually relaxed during the first two weeks of each cycle and frantically busy from then until release, one of the peculiar side effects of agile software development that you don’t read about in the headlines!
Finally, one of the QA leads, fed up with twiddling his thumbs during the lean two weeks, started working on a test automation framework. He wrote test scripts in Ruby with Selenium and Appium, set up a rudimentary pipeline in Jenkins, and added some pretty reports with red/yellow/green indicators for test success or failure.
He ran this successfully for a couple of release cycles. That’s when we presented it to our Quality Chief, who pitched it to the customer and got them excited enough to pay for automated testing as a sub-project. We were on cloud nine, on the bleeding edge of things!
“Bleeding edge” took on a new meaning, however, a few months later, when we discovered the fundamental truth about test automation that any QA automation engineer worth their salt still wrestles with today: an automated test suite is never finished. It requires continuous monitoring and maintenance.
Test automation has not fundamentally changed since then. The possibility of a dramatic improvement in approach and implementation has appeared only with the recent advances in the capabilities of AI.
Will AI actually be able to help automatically generate and update test cases? Find bugs? Improve code coverage?
The answer to that question is far from clear right now because we’re at the peak of the hype cycle for AI. A specific sub-field, deep learning, has caused a lot of this excitement.
Let's take a closer look at some applications of AI in test automation, including unit testing, user interface (UI) testing, API testing, and maintaining an automated test suite.
Unit testing, often used as part of continuous testing and continuous integration/continuous delivery (CI/CD) in DevOps, can be a real pain in the… asteroid belt.
Typically, developers spend a significant amount of time authoring and maintaining unit tests, which is nowhere near as much fun as writing application code. This is where AI-based products for automated unit test creation can be useful, especially for organizations that introduce unit tests late in the product life cycle.
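To make that concrete, here is a minimal sketch of the kind of output such a tool might produce: a hypothetical `apply_discount` function paired with generated tests covering the happy path, boundary values, and error handling. The function and the test cases are illustrative, not the output of any particular product.

```python
# A hypothetical function under test and the kind of unit tests an
# AI-based generator might emit for it. Names and cases are illustrative.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Generators typically cover the happy path, boundaries, and error cases.
@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 10, 90.0),    # happy path
    (100.0, 0, 100.0),    # lower boundary
    (100.0, 100, 0.0),    # upper boundary
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```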
User interface testing is an area where AI is beginning to shine. In AI-based UI testing, test automation tools parse the DOM and related code to ascertain object properties. They also use image recognition techniques to navigate the application and visually verify UI objects and elements in order to create UI tests.
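As a rough illustration of the DOM-parsing half of this, the sketch below uses Selenium to record several properties per element rather than a single brittle locator. The attribute list and the target URL are assumptions, not any specific tool's model.

```python
# A minimal sketch of DOM parsing that fingerprints UI elements by several
# attributes at once, the way AI-based UI tools build robust locators.
# Assumes a local Chrome browser and driver; the URL is a placeholder.
from selenium import webdriver
from selenium.webdriver.common.by import By

ATTRS = ("id", "name", "class", "type", "aria-label")

def fingerprint_elements(driver):
    """Record multiple properties per element instead of one brittle locator."""
    fingerprints = []
    for el in driver.find_elements(By.CSS_SELECTOR, "input, button, a"):
        fingerprints.append({
            "tag": el.tag_name,
            "text": el.text,
            **{a: el.get_attribute(a) for a in ATTRS},
        })
    return fingerprints

driver = webdriver.Chrome()
driver.get("https://example.com")   # placeholder URL
print(fingerprint_elements(driver))
driver.quit()
```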
Additionally, AI test systems use exploratory testing to find bugs or variations in the application UI, generating screenshots for later verification by a QA engineer. Similarly, visual aspects of the system under test (SUT), such as layout, size, and color, can be verified.
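Here is a minimal sketch of that visual-verification step, assuming Pillow is available and using placeholder file names: it diffs a baseline screenshot against a fresh one and saves any changed region for a QA engineer to review.

```python
# Diff a baseline screenshot against a fresh one and flag any changed
# region for human review. File names are illustrative.
from PIL import Image, ImageChops

baseline = Image.open("baseline.png").convert("RGB")
current = Image.open("current.png").convert("RGB")

diff = ImageChops.difference(baseline, current)
bbox = diff.getbbox()  # bounding box of all changed pixels, or None

if bbox is None:
    print("UI unchanged: screens match pixel-for-pixel.")
else:
    print(f"Visual change detected in region {bbox}; saving for review.")
    current.crop(bbox).save("changed_region.png")
```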
Even without AI, automating API testing is a non-trivial task since it involves understanding the API and then setting up tests for a multitude of scenarios to ensure depth and breadth of coverage.
Current API test automation tools, like Tricentis and SoapUI, record API activity and traffic to analyze and create tests. However, modifying and updating tests requires testers to delve into the minutiae of REST calls and parameters, and then update the API test suite.
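For a sense of what that maintenance burden looks like, here is a sketch of a typical hand-written API test against a hypothetical orders endpoint (the URL and fields are placeholders). Every call, parameter, and assertion is spelled out, and any API change means editing them by hand.

```python
# A typical hand-maintained API test: every REST call, parameter, and
# assertion is explicit. The endpoint and fields are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # placeholder

def test_get_order_by_id():
    response = requests.get(f"{BASE_URL}/orders/42", timeout=10)
    assert response.status_code == 200

    order = response.json()
    # Any change to field names or structure breaks these assertions,
    # which is exactly the maintenance burden described above.
    assert order["id"] == 42
    assert order["status"] in {"open", "shipped", "delivered"}

def test_get_missing_order_returns_404():
    response = requests.get(f"{BASE_URL}/orders/999999", timeout=10)
    assert response.status_code == 404
```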
AI-based API test automation tools attempt to mitigate this problem by examining traffic and identifying patterns and connections between API calls, effectively grouping them by scenario. These tools also use existing tests to learn the relationships between APIs, apply that knowledge to understand changes in the APIs, and then update existing tests or create new scenario-based tests.
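A toy version of the scenario-grouping idea, under an assumed log format and time threshold: split each session's recorded calls into scenarios wherever a long gap separates consecutive calls.

```python
# Group recorded API traffic into scenarios by session and time proximity.
# The log format and gap threshold are assumptions, not any tool's model.
from collections import defaultdict

# Recorded traffic: (timestamp_seconds, session_id, method, path)
traffic = [
    (0.0, "s1", "POST", "/login"),
    (0.4, "s1", "GET", "/cart"),
    (0.9, "s1", "POST", "/checkout"),
    (5.0, "s2", "POST", "/login"),
    (5.3, "s2", "GET", "/orders"),
]

def group_scenarios(traffic, gap=2.0):
    """Split each session's calls into scenarios at pauses longer than `gap` seconds."""
    by_session = defaultdict(list)
    for ts, session, method, path in sorted(traffic):
        by_session[session].append((ts, method, path))

    scenarios = []
    for session, calls in by_session.items():
        current = [calls[0]]
        for prev, call in zip(calls, calls[1:]):
            if call[0] - prev[0] > gap:
                scenarios.append((session, current))
                current = []
            current.append(call)
        scenarios.append((session, current))
    return scenarios

for session, calls in group_scenarios(traffic):
    print(session, "->", [f"{m} {p}" for _, m, p in calls])
```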
When it comes to maintaining the automated test suite, AI-based tools can evaluate changes to the code and repair existing tests that no longer align with those changes, especially when the code changes are not too complex. Updates to UI elements, field names, and the like need no longer break the test suite.
Some AI tools monitor running tests and, when a test fails, try out modified variants that choose UI elements based on the best fit. They can also verify test coverage and fill the gaps where needed.
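The "best fit" idea can be sketched in a few lines: score the current page's elements against a recorded fingerprint and pick the closest match. The similarity scoring below is illustrative; production tools use much richer models.

```python
# Self-healing locator sketch: when a recorded attribute no longer matches,
# score candidate elements against the recording and pick the closest one.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a or "", b or "").ratio()

def best_fit(recorded, candidates):
    """Return the candidate element whose attributes best match the recording."""
    def score(candidate):
        return sum(similarity(recorded.get(k), candidate.get(k))
                   for k in ("tag", "id", "text"))
    return max(candidates, key=score)

recorded = {"tag": "button", "id": "submit-btn", "text": "Submit"}
candidates = [
    {"tag": "button", "id": "submit-button", "text": "Submit"},  # renamed id
    {"tag": "a", "id": "cancel", "text": "Cancel"},
]
print(best_fit(recorded, candidates))  # picks the renamed submit button
```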
Test data generation is another promising area for AI models. Machine learning models trained on existing production datasets can generate realistic test data sets, such as personal profile photographs and attributes like age and weight.
Test data generated this way closely resembles production data, which makes it ideal for use in software testing. A class of machine learning model commonly used to generate such data is the generative adversarial network (GAN), in which a generator network learns to produce samples that a discriminator network can no longer distinguish from real data.
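To show the shape of the technique, here is a minimal GAN sketch in PyTorch for two numeric attributes, age and weight. The "production" sample is synthetic and the architecture is deliberately tiny; it illustrates the generator/discriminator training loop, nothing more.

```python
# A minimal GAN for tabular test data (age, weight), assuming PyTorch.
# The stand-in "production" sample, network sizes, and training schedule
# are illustrative; real tools train on actual production data.
import torch
import torch.nn as nn

LATENT_DIM = 8

# Generator maps random noise to a synthetic (age, weight) row;
# the discriminator scores rows as real or fake.
generator = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

# Stand-in for production data: ages around 40, weights around 75 kg.
real_data = torch.randn(1024, 2) * torch.tensor([12.0, 15.0]) + torch.tensor([40.0, 75.0])
mean, std = real_data.mean(0), real_data.std(0)
real_norm = (real_data - mean) / std        # GANs train best on normalized data

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator: label real rows 1, generated rows 0.
    fake = generator(torch.randn(128, LATENT_DIM)).detach()
    real = real_norm[torch.randint(0, len(real_norm), (128,))]
    d_loss = (loss_fn(discriminator(real), torch.ones(128, 1))
              + loss_fn(discriminator(fake), torch.zeros(128, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(generator(torch.randn(128, LATENT_DIM))),
                     torch.ones(128, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Sample five synthetic (age, weight) rows, denormalized back to real units.
print(generator(torch.randn(5, LATENT_DIM)) * std + mean)
```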
Artificial intelligence has significantly impacted testing tools and methods, and test automation in particular. An overview of the current tools promising AI shows that, while many new features are being added, several of those features are still on their way to maturity.