What is the future of test automation?

Hear from Ten10 Principal Consultant Jon Woch as he considers the future of test automation and how automation is changing our approach to software testing.
At Ten10, I’m primarily a delivery lead, providing governance and oversight on projects undertaken by Ten10 teams. As part of my work, I get to see and collaborate with a variety of client teams across many different industry verticals. I also see and use a wide variety of technologies: both those used as part of the quality engineering delivery and those that make up the systems being tested.
This article gives my thoughts on what test automation may look like in the near future: not just from a technology viewpoint, but also the areas in which I think automation is going to add greater value.
Test automation with AI
Let’s consider AI. We know it can drive cars and hold human-like conversations, but can it help with test automation? I don’t think it’s fully there yet, but there are real possibilities here; we just need to allow time to see what tools emerge to support test automation.
Machine learning is how a computer system develops its intelligence; that intelligence is then used to mimic human cognitive functions, which is what we call AI. I could focus on the tooling already in the marketplace, but as these tools have yet to reach full maturity, I want to focus instead on the areas where I think AI should, and will, help in the future.
Test creation
First of all, test creation. AI will enable people to use natural language to create tests, giving you a layer of abstraction from the technical implementation so you can create tests without needing a full understanding of the code behind them. AI should also be able to help determine which tests we should create based on how a system is actually used: it will be able to do things such as analysing production logs to help identify areas where we may need more test coverage.
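As a crude illustration of that last idea, here is a hypothetical sketch that counts which routes users actually hit, as one signal for where extra coverage might be most valuable. The log path and format are assumptions for the example, not a real integration:

```ts
import { readFileSync } from 'node:fs';

// Hypothetical sketch: mine a web access log for the most frequently hit
// routes. The path and combined-log format are assumed for illustration.
const lines = readFileSync('/var/log/app/access.log', 'utf8').split('\n');
const counts = new Map<string, number>();

for (const line of lines) {
  // Combined log format records the request as `"GET /path HTTP/1.1"`.
  const match = line.match(/"(?:GET|POST|PUT|DELETE) (\S+)/);
  if (match) counts.set(match[1], (counts.get(match[1]) ?? 0) + 1);
}

// The busiest routes are candidates for additional test coverage.
const top = [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, 10);
console.table(top);
```

A real tool would go much further, correlating traffic with existing coverage, but even this simple view tells you where users actually spend their time.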
Test case maintenance
This is where a test automatically adapts when the application changes. For example, when tests need different selectors due to a change in the system, the tooling should dynamically choose the most appropriate selector. There are tools out there that can do this already, but I think there is scope for this to be improved.
Coupled with natural language creation, this means the person creating automated tests in the future may not need to worry about how to identify objects in the system under test, and will instead rely on tooling to select the correct components.
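To make the idea concrete, here is a minimal sketch of the fallback behaviour such tooling automates, written against Playwright’s API. The helper name and selectors are illustrative, and real self-healing tools are considerably smarter than this:

```ts
import { Page, Locator } from '@playwright/test';

// Try candidate selectors in priority order and return the first one that
// matches an element on the page: a simple stand-in for "self-healing"
// selection.
async function resolveLocator(page: Page, candidates: string[]): Promise<Locator> {
  for (const selector of candidates) {
    const locator = page.locator(selector);
    if ((await locator.count()) > 0) return locator;
  }
  throw new Error(`No candidate selector matched: ${candidates.join(', ')}`);
}

// Usage: if the test id disappears after a UI change, the text and CSS
// fallbacks keep the test running.
// const buyButton = await resolveLocator(page, [
//   '[data-testid="buy"]',
//   'text=Buy now',
//   '#buy-btn',
// ]);
// await buyButton.click();
```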
Test execution
The main area in which I think AI will assist is actual test execution: choosing which tests to run based on historical test failures and an understanding of which new features are most likely to impact existing functionality. That could mean understanding the code base in more detail, analysing logs from previous test failures, and even looking through production logs to see where issues have occurred.
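None of this needs full AI to get started; even a simple heuristic captures the shape of it. The sketch below, in which every field name and weight is hypothetical, ranks tests by combining historical failure rate with the overlap between the files a test exercises and the files changed in the current commit:

```ts
// Hypothetical risk-based test selection. All fields and weights here are
// illustrative, not a real tool's model.
interface TestRecord {
  name: string;
  failureRate: number;    // failures / runs over a recent window (0..1)
  coveredFiles: string[]; // source files this test is known to exercise
}

function rankTests(tests: TestRecord[], changedFiles: string[]): TestRecord[] {
  const changed = new Set(changedFiles);
  const score = (t: TestRecord): number => {
    const overlap = t.coveredFiles.filter((f) => changed.has(f)).length;
    return overlap * 2 + t.failureRate; // weight change impact over history
  };
  return [...tests].sort((a, b) => score(b) - score(a));
}

// Example: run the highest-risk tests first for faster feedback.
const ordered = rankTests(
  [
    { name: 'checkout', failureRate: 0.2, coveredFiles: ['cart.ts'] },
    { name: 'search', failureRate: 0.01, coveredFiles: ['search.ts'] },
  ],
  ['cart.ts']
);
console.log(ordered.map((t) => t.name)); // ['checkout', 'search']
```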
User experience and SEO
Until now, it has been common for the performance of pages and applications to be left to load and performance testers. In recent years, performance engineers have started to incorporate real user performance testing as part of that activity. There is no reason why we can’t also start to incorporate some user experience and front-end performance checks as part of our test automation suite.
The speed of a page can have a significant monetary impact. A 2022 blog post by the digital marketing agency Portent demonstrated that an eCommerce site with a page load speed of one second has a conversion rate two and a half times higher than a site with page load speeds over five seconds.
Consider the metrics below, all of which can be measured quite easily and captured from your functional automation testing. Good scores here will help make your site more attractive to users and customers and assist with search engine rankings.

- Largest Contentful Paint: How quickly the main content of a page loads. From a Google SEO viewpoint, you want the majority of your users to experience an LCP of under two and a half seconds.
- First Input Delay: The time taken for a page to respond to the user’s first interaction. Poorer sites may not respond quickly enough because the page is still downloading resources, such as an image, that prevent it from being actionable by the user.
- Cumulative Layout Shift: A score based on how much the layout shifts during the lifetime of the page, which you will see when resources still downloading in the background force parts of the page to re-render. You want your page to score 0.1 or less.
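As a concrete example of the first metric, here is a minimal sketch of asserting an LCP budget inside an ordinary functional test. It assumes Playwright running against Chromium (which exposes LCP through the browser’s PerformanceObserver API); the URL and budget are placeholders:

```ts
import { test, expect } from '@playwright/test';

test('home page meets the LCP budget', async ({ page }) => {
  await page.goto('https://example.com/'); // placeholder URL

  // Read the buffered Largest Contentful Paint entry from the browser.
  const lcp = await page.evaluate(
    () =>
      new Promise<number>((resolve) => {
        new PerformanceObserver((list) => {
          const entries = list.getEntries();
          resolve(entries[entries.length - 1].startTime);
        }).observe({ type: 'largest-contentful-paint', buffered: true });
      })
  );

  expect(lcp).toBeLessThan(2500); // the 2.5-second target described above
});
```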
More test automation suites should consider the importance of performance testing. How can you do that? By far one of the biggest tools out there is Google Lighthouse, which you can run from Chrome DevTools. You can also look at incorporating Lighthouse CI into your existing automation scripts. Lighthouse CI is built on Node.js, so you can incorporate it quite happily into your Cypress framework and track the performance characteristics of your pages. I definitely think this is one area of future automation that people should be focusing on now, because it’s something you can do today and it will add value to the results you provide back to your business and stakeholders.
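For instance, the lighthouse npm package that Lighthouse CI builds on can be driven directly from a Node.js script. A rough sketch, with a placeholder URL and error handling omitted:

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function auditPage(url: string): Promise<void> {
  // Launch headless Chrome and point Lighthouse at its debugging port.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(url, {
    port: chrome.port,
    onlyCategories: ['performance'],
  });

  // The Lighthouse result (lhr) keys individual audits by id.
  const lcp = result?.lhr.audits['largest-contentful-paint'].displayValue;
  console.log(`Largest Contentful Paint for ${url}: ${lcp}`);

  await chrome.kill();
}

auditPage('https://example.com/'); // placeholder URL
```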
Accessibility testing
There are several types of accessibility testing:
- Manual testing involves the use of human testers who assess the application’s accessibility by using assistive technologies and testing the user interface.
- Automated testing involves using software tools to identify accessibility issues such as poor colour contrast, missing alternative text for images, and improper use of heading tags.
- User testing involves recruiting people with disabilities to use a product and provide feedback on its accessibility.
Accessibility testing is important to help ensure that your application is accessible to as many people as possible and that it adheres to standards and regulations such as the Web Content Accessibility Guidelines (WCAG) and, if you’re in the US, the Americans with Disabilities Act (ADA).
To conduct accessibility testing as part of your automation, you can use automated accessibility testing tools designed to identify accessibility issues in web and mobile applications. These tools use algorithms and rules to check whether your application adheres to the relevant accessibility standards. There are several popular tools out there, such as axe-core and Pa11y, and you should determine what works best with your existing automation framework. Once you’ve selected the most appropriate tooling that integrates into your functional automation framework, you can configure the tool to check for specific accessibility issues based on your requirements.
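As one example, axe-core ships ready-made bindings for common frameworks. Below is a minimal sketch using @axe-core/playwright (Cypress users might reach for cypress-axe instead), asserting that a page has no detectable WCAG 2.0 A/AA violations; the URL is a placeholder:

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://example.com/'); // placeholder URL

  // Scan the rendered page against the WCAG 2.0 A and AA rule sets.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();

  expect(results.violations).toEqual([]);
});
```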
Running the automated tests and analysing the results will typically generate a report highlighting any accessibility issues detected in your application. As you make changes to your application, you need to rerun these tests to confirm that accessibility issues have been fixed and have not regressed. It is important to note, though, that automated accessibility tests are not perfect: they may not detect all accessibility issues. It’s therefore recommended that you combine automated accessibility testing with manual testing and user testing to ensure that your application is accessible to as many users as possible.
Final thoughts
Nobody knows exactly what the latest tooling will be, but in the traditional open-source space more people are moving to JavaScript or TypeScript frameworks such as Playwright and Cypress, where traditionally many people would have selected Selenium. There are numerous reasons for this, including faster test execution, more robust (less flaky) tests, and built-in features such as network traffic control and video capture.
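To illustrate what “built in” means in practice: in Playwright, video capture is a single config option (`use: { video: 'retain-on-failure' }`), and network traffic can be stubbed per test without any extra libraries. A small sketch, with a hypothetical endpoint, URL, and selector:

```ts
import { test, expect } from '@playwright/test';

test('renders a stubbed price', async ({ page }) => {
  // Network traffic control: intercept the API call and return canned data.
  await page.route('**/api/price', (route) =>
    route.fulfill({ json: { price: 10 } })
  );

  await page.goto('https://example.com/product'); // placeholder URL
  await expect(page.locator('#price')).toHaveText('10'); // hypothetical selector
});
```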
Above all, the key thing to consider is test analysis: deciding what to automate and what not to. What we are trying to achieve through automation is the ability to inform stakeholders quickly about the quality of the solution. Tools are starting to appear that will either crawl your website and try to produce tests, or create API tests based on log analysis, but you still need to control what you test. You don’t want to create a whole host of unnecessary tests that add little to no value and bloat your delivery pipeline.