How AI is changing the role of software testers

Ten10 Managing Principal Consultant Stuart Day considers how the role of modern software testers is evolving because of the widespread implementation of AI

The conversation around artificial intelligence in quality engineering often centres on efficiency. We hear about AI’s power to automate repetitive tasks, accelerate test cycles, and boost productivity. While these benefits can be real, they do not come without added risk and complexity.

The more profound impact of AI is not just that, when used correctly, it enables software testers to do their jobs more efficiently; as with the introduction of automation, it is fundamentally reshaping the role of the quality professional. The integration of AI into the software development lifecycle demands a mindset shift from a quality point of view. With new opportunities being created and new challenges to face, the role of the quality professional continues to evolve, along with the approach to quality engineering as a whole.

As the adoption of AI grows and its uses expand, it is changing the entire landscape of tech. This new landscape requires the quality professional to think differently, both strategically and technically: developing deep domain knowledge and a new set of technical skills to unlock AI's biggest benefits, whilst identifying and managing new types of risk.

Quality engineering has always been about preventing bugs and architecting quality from the ground up. Only now, with AI, there are new tools to use and validate, and different problems to solve.
As always with quality, finding the right balance is key. Let’s look at some of the ways the role of the software tester needs to continue to evolve to leverage AI in the most effective ways.

Testers as AI quality strategists

Historically, a significant portion of a software tester’s time was dedicated to the hands-on creation and execution of test cases. This started to change with the introduction of test automation, which shifted much of the execution to run automatically but still required someone to create the test scenarios in the first place.

With AI capable of generating the tests themselves from requirements, user stories, or even production data and insights, the primary function of a software tester is evolving further still, from hands-on creator and executor to high-level AI quality strategist. This shift continues to elevate the role, requiring a broader and more analytical perspective, as well as the judgement to combine different testing techniques to achieve the most successful outcomes. And with AI’s ability to generate vast amounts of data and test results, a key part of the role is analysing that output to identify high-risk areas within the application. Testers need to apply their unique domain expertise to prioritise what truly matters, guiding the AI’s focus towards the most critical business functions.

Alongside this, there is a shift towards becoming the architects of AI-driven testing frameworks. The responsibility focuses on selecting the right tools, defining the parameters for AI-powered test generation through prompt engineering, and critically interpreting the results in order to make the strategic decisions that ensure the overall success of the project. We have seen this for a while with automation, and now AI adds another layer of complexity that needs to be considered.
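
To make this concrete, here is a minimal sketch of what prompt-driven test generation can look like, assuming the OpenAI Python SDK; the model choice, user story, and prompt are illustrative rather than a recommendation, and the real value the tester adds sits in the constraints written into the prompt and in reviewing what comes back.

```python
# Minimal sketch: generating candidate test cases from a user story with an LLM.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

user_story = (
    "As a registered customer, I want to reset my password via an emailed "
    "link so that I can regain access to my account."
)

prompt = f"""You are a test analyst. From the user story below, generate
Gherkin scenarios covering the happy path, validation failures, and at
least two edge cases (expired link, reused link). Output only Gherkin.

User story: {user_story}"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# The tester's job is not to trust this output, but to review, prune,
# and prioritise it against domain knowledge before anything is automated.
print(response.choices[0].message.content)
```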

Testers as AI model validators

As organisations increasingly integrate AI and machine learning models into their products, a completely new responsibility is emerging for quality professionals: validating the AI models themselves. This task extends far beyond traditional functional testing. It requires a new mindset and a new set of skills to scrutinise systems that are, by nature, probabilistic rather than deterministic.

This requires a deep understanding of both the technology and the business context: exploring the use of artificial intelligence and large language models (LLMs) in the field of test automation, and understanding how LLMs can be leveraged to test modern AI-driven applications themselves, including those built on autonomous agents and generative AI systems.

One of the most critical new duties is testing for bias and fairness. Testers must develop methodologies to examine AI models for hidden biases within their training data and algorithms. This is essential to ensure that an application behaves ethically and equitably for all users, preventing unintended discrimination. The tester becomes a guardian of ethical AI implementation.
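
As one concrete example, here is a minimal sketch of a widely used fairness check, the ‘four-fifths’ disparate-impact ratio; the decisions and group labels below are hypothetical stand-ins for a real model’s outputs on a held-out dataset.

```python
# Minimal sketch: checking a binary classifier for disparate impact
# using the "four-fifths rule". The data here is hypothetical; in
# practice the decisions would come from the model under test.
from collections import defaultdict

# (protected_group, model_decision) pairs, e.g. loan approvals.
results = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in results:
    totals[group] += 1
    positives[group] += decision

# Rate of favourable outcomes per group.
rates = {g: positives[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(f"selection rates: {rates}, disparate impact ratio: {ratio:.2f}")
# A common heuristic: flag for investigation if the ratio drops below 0.8.
if ratio < 0.8:
    print("FLAG: favourable-outcome rates diverge significantly across groups")
```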

Validating the logic of an AI model – often referred to as testing the ‘black box’ – is another key challenge. Testers need to design creative experiments to understand and challenge an AI’s decision-making process. This ensures its outputs align with business logic and user expectations, particularly in unpredictable edge cases.
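
One practical technique for probing a black box is metamorphic testing: rather than asserting a single ‘correct’ answer, the tester defines a relation that should hold between outputs when an input is changed in a controlled way. The sketch below assumes a hypothetical sentiment model behind a predict function; the neutral-suffix relation is just one example of such a design.

```python
# Minimal sketch: a metamorphic test for a black-box sentiment model.
# We don't know the "right" score for any one review, but we can assert
# a relation: appending an irrelevant, neutral sentence should not flip
# the predicted sentiment. `predict` is a hypothetical stand-in for
# the model under test (returns a label and a confidence score).

def predict(text: str) -> tuple[str, float]:
    # Placeholder: in reality this would call the deployed model or API.
    positive = "great" in text.lower()
    return ("positive" if positive else "negative", 0.9)

NEUTRAL_SUFFIX = " The delivery arrived on a Tuesday."

def test_neutral_suffix_does_not_flip_sentiment():
    reviews = [
        "Great product, works exactly as described.",
        "Stopped working after two days, very disappointing.",
    ]
    for review in reviews:
        label_before, _ = predict(review)
        label_after, _ = predict(review + NEUTRAL_SUFFIX)
        assert label_before == label_after, (
            f"metamorphic relation violated for: {review!r}"
        )

test_neutral_suffix_does_not_flip_sentiment()
print("metamorphic relation held for all sampled inputs")
```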

Testers as resilience advocates

Resilience testing isn’t new. Testers and engineers have performed it for years to verify that a system continues to function correctly under failures, load, or disruptions to the underlying infrastructure. With AI, however, the goal and focus must shift from ‘does it stay running?’ to ‘does it stay right?’. Now it’s about verifying that an AI model or pipeline continues to behave accurately, robustly, and ethically under any kind of data or environment stress. Under these conditions, the failures testers are looking for are incorrect, biased, or unstable model outputs.

They can still use the testing techniques common to resilience testing (such as fault injection, load, recovery, and even chaos engineering); only the approach differs. For example, traditional fault injection introduces a hardware or network fault, whereas for AI it means introducing data corruption and observing how the model or pipeline handles it.
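
A minimal sketch of what that can look like in practice: take a sample of known-good records, inject the kinds of corruption upstream systems realistically produce (missing fields, out-of-range values, wrong types), and assert that the model degrades gracefully rather than silently returning confident nonsense. The score_transaction pipeline and its rejection policy here are hypothetical.

```python
# Minimal sketch: data-corruption fault injection for an AI pipeline.
# `score_transaction` stands in for the real model; the corruptions and
# the graceful-degradation policy are illustrative assumptions.
import copy
import random

def score_transaction(record: dict) -> float | None:
    """Hypothetical model wrapper: returns a risk score, or None if the
    input fails validation (the 'graceful' failure mode we want)."""
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        return None  # reject rather than guess
    return min(amount / 10_000, 1.0)

def corrupt(record: dict) -> dict:
    """Inject a random fault of the kind upstream systems actually produce."""
    bad = copy.deepcopy(record)
    fault = random.choice(["drop_field", "negative", "wrong_type"])
    if fault == "drop_field":
        bad.pop("amount", None)
    elif fault == "negative":
        bad["amount"] = -abs(bad.get("amount", 1))
    else:
        bad["amount"] = "12,000.50"  # numeric value arrives as a formatted string
    return bad

clean = [{"id": i, "amount": random.uniform(10, 5000)} for i in range(100)]

for record in clean:
    result = score_transaction(corrupt(record))
    # Resilience criterion: a corrupted input must never yield a confident
    # score as if nothing were wrong; it should be rejected (None).
    assert result is None, f"silent wrong answer for corrupted record {record['id']}"

print("pipeline rejected all corrupted inputs instead of scoring them")
```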

With the added layers of complexity that AI brings from a resilience testing perspective, monitoring and observability become even more important for measuring and maintaining these quality attributes over time. Quality professionals will likely need to play a much bigger role in this space moving forward.
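
As a simple illustration of that kind of monitoring, here is a minimal sketch of a drift check using the Population Stability Index (PSI), a common way of quantifying how far a live score distribution has moved from a baseline; the distributions and alert thresholds below are hypothetical.

```python
# Minimal sketch: monitoring model-output drift with the Population
# Stability Index (PSI). The baseline and live score samples are
# hypothetical; in production they would come from logged predictions.
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Compare two score distributions in [0, 1]; higher PSI = more drift."""
    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            counts[min(int(s * bins), bins - 1)] += 1
        # A small floor avoids division by zero for empty bins.
        return [max(c / len(scores), 1e-6) for c in counts]
    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6] * 10
live     = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95] * 10

drift = psi(baseline, live)
print(f"PSI = {drift:.3f}")
# A common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 alert.
if drift > 0.25:
    print("ALERT: model output distribution has shifted significantly")
```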

Testers as data engineers

With the growth of AI, data quality has never been more important or more at the forefront of people’s minds. Testing is becoming increasingly focused on data validation, and testers are expected to possess skills once associated solely with data engineers and data scientists. They must be comfortable working with large datasets and transformations, understanding data pipelines, identifying patterns, and using data visualisation tools to validate data quality, identify issues, and communicate their findings.

They need to build a strong understanding of the different attributes of data quality and use frameworks such as DAMA, which breaks data quality down into six core dimensions: accuracy, completeness, consistency, timeliness, validity, and uniqueness. They also need the ability to translate complex test data and outcomes into clear, actionable insights for business stakeholders. A tester who can demonstrate the business impact of a bug through truly understanding the data is far more valuable and influential than one who simply files a defect report.
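
As an illustration, here is a minimal sketch of automated checks mapped to three of those dimensions (completeness, uniqueness, and validity) using pandas; the dataset, column names, and validation rules are hypothetical.

```python
# Minimal sketch: data quality checks mapped to three DAMA dimensions
# (completeness, uniqueness, validity) using pandas. The columns and
# thresholds are hypothetical examples.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, 5],
    "email": ["a@example.com", None, "b@example.com",
              "not-an-email", "c@example.com"],
    "signup_date": pd.to_datetime(["2024-01-05", "2024-02-10", "2024-02-10",
                                   "2024-03-01", "2024-03-15"]),
})

findings = {}

# Completeness: proportion of non-null values per column.
findings["completeness"] = df.notna().mean().round(2).to_dict()

# Uniqueness: duplicate primary keys are a hard failure.
findings["duplicate_ids"] = int(df["customer_id"].duplicated().sum())

# Validity: does each email match a basic pattern?
valid_email = df["email"].str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", na=False)
findings["invalid_emails"] = df.loc[~valid_email, "email"].dropna().tolist()

print(findings)
# e.g. {'completeness': {...}, 'duplicate_ids': 1, 'invalid_emails': ['not-an-email']}
```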

Final thoughts

There continues to be much debate about whether AI will replace software testers. Whilst there are certain areas of testing it can make more efficient, such as test case generation, execution, and bug detection, as this article shows, the role of the software tester isn’t being replaced; it simply needs to keep evolving to support the ever-changing world of technology.