Hello, I'm Martin. As a tester, I've often wondered how to streamline and improve the testing process. The answer I've found lies in the realm of AI. Today, I want to share with you that artificial intelligence isn't just a buzzword; it's changing the game in the field of testing.
So, how can AI help us testers?
Artificial intelligence can help, for example, with test creation. It can assist with both automated and manual tests, speeding up our work and making error detection easier. And its contribution extends beyond test writing: AI can visually monitor the application for changes or offer advice while you write tests. Another advantage is generating test data. In short, it's a tool for increasing our efficiency and organizing our testing, which leads to better-quality results.
Data generation
AI can generate a large amount of diverse data, which we can use for writing scenarios, test cases, mind maps, form data, user data, or even assist in writing documentation. However, not all tasks can be accomplished with a single AI tool; it's necessary to know several, as each focuses on slightly different aspects.
The first tool I'd like to introduce is Mockaroo. It can generate data in formats such as CSV, JSON, and SQL, making it suitable for filling out forms or directly populating databases with diverse data, and it doesn't require any coding. In Mockaroo, you define the fields you want, giving each a name and a type, or you upload a sample CSV file to define the structure. Then you just specify how many records you want and in which output format.
I recently used this tool on a project where I needed to create many real estate records. The application can import properties from CSV files, so I downloaded a sample CSV file, uploaded it to Mockaroo, and quickly generated 1,000 properties that I used during testing.
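To make the shape of such an import concrete, here is a minimal local sketch of the kind of records a generator like Mockaroo produces. The field names (`id`, `address`, `city`, `price`, `rooms`) are invented for illustration and would need to match your real import format:

```python
import csv
import io
import random

# Hypothetical value pools -- a real generator offers hundreds of field types.
STREETS = ["Main St", "Oak Ave", "Hill Rd"]
CITIES = ["Prague", "Brno", "Ostrava"]

def generate_properties(count):
    """Generate fake real-estate records, one dict per row."""
    rows = []
    for i in range(1, count + 1):
        rows.append({
            "id": i,
            "address": f"{random.randint(1, 999)} {random.choice(STREETS)}",
            "city": random.choice(CITIES),
            "price": random.randint(50_000, 900_000),
            "rooms": random.randint(1, 6),
        })
    return rows

def to_csv(rows):
    """Serialize the records to CSV text, ready for a bulk import."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

properties = generate_properties(1000)
print(len(properties))  # → 1000
```

The point is not the twenty lines of Python but that a tool like Mockaroo gives you the same result without writing or maintaining any of them.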
Another tool is Gretel.ai, which specializes in generating synthetic data: data that resembles real data but is artificially created. The tool emphasizes privacy protection and the security of sensitive information, so it's suitable for situations where it's crucial that no real sensitive data is put at risk. You upload a file containing real data, and Gretel anonymizes the sensitive fields by replacing them with fictional values, letting you work with realistic data without risking the exposure of sensitive information.
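Conceptually, the anonymization step looks like the toy sketch below, where the sensitive columns are picked by hand; Gretel itself detects sensitive fields and synthesizes replacement values far more intelligently:

```python
import random

# Hypothetical column names chosen for this example; a real tool
# identifies sensitive fields automatically.
FAKE_NAMES = ["Alex Novak", "Sam Rivera", "Kim Dvorak"]

def anonymize(rows):
    """Replace sensitive fields with fictional values, keep the rest intact."""
    out = []
    for i, row in enumerate(rows):
        clean = dict(row)
        if "name" in clean:
            clean["name"] = random.choice(FAKE_NAMES)
        if "email" in clean:
            clean["email"] = f"user{i}@example.com"  # deterministic fake address
        out.append(clean)
    return out

real = [{"name": "Martin Svoboda", "email": "martin@firm.cz", "plan": "pro"}]
print(anonymize(real))
```

Note that the non-sensitive `plan` column passes through unchanged, so the anonymized data keeps the statistical shape tests depend on.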
Another widely used tool is ChatGPT, which can generate scenarios and documentation or assist in writing test cases. I personally use it almost like Google during testing: I ask it how to test various functionalities, such as how to test a login, as seen in the image. It serves as a helper to ensure I don't forget anything during testing.
Another useful tool is Taskade, which can create mind maps, meeting agendas, or sprint plans. Personally, I find it most useful for mind maps, which can then serve as a testing map where I check off what I've tested and what I haven't. The generated nodes usually need some manual adjustment, but overall it helps prioritize and organize testing.
AI doesn't just generate data; it also serves as a coding assistant or can even generate code directly.
The first tool that assists in coding is GitHub Copilot, available as an extension for various editors, such as Visual Studio Code. Its main function is to generate code suggestions as you type, which can significantly speed up writing and help prevent errors.
The second tool is Blackbox, also installable as a Visual Studio Code extension. It works similarly to GitHub Copilot, suggesting code as you type; its advantage is that you can also consult it in a chat, much like ChatGPT.
Another function useful to us testers is visual change detection, and AI can help here too. Applitools is an excellent tool for detecting visual changes and integrates with Cypress or Playwright. After integration, you define where screenshots should be taken. Every time a test runs, a new screenshot is created and compared against the one from the last run, highlighting where changes occurred; the tester then decides whether it's an issue.
Another tool for visual change detection is Percy, which, unlike Applitools, keeps a history of screenshots across test runs, allowing you to track how the content changes over time. The two tools are not very different, and it's up to the user which one they prefer.
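Under the hood, both tools boil down to comparing a new screenshot against a baseline. A naive exact pixel diff, sketched below, illustrates the core idea; the real tools use much smarter perceptual matching that tolerates rendering noise:

```python
def diff_screenshots(baseline, current):
    """Return (x, y) coordinates of pixels that differ between two
    screenshots, each given as a 2D list of RGB tuples."""
    changed = []
    for y, (row_a, row_b) in enumerate(zip(baseline, current)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if a != b:
                changed.append((x, y))
    return changed

WHITE, RED = (255, 255, 255), (255, 0, 0)
baseline = [[WHITE] * 3 for _ in range(2)]   # 3x2 all-white "screenshot"
current = [row[:] for row in baseline]
current[1][2] = RED                          # one pixel changed between runs

print(diff_screenshots(baseline, current))   # → [(2, 1)]
```

An exact diff like this would flag every anti-aliasing artifact as a failure, which is exactly why the AI-based matching in Applitools and Percy is worth paying for.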
Those are the tools I've discovered and have been trying out for some time. I'll just add that none of them works without some manual configuration and adjustment, which I think partly answers the question:
Will AI replace testers?
No. AI should be seen as a tool that enhances testers' capabilities rather than a replacement for their role. So, for now, it won't replace us. We can't know the future, but since software is created by humans, I think there will always need to be a human overseeing it.