
Testing with AI: Overcoming Tester’s Block
Let’s talk about Test Idea Generation
You know what’s good about AI? Well, at least for our purpose: overcoming tester’s block. Major LLMs, like Claude or ChatGPT, are trained on all kinds of data out there. A lot of the training data has similar patterns.
For example, e-commerce sites, while they may not look alike, share the same behaviors: log-in, selecting items, a cart. Something with money.
And if by chance your application has the same characteristics, you can use these tools for some serious test idea generation that applies to your case, too.
Now – we’re talking about AI. Even if your application is completely unique, something that no LLM has ever seen before, the models will suggest things for you. These guys cannot be held back.
But let’s say your application isn’t completely unique. If you’re staring at a blank page and don’t know how to start a test plan, just ask the bot.
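If you’d rather ask the bot from a script than a chat window, here’s a minimal sketch using the OpenAI Python client. Everything in it — the model name, the prompt wording, the function names — is my own assumption, not something from this post:

```python
# A sketch of scripting "just ask the bot", assuming the OpenAI Python client
# (pip install openai). Model name and prompt wording are made up.

def build_brainstorm_prompt(app_description: str) -> str:
    """Turn a one-line app description into a brainstorming prompt."""
    return (
        f"I'm testing {app_description}. "
        "Brainstorm the main areas to test, with at least 5 test ideas for each."
    )

def ask_for_test_ideas(app_description: str) -> str:
    """Send the prompt to the model; requires OPENAI_API_KEY in the environment."""
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": build_brainstorm_prompt(app_description)}],
    )
    return response.choices[0].message.content
```

Same idea as the chat window, just repeatable — handy if you want to regenerate ideas every sprint.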
Test Idea Generation – To The Rescue!
Let’s take a look at an example – our fictional Bigger Better Bookstore. Now, it’s very unique, but as luck would have it, there’s another online bookstore around. Starts with an A, I think. (See what I did there?)
Let’s give it a simple prompt for brainstorming test ideas:

Which spits out cases like these:

Each one comes with at least 5 ideas to test. For example, for “User Authentication” it suggests going with:

If you’re new to the fictional book business, that would be a great start. Now, let’s say you want to start with those log-in cases, and you’re feeling lazy. This is where you shift from high-level ideas to specific test case generation. You’d probably ask something like:

And you’d get:

Actual examples for valid inputs, and if that’s not enough:

An example for an invalid user, including an expected result. And even:

Brute force attacks.
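To make this concrete, here’s a sketch of how those generated cases might turn into automated checks. The login() function and the credentials are hypothetical stand-ins; in a real suite this would be a parametrized pytest test driving the actual application:

```python
# Hypothetical login() standing in for the bookstore's real authentication;
# in practice you'd drive the actual application or its API.
def login(username: str, password: str) -> bool:
    VALID_USERS = {"reader@example.com": "CorrectHorse1!"}
    return VALID_USERS.get(username) == password

# Cases shaped like the AI's suggestions: valid input, invalid user, wrong password.
CASES = [
    ("reader@example.com", "CorrectHorse1!", True),   # valid credentials
    ("nobody@example.com", "CorrectHorse1!", False),  # invalid user
    ("reader@example.com", "wr0ng-guess", False),     # wrong password
]

def run_login_cases(cases=CASES):
    """Return (case, passed) pairs so failures are easy to spot."""
    return [(case, login(case[0], case[1]) == case[2]) for case in cases]
```

The expected results the bot supplies slot straight into the `CASES` table — which is exactly why having them in the output saves time.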
Test Idea Generation: So many cases in so little time!
Using these prompts, I got 16 cases in less than a minute. And, of course, I could ask for more cases, and more examples. This is where ChatGPT for test ideas really gets going.

There’s something missing in the suggestions. I know – how the application looks. I need to get some details in. So, I asked my soon-to-be-overlord for more cases.

And I got a list of cases for:

I got a total of 33 ideas, including:

And even some accessibility issues. So in two minutes, there’s a whole lot I can start from.
But you see, I have a problem.
What To Do First?

This is where AI for test planning really shines. You can ask it for help with prioritizing test cases:

And I get the list ranked by “Must Test”, “High priority” and so on. For example, the “must” list includes:

It even tells me the reasoning behind the ranking. Now I know where to start.
But what if it’s not my first day on the job? That means most of the top-ranking ones have been done already, and if I’m lucky, I may have some automated scripts running them.
I can then start from the Medium-level list.
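Mechanically, picking up where the automation leaves off is simple. Here’s a sketch, assuming the ranking comes back as a flat list — the case names, priority buckets, and automation flags are all invented for illustration:

```python
# Invented priority buckets, shaped like the AI's "Must Test"/"High"/... ranking.
PRIORITY_ORDER = ["Must Test", "High", "Medium", "Low"]

suggested_cases = [
    {"case": "Valid login",                "priority": "Must Test", "automated": True},
    {"case": "Checkout with expired card", "priority": "Must Test", "automated": True},
    {"case": "Search by author",           "priority": "High",      "automated": False},
    {"case": "Wishlist sharing",           "priority": "Medium",    "automated": False},
]

def next_to_test(cases):
    """Unautomated cases first, ordered by the AI's priority ranking."""
    todo = [c for c in cases if not c["automated"]]
    return sorted(todo, key=lambda c: PRIORITY_ORDER.index(c["priority"]))
```

Filter out what’s already covered by scripts, and what’s left is ordered by the bot’s own ranking — my starting point for the day.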
Now, do I trust the genie? Nope. I need to review everything it gives me. Maybe the prioritization is wrong. Or inaccurate.
And if it is, I can decide what to do. I can try to prompt it towards the right direction. Or, I can just take off on my own.
In the real world, as a tester, I have some information and expectations for what I’m about to test. I can review and evaluate the suggestions, and spot missing cases.
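Spotting the gaps can even be semi-mechanical. A toy example — both lists here are made up:

```python
# Made-up lists: what the bot suggested vs. what I already know matters.
ai_suggestions = {"valid login", "invalid user", "brute force", "cart totals"}
my_checklist = {"valid login", "invalid user", "password reset", "session timeout"}

# Cases the AI missed — these need a human to add them back in.
missing_from_ai = my_checklist - ai_suggestions
```

A set difference won’t replace judgment, but it surfaces the obvious holes fast.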
Imagine how much time I can save.
But…
Warning: A Genie Is On The Loose
This whole process of AI-assisted testing is powerful, but as the person who decides what to test, it’s my responsibility to understand what I’m doing and make the right calls. Test idea generation is great, but I can’t just blindly copy everything. I cannot trust it completely.
Accept the responsibility, and enjoy the time off. Who am I kidding? Spend it on testing!
Now it’s your turn. Do you use your bots for Test Planning? What do you let them do, and where don’t you trust them? Tell me in the comments.
The post Testing with AI: Overcoming Tester’s Block first appeared on TestinGil.