Testing with AI: Overcoming Tester’s Block

Published on August 12, 2025

Let’s talk about Test Idea Generation

You know what’s good about AI? Well, at least for our purpose: overcoming tester’s block. Major LLMs, like Claude or ChatGPT, are trained on all kinds of data out there. A lot of the training data has similar patterns.

For example, e-commerce sites, while they may not look alike, include the same behaviors. Log-in, selecting items, a cart. Something with money.

And if by chance your application has the same characteristics, you can use these tools for some serious test idea generation that applies to your case, too.

Now – we’re talking about AI. Even if your application is completely unique, something that no LLM has ever seen before, the models will suggest things for you. These guys cannot be held back.

But let’s say your application isn’t completely unique. If you’re staring at a blank page and don’t know how to start a test plan, just ask the bot.

Test Idea Generation – To The Rescue!

Let’s take a look at an example – our fictional Bigger Better Bookstore. Now, it’s very unique, but as luck would have it, there’s another online book store around. Starts with an A, I think. (See what I did there?)

Let’s give it a simple prompt for brainstorming test ideas:

Prompt: I have an online book shop. Suggest test cases for the software.

Which spits out cases like these:

Answer:

- User Authentication & Account Management
- Book Search & Browsing
- Book Details
- Shopping Cart
- Checkout Process
- Order Management
- Performance Testing
- Security Testing
- Integration Testing

Each one comes with at least 5 ideas to test. For example, for “User Authentication” it suggests going with:

Answer:

- Test login with correct credentials
- Test login with incorrect credentials

If you’re new to the fictional book business, that would be a great start. Now, let’s say you want to start with those log-in cases, and you’re feeling lazy. This is where you shift from high-level ideas to specific test case generation. You’d probably ask something like:

Prompt: Can you give me example cases for correct and incorrect credentials?

And you’d get:

Answer:

Actual examples for valid inputs, and if that’s not enough:

Answer:

An example for an invalid user, including an expected result. And even:

Answer:

- Brute Force Protection
  - Input: Multiple consecutive failed login attempts
  - Expected result: Temporary account lockout or CAPTCHA challenge after predefined number of failures

Brute force attacks.
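Just to make that concrete, here’s how those log-in suggestions might turn into automated checks. This is a minimal pytest sketch; the base URL, endpoint, payload fields, status codes, and the five-attempt threshold are all assumptions for our fictional store, not something the bot produced:

```python
import requests

BASE_URL = "https://bigger-better-bookstore.example"  # hypothetical; the store is fictional


def login(email, password):
    # Hypothetical endpoint and payload shape -- adjust to your real API.
    return requests.post(
        f"{BASE_URL}/api/login",
        json={"email": email, "password": password},
        timeout=10,
    )


def test_login_with_correct_credentials():
    response = login("reader@example.com", "correct-horse-battery-staple")
    assert response.status_code == 200


def test_login_with_incorrect_credentials():
    response = login("reader@example.com", "not-my-password")
    assert response.status_code == 401


def test_lockout_after_repeated_failures():
    # The suggested expected result: lockout or CAPTCHA after a predefined
    # number of failures. Assuming five attempts here.
    for _ in range(5):
        login("reader@example.com", "not-my-password")
    response = login("reader@example.com", "not-my-password")
    assert response.status_code in (423, 429)  # locked / too many requests
```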

Test Idea Generation: So many cases in so little time!

Using these prompts, I got 16 cases in less than a minute. And, of course, I could ask for more cases, and more examples. This is where ChatGPT for test ideas really gets going.
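By the way, if you’d rather script this step than paste prompts into a chat window, a minimal sketch with the OpenAI Python SDK could look like this (it assumes an API key in your environment; the model name and the extra prompt wording are my choices, not part of the original answer):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "I have an online book shop. Suggest test cases for the software. "
    "Group them by area and give at least 5 ideas per area."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any capable chat model will do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```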

Meme: Too damn high level!

There’s something missing in the suggestions. I know – how the application looks. Need to get some details in. So, I asked my soon-to-be-overlord for more cases.

Prompt:

And I’ve got a list of cases for:

Answer:

- Cross-Platform UI Tests
- Web-Specific UI Tests
- Mobile-Specific UI Tests
- Authentication UI Tests
- Search & Navigation UI Tests
- Cart & Checkout UI Tests

I’ve got a total of 33 ideas including:

Answer:

And even some accessibility issues. So in two minutes, there’s a whole lot I can start from.
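And since several of those ideas are mobile-specific, here’s roughly what one of them could look like as a browser-level check. A sketch using Playwright; the URL, viewport, and selector are hypothetical:

```python
from playwright.sync_api import sync_playwright

BASE_URL = "https://bigger-better-bookstore.example"  # hypothetical


def test_cart_page_fits_a_phone_sized_viewport():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 390, "height": 844})  # phone-sized
        page.goto(f"{BASE_URL}/cart")
        # Hypothetical selector: the checkout button should still be visible
        # on a small screen, without horizontal scrolling.
        assert page.locator("#checkout-button").is_visible()
        browser.close()
```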

But you see, I have a problem.

What To Do First?

Meme: Ain't nobody got time for all that!

This is where AI for test planning really shines. You can ask it for help with prioritizing test cases:

Prompt:

And I get the list ranked by “Must Test”, “High priority”, and so on. For example, the “must” list includes:

Answer:

- Product Search Functionality - Users can't buy what they can't find
- Checkout Process - Directly impacts revenue
- Mobile Responsiveness - Substantial portion of traffic comes from mobile
- Basic Authentication - Security and user access
- Shopping Cart - Core purchase path

It even tells me the reasoning behind the ranking. Now I know where to start.

But what if it’s not my first day on the job? That means most of the top-ranking ones have been done already, and if I’m lucky, I may have some automated scripts running them.

I can then start from the Medium-level list.
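One way I could wire that ranking into a suite is with pytest markers, so each tier can be run on its own. A sketch; the test names and tiers are made up for illustration:

```python
import pytest

# In conftest.py, register the tiers once so pytest doesn't warn about
# unknown markers:
#
#     def pytest_configure(config):
#         for tier in ("must", "high", "medium"):
#             config.addinivalue_line("markers", f"{tier}: priority tier")


@pytest.mark.must
def test_product_search_returns_matching_titles():
    """Must-test tier: users can't buy what they can't find."""
    ...


@pytest.mark.medium
def test_saved_cart_survives_logout():
    """Medium tier: useful, but not the core purchase path."""
    ...
```

Then "pytest -m must" runs only the top tier, and on a not-first day I’d pick things up with "pytest -m medium".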

Now, do I trust the genie? Nope. I need to review everything it gives me. Maybe the prioritization is wrong. Or inaccurate.

And if it is, I can decide what to do. I can try to prompt it in the right direction. Or I can just take off on my own.

In the real world, as a tester, I have some information and expectations for what I’m about to test. I can review and evaluate the suggestions, and spot missing cases.

Imagine how much time I can save.

But…

Warning: A Genie Is On The Loose

This whole process of AI-assisted testing is powerful, but as the person who makes the decision on what to test, it is my responsibility to understand what I’m doing and make the right calls. Test idea generation is great, but I can’t blindly just copy everything. I cannot trust it completely.

Accept the responsibility, and enjoy the time off. Who am I kidding? Spend it on testing!

Now it’s your turn. Do you use your bots for Test Planning? What do you let them do, and where don’t you trust them? Tell me in the comments.
