AI in QA: automation of the TestRail update process

Published on March 27, 2024

In the previous article, I explained how to use ChatGPT for a simple task in daily QA work. Today I want to show how that task can be integrated into your test automation process.

Let’s take the plan from the previous article:

  1. Get automated test content.
  2. Generate a Given/When/Then use case based on it.
  3. Update TestRail case.

Initially, I wanted to use the GitHub API to get the content of the automated tests. But the API has rate limits, so I decided to read the files directly from the code base that contains the automated tests. The API may still be useful if you want a separate app for AI solutions.

So the first step, “Get automated test content,” is solved easily. I wrote a function that updates cases for the provided spec paths (this will be used for Pull Request changes):

def update_cases_by_spec_paths(file_paths)
  file_paths = [file_paths] unless file_paths.is_a?(Array)

  file_paths.each do |file_path|
    content = File.read(file_path)
    # generate given/when/then
    # update TestRail case
  end
end

The next step is to write a function that talks to the OpenAI API. You can use any open-source client library, but I decided to write it with the raw Net::HTTP library to keep the example simple:

require "net/http"
require "uri"
require "json"

def generate_bdd_with_gpt(content)
  uri = URI.parse("https://api.openai.com/v1/chat/completions")

  request = Net::HTTP::Post.new(uri)
  request["Authorization"] = "Bearer #{YOUR_API_KEY}"
  request["Content-Type"] = "application/json"

  request.body = JSON.dump({
    "model": "gpt-4",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": YOUR_INITIAL_PROMPT },
      { "role": "user", "content": content }
    ]
  })

  response = Net::HTTP.start(uri.hostname, uri.port, use_ssl: uri.scheme == "https") do |http|
    http.request(request)
  end

  completion = JSON.parse(response.body)["choices"][0]["message"]["content"]
  completion.strip
end

You can generate an API key at https://platform.openai.com/api-keys, but avoid including it directly in your code. Store it securely in an environment variable instead.
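A minimal sketch of reading the key from an environment variable (the variable name OPENAI_API_KEY is a common convention, but an assumption here):

```ruby
# Fetch the API key from the environment; fail fast if it is missing.
def openai_api_key
  ENV.fetch("OPENAI_API_KEY") { raise "OPENAI_API_KEY is not set" }
end
```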

I saved the prompt as a text file in the project directory.
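Loading it at startup can look like this (the filename bdd_prompt.txt is hypothetical):

```ruby
# Read the reusable prompt from a text file in the project directory.
def load_initial_prompt(path = "bdd_prompt.txt")
  File.read(path).strip
end
```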

Choosing the Right OpenAI Model

I went with GPT-4 because it’s good at understanding complex context. Turning test code into Given/When/Then scenarios needs a capable model, and GPT-4 fits the bill.

Here’s a quick guide to pick the right model for your project:

  • Simpler Tasks? GPT-3.5 might work just fine. Save the big gun (GPT-4) for tougher jobs that need more thinking.
  • Need Accuracy? GPT-4 is way better at getting the facts right. Use it if precise information is key.
  • Limited Resources? Check the current prices on OpenAI’s pricing page before committing to GPT-4.

The best model depends on what you’re trying to do. Play around with them to see which one works best for you!
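One way to keep that flexibility is to make the model configurable instead of hardcoding it; a small sketch (the OPENAI_MODEL variable name is an assumption):

```ruby
# Pick the model from the environment, defaulting to GPT-4 for accuracy.
def openai_model
  ENV.fetch("OPENAI_MODEL", "gpt-4")
end
```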

The last step of the initial plan is to update a TestRail case via the API. I have my own client, but any existing library (in any language) can be used:

# https://support.testrail.com/hc/en-us/articles/7077292642580-Cases#updatecase
def update_case(case_id, content)
  @test_rail_client.send_post("update_case/#{case_id}", {
    "custom_testrail_bdd_scenario": [
      { "content": content }
    ],
    "template_id": @template_id
  }.to_json)
end
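One detail the snippets above leave open is knowing which TestRail case a given spec belongs to. That mapping is project-specific; the sketch below assumes specs reference their cases with a tag like “C1234” and is purely illustrative:

```ruby
# Illustrative: extract a TestRail case ID such as "C1234" from spec content.
def extract_case_id(content)
  match = content.match(/\bC(\d+)\b/)
  match && match[1].to_i
end
```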

Finally, here is how to apply it to Pull Requests.

With GitHub Actions, there is a standard approach to run a job for each PR:

on:
  pull_request:
    branches:
      - 'develop'

To find the “impacted” specs for each PR, you need to check out two branches: your feature branch and the develop branch.
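A sketch of that checkout with actions/checkout; fetch-depth: 0 pulls the full history so origin/develop is available for the diff:

```yaml
- uses: actions/checkout@v4
  with:
    fetch-depth: 0   # full history, so origin/develop exists for git diff
- run: git fetch origin develop
```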

During the “run” step, identify the changed specs and pass them to docker run:

- name: Run tests
  run: |
    git diff --name-only origin/develop | grep _spec > changed_specs.txt
    changed_specs=$(paste -s -d ' ' changed_specs.txt)
    docker run -e TEST_SUITE="$changed_specs" --name tests-exec-${GITHUB_RUN_NUMBER} -i api-automation-${GITHUB_RUN_NUMBER}

In the end, call update_cases_by_spec_paths with the spec paths that were passed into the run:

test_rail_case_updater = AI::TestRailCaseUpdater.new(@api_key, @initial_prompt)
# TEST_SUITE holds space-separated paths, so split it into an array first
test_rail_case_updater.update_cases_by_spec_paths(ENV['TEST_SUITE'].split)

While this approach offers promising results, it’s important to consider limitations like potential inaccuracies in AI-generated content and to incorporate error handling for robust functionality.
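As one example of such error handling, the response parsing in generate_bdd_with_gpt could be made defensive; a sketch (extract_completion is a hypothetical helper name):

```ruby
require "json"

# Defensive parsing of the OpenAI response: surface API errors instead of
# crashing on a missing "choices" key.
def extract_completion(response_body)
  parsed = JSON.parse(response_body)
  raise "OpenAI API error: #{parsed["error"]["message"]}" if parsed["error"]

  parsed.dig("choices", 0, "message", "content").to_s.strip
end
```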

As AI capabilities continue to evolve, their potential to revolutionize the way we work is impressive. For example, consider DevGPT for integrating with your JIRA and creating PRs automatically.

Thanks for reading! Stay tuned for more articles exploring innovative applications of AI in QA.