Too Many AI Tools, Not Enough Guidance: Vision for a Centralised Prompt Repository from a QE Perspective

Published on May 20, 2025

Centralised Prompt Repo Conceptual Diagram

Having worked as a Quality Engineer for years, I’ve seen how much the landscape has changed.

With AI in place, we’re expected to maintain high-quality standards while moving faster than ever. As QEs, we own more than just testing — we drive automation, influence architecture, and embed quality into CI/CD workflows. And now, we’re being asked to integrate AI into everything we do.

The promise, and the expectation from others? Better test generation, smarter reviews, faster insights, and higher productivity. Some even expect AI to simply replace QEs.
The reality? Tool overload, inconsistent outcomes, no shared QA intelligence, and more silos.

The New Chaos: AI is Everywhere

Given the fast pace of AI development, it feels like every other week there’s a new quality-related AI assistant: tools that can write tests, analyse code, review tests, and even generate test plans from requirements. By right, that should be a good thing and ease the QE workload.

But here’s what’s actually happening on the ground:

  • Some teams use ChatGPT. Others try Claude or Gemini. A few use internal models like DeepSeek or fine-tuned LLaMA.
  • Prompts are written ad hoc. Nobody reuses or shares them.
  • Test reviews are inconsistent — some catch edge cases, others don’t; some follow industry standards, others don’t.
  • Everyone’s reinventing the wheel because there’s no single source of truth for how we should use AI to support quality.
  • And for companies where public AI tools are off-limits, most of these solutions become unusable.

We don’t just need AI tools — we need Quality Engineering intelligence that works with AI, not in spite of it.

The Turning Point: It’s Not the Tool, It’s the Prompt

After months of experimenting with AI for test generation and test review, I realised:

The most valuable thing, at least in the current state, isn’t the AI assistant itself — it’s the prompt you give it.

That’s where your QE expertise lives. That’s where test strategy, business context, and risk understanding come into play. That’s where company-specific standards, testing heuristics, and domain-specific knowledge get encoded.

Yet today, prompts are treated like throwaways. Temporary. Local. Buried in someone’s chat history.

That’s a huge missed opportunity.

What If We Treated Prompts Like Code?

That’s when the idea hit me:

What if there were a centralised, version-controlled, and community-driven repository for Quality-related prompts — built like an open-source project?

Just like we have internal GitHub repos for test automation frameworks, why not a structured library of reusable, documented, and standards-aligned prompts that help QEs do the following (a usage sketch follows this list):

  • Review unit, API, and UI tests for completeness and correctness
  • Generate edge cases and negative tests from requirements
  • Create test strategies aligned with ISTQB or ISO standards or specific company standards
  • Suggest missing scenarios based on risk and impact
  • Improve flaky test diagnosis and mitigation
  • Support non-functional test coverage (e.g. performance, security)
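
To make this concrete, here’s a minimal sketch of how such a shared prompt could be reused in practice. Everything here is illustrative: the file path, the {{REQUIREMENT}} placeholder, and call_internal_llm() are assumptions standing in for whatever repo layout and assistant your team actually uses.

from pathlib import Path

PROMPT_ROOT = Path("ai-quality-prompts/prompts")

def load_prompt(relative_path: str) -> str:
    """Read a reusable prompt template from the shared repository."""
    return (PROMPT_ROOT / relative_path).read_text(encoding="utf-8")

def build_negative_case_prompt(requirement: str) -> str:
    """Fill a shared template with the requirement under test."""
    template = load_prompt("test-generation/negative-cases/from_requirement.txt")
    return template.replace("{{REQUIREMENT}}", requirement)

# Example usage: the same template works with any assistant the team is allowed to use.
# prompt = build_negative_case_prompt("The API must reject transfers above the daily limit.")
# response = call_internal_llm(prompt)  # placeholder for your in-house LLM client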

The Centralised Prompt Repository: What It Could Look Like

Here’s how I envision it (as an example):

ai-quality-prompts/
├── prompts/
│   ├── test-review/
│   │   ├── unit/
│   │   ├── api/
│   │   ├── ui/
│   │   └── strategy/
│   ├── test-generation/
│   │   ├── from-requirements/
│   │   ├── negative-cases/
│   │   ├── boundary-tests/
│   │   └── exploratory/
│   └── prompts_readme.md
├── examples/
│   ├── input-output.md
│   └── usage-patterns/
├── CONTRIBUTING.md
└── README.md

Each prompt (see the metadata sketch after this list):

  • Comes with input examples
  • References QA principles or best practices (e.g. ISTQB, ISO 25010)
  • Is peer-reviewed like code
  • Can be adapted to any internal or external AI assistant
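
For illustration, those attributes could be captured in a small metadata header kept alongside each prompt. This is only a sketch; the field names below are assumptions, not an established schema.

# Hypothetical metadata for one prompt, kept next to the prompt text so it can be
# validated automatically before a contribution is merged.
API_TEST_REVIEW_PROMPT = {
    "name": "review-api-tests",
    "version": "1.2.0",                           # bumped through peer-reviewed pull requests
    "references": ["ISTQB", "ISO 25010"],         # QA principles the prompt encodes
    "tags": ["#api-test", "#test-review"],
    "input_example": "examples/input-output.md",  # worked input/output pair in the repo
    "assistants_tested": ["internal LLM", "ChatGPT", "Claude"],  # adaptable, not tool-specific
}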

Why This Matters

For companies that only allow internal AI tools (for security, IP, or compliance reasons), public services like ChatGPT can’t be used.

But a centralised prompt repository can:

  • Work with your own internal LLMs
  • Be integrated into QA portals, CI/CD pipelines, or internal dev tools
  • Be continuously improved by QEs across projects
  • Enable onboarding of junior testers or new AI users with curated prompt examples
  • Enable new product teams to set up their quality baseline faster

And because it’s version-controlled, any changes to prompts are trackable, reviewable, and auditable — just like our code.
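
As one possible integration, a pipeline step could pull a shared prompt and ask an internal LLM to review a changed test file. The endpoint URL, prompt filename, and response shape below are placeholders for whatever your internal gateway actually exposes.

import sys
from pathlib import Path

import requests  # or whatever HTTP client your internal tooling standardises on

INTERNAL_LLM_URL = "https://llm.internal.example/v1/complete"  # hypothetical endpoint

def review_test_file(test_path: str) -> str:
    """Send a test file plus the shared review prompt to an internal LLM and return its feedback."""
    template = Path("ai-quality-prompts/prompts/test-review/api/review_api_tests.txt").read_text()
    prompt = template.replace("{{TEST_CODE}}", Path(test_path).read_text())
    response = requests.post(INTERNAL_LLM_URL, json={"prompt": prompt}, timeout=60)
    response.raise_for_status()
    return response.json().get("text", "")

if __name__ == "__main__":
    # e.g. invoked from the pipeline: python review_tests.py tests/test_payments_api.py
    print(review_test_file(sys.argv[1]))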

Enabling Collaboration Like Open Source

Think about how open source works:

  • Clear contribution guidelines
  • Code reviews for quality
  • Shared ownership
  • Incremental improvements over time

That’s exactly how we can structure this prompt repo.

You don’t need to reinvent a test strategy prompt. If someone already created a well-crafted version for a microservices API architecture, just reuse it — or tweak it and contribute back your improved version.

The goal isn’t just automation — it’s collective QA wisdom, embedded into prompts, and made available for everyone.

Will a Centralised Prompt Repo Benefit You?

If you’re facing any of the following:

  • Struggling to validate test coverage in fast-moving agile teams
  • Using AI but unsure if the output aligns with QA standards
  • Leading QA in a secure environment with internal LLMs
  • Looking for scalable ways to train your team on test analysis or review

…then it’s worth building your own internal prompt repository.

Let’s bring Quality Engineers together to shape how AI supports our profession — not just by chasing the latest tool, but by standardising the intelligence that makes AI truly useful.

Potential Pitfalls & The Road Ahead

While the idea of a centralised, quality-focused prompt repository sounds promising, it’s important to recognise one major risk:

Without proper maintenance and governance, the repository can easily become a junkyard of duplicated, low-value prompts.

Over time, engineers might start contributing “same same but different” prompts for similar use cases — each with slight variations, making it harder (not easier) for QEs to choose the right one. This ironically reintroduces the same issue we set out to solve: too many AI prompts, not enough guidance.

If left unmanaged, teams may stop trusting or using the prompt repo altogether — leading to low adoption and wasted potential.

How to Make It Easy to Use and Maintain

Even with the centralised repo built, without implementing it correctly it will end up like those open-source and inner-source repos that no one uses or adopts. Learning from that experience, here is how we can implement this repo:

  1. Define clear contribution standards and create a lightweight review process, like how open-source projects accept pull requests.
  2. Introduce versioning, tags, and categories for every prompt (e.g. v1.0, #api-test, #ISTQB-aligned).
  3. Templatise prompts based on common QE use cases (e.g. unit test review, scenario generation from user stories).
  4. Work with internal champions or QE leads to embed prompts into tools that teams already use (e.g. VSCode, Slack, CI pipelines).
  5. Offer starter kits or pre-built examples for projects to quickly adopt and try out.

The goal isn’t just to build a prompt repo. It’s to build a trustworthy and usable QE assistant library that evolves with your team’s needs.
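
As a rough sketch of points 2 and 3 above, each prompt could carry a version and tags in a small catalogue, so a QE (or a bot embedded in VSCode or Slack) can find the right prompt instead of contributing a near-duplicate. The catalogue format below is an assumption, not an existing standard.

# Toy catalogue of prompt metadata; in practice this would be generated from the repo.
CATALOGUE = [
    {"name": "review-api-tests", "version": "1.2.0", "tags": ["#api-test", "#test-review"]},
    {"name": "unit-test-review", "version": "2.0.1", "tags": ["#unit-test", "#test-review", "#ISTQB-aligned"]},
    {"name": "negative-cases-from-story", "version": "1.0.0", "tags": ["#test-generation"]},
]

def find_prompts(tag: str) -> list[dict]:
    """Return catalogued prompts carrying the given tag, newest version first."""
    matches = [p for p in CATALOGUE if tag in p["tags"]]
    return sorted(matches, key=lambda p: tuple(int(x) for x in p["version"].split(".")), reverse=True)

print([p["name"] for p in find_prompts("#test-review")])  # -> ['unit-test-review', 'review-api-tests']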

In the long run, this can help transform how we approach test quality reviews and generation — faster, smarter, and grounded in best practices.

Building Smarter QE with AI

As AI becomes more embedded in our daily engineering workflows, QEs have a unique opportunity to rethink how we generate, review, and improve tests. But with so many AI tools and assistants available, guidance — not just access — is what teams truly need, especially junior QEs. A centralised, well-structured prompt repository can offer that clarity. By aligning prompts to common QE use cases, templating them for consistency, and encouraging thoughtful contributions, we can turn prompt engineering into a powerful and practical part of the QE craft. Of course, this vision requires effort: governance, adoption strategies, and ongoing curation. But if done right, it’s not just a repository — it’s a community asset that scales quality across teams and projects.

The future of Quality Engineering isn’t just about testing more. It’s about testing smarter — with the right tools, guidance, and a shared foundation we build together.

Too Many AI Tools, Not Enough Guidance: Vision for a Centralised Prompt Repository from QE… was originally published in Government Digital Products, Singapore on Medium, where people are continuing the conversation by highlighting and responding to this story.