
How I created 3 Customised GPT chatbots to reply on my behalf so I could go on holiday in peace

I’m the product manager for AIBots, an internal platform for Singapore government officers to create and use customised AI chatbots powered by GPT, supplemented with their internal knowledge bases. It is similar to OpenAI’s customised GPTs, though we first built this in July 2023.
While I had been interacting with users and fielding their questions, I didn’t think I personally had a use case for AIBots until this came along.
As a product manager in a small team with just two engineers and one UX designer, all the questions from users and stakeholders get directed to me.
On average, I receive about 30–50 messages and emails a day that need my reply. While my boss or team could help with these, it would take them more time and their time is better spent on their work. I’m happy to take these questions, but we all need a break at some point to recharge.
There have been three main categories of questions that need my reply:
- Questions about Retrieval-augmented Generation (RAG)
- Questions about AIBots
- Questions about getting their use case approved
I thus created three bots to reply to these questions, and may have included them in my out-of-office auto-responder.
Bot #1: Q&A Bot on Retrieval-augmented Generation

The RAG framework forms the baseline for AIBots, as this is how relevant content from a bot’s knowledge base gets incorporated into each query.
I received many questions about RAG and how it works. At the time, I was new to this space: while I understood the general principles, published research covers the topic in far greater detail, and understandably, some of that content can get very technical.
Processing this information and generating a layman reply is something that AIBots (and by extension, RAG LLM chatbots) can do well, so I tried creating a bot that could potentially answer questions about RAG for me.
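To make the RAG framework mentioned above concrete, here is a minimal sketch of the pattern in Python. This is purely illustrative and not the AIBots implementation: retrieval here is a toy keyword-overlap scorer, whereas production systems use vector embeddings and a similarity index, and the final call to the LLM is left out.

```python
# Minimal sketch of the RAG pattern: retrieve relevant passages from a
# knowledge base, then prepend them to the user's query before sending
# the combined prompt to the LLM.

KNOWLEDGE_BASE = [
    "RAG retrieves relevant documents and adds them to the LLM prompt.",
    "AIBots lets officers upload files to build a bot's knowledge base.",
    "System prompts set the tone and constraints for a bot's replies.",
]

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt that would be sent to the model."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How does RAG add documents to a prompt?", KNOWLEDGE_BASE)
print(prompt)
```

The key idea is that the model never needs to have been trained on the knowledge base: the relevant content is looked up and injected into each query at request time.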
Step #1 — Adding the knowledge base + configuring the system prompt
I uploaded research papers and write-ups I found most helpful in the knowledge base as follows:

I started with a simple system prompt to give the bot some context when replying.
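The original prompt appeared as a screenshot, so it isn’t reproduced here. As an illustrative sketch only (not the actual prompt used), a simple system prompt for a bot like this might read:

```
You are a helpful assistant that answers questions about
Retrieval-augmented Generation (RAG) for Singapore government officers.
Base your answers on the research papers and write-ups in your
knowledge base.
```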

This entire process took less than 5 minutes.
Step #2 — Testing with users and iterating
I shared this bot with users who approached me wanting to find out more about RAG. A user reported that the answers were too technical, and I modified the system prompt to instruct the bot to reply in more layman language.

While no fine-tuning of models is involved here, this process of testing and iterating on the bot’s system prompt can result in more helpful responses for users.
Step #3 — Sharing it with everyone
As more people tested my bot and I verified that the responses were accurate, I decided to allow all AIBots users to access this Q&A bot.
I created a shortened URL, https://go.gov.sg/aibots-about-rag (only accessible through our intranet), so I could easily type it in a reply when someone asks me a question.
This Q&A bot was the first bot I created. It was straightforward and simple, as the knowledge base was already available and there are no catastrophic consequences if the bot describes RAG inaccurately, i.e. hallucinates.
Bot #2: Q&A Bot on AIBots

However, most of the questions I received were about AIBots itself. What better way to answer these than with an AI Bot about AIBots?
Step #1 — Adding the knowledge base + configuring the system prompt
For this, I had to prepare the knowledge base from scratch. I did have an introduction guide I had created, which gets shared in the welcome email to new users. Over time, frequently asked questions accumulated and our product roadmap was updated, so there were further details I wanted to include in the deck.
This step took me the longest: fully describing every detail, in several ways, multiple times, and keeping the information up to date. I saved the 100+ slides as a PDF and uploaded it into this AI Bot’s knowledge base.
Learning from what other experienced AIBots users do, I added a more sophisticated system prompt.
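Again, the actual prompt appeared as a screenshot; as a hypothetical sketch, a more sophisticated system prompt for this bot might add a persona, a scope, and a final grounding instruction:

```
You are the official Q&A assistant for AIBots, an internal platform for
Singapore government officers to create customised GPT chatbots.
Answer questions about AIBots' features, limits, and roadmap using only
the uploaded guide. Reply concisely and in plain, non-technical language.
If the answer is not in your knowledge base, say you do not know and
direct the user to the AIBots team instead of guessing.
```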

Step #2 — Testing with users and reiterating
For this use case, it was important that the bot get the facts right, even if mistakes would not be life-threatening (i.e., high-risk). The final line in the prompt served to mitigate this risk, but some extensive testing was still needed.
As our existing RAG pipeline did not always process images and diagrams accurately, some information was not ingested, and the bot invented details in its responses, notably misunderstanding the product roadmap timeline and concluding that .zip files were allowed as uploads.

As a quick fix, I entered a clarification into the text-based input that forms part of the system prompt. And voilà, the response was now correct.
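The clarification itself appeared as a screenshot. Based on the .zip example described earlier, an illustrative correction of this kind appended to the system prompt might read:

```
Clarification: .zip files are NOT supported as knowledge base uploads.
If asked about supported file types, do not mention .zip files.
```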

Step #3 — Sharing the bot
Similarly, I created a shortened URL of https://go.gov.sg/aibots-about-bot for all public officers to access the bot directly, and also for me to share it easily when responding to queries.
Bot #3: Preparing a proposal for an AIBots use case

Since the launch of ChatGPT in Nov 2022, the Singapore government has been keen to explore the use of LLMs for officers’ work, though carefully. Regulations were set such that every use case involving material above a certain classification level (Official) would need to be approved. This may seem strict, but in a landscape where companies and countries were banning the use of ChatGPT entirely, our approach could be considered liberal.
The Singapore government quickly initiated a no-data-logging agreement with OpenAI so that public officers could use ChatGPT for work purposes immediately, for data up to a certain classification level. Data sent through the OpenAI API would not be stored on their servers.
However, per our circular on LLMs, every AI Bot created with file uploads is considered an individual use case and needs to be evaluated separately.
The concern is reasonable: one AI Bot might just generate mock data, whereas another is intended to provide legal advice. At that stage, neither we on the AIBots team nor the governing body on AI developments knew the range of use cases that government agencies had in mind.
Hence, an application to get the use case approved would be needed for every AI Bot created. The application requires the Bot Creator to share the purpose of the bot, the documents to be uploaded, the intended users, the worst-case scenario, measures to mitigate the risk, and the level of clearance sought.
This can be a tedious process; creating a bot could actually be easier and quicker than filling out the application. Furthermore, I often needed to review applications and follow up with Bot Creators on certain points, as information was missing or questions were not answered correctly.
After a while, I felt like I was asking the same questions over and over, and isn’t asking questions exactly what a chatbot does? Thus, the Use Case Application Helper bot was born!
This bot was designed to ask a Bot Creator questions about their use case, one at a time, to make the process more manageable and keep their answers focused.
Step #1 — Configure system prompt
This is the system prompt I used, and no files were uploaded.

The bot doesn’t actually evaluate the proposal; it simply asks the questions one by one, as instructed.
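The screenshot of the prompt isn’t reproduced here. A hypothetical prompt implementing this one-question-at-a-time flow, covering the application fields listed earlier, might look like:

```
You are helping a Bot Creator prepare a use case application for AIBots.
Ask the following questions one at a time, waiting for an answer before
moving on to the next:
1. What is the purpose of your bot?
2. What documents will be uploaded to its knowledge base?
3. Who are the intended users?
4. What is the worst-case scenario if the bot answers wrongly?
5. What measures will you take to mitigate this risk?
6. What level of data classification are you seeking clearance for?
Do not evaluate the answers. After the final question, compile all the
answers into a summary the user can email to the AIBots team.
```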
Step #2 — Testing and Iterating
As the process requires users to email me their submission, I get to review each conversation and serve as another layer that filters out anomalies.
The bot often repeats questions; I tried tweaking the prompt, but it didn’t make much of a difference.
I suppose that with the understanding that this bot is a trial effort, users would be forgiving.
Step #3 — Sharing the bot
Once again, I created a shortened URL for all public officers to chat with it directly.
The Bots Did Their Job
Based on the usage statistics of these bots, which track the number of questions and the users who asked them, the bots seem to have served their purposes.
On AIBots, we also allow Bot Editors to review the anonymised conversation history of their bots, if the bot is declared as such. From this, I could see exactly what users were asking, but not who asked.
Users seemed to find the bots useful: many of the questions asked were answered accurately, and those questions did not get directed to me. It did ease some of my workload, and I may have kept the links to these bots in my autoresponder even though I am no longer on leave. I would think everyone wants their answers as quickly as possible, and you can count on a bot for a 24/7 immediate reply more than you can count on me!
There is so much potential in such custom chatbots, and the range of use cases I see from users has been impressive and creative. Even so, a key consideration when using customised chatbots for work is to imagine what would happen if the answers are incorrect, and never to assume that users will verify the output. That way, we can continue to maximise the benefits of LLMs in our work.
How I created 3 Customised GPT chatbots to reply on my behalf so I could go on holiday in peace was originally published in Government Digital Products, Singapore on Medium, where people are continuing the conversation by highlighting and responding to this story.