Pega Knowledge Buddy uses a RAG architecture and stores content embeddings in a vector store database. My understanding is that we currently support the following embedding models for AWS & GCP (see screenshot below).
My question is: if content has already been embedded and the model is later changed to a different one, does the entire content of the vector store need to be re-embedded? Thanks.
Yes: if you change the embedding provider and/or embedding model, the existing content in the vector store must be re-embedded. There is no way around this, because vectors produced by different models live in different embedding spaces (often with different dimensions) and are not comparable to each other.
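A minimal sketch of why mixing models breaks retrieval. The two `embed_model_*` functions below are toy stand-ins, not real embedding models: real models differ not only in vector dimension but in the geometry of the space itself, so even same-dimension vectors from different models are not meaningfully comparable.

```python
import math

def embed_model_a(text: str) -> list[float]:
    # hypothetical "old" model producing 4-dimensional vectors
    return [float(ord(c) % 7) for c in text[:4].ljust(4)]

def embed_model_b(text: str) -> list[float]:
    # hypothetical "new" model producing 6-dimensional vectors
    return [float(ord(c) % 5) for c in text[:6].ljust(6)]

def cosine(u: list[float], v: list[float]) -> float:
    # similarity search assumes both vectors come from the same model
    if len(u) != len(v):
        raise ValueError("embeddings come from incompatible models")
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

stored = embed_model_a("pega")   # content embedded under the old model
query = embed_model_b("pega")    # query embedded under the new model

try:
    cosine(stored, query)
except ValueError as e:
    print(e)  # retrieval breaks until the stored content is re-embedded
```

In the best case the mismatch fails loudly like this; in the worse case (same dimensions, different model) the similarity scores are simply meaningless, which is why a full re-embed is required either way.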
A check against our AI Knowledge & Documentation agents suggests that the process is handled at the Pega application layer: there's no need to interact with the AWS or GCP consoles directly, since Knowledge Buddy abstracts the underlying cloud infrastructure through the Pega GenAI gateway. The steps are:
1. Open the Knowledge Buddy portal and navigate to the relevant content records or data collections.
2. Reopen the content records that need to be updated.
3. Trigger re-ingestion: this kicks off automatic re-chunking, sends the content through the Pega GenAI gateway, generates fresh embeddings under the new model, and writes them to the GenAI Vector Store. (Pega Knowledge articles, if applicable, are automatically reflected in Knowledge Buddy upon update or republication, which simplifies the re-ingestion flow for that content type.)
4. Validate outputs: a human-in-the-loop review is recommended after an embedding model change to confirm retrieval quality and content accuracy.
The process is the same regardless of whether your underlying cloud provider is AWS or GCP; the abstraction layer keeps it consistent.