AI Expert Circle Kickoff Replay - Join the Conversation

Q: How do you prevent GenAI from hallucinating or varying its responses when users expect a process AI to answer correctly, without variation, for the same set of input variables?
A: At Pega, we reduce hallucinations by design, not just by prompt tuning.
First, we strictly constrain what the LLM can see. Instead of passing broad case context, we deliberately carve out only the required data using curated data pages and knowledge tools. This grounding dramatically reduces the chance of the model fabricating information.
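To make the carving idea concrete, here is a minimal sketch in plain Python. The field names, case record, and prompt template are all illustrative stand-ins, not Pega data page APIs:

```python
# Context carving: instead of sending the whole case record to the LLM,
# select only the fields the task actually needs before building the prompt.
# REQUIRED_FIELDS and the record shape are hypothetical examples.

REQUIRED_FIELDS = {"claim_id", "claim_amount", "policy_status"}

def carve_context(case_record: dict) -> dict:
    """Keep only the curated fields; drop everything else."""
    return {k: v for k, v in case_record.items() if k in REQUIRED_FIELDS}

def build_grounded_prompt(case_record: dict, question: str) -> str:
    context = carve_context(case_record)
    facts = "\n".join(f"- {k}: {v}" for k, v in sorted(context.items()))
    return (
        "Answer using ONLY the facts below. "
        "If the answer is not in the facts, say you don't know.\n"
        f"Facts:\n{facts}\nQuestion: {question}"
    )

case = {
    "claim_id": "C-1001",
    "claim_amount": 2500,
    "policy_status": "active",
    "customer_ssn": "redacted",    # never reaches the model
    "internal_notes": "redacted",  # never reaches the model
}
prompt = build_grounded_prompt(case, "Is this claim payable?")
```

Because the model only ever sees the curated facts plus an explicit "don't know" instruction, it has far less surface area to fabricate from.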

Second, we combine generative AI with deterministic rules, processes, and workflows. Agents are focused on semantic and reasoning tasks, while decisions, validations, and execution paths remain governed by Pega’s rule and case management engine.
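The hybrid pattern can be sketched as "the LLM proposes, deterministic rules dispose." The stub functions below are illustrative stand-ins for a generative step and for Pega's rule engine, not real APIs:

```python
# Hybrid control flow: a generative component may suggest an action,
# but a deterministic rule check governs whether it executes.
# llm_propose_action and rule_allows are hypothetical stand-ins.

def llm_propose_action(case: dict) -> str:
    """Stand-in for a generative step that suggests a next action."""
    return "approve" if case["claim_amount"] < 5000 else "escalate"

def rule_allows(case: dict, action: str) -> bool:
    """Deterministic guardrail: same inputs always give the same verdict."""
    if case["policy_status"] != "active":
        return False
    if action == "approve" and case["claim_amount"] >= 10000:
        return False
    return True

def decide(case: dict) -> str:
    proposed = llm_propose_action(case)
    # The execution path stays governed by rules, not by the model.
    return proposed if rule_allows(case, proposed) else "route_to_human"

case = {"claim_amount": 2500, "policy_status": "active"}
print(decide(case))  # deterministic for identical inputs
```

Even if the generative step were to vary, the final decision is reproducible because the guardrail is ordinary, testable code.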

Finally, we continuously test and validate agent behavior using formal evaluation frameworks like DeepEval, allowing us to detect hallucinations, drift, and regressions before anything reaches production.
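As a toy illustration of what such an evaluation loop checks, here is a plain-Python consistency and regression harness. It only shows the general shape of the idea; frameworks like DeepEval automate far richer checks (including LLM-as-judge metrics), and the agent stub and golden cases here are invented for the example:

```python
# Toy eval harness: run the agent repeatedly on golden cases and flag
# (a) non-deterministic outputs and (b) regressions against expected answers.
# The agent stub and GOLDEN_CASES are hypothetical.

def agent(inputs: dict) -> str:
    """Stand-in for the agent under test."""
    return f"status={inputs['policy_status']}"

GOLDEN_CASES = [
    ({"policy_status": "active"}, "status=active"),
    ({"policy_status": "lapsed"}, "status=lapsed"),
]

def run_eval(runs_per_case: int = 3) -> list[str]:
    failures = []
    for inputs, expected in GOLDEN_CASES:
        outputs = {agent(inputs) for _ in range(runs_per_case)}
        if len(outputs) > 1:
            failures.append(f"non-deterministic output for {inputs}")
        if expected not in outputs:
            failures.append(f"regression for {inputs}: got {outputs}")
    return failures

print(run_eval())  # an empty list means no drift or regressions detected
```

Wiring a harness like this into CI is what lets drift and hallucinations surface before anything reaches production.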

@EASLR has a current Expert Circle post that goes deeper on this testing approach, which can be found here: Mastering Trust: Testing, Continuous Monitoring, and Safety in an AI World