AI Expert Circle Kickoff Replay - Join the Conversation

If you’re new here, or if you couldn’t join us live, I’d really encourage you to watch the kickoff replay.

I had the pleasure of hosting this session with Steph Louis, alongside our AI Expert Circle leaders Andi Mutlow, Andy Bober, Jay Stewart, Tim Straatsma, and Ivar (who couldn’t join us live). Together, we kicked off this circle with a clear goal: create a place where practitioners can learn from each other and apply AI in ways that work in real enterprise environments.

Our discussion covered how we’re thinking about predictable AI in practice, where different agentic patterns make sense, and how teams are combining GenAI, decisioning, and workflow to move from experimentation into production with confidence. The leaders shared real delivery perspectives: what works, what breaks, and where teams most often over- or under-engineer.

You’ll hear us talk about:

  • How to decide where agents belong in an enterprise process

  • How trust, governance, and reuse shape agentic design

  • What “good enough” really looks like when taking AI into production

Steph then walked through how this community experience is evolving - how Expert Circles work, how to follow conversations, how to contribute your own insights, and how participation and recognition fit into the broader community journey. This isn’t just content to consume; it’s a space to engage, influence, and learn together.

We had some great questions raised during the session, and we’ll be posting follow-ups and answers right here in this conversation, so keep an eye out for those updates and feel free to jump in with your own questions or perspectives.

👉 Watch the replay, explore the pinned resources, and let us know what topics you’d like us to go deeper on next. This Expert Circle is something we’re building with you, and I’m excited to see the conversations continue here.

- Paul

Q: I work on the Cisco account, which runs a heritage Pega application on-premises. MCP is the word of 2026, but I still haven’t seen good communication between Cisco’s Splunk product and Pega components over an agent-to-agent protocol for solving production issues.

A: MCP and Agent‑to‑Agent (A2A) are emerging as strategic standards for agentic systems in 2026, and they are natively supported in Pega Infinity ’25+ as part of Pega’s agentic workflow architecture. However, this does not currently extend to deep agentic integration with Cisco’s Splunk.

For Splunk specifically, Pega’s supported integration today is log-level, not agentic. As documented in Appendix 79 - Streaming Pega logs to Splunk, Pega Cloud supports streaming platform logs into Splunk using Splunk HEC (HTTP Event Collector), including secure token-based authentication, log filtering, and fallback options. This integration is designed for observability and operations, not for real-time, bidirectional agent-to-agent workflows.
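For readers unfamiliar with what HEC ingestion looks like at the wire level, here is a minimal Python sketch. This is not how Pega Cloud implements the integration (that is configured for you on the platform side); it only illustrates the HEC contract itself: an HTTPS POST with token-based authentication and a JSON event body. The URL, token, sourcetype, and index below are all placeholders.

```python
import requests

# Placeholder values: substitute your Splunk host and HEC token.
SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def send_log_event(message: str, level: str = "INFO") -> None:
    """Send a single log event to Splunk via the HTTP Event Collector."""
    payload = {
        "event": {"message": message, "level": level, "source_app": "pega"},
        "sourcetype": "pega:alert",  # assumed sourcetype; match your Splunk config
        "index": "main",             # assumed index
    }
    resp = requests.post(
        SPLUNK_HEC_URL,
        json=payload,
        headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()

send_log_event("PEGA0001: HTTP interaction time exceeded threshold", level="WARN")
```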


Q: Do you have recommended approaches for Agentic AI Architectures for clients to choose from?

A: Overall, Pega’s guidance is to design agentic systems around governed workflows and business outcomes, using agents where they add value, while maintaining transparency, auditability, and enterprise‑grade control. Common agentic patterns discussed in the industry include hierarchical (manager–worker), peer‑to‑peer, event‑driven, agents embedded in workflows, and hybrid approaches. These patterns differ in composition, trade‑offs, control boundaries, and where responsibility sits.
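To make the hierarchical (manager-worker) pattern concrete, here is a small, framework-agnostic sketch in Python. It is illustrative only and does not use Pega APIs; `call_llm` is a hypothetical stand-in for whatever model invocation your stack provides. The point it shows is the control boundary: the manager owns decomposition and sequencing, and workers handle narrow tasks without talking to each other.

```python
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a model call; stubbed for illustration."""
    return f"[model response to: {prompt[:40]}...]"

@dataclass
class Worker:
    name: str
    instructions: str  # narrow, task-specific system prompt

    def run(self, task: str) -> str:
        return call_llm(f"{self.instructions}\n\nTask: {task}")

class Manager:
    """Owns decomposition and the control boundary; workers never talk to each other."""

    def __init__(self, workers: dict[str, Worker]):
        self.workers = workers

    def handle(self, goal: str) -> list[str]:
        # In a real system the plan itself might come from a model; here it is fixed.
        plan = [
            ("research", f"Gather facts for: {goal}"),
            ("writer", f"Draft a summary for: {goal}"),
        ]
        return [self.workers[name].run(task) for name, task in plan]

manager = Manager({
    "research": Worker("research", "You gather and verify facts."),
    "writer": Worker("writer", "You write concise summaries."),
})
print(manager.handle("Summarize open incidents for account X"))
```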

We are actively building out more detailed guidance based on some early adopter initiatives and our evolving roadmap. We’ll be posting that material in docs.pega.com and in this AI Expert Circle. Definitely feels like a meaty topic we could handle as a webinar as well.

If you have a specific use case or architecture in mind, please share more details. We might be able to provide you a targeted answer while we are working on the broader set of guidance assets.

Q: Is Pega considering an MCP (Model Context Protocol) integration to allow external AI tools (like Claude Code) to interact directly with the Pega Rulebase? (Related: External IDE Integrations)

A: Excellent question! We’ve seen a lot of interest here, and we’re exploring MCP-style integrations in select areas. We’d love to hear more from you on this: what dev/testing use cases are you most interested in? What scope within the Pega Rulebase (rules, data types, case types, decisioning artifacts, UI components)?
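For context on what such an integration could look like mechanically, here is a hedged sketch using the open-source MCP Python SDK (the `mcp` package). This is not a Pega product or roadmap item, and the rule-search endpoint shown is entirely hypothetical; it only illustrates how an MCP server could expose a Rulebase query as a tool that Claude Code or another MCP client can call.

```python
import requests
from mcp.server.fastmcp import FastMCP

# Hypothetical endpoint: Pega does not ship this API today.
PEGA_RULE_SEARCH_URL = "https://pega.example.com/prweb/api/rulebase/search"

mcp = FastMCP("pega-rulebase")  # server name shown to MCP clients

@mcp.tool()
def search_rules(query: str, rule_type: str = "") -> str:
    """Search the Pega Rulebase for rules matching a query string."""
    resp = requests.get(
        PEGA_RULE_SEARCH_URL,
        params={"q": query, "type": rule_type},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, which is what IDE clients expect
```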

Q: How do you prevent GenAI from hallucinating, or from varying its answers, when users expect process AI to respond correctly and consistently for the same set of input variables?
A: At Pega, we reduce hallucinations by design, not just by prompt tuning.
First, we strictly constrain what the LLM can see. Instead of passing broad case context, we deliberately carve out only the required data using curated data pages and knowledge tools. This grounding dramatically reduces the chance of the model fabricating information.

Second, we combine generative AI with deterministic rules, processes, and workflows. Agents are focused on semantic and reasoning tasks, while decisions, validations, and execution paths remain governed by Pega’s rule and case management engine.
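A minimal sketch of those first two points, in generic Python rather than Pega constructs (the helper names here are hypothetical and stubbed): the agent only ever sees a curated slice of case data, the analogue of a curated data page, and its output is checked by a deterministic guardrail before anything is committed.

```python
def load_case(case_id: str) -> dict:
    """Hypothetical case store accessor; stubbed for illustration."""
    return {"claim_amount": 1200.0, "policy_status": "ACTIVE", "notes": "..."}

def call_llm(prompt: str) -> str:
    """Hypothetical model call; stubbed to a fixed answer for illustration."""
    return "STANDARD"

def curated_context(case_id: str) -> dict:
    """Analogue of a curated data page: return only the fields the task needs."""
    case = load_case(case_id)
    return {
        "claim_amount": case["claim_amount"],
        "policy_status": case["policy_status"],
        # Deliberately no free-text notes, no unrelated history.
    }

def classify_claim(case_id: str) -> str:
    context = curated_context(case_id)
    draft = call_llm(
        f"Classify this claim as STANDARD or COMPLEX.\nData: {context}\n"
        "Answer with exactly one word."
    )
    # Deterministic guardrail: the rule layer, not the model, decides what is valid.
    answer = draft.strip().upper()
    if answer not in {"STANDARD", "COMPLEX"}:
        raise ValueError(f"Model output outside allowed values: {draft!r}")
    return answer

print(classify_claim("C-1001"))  # STANDARD
```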

Finally, we continuously test and validate agent behavior using formal evaluation frameworks like DeepEval, allowing us to detect hallucinations, drift, and regressions before anything reaches production.
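Since DeepEval is named above, here is a minimal example of the kind of hallucination check it supports. It assumes DeepEval is installed and an evaluation model is configured (for example via OPENAI_API_KEY); the input, output, and context strings are placeholders.

```python
from deepeval import evaluate
from deepeval.metrics import HallucinationMetric
from deepeval.test_case import LLMTestCase

# Placeholder grounding document and agent output.
test_case = LLMTestCase(
    input="Is water damage covered, and up to what amount?",
    actual_output="Yes, water damage is covered up to $5,000 under policy P-123.",
    context=["The customer's policy P-123 covers water damage up to $5,000."],
)

metric = HallucinationMetric(threshold=0.5)  # scores above this fail the test
evaluate(test_cases=[test_case], metrics=[metric])
```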

@EASLR has a current Expert Circle post that goes deeper on this testing approach; you can find it here: Mastering Trust: Testing, Continuous Monitoring, and Safety in an AI World

Q: With the shift toward Agentic Customer Service, how does Pega’s architecture handle the handoff of context and memory between multiple specialized agents working on a single complex case?

A: One thing that often gets missed in agentic AI discussions is where memory actually lives.
With Pega, the case is the memory.

What needs to be done, what’s already been done, what decisions were made, what evidence was used, what stage we’re in. All of that lives in the case, not inside an AI Agent’s prompt or conversation history. And that’s intentional.

Each Agent in Pega is invoked fresh - the context it needs is assembled at runtime from the case, using controlled constructs such as data pages, decision data, aggregates, history, and vector stores. It performs a specific task, and then writes its outcomes back to the case.

The next Agent that comes along doesn’t inherit a long prompt or what is essentially just a chat transcript - it inherits a clean, governed state.

In other words, there is no direct agent-to-agent handoff. The case, under orchestration, is the handoff mechanism.
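To illustrate the shape of this (not Pega’s actual API), here is a small Python sketch: each agent is a stateless function that receives a context slice assembled from the case and returns outcomes that the orchestrator writes back. The next agent sees only the updated, governed case state, never a shared chat transcript.

```python
from typing import Callable

Case = dict  # the single source of truth; stands in for a Pega case

def triage_agent(context: Case) -> Case:
    """Stateless: reads only what it is given, returns structured outcomes."""
    return {"priority": "HIGH" if context["amount"] > 1000 else "LOW"}

def resolution_agent(context: Case) -> Case:
    return {"next_step": "escalate" if context["priority"] == "HIGH" else "auto-resolve"}

def run_step(case: Case, agent: Callable[[Case], Case], needs: list[str]) -> None:
    # Assemble a fresh, minimal context from the case; no conversational memory.
    context = {k: case[k] for k in needs}
    outcomes = agent(context)
    case.update(outcomes)  # the case, not the agent, carries memory
    case.setdefault("audit", []).append((agent.__name__, outcomes))

case = {"amount": 2500.0}
run_step(case, triage_agent, needs=["amount"])
run_step(case, resolution_agent, needs=["priority"])
print(case)  # priority, next_step, and an audit trail now live on the case
```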

This matters when people raise concerns about agentic drift.

Most of what gets labelled as drift isn’t a platform issue; it’s an architectural one. When Agents rely on long, shared conversational memory, behaviour degrades over time. Context mutates, intent blurs, and coordination weakens, even though nothing has explicitly changed.

At Pega, we avoid this by design. Agents don’t “remember”. The case does. Each agent is grounded in the same authoritative case state, under orchestration, with explicit inputs and outputs. That’s why agentic behaviour in Pega looks like enterprise workflow, not LLMs talking to each other.

So when people ask “how does Pega prevent drift?”, the honest answer is: it does not try to patch LLM drift. It eliminates the conditions that cause it.


That’s a very nice response. Could integration with other AI tools help IT clients in a big way? For example, if we used Scale AI agents to interact with Pega Agents, would that accelerate debugging and deliver diagnostic insights to the production support team more quickly?