Pega Cloud Summit 2026 – Day 1 Community Q&A Recap
Event and recordings: Pega Cloud Summit 2026: From Blueprint to Go-Live
Session Date: Thursday, 12 February 2026
Thank you to everyone who joined us at Pega Cloud Summit 2026! It was two incredible days packed with insights on GenAI, Pega-as-a-Service, Cloud Modernisation, and Blueprint on the first day. The second day covered Always Current with My PegaCloud, Pega Cloud Network Configuration, SDLC: Pega Diagnostic Center (PDC), and SDLC: Deployment Manager, followed by a live Ask-Me-Anything panel with John Higgins and Frank Guerrera. The live audience brought fantastic energy and some truly thought-provoking questions. Below is the full Q&A from the session — we hope it adds value whether you were there live or are catching up now.
GenAI, Agentic Capabilities & Pega’s AI Strategy
Q: How is Pega evolving its platform and developer experience to remain competitive as high-speed, model-driven code generation tools (e.g., Cursor, Co-Pilot, Codex) dramatically increase velocity and expectations for accuracy?
This was deferred to the Ask-Me-Anything panel on Day 2, with John Higgins and Frank Guerrera. Please refer to the Day 2 recap for the full answer.
Q: Pega currently prioritises predictability, determinism, and auditability over raw generation speed. Is Pega’s cautious AI approach a temporary response to enterprise risk, or a long-term strategy? And with a human-in-the-loop (HITL) model, does AI still need to be fully predictable, or is managed uncertainty acceptable?
Pega’s approach is a very deliberate, long-term enterprise strategy — not a temporary response to risk. Andrzej Lassak, Senior Director of AI Engineering, explained that enterprise clients — particularly regulated organisations such as banks — cannot take unconstrained bets with AI. The HITL model reflects the reality that some decisions simply cannot be easily undone. Pega’s “Predictable AI” concept separates AI usage at design time (where reasoning models are used freely to map workflows and explore options) from runtime (where execution follows predictable, governed processes). This ensures auditability and consistency without sacrificing the creative power of AI where it matters most.
Q: What are Pega’s design best practices when dealing with Agents? It seems the recommendation is to use HITL almost always, and never leave the actionable part entirely to the Agent itself.
Best practices strongly favour decomposition and human oversight for high-stakes actions. Rather than building one large “master” agent that manages everything end-to-end, Pega recommends building multiple small, focused agents injected at specific steps in a workflow. Each agent handles a narrow, well-defined task, making the overall system far more predictable. Agents handle reasoning and orchestration at those steps; humans retain control over consequential decisions — ensuring accountability and a full audit trail. Andrzej illustrated this with the “Intern Iris” story: an overly autonomous agent that sent a client contract without approval — a cautionary tale for why guardrails and boundaries matter.
Q: Is it necessary for every large enterprise to build their own AI models? Knowledge Buddy didn’t seem useful for clients like Cisco. Agentic AI seems more promising.
Not every client needs a custom model. Andrzej explained that when GenAI first emerged, there was a widespread assumption that every organisation would need to train its own LLM. In practice, Retrieval-Augmented Generation (RAG) — as implemented in Knowledge Buddy — allows clients to add a delta of business-specific knowledge on top of existing frontier models without the cost and complexity of fine-tuning. For clients with very specific or sensitive knowledge domains, a custom or fine-tuned model can be connected to Pega agents via standard connectors. Agentic AI is indeed a more flexible path for complex, enterprise-specific automation, and Knowledge Buddy itself is becoming a foundational layer for agentic architectures.
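The Retrieval-Augmented Generation pattern mentioned above can be illustrated in a few lines. This is a toy sketch only — it uses hard-coded example embeddings and returns the augmented prompt instead of calling a real model, whereas a production system like Knowledge Buddy would use an embedding model, a managed vector store, and a frontier LLM:

```python
from math import sqrt

# Toy knowledge base of (text, embedding) pairs. In a real RAG system the
# embeddings come from an embedding model and live in a vector store.
KNOWLEDGE = [
    ("Refunds are processed within 5 business days.", [0.9, 0.1, 0.0]),
    ("Support is available Monday to Friday.",        [0.1, 0.9, 0.0]),
]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_embedding, k=1):
    """Return the k knowledge snippets most similar to the query."""
    ranked = sorted(KNOWLEDGE,
                    key=lambda item: cosine(query_embedding, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def answer(question, query_embedding):
    """Augment the prompt with retrieved business context before the LLM call."""
    context = "\n".join(retrieve(query_embedding))
    # A real system would now send this prompt to a frontier model; the delta
    # of business knowledge rides along in the context, with no fine-tuning.
    return f"Context:\n{context}\n\nQuestion: {question}"

prompt = answer("How long do refunds take?", [0.8, 0.2, 0.0])
```

The key point the sketch makes: the base model is untouched; only the retrieved context changes per organisation.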
Q: Why not adopt a hybrid AI strategy? Different companies have different risk appetites — why a one-size-fits-all approach?
Pega’s approach is not prescriptive or one-size-fits-all. Andrzej confirmed that Pega is “rather flexible” and clients can go relatively far with agentic configurations if they choose — MCP protocol support, case creation/update, and other capabilities are all available. The guidance towards Predictable AI is a strong recommendation based on enterprise realities, not a hard restriction. Clients with higher risk appetite can configure agents more freely; Pega simply advises doing so with full awareness of the operational challenges — such as unpredictability, maintainability, and regulatory exposure — that come with less-governed agentic deployments.
Q: Can you share experience with Pega’s Agentic Process Fabric capabilities and how they support domain-specific unified agents or assistants? Real-world examples would be helpful.
Andrzej acknowledged he didn’t have a dedicated slide on this topic during the session, but offered to share links and demos directly. If you’re interested in Agentic Process Fabric specifics, please connect with Andrzej Lassak on LinkedIn — he is active and responsive. Further content will also be covered in upcoming Expert Circle webinars.
Pega Cloud Roadmap & Infrastructure
Please treat the dates provided here as target dates and not commitments. For all official roadmap commitments please refer to pega.com.
Q: Which AI models will be available in the upcoming EU-Sovereign Pega Cloud, and are they hosted and operated within the EU-Sovereign region?
The EU-Sovereign Cloud will support AWS models exclusively, as it must remain entirely within the AWS Sovereign Cloud boundary. Not all models are available in the region yet, as the rollout is still in progress. Andrzej shared that he is actively pushing for Nova Lite (a strong balance of speed and intelligence) and Mistral to be available from day one. Mistral is confirmed for day one; there is no firm commitment on Nova Lite yet. The EU-Sovereign Pega Cloud is expected to go live in H2 2026.
Q: How quickly does Pega release new AI models after they become available in the market?
Pega’s internal teams gain access to new models even before general availability — AWS provides early/preview access as a close partner. Andrzej’s goal is to refresh the list of available LLMs approximately every quarter, though some models are released faster — for example, Nova Lite was made available to clients within a week of AWS’s GA announcement. The guiding principle is quality and enterprise readiness, not just speed: a model like GPT-5.3 was already released but not yet GA everywhere, so Pega held back on making it available until it was fully ready.
Q: Most mature enterprise clients are moving toward on-premise deployments due to hardware availability and AI subsidies. How can on-premise clients easily adopt new Pega GenAI capabilities?
Pega GenAI capabilities are, as a rule of thumb, Pega Cloud exclusive — the underlying infrastructure complexity and the SaaS model make this the primary delivery path. That said, hybrid approaches are possible: a Pega Cloud-hosted Knowledge Buddy can be surfaced in an on-premise Infinity environment via the Knowledge Buddy Widget, which communicates with Pega Cloud and displays results locally. For agentic capabilities, on-premise systems can connect to Pega Cloud agents via standard connectors. Ivan added that being on Pega-as-a-Service is the fastest and most complete path to GenAI adoption.
Q: Can Pega Cloud integrate with a client-managed Identity Provider (IdP)?
Yes. At the Pega platform level, SSO integration with any standards-compliant IdP is supported today via SAML and OIDC. For Pega-managed services such as MyPega Cloud and Deployment Manager — which currently use Community credentials — a pilot is already running that allows clients to connect their corporate IdP for single sign-on. Account teams are actively reaching out to customers to enable this. The goal is seamless login across all Pega-managed surfaces using enterprise credentials.
Q: Can Knowledge Buddy use AWS S3 Vectors in a customer-managed AWS account, or is vector storage currently limited to Pega-managed infrastructure?
Currently, the GenAI vector store for Knowledge Buddy must reside within the customer’s dedicated Pega Cloud VPC. Pega provides each client with a dedicated VPC — which is relatively unique in the market. A proposal exists to support client-managed gateways in Infinity 26.1, which could open up options for using external vector stores. This is a roadmap item rather than a current feature.
Q: If a client moves to Pega Cloud, can they port attachments from their own repository? If so, who updates the repository information for existing cases — Pega or the client dev team?
Migration of attachments is fully supported, though the approach depends on where they currently live. If attachments are stored in the database, they can be migrated as-is and then archived to Pega Cloud file storage. If attachments are already in an external repository, the migration team performs a lift-and-shift to Pega Cloud file storage — but the paths stored in existing cases will need to be updated to point to the new location. This path-update work is a shared responsibility, coordinated between Pega’s migration team and the client delivery team. Satya Mishra confirmed this is not trivial but is entirely achievable with proper planning.
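The path-update step described above boils down to rewriting stored attachment references to point at the new location. A minimal sketch of that idea, with purely illustrative path prefixes (these are not actual Pega Cloud paths):

```python
def rewrite_attachment_path(old_path, old_prefix, new_prefix):
    """Repoint an attachment reference from the legacy repository to the
    new file-storage location. References outside the legacy prefix are
    left untouched so the rewrite is safe to run over mixed data."""
    if old_path.startswith(old_prefix):
        return new_prefix + old_path[len(old_prefix):]
    return old_path

# Hypothetical example: a case attachment lifted-and-shifted from a
# client-owned repository into Pega Cloud file storage.
migrated = rewrite_attachment_path(
    "legacy-repo/cases/C-123/contract.pdf",
    old_prefix="legacy-repo/",
    new_prefix="pega-cloud-storage/",
)
```

In practice this rewrite would run as a coordinated bulk update over the case data, which is why it is a shared effort between the migration team and the client delivery team.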
Q: Can the same be done for S3 storage?
Yes. Clients can configure an external S3 repository in the Pega platform and use it to store case attachments. Satya advised that latency is a key consideration — the S3 bucket should ideally be in the same region as the Pega Cloud environment, and Private Link can be used to connect the two VPCs securely. However, Pega Cloud’s own managed file storage comes pre-configured with KMS encryption, WAF protections, and firewall controls — so for new Pega Cloud deployments, using Pega-managed file storage is the recommended default unless there is a specific business reason to maintain a client-owned S3 bucket.
Q: Private Link — is it supported?
Yes, Private Link is available for Pega Cloud environments as part of the Secure Connect offering, which enables private connectivity channels from your enterprise to Pega Cloud — avoiding the public internet entirely.
Q: Is the Log Viewer hosted in Pega Cloud, or does it need to be installed locally and fed log files?
The Log Viewer is planned as a web-based interface hosted in Pega Cloud — no local installation required. It is on the H2 2026 roadmap and will complement existing options (MyPega Cloud log download and log streaming to Splunk, S3, Azure, or Dynatrace). Satya also noted that Pega is keen to partner with pilot clients for this feature — reach out to Ivan or Satya if you’d like to be involved.
Q: What is the difference between PDC (Pega Diagnostic Center) and Log Viewer?
Both are complementary tools. PDC is a broader diagnostic and monitoring platform: it analyses Pega alerts, exceptions, database transactions, and HTTP request/response patterns, applying algorithms to surface actionable performance insights — and an AI assistant for PDC is coming in pilot later this year. Log Viewer will provide access to raw log files in a user-friendly web interface, for scenarios where direct log inspection is needed (e.g., troubleshooting specific errors or meeting compliance retention requirements). Satya’s expectation is that most clients will increasingly rely on PDC for day-to-day monitoring, with Log Viewer as a complementary tool for deeper raw-log access.
Q: How can Pega Cloud interact with client-managed downstream applications?
Pega Cloud supports a wide range of integration patterns. The starting point is network connectivity — private (Direct Connect, Private Link) or public (HTTPS). Once the network layer is established, integration options include REST API calls, SOAP connectors, SFTP file transfers, and database connections. For case data specifically, Pega’s data integration capabilities allow connection to virtually any database or downstream system. Satya recommended attending the dedicated Pega Cloud Network Configuration session on Day 2 for a detailed technical deep-dive.
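To make the REST option concrete, here is a minimal sketch of how a downstream client might assemble a case-creation request once network connectivity is in place. The host, endpoint path, case-type name, and payload schema below are all hypothetical placeholders — consult your environment's actual API documentation for the real values:

```python
import json

def build_case_request(base_url, case_type, content, token):
    """Assemble an HTTPS request for creating a case over REST.
    The URL shape and payload fields are illustrative only."""
    return {
        "method": "POST",
        "url": f"{base_url}/api/v1/cases",  # hypothetical endpoint path
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"caseTypeID": case_type, "content": content}),
    }

# Hypothetical host and case type, shown for shape only.
req = build_case_request(
    "https://example-env.example.com",
    "MyOrg-Claims-Work-Claim",
    {"ClaimAmount": 1200},
    token="<oauth-token>",
)
```

The same pattern applies whether the connectivity underneath is public HTTPS or a private channel such as Direct Connect or Private Link — only the network path changes, not the integration code.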
Q: As part of data masking, does Pega apply irreversible anonymisation by default, or is data masked using reversible encryption or tokenisation?
Pega’s data masking in the context of the Deployment Manager data migration feature uses key-based masking — sensitive fields such as SSNs are replaced with asterisks or equivalent masked values. This is distinct from encryption: Pega Cloud data is encrypted at rest and in transit by default (with optional bring-your-own-key for sensitive properties), but masking as applied during data migration is a substitution/obfuscation approach. Irreversible anonymisation options can be explored on a case-by-case basis depending on client compliance requirements.
Q: Given that key-based masking is reversible and therefore still constitutes production PII, how does Pega help clients assess and accept the associated risks compared to using irreversible anonymisation?
This question was not answered during the session.
A: Pega is transparent that key‑based masking is reversible and may still be treated as production PII under some regulations. For that reason, masking is positioned as a controlled option, not a universal compliance solution.
Masked data remains protected by encryption, strict access controls, and full auditability in Pega Cloud, and is typically used where data utility is required (for example, testing or troubleshooting). Clients retain ownership of the risk decision and assess whether reversible masking is acceptable based on their regulatory and internal policies.
Where irreversible anonymisation is required, Pega supports alternative approaches—such as excluding sensitive attributes, applying irreversible transformations, or integrating external anonymisation services—depending on client compliance needs.
In short, Pega enables informed choice, combining transparency, strong platform safeguards, and flexibility rather than enforcing a single masking strategy.
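The reversible-versus-irreversible distinction discussed above can be shown with a generic sketch. This is not Pega's actual masking implementation — it just contrasts tokenisation (reversible via a lookup vault, hence still PII) with a one-way hash (unrecoverable):

```python
import hashlib
import secrets

# Reversible tokenisation: the vault maps tokens back to originals, so the
# masked output can be reversed by anyone holding the vault/key — which is
# why regulators may still treat it as production PII.
_token_vault = {}

def tokenise(value):
    token = "TKN-" + secrets.token_hex(4)
    _token_vault[token] = value  # the vault is the "key" enabling reversal
    return token

def detokenise(token):
    return _token_vault[token]

# Irreversible anonymisation: a salted one-way hash. The fixed salt here is
# for demonstration only; the original value cannot be recovered.
def anonymise(value, salt="demo-salt"):
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

ssn = "123-45-6789"
token = tokenise(ssn)        # reversible: detokenise(token) recovers the SSN
digest = anonymise(ssn)      # irreversible: no way back to the SSN
```

The trade-off is exactly the one raised in the question: reversible approaches preserve data utility but keep the risk, while irreversible ones remove the risk along with the ability to recover the original.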
Blueprint & Modernisation
Q: Why not use AWS Transform or Google MAT to analyse a Pega codebase and upload it into Blueprint for modernisation — following the same pattern as COBOL or legacy codebase modernisation?
Answered by Stu Smith. The Application Signature Tool is purpose-built to understand Pega-specific application constructs — process flow rules, class structures, case lifecycles, access groups — and produces a .pegassign file that Blueprint natively understands. AWS Transform and similar tools are designed for generic code or COBOL transformation and lack this Pega-specific intelligence. Stu also noted that the Application Signature Tool is completely free to download from Pega Marketplace, making it an immediate option for any client starting a modernisation journey.
Q: How can Blueprint help organisations align modernisation efforts to Strategic Outcomes, Value, and Portfolio Investment?
Scott Merritt and John Higgins both responded to this. Scott described a “digital factory” model: clients use Blueprint to catalogue and envision their portfolio of applications, creating a structured backlog of modernisation work tied to strategic outcomes rather than siloed IT initiatives. John highlighted that Blueprint helps by focusing on the work that needs to get done — designing workflows around business outcomes rather than around screens or channels. This naturally breaks the pattern of building business logic into channels (as user stories tend to encourage), and instead creates channel-agnostic workflows that are easy to adapt and reuse. The result: a portfolio of blueprints becomes a portfolio of future projects, all aligned to measurable business goals.
Q: How do you convince clients like Cisco — who have already mastered AI — to adopt Blueprint?
John Higgins answered directly: don’t try to convince — inspire instead, and do it with the product, not words. At Davos, John met with 30+ senior executives and consistently used the same approach: ask them to think of either the IT project that nearly broke them, or the thing they urgently need to deliver in the next 6 months. Then hand them Blueprint and let them engage with it themselves. Every time, seeing how easy it is to be self-sufficient — bringing in their own data leads, integration teams, and subject matter experts in real time — created what he described as “a moment of sunshine in the eyes.” Blueprint’s value is self-evident once experienced; the goal is to get the right people into it.
Q: If we import a Blueprint of a legacy application into a Pega environment, does it help modernise from a UI Kit to a Constellation-based application?
Answered by Stu Smith and Anil Gokhale. Yes — and this is precisely the recommended modernisation path. The Application Signature Tool extracts the current case lifecycle and data model from the heritage application, which is then uploaded to Blueprint. Blueprint (with AI assistance) reimagines the application, and the new Blueprint is imported as a built-on application on top of the existing one. This enables a blended UI: any work type modernised as a Constellation case type gets the new Constellation experience, while UIKit work types remain unchanged. Modernisation happens incrementally, one micro-journey at a time, without disrupting live operations. Anil also noted that the Constellation Modernisation Tool on Marketplace is being superseded by enhancements to the Application Signature Tool for this purpose.
Q: Using Blueprint and Application Signature, to what extent can deep legacy DB complexity — stored procedures, embedded business logic, complex joins — be identified and meaningfully decomposed, rather than merely surfaced as dependencies?
The Application Signature Tool focuses on metadata about Pega application constructs — process flow rules, class structures, access groups, section rules, and service integrations — rather than database-level artefacts. It does not extract stored procedures, embedded SQL, or custom DB logic. The rationale: the recommended modernisation approach (built-on, iterative micro-journeys) deliberately maximises reuse of existing service integrations and back-end connections. Deep DB decomposition, where genuinely required, is best addressed through complementary consulting-led analysis and tooling outside the Pega stack.
Q: Do we manually compare suggestions for change from Blueprint?
The process is guided and AI-assisted rather than fully manual. Blueprint surfaces the current state (via the Application Signature Tool) and, with AI, helps reimagine the target state. Anil noted that AI assistant capabilities are coming directly to each Blueprint screen within weeks, enabling in-context suggestions for optimisation and GenAI adoption opportunities. Human review and validation remain part of the workflow — but Blueprint significantly reduces the manual effort compared to traditional discovery and design approaches.
Q: Does Application Signature consider the entire app stack, or only the implementation application to whose stack the PegaAppSign ruleset is added?
Anil confirmed that the tool does consider the full application stack, including built-on layers. On the portal, the tool surfaces all work types that are actively resolving work — regardless of which layer they originate from. This has been validated with pilot customers who have complex multi-layer stacks. For very deep stacks (e.g., 50 applications), Anil recommended starting with the highest-volume work types rather than attempting to discover everything at once.
Q: Will uploaded Blueprint templates be used to train the Blueprint module? Could company-specific processes become visible to other organisations?
Your data stays confidential. See Pega Blueprint Data Security for full details.
Q: Is Blueprint only available for Pega Cloud, or also for on-premise and client-managed cloud deployments?
Blueprint itself (at AI Workflow Builder | Pega Blueprint) is available to anyone — it is free, always up-to-date, and accessible without a Pega Cloud subscription. Andrzej confirmed it respects data residency: data from EU-based users stays in the EU. However, to import a Blueprint into an Infinity environment and take full advantage of all Blueprint Delivered capabilities (CI/CD, Constellation, automated testing, etc.), being on Pega-as-a-Service is strongly recommended and provides the most complete path.
Q: How can we add more cases to an updated application that has already been published as MVP1?
This question was submitted near the end of the session and was not answered live.
A: You can import a Blueprint several times, skipping the case types you do not want to change and importing only what is genuinely new.
Q: How can Cisco-style complex Pega applications — built around many integrations with Oracle SCM and significant legacy code — be modernised given Cisco’s own design system?
This question was submitted near the end of the session and was not answered live.
A: This is a rich topic — we encourage you to share your use case with the Blueprint Expert Circle forum for full discussion.
Join the Conversation
Did we miss anything, or do you have follow-up thoughts? Drop your comments below — the team is actively monitoring this thread.