Pega Cloud Summit 2026 – Day 2 Q&A Recap

:tada: Pega Cloud Summit 2026 – Day 2 Community Q&A Recap

Event and recordings: Pega Cloud Summit 2026: From Blueprint to Go-Live
Session Date: Friday, 13 February 2026


Thank you to everyone who joined us at Pega Cloud Summit 2026! It was two incredible days packed with insights. Day one covered GenAI, Pega-as-a-Service, Cloud Modernisation, and Blueprint. Day two covered Always Current with My PegaCloud, Pega Cloud Network Configuration, SDLC: Pega Diagnostic Center (PDC), and SDLC: Deployment Manager, followed by a live Ask-Me-Anything panel with John Higgins and Frank Guerrera. The live audience brought fantastic energy and some truly thought-provoking questions. Below is the full Q&A from the session — we hope it adds value whether you were there live or are catching up now.


:counterclockwise_arrows_button: Always Current with My PegaCloud

Q: If we restart all the nodes, will there be any outage?
No — Pega performs a rolling restart, so there is no downtime during node restarts. The system will not disturb your environment or any running business processes.
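For readers curious what "rolling" means in practice, here is a minimal conceptual sketch — the node list, restart call, and health check are all hypothetical stand-ins, since Pega Cloud handles this orchestration for you:

```python
import time

def rolling_restart(nodes, restart, is_healthy, timeout=300):
    """Restart nodes one at a time so the rest keep serving traffic."""
    for node in nodes:
        restart(node)  # take a single node down and bring it back up
        deadline = time.time() + timeout
        while not is_healthy(node):  # wait until it rejoins the cluster
            if time.time() > deadline:
                raise RuntimeError(f"{node} did not become healthy in time")
            time.sleep(5)
        # Capacity never drops by more than one node at a time, which is
        # why running business processes continue uninterrupted.
```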

Q: When we restart, do we also clear cache and static content?
No. A rolling restart does not clear cache — neither on the web side nor the database side. If you need to clear cache, that requires a separate support ticket.

Q: Will there be any outage during database maintenance?
Database maintenance is near-zero downtime. If any maintenance activity is expected to impact your environment, Pega will proactively notify you through cloud maintenance tickets before the maintenance begins. In practice, most maintenance happens quietly behind the scenes without any disruption.

Q: When we receive a Pega DB update notification, there are no details about what the update contains. Is this information confidential?
It is not deliberately hidden. Database maintenance is handled by Pega as part of the infrastructure-as-a-service layer, and clients are generally not expected to be impacted by it. If the update requires client awareness or action — or if there is any risk of disruption — Pega will create a cloud maintenance ticket and reach out proactively with full details.

Q: “Clone environment can take up to 5 days” — that’s the SLA. How long does it typically take in reality?
The 5-day SLA is a maximum. In practice, 90% of clone environments are ready within 24 hours. Pega sends cloud maintenance notifications at each stage of the process so you always know the status.

Q: Is there any cost to a cloned environment? How many days is it available? Are existing security measures like IP whitelisting automatically ported over?
The cloned environment is completely free of charge — it is provided by Pega specifically to allow you to test your applications before touching actual environments. By default, you have 30 days to perform user acceptance testing. You can extend this by a further 30 days directly from MyPegaCloud starting from day 28. Regarding IP whitelisting: a new cloned environment in Cloud 3 receives the default “deny by default” policy — existing access controls are not automatically ported over, so you will need to configure inbound access on the clone separately.

Q: How can I check system stability?
For system stability monitoring, Pega Diagnostic Center (PDC) is the primary tool. MyPegaCloud currently does not provide a dedicated stability summary, but PDC gives you full visibility into alerts, exceptions, and performance metrics across your environments.

Q: Can you show how to restart or schedule a restart of the Pega application?
This was demonstrated at the beginning of the session. Via MyPegaCloud, you can perform an immediate restart or schedule one for non-business hours. You can also choose specific tiers (e.g., web tier only). The full demonstration is available in the recording.

Q: Is a DB size increase request managed via My PegaCloud (MPC)?
Not directly as a self-service action. Pega’s alerting mechanism sends notifications when your database storage reaches 75% utilization, and automatically creates a change ticket when it reaches 90% — at which point Pega will increase storage unless you take action first (e.g., archiving data). If you want to increase storage proactively before reaching those thresholds, you need to raise a ticket via the My Support Portal.

Q: Does MPC include the management of Pega Deployment Manager (PDM) as well?
Not directly. MyPegaCloud provides the Deployment Manager URL for your environment, allowing you to navigate there quickly. However, configuration and management of pipelines and deployments is handled within Deployment Manager itself. A dedicated Deployment Manager session was included later in the day.

Q: Is there an integrated assessment tool that evaluates an existing application against a new version to identify potential compatibility issues or failure points?
Yes — the Update Checker tool is currently available via the Pega Marketplace. You can download and import it into Designer Studio to run an impact assessment on your application before starting an update. Soon, this process will be automated directly within MyPegaCloud, so the entire assessment — import, execution, and report — will happen with a single click in the update journey flow.

Q: If we are working across two environments — one in the US and one in India — which region should we select for the clone environment?
It depends on how your applications are structured and where your production environment is. The general recommendation is to choose the environment that most closely resembles your production environment and use it as your test baseline for the update journey.


:globe_with_meridians: Pega Cloud Network Configuration

Q: For an application built on a different technology stack, does consuming a Pega-exposed service over a private link still require a whitelisting request, given that inbound access to Pega is restricted by default?
No additional whitelisting is needed when using Private Link (AWS PrivateLink or GCP Private Service Connect). Private Link connectivity is not subject to the default “deny by default” inbound policy — it is a private, internal flow that is allowed by default once configured. The deny-by-default policy applies only to public internet and public peering connections.

Q: What mechanisms enforce network and access controls for globally available services (e.g., S3), and how do these differ from services accessed via private link or inbound whitelisting?
When you subscribe to Pega Cloud, you receive a dedicated tenant with isolated environments — no data is shared between clients. Connectivity between environments within the same Pega Cloud VPC is strictly isolated by default. Pega Cloud environments can connect to each other only via the public destination URL of the target environment, subject to that environment’s inbound access rules. For globally available cloud services such as S3, access controls are managed at the infrastructure level by Pega.

Q: Is it generally better to route all traffic through a centralized ingress/egress layer (e.g., an ESB) versus allowing direct connectivity with private links? What are the key trade-offs? Does Pega have anything on its roadmap for a managed integration/ESB-style capability?
There is no single right answer — it truly depends on your enterprise network architecture and scale requirements. At one end of the spectrum, each Pega service can have its own dedicated Private Link endpoint, which gives tight control but is difficult to scale. At the other end, a single network load balancer routes traffic to an API gateway or enterprise load balancer (e.g., F5), which is far more scalable. Most large enterprises land somewhere in the middle: some aggregation at the hub level, combined with granular controls within AWS or GCP. A hub-VPC model is particularly recommended as it scales well, supports data inspection for DLP requirements (especially relevant in banking and finance), and can integrate with multiple vendors — not just Pega Cloud. Pega’s networking team can help guide the architecture decisions. Regarding an ESB-style managed capability on the roadmap: this was not confirmed in the session.

Q: Can we use a client-managed Route 53 URL and route it to a Pega Cloud FQDN host?
Yes — Pega supports custom (vanity) domains. Clients can provide their own domain name in place of the default Pega-assigned URL. A certificate process is used to validate the domain on the Pega Cloud environment. Additionally, Private DNS is supported for Private Link configurations. Self-service management of vanity URLs via MyPegaCloud is on the roadmap for 2026.
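On the client-managed Route 53 side, the routing itself is a plain CNAME record pointing at the Pega Cloud FQDN. A minimal boto3 sketch, assuming hypothetical zone ID and hostnames — the certificate-based domain validation step with Pega still applies:

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical values -- substitute your hosted zone and the FQDN
# assigned to your Pega Cloud environment.
HOSTED_ZONE_ID = "Z0000000EXAMPLE"
VANITY_HOST = "pega.example.com"
PEGA_FQDN = "example.pegacloud.net"

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Point vanity domain at the Pega Cloud FQDN",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": VANITY_HOST,
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": PEGA_FQDN}],
            },
        }],
    },
)
```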

Q: When you say “manage inbound allow lists,” does this mean the DNS Network Change Request (CC ticket) we raise today?
Inbound allow-list changes use a dedicated ticket type in the support system — separate from a DNS change request. However, this is changing: new self-service capabilities are now rolling out in MyPegaCloud, allowing security contacts to manage inbound access controls directly without raising tickets. For new environments, this is already available; for existing environments, the rollout is progressing in waves.

Q: How does the recommended networking and integration approach change when consuming or invoking AI agents, compared to traditional service integrations? Are there different considerations for ingress/egress control, trust boundaries, and routing?
For AI integrations within Pega, the same connectivity models apply as for standard Pega applications — no special setup is required at the network level. Looking ahead, Pega’s networking team plans to expose AI agent connectivity configuration as a self-service capability in MyPegaCloud, so you can set up connectivity (e.g., to AWS PrivateLink or GCP Private Service Connect) independently, without raising tickets. Pega is also investing in AI-driven proactive networking diagnostics that will detect and alert on connectivity issues before they become incidents.

Q: Is it possible to use WebSphere or WebLogic instead of Tomcat on Pega Cloud, for clients who want to use a JMS MDB Listener but don’t want to manage the server themselves?
Frank Guerrera confirmed that Pega Cloud operates with standardized infrastructure (Tomcat), and this standardization is what enables Pega to scale and manage cloud services reliably. Site-to-site VPN and alternative application server configurations (WebSphere, WebLogic) are not supported on Pega Cloud. Clients with legacy connectivity requirements are being guided toward supported alternatives such as PrivateLink. If you have a specific JMS/MDB use case, please reach out directly to the Pega team to explore integration options.


:hammer_and_wrench: SDLC: Pega Diagnostic Center (PDC)

Q: Was PDC not called “Predictive Diagnostic Center” before?
Yes — PDC was rebranded in mid-2025. The previous name was Predictive Diagnostic Cloud. The rename to Pega Diagnostic Center was a deliberate decision to avoid the impression that PDC is exclusively for cloud users (it supports on-premises, private cloud, and all major cloud providers), and to position it clearly as the single center for all Pega application diagnostic needs.

Q: How can I increase the overall PDC score to Green?
Align your implementation with the best practices suggested in each score section of the PDC landing page. Following the recommended practices consistently across the process health, reliability, and performance dimensions will move your scores toward Green.

Q: Is Pega adding AI capabilities to PDC to improve issue analysis and reduce the long loading time for production events?
Yes — the team is actively working on AI capabilities for PDC and you can expect them to be available soon. A GenAI Assistant for slow SQL query analysis is already available, providing suggestions on index structure and identifying performance-degrading operations.

Q: To what extent does PDC align with AIOps capabilities, such as autonomous anomaly detection and pattern learning, versus rule-based diagnostics? How does Pega see this evolving on the roadmap?
Adding AI capabilities to drive automation is on the roadmap. The current GenAI Assistant for SQL analysis is an early step in that direction, and further AI-driven diagnostics are being developed.

Q: Are there plans to let PDC give hardware up/downscaling advice using AI?
Hardware scaling decisions within Pega Cloud are informed by multiple monitoring tools — not PDC alone. PDC focuses on application-level diagnostics. Specific AI-driven hardware scaling recommendations from PDC are not currently planned, but the feedback has been noted for the product management roadmap.

Q: Is PDC available for on-premises, private cloud, and Pega Cloud environments?
Yes — PDC supports all Pega deployment types: Pega Cloud (AWS, GCP), any client-managed cloud environment, and on-premises installations (provided there is no compliance restriction on sending data over the internet). For government clients with specific compliance requirements that prevent sending monitoring data over the internet, Pega also offers a dedicated on-premises PDC installation.

Q: In my VPC, I can see some clusters have Message Signing enabled and others do not. What exactly does Message Signing do in a PDC configuration?
Message Signing establishes a trusted connection between your Pega instance and PDC. When enabled, PDC authenticates that incoming monitoring messages originate from a legitimate, configured Pega instance — and discards any messages that do not match the configuration. Since PDC does not store PII data, the primary purpose of Message Signing is to give clients who want an additional layer of trust and control the ability to establish a verified, authenticated data channel between their environment and PDC.
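As a general illustration of the concept (not Pega's actual implementation, which is internal): message signing typically means the sender attaches a MAC computed with a shared secret, and the receiver recomputes it and discards anything that does not verify. A minimal HMAC sketch:

```python
import hashlib
import hmac

SHARED_KEY = b"per-instance-secret"  # hypothetical key known to both sides

def sign(payload: bytes) -> str:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def accept(payload: bytes, signature: str) -> bool:
    # Constant-time comparison; messages that fail to verify are discarded.
    return hmac.compare_digest(sign(payload), signature)

msg = b'{"alert": "PEGA0005", "node": "web-1"}'
assert accept(msg, sign(msg))          # legitimate, configured sender
assert not accept(msg, "forged-sig")   # discarded
```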

Q: We often see requests for end-to-end monitoring (browser → server → integrated systems) and tools like Dynatrace. Does Pega support anything in that direction?
Yes — Pega supports integration with external monitoring tools like Dynatrace via APIs. PDC provides an HTTP Monitoring Metrics API that exposes monitoring data in OpenTelemetry format, which can be consumed by tools such as Grafana Alloy, Dynatrace, Splunk, and other APM platforms.

Q: When using GenAI capabilities, how do you ensure that operational logs (e.g., SQL query performance and latency) do not capture or expose PII or other sensitive data?
Regardless of whether GenAI capabilities are in use, PDC is committed to not storing any PII data. All monitoring data is filtered to remove PII before it is processed or stored in PDC. This filtering happens at ingestion — before any AI or analytical processing — so there is no risk of PII leaking into GenAI-powered features.
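Purely as an illustration of ingestion-time filtering (the patterns below are hypothetical, not PDC's actual rules), the key property is that redaction happens before any downstream consumer — storage, analytics, or GenAI — ever sees the data:

```python
import re

# Hypothetical patterns; a real filter would be far more extensive.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<email>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<ssn>"),
]

def scrub(record: str) -> str:
    """Redact PII before the record is stored or analyzed."""
    for pattern, replacement in PII_PATTERNS:
        record = pattern.sub(replacement, record)
    return record

def ingest(raw: str, store) -> None:
    # Downstream consumers (including GenAI features) only ever see
    # the scrubbed record.
    store(scrub(raw))
```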

Q: Are the PII/sensitive data controls enforced by platform defaults, configurable by clients, or dependent on application-level discipline?
The PII filtering is enforced by PDC platform defaults. By default, every Pega Cloud instance is automatically configured to send monitoring data to PDC. If a client has a compliance requirement not to use PDC, they can disable the configuration via a cloud change ticket (on Pega Cloud) or directly from the PDC landing page (on-premises).

Q: How can Pega logs and diagnostics be integrated into a client-managed monitoring platform instead of PDC?
Yes, this is possible. PDC provides an HTTP Monitoring Metrics API that exposes web user traffic metrics in OpenTelemetry format, which can be scraped at regular intervals by any compatible monitoring platform. Refer to the PDC Metrics API documentation for full details on integrating with client-managed monitoring applications such as Splunk or Grafana.
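A minimal polling sketch, assuming a hypothetical endpoint URL and bearer token — take the real endpoint, authentication scheme, and payload schema from the PDC Metrics API documentation:

```python
import time
import requests

# Hypothetical values -- substitute the real endpoint and credentials
# from the PDC Metrics API documentation.
METRICS_URL = "https://pdc.example.com/metrics"
TOKEN = "replace-me"

def scrape(forward, interval_seconds=60):
    """Poll the metrics endpoint and hand each OpenTelemetry payload to `forward`."""
    while True:
        resp = requests.get(
            METRICS_URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        forward(resp.text)  # e.g., push into Splunk, Grafana, or Dynatrace
        time.sleep(interval_seconds)
```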

Q: Any details on the Performance Metrics API for integration with external monitoring tools like Splunk?
Please see the PDC Metrics API documentation. Any tool that supports OpenTelemetry format can consume the API — this includes Splunk, Grafana Alloy, and Dynatrace, among others.

Q: Are there any plans to add an MCP (Model Context Protocol) to PDC?
Nothing is currently planned, but this has been noted as a topic to raise with the PDC product management team.


:rocket: SDLC: Pega Deployment Manager (PDM)

Q: Does Deployment Manager require extra licensing, or does it come with Pega?
Deployment Manager as-a-Service is available free of charge for Pega-as-a-Service customers.

Q: Why do we need to create a pipeline?
Pipelines are how you define and automate your release process in Deployment Manager. Each pipeline encapsulates the stages (development, QA, staging, production) and quality gates (guardrail checks, unit tests, PDC stability assessment) specific to a given application and version. You can have multiple pipelines for the same application — for example, a merge pipeline for daily CI and a deployment pipeline for promotions through environments. This flexibility means the configuration is tailored to your team’s release strategy and governance needs.
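Conceptually (pipelines are configured in the Deployment Manager UI, not in code — the application and stage names below are illustrative), a pipeline is an ordered list of stages, each guarded by its own quality gates:

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    gates: list[str] = field(default_factory=list)  # checks that must pass before promotion

@dataclass
class Pipeline:
    application: str
    version: str
    stages: list[Stage]

deployment = Pipeline(
    application="CustomerService",
    version="01.02",
    stages=[
        Stage("Development", gates=["guardrail check", "unit tests"]),
        Stage("QA", gates=["unit tests", "PDC stability assessment"]),
        Stage("Staging", gates=["approval"]),
        Stage("Production"),
    ],
)
```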

Q: For Deployment Manager, where is the product ZIP file stored on the server?
The artifact never leaves the client’s boundary. Deployment Manager’s architecture is entirely unidirectional — client environments communicate outbound to Deployment Manager, but the service never reaches back into the client VPC. The artifact is stored in a shared S3 repository within your Pega Cloud environment, with folder structures organized per application and environment.

Q: How can multiple teams merge using the merge pipeline on a specific ruleset version? The merge pipeline only shows two options: New Ruleset or Higher Ruleset Version.
The recommended approach is to merge to the highest ruleset version, which Deployment Manager increments automatically. Up to five developers can merge concurrently using a single pipeline. If teams have reserved specific ruleset versions, it is possible to create a new ruleset version with each merge, but Pega does not recommend delta or dynamic deployments.
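For context, Pega ruleset versions follow a major-minor-patch pattern (e.g., 01-02-15); merging to the highest version means Deployment Manager bumps the patch component on each merge. An illustrative sketch of that increment, not Deployment Manager’s actual code:

```python
def next_ruleset_version(version: str) -> str:
    """Increment the patch component of a MM-mm-PP ruleset version."""
    major, minor, patch = version.split("-")
    return f"{major}-{minor}-{int(patch) + 1:02d}"

assert next_ruleset_version("01-02-15") == "01-02-16"
```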

Q: If an application has parallel code deployments for different monthly releases (e.g., March, April, May), how does PDM support this? How is the merge pipeline configured with the correct ruleset and versions?
Deployment Manager recommends a pipeline-per-version approach. When a release is complete, archive its pipeline and create a new one. For parallel development streams, start each new release from a new minor version — this keeps environments in sync and avoids conflicts. Different pipelines can then cater to each version independently.

Q: How does the Data Migration pipeline work? When you say “data migration,” is it referring to On-Prem to Pega Cloud DB migration during an upgrade? We don’t migrate data from lower environments to production.
The Data Migration pipeline is specific to CDH (Customer Decision Hub) use cases — it is used to move simulation data. It is not intended for on-premises to Pega Cloud database migration. A separate, upcoming capability will support moving masked case data from production to non-production environments for use cases such as troubleshooting production bugs in lower environments.

Q: Do we have a separate repository for each environment?
No — there is one shared S3 repository per Pega Cloud subscription, with folder structures organized per application. Deployment Manager orchestrates the flow of artifacts across stages without duplicating them into separate environment-specific repositories.

Q: Which is the best option — keeping hotfix pipelines separate from regular release pipelines, or combining them?
The recommended approach is to reuse the existing release pipeline and use the stage-level skip capability to expedite emergency deployments. When a hotfix needs to go to production quickly, you can skip intermediate stages and go directly to production — no need to create a separate hotfix pipeline. This keeps your pipeline configuration lean and your deployment process consistent.

Q: Can you explain the rollback features and any recent improvements?
Deployment Manager supports rollback scoped to the application and its rule sets. It builds on the Pega platform’s native rollback strategy — rules and data instances associated with the rule sets packaged in the product rule are rolled back. A key recent improvement is environment-level rollback: even after a deployment has completed, if you identify an issue in production, you can navigate to that environment in Deployment Manager and roll back to a previous restore point (generated by each “Deploy Application” task). Full documentation on what is and isn’t covered by rollback is available from the team on request.

Q: How can we move data instances using PDM (e.g., non-versioned rules, Dynamic System Settings)?
Associate the data instances (e.g., DSS) with the appropriate rule sets included in your product rule. When the product is packaged and deployed, those instances are included. Deployment Manager itself ships its own releases the same way — DSS and data instances travel with the product configuration.

Q: If a CDN artifact is configured on a different server, is Deployment Manager still helpful?
The deploy artifact pipeline supports any artifact stored in the shared Pega Cloud repository — as long as the artifact is available at the configured path, Deployment Manager can orchestrate the deployment. For CDN-specific configurations outside this scope, please reach out to the team directly (via the Expert Circle or to Madhuri/Ivan) for guidance tailored to your setup.

Q: Can approval tasks be routed to a team or a specific user?
Currently, approval tasks can be routed to a specific user. The ability to route approvals to a distribution list or group of users is coming soon — within this quarter.

Q: Given that automated test suites already validate correctness, does Pega see AI agents in the CI/CD pipeline primarily contributing through risk forecasting and change-impact analysis to inform deployment decisions (e.g., canary-style rollout vs. full release)?
Yes — this is exactly the direction Pega is exploring. The DevOps Assistant in Deployment Manager is an early step toward AI-driven insight in the pipeline. The roadmap includes AI-powered analysis of deployment reports to surface deviations, recommend next best actions, and — at a tenant level — provide cross-application risk recommendations without requiring a release manager to review each application individually. Canary-style rollouts and other advanced production release strategies are also being considered.


:handshake: Ask-Me-Anything Panel: John Higgins & Frank Guerrera

The following questions were raised live by voice during the panel — not submitted via the Q&A feature.

Q: How do you see competition from AI-enabled no-code platforms? What differentiates Pega?
John Higgins framed this as a question of first principles, not competition. Pega serves mission-critical applications where quality, scale, engineering resilience, and predictability matter as much as speed. Pega uses AI at design time — through Blueprint — not at runtime, which ensures predictable, auditable workflows suited for regulated industries. The AI is used to accelerate design, cover every workflow scenario, and infuse capabilities within the platform, while conversational agents are treated as just another channel within a governed workflow. Frank Guerrera added that Pega’s decades of experience with complex, large-scale enterprise workloads gives it an edge that general-purpose AI tools cannot replicate.

Q: How do you position Pega-as-a-Service compared to on-premises or client-managed cloud? If the hardware is affordable, why not run it yourself?
Frank Guerrera acknowledged that it’s technically possible to run your own infrastructure, but the value of Pega-as-a-Service is not in cost-per-hardware-unit — it’s in what clients are freed up to do. Pega manages 30+ global certifications, scales architecture and processes continuously, and handles the operational complexity of keeping the platform secure and current. The key insight from Frank’s experience running large financial firms’ data centers: “Don’t build what you could buy if it doesn’t give you competitive advantage.” Pega-as-a-Service allows clients to direct their resources toward the business innovation that differentiates them, rather than maintaining undifferentiated infrastructure. John added that Pega’s cloud preference exists because clients simply get a better experience on Pega Cloud — and that the seamless integration between Blueprint, Always Current, AI infusion, and the managed update process is only possible at its full potential on Pega Cloud.

Q: Many organizations now define SaaS as an operating model where development, DevOps, and release responsibilities are largely offloaded to the vendor, leaving clients with primarily an operational user base. How does Pega align with this expectation, and how is this reflected in the long-term roadmap, given that Pega-as-a-Service is positioned as a PaaS offering?
Both John Higgins and Frank Guerrera addressed this directly. Frank noted that Pega sits between traditional PaaS and SaaS, offering a powerful, customizable platform combined with managed cloud infrastructure. He emphasized that the definition of DevOps and SaaS is itself changing rapidly — Blueprint and AI are transforming what development looks like, and the industry is moving toward something new that doesn’t fit neatly into existing categories (“Design as a Service” was one suggestion). John agreed, pointing to Gartner’s Business Orchestration and Automation Technology segment as a signal of where the market is heading — toward bespoke, AI-assisted applications rather than rigid packaged software. In practice, Pega’s cloud preference means clients can already offload a significant layer of infrastructure, update, and operational responsibility to Pega, while focusing their own teams on business innovation.

Q: How does Pega Cloud handle data sovereignty and GDPR for EU customers?
Pega is launching a Sovereign Cloud in Europe in Q2 — confirmed by Frank Guerrera during the panel. This offering is being developed in close partnership with AWS and will include fully fledged Pega Secure private connectivity options. More details will be shared in early Q2.

Q: How do you see AI and agentic capabilities fitting into runtime workflows, and what use cases are most relevant?
John Higgins outlined a near-term vision where large enterprises operate in heterogeneous environments, selectively using agents from multiple vendors — Workday for employee records, Oracle for ERP data, Salesforce for CRM, ServiceNow for ticketing — all tightly orchestrated within Pega workflows as “slaves to the workflow.” Pega’s strategic advantage is that it is the only platform that openly acknowledges clients will continue using multiple vendors’ technology, and designs for that heterogeneity rather than trying to replace everything end-to-end.

:speech_balloon: Join the Conversation

Did we miss anything, or do you have follow-up thoughts? Drop your comments below — the team is actively monitoring this thread.