Using AI Without Breaking Trust: How Pega Separates Design Time from Run Time
Teams building enterprise systems feel constant pressure to “add AI” everywhere. Requests arrive half-formed, asking to make intake smarter, speed up decisions, or reduce clicks. The real risk is not skipping AI. The real risk is putting it in the wrong place.
On the Pega Platform, the split between design time AI and run time AI is not theoretical. It drives how systems get built, governed, and trusted at scale. Design time is where exploration belongs. Solution designers and builders test ideas, generate options, and turn intent into structure. Mistakes cost little here, and learning moves fast.
Run time is different. Systems now interact with customers, regulators, and revenue. Behaviour must stay consistent. People need to be able to inspect it, explain it, and repeat it.
This post shows how that split works in practice. It explains where AI speeds up building through Pega Blueprint and Pega Infinity. It also shows where AI must stay inside clear workflows, decisions, and case steps at run time. The aim is not less AI. The aim is to use it where it helps, without breaking trust.
1. Design Time AI - accelerating how systems get built
Design time is where teams define intent. They shape case types, stages, data, integrations, and user experiences. AI fits naturally here because experimentation carries low operational risk and high learning value. On the Pega Platform, design time AI focuses on speed, coverage, and creative range. It helps teams move from ideas to a usable starting point faster without bypassing human judgment.
1.1 Pega Blueprint: turning intent into structure
Pega Blueprint is the clearest expression of design time AI. Blueprint translates business intent into an initial application structure. It converts goals, policies, and high-level requirements into tangible assets such as:
- Case types with stages and steps
- Core data objects and relationships
- Suggested personas and interaction channels
The value is not that Blueprint produces a finished application. It does not. The value is that teams start with a coherent model instead of a blank canvas. Solution designers can see how intent maps to cases and data before committing to a detailed design. Once the blueprint exists, solution builders refine it within the Pega Infinity Platform. They adjust stages, rename steps, add validations, and align flows with real operating policies. AI accelerates the early work. Humans apply judgment and accountability.
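To make the "intent to structure" idea concrete, here is a minimal sketch in Python of what a Blueprint-style draft might look like as data. This is not Pega's actual export format or API; the class names and the example case type are entirely hypothetical. The point it illustrates is the workflow: AI generates a coherent starting structure, and a human builder refines and approves it before it goes anywhere.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str

@dataclass
class Stage:
    name: str
    steps: list[Step] = field(default_factory=list)

@dataclass
class CaseTypeDraft:
    name: str
    stages: list[Stage] = field(default_factory=list)
    approved: bool = False  # a human builder must approve before the draft is used

# A Blueprint-style draft generated from intent ("handle an insurance claim").
draft = CaseTypeDraft(
    name="Insurance Claim",
    stages=[
        Stage("Intake", [Step("Capture claim details"), Step("Attach documents")]),
        Stage("Review", [Step("Validate coverage"), Step("Assess damage")]),
        Stage("Resolve", [Step("Approve or deny"), Step("Notify customer")]),
    ],
)

# The builder refines the generated structure, then approves it.
draft.stages[0].steps.append(Step("Verify policy number"))
draft.approved = True
```

The generated draft is a proposal, not a commitment: the builder edits it freely, and nothing downstream consumes it until it is approved.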
1.2 Pega Infinity: AI‑assisted building without operational risk
Design time AI on the Pega Platform comes together across the Pega Infinity Studios (App Studio and Dev Studio), a unified environment where teams configure, extend, and validate applications before they ever reach production.
1.2.1 Pega GenAI Autopilot
A central capability here is Pega GenAI Autopilot, which brings context‑aware generative AI directly into the build experience. Rather than working from generic prompts, Autopilot operates with awareness of application intent, case context, and platform best practices.
Within the Pega Infinity Studios, GenAI Autopilot can assist builders by generating starter assets across multiple layers of the application, such as:
- Case creation based on business intent, including stages and steps
- Case data models, with suggested attributes and sample data for early validation
- User interface drafts, aligned to the case lifecycle and personas
- Data objects and integrations, including assistance based on OpenAPI specifications
- Automated test scripts, helping teams validate behaviour earlier in the lifecycle
The emphasis is on acceleration, not automation of judgment.
Every artifact generated by GenAI Autopilot is a draft, created inside the governed design environment. Builders review, refine, and approve each asset before it becomes part of the application. Nothing generated by Autopilot executes at run time without passing through the same modelling, validation, and governance steps as manually created assets.
This is a deliberate design choice. GenAI Autopilot absorbs generative variation at design time, where exploration is safe and reversible. It helps teams move from intent to structure faster, reduces manual scaffolding, and shortens feedback loops without introducing unpredictable behaviour into live systems.
In other words, GenAI Autopilot accelerates how systems are built, while the Pega Infinity Studios ensure that humans remain accountable for what ultimately runs.
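The governance gate described above can be sketched in a few lines of Python. This is an illustrative model only, not Pega's implementation; `Asset`, `promote_to_runtime`, and the field names are hypothetical. What it shows is the invariant: generated and manually created assets pass through the identical review gate, and origin grants no shortcut.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    origin: str           # e.g. "genai_autopilot" or "manual"
    reviewed: bool = False
    validated: bool = False

class GovernanceError(Exception):
    pass

def promote_to_runtime(asset: Asset) -> str:
    # The gate checks review status only; it never inspects asset.origin.
    if not (asset.reviewed and asset.validated):
        raise GovernanceError(f"{asset.name} has not cleared review and validation")
    return f"{asset.name} promoted"

draft = Asset("Claim summary UI", origin="genai_autopilot")
try:
    promote_to_runtime(draft)        # rejected: still an unreviewed draft
except GovernanceError:
    pass

draft.reviewed = True
draft.validated = True
result = promote_to_runtime(draft)   # accepted only after human review
```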
1.2.2 AI Designer
As AI usage grows across an application, acceleration alone is not enough. Teams also need a way to govern, test, and optimize generative AI consistently. This is where the AI Designer landing page plays a critical role.
AI Designer provides a central design‑time control plane for generative AI on the Pega Platform. From this single location, builders can configure and manage reusable GenAI Connect rules and Agent rules, instead of embedding prompts or models directly into individual case types. Using AI Designer, teams can:
- Configure reusable GenAI Connect rules for tasks such as summarization, extraction, classification, and explanation
- Define Agent rules that encapsulate how AI is invoked inside case steps or user experiences
- Test these GenAI rules using AI‑generated test data to validate behaviour early and repeatedly
- Compare and contrast outputs from different models using consistent inputs
- Evaluate quality and performance centrally, without manually executing individual case types
This approach enables a clean separation of concerns:
- AI behaviour is designed, tested, and tuned in AI Designer
- Cases and workflows consume AI through governed, reusable rules
Because testing and comparison happen centrally, teams can switch to a better‑performing model or configuration without re‑testing every consuming case flow individually. All outputs are still mediated by case steps, rules, and decisions at run time. AI Designer complements GenAI Autopilot. Autopilot accelerates asset creation. AI Designer ensures that generative AI usage itself is inspectable, testable, and controlled before it ever influences a live process.
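The indirection that makes this work can be sketched as follows. This is a hypothetical Python model of the idea, not Pega's GenAI Connect rule syntax; the registry, rule names, and model identifiers are invented for illustration. Case flows reference a rule by name, so swapping the underlying model is one central change rather than an edit in every consumer.

```python
# Hypothetical central registry of reusable GenAI rules.
GENAI_RULES = {
    "SummarizeClaim": {"model": "model-a", "task": "summarization"},
}

def invoke_rule(rule_name: str, text: str) -> str:
    rule = GENAI_RULES[rule_name]
    # Placeholder for a real model call; only the indirection matters here.
    return f"[{rule['model']}:{rule['task']}] {text[:40]}"

# A case step consumes the rule by name, not by an embedded prompt or model.
summary = invoke_rule("SummarizeClaim", "Customer reports water damage ...")

# Switching to a better-performing model is a single central change;
# every consuming case flow picks it up without modification.
GENAI_RULES["SummarizeClaim"]["model"] = "model-b"
```

Because the rule is the unit that gets tested and compared, validating "model-b" once in the central registry stands in for re-testing each consuming flow.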
1.3 Why exploration belongs at design time
Generative AI introduces variation by design. Whilst that is a strength early in the lifecycle, it is a liability in production.
Pega’s design time stance embraces this tension. Teams can think broadly, test alternatives, and discard weak ideas. The platform absorbs experimentation without exposing customers or staff to unstable behaviour. This is not about limiting innovation. It is about placing it where it creates value safely.
2. Run Time AI - precision inside guardrails
Now let’s look at run time AI. This is where systems must behave predictably and where customers, employees, and partners depend on consistent outcomes. Novelty matters far less than trust. On the Pega Platform, run time AI tends to show up in two dominant patterns. Both can use generative AI, but neither hands control of execution to the AI model.
2.1 Pattern 1: Automation Agents (AI in Agent Steps)
The first, and most common, run time pattern is AI embedded inside Agent steps. These are automation points that help interpret information or support case work, while the case model controls what happens next.
For example, consider a regulated intake flow such as an insurance claim or a bank service request. At run time, the system’s job is not to be creative. It is to understand what arrived and route work through a known path. Automation agents may be used to:
- Classify an incoming request
- Detect intent from text or speech
- Extract key data from documents and attachments
- Summarize long documents for a case worker
- Check completeness or consistency and flag what’s missing
Each of these actions is narrow in scope. The agent produces a defined output. The case then proceeds based on explicit stages, steps, rules, and decisions - not freeform prompts.
This is the core guardrail: AI interprets signals; the workflow governs execution.
Because these automation agents operate within defined steps, the behaviour is testable and reviewable. Case workers can see the result. Supervisors can validate outcomes. Compliance teams can audit decisions and the process that followed.
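The guardrail "AI interprets signals; the workflow governs execution" can be sketched in Python. This is a deliberately simplified stand-in, not Pega's Agent step API: the keyword classifier simulates an AI output, and the routing table plays the role of the case model. Note that the model only produces a label; an explicit table decides what happens next, with a safe default for anything unrecognised.

```python
def classify_request(text: str) -> str:
    """Stand-in for an AI agent step: narrow task, defined output."""
    lowered = text.lower()
    if "claim" in lowered:
        return "claim"
    if "address" in lowered:
        return "profile_update"
    return "unknown"

# Explicit routing: the workflow, not the model, decides the next step.
ROUTING = {
    "claim": "Claims Intake",
    "profile_update": "Customer Service",
    "unknown": "Manual Triage",
}

def route(text: str) -> str:
    label = classify_request(text)
    # Any label outside the known set falls into a safe, reviewable default.
    return ROUTING.get(label, "Manual Triage")
```

Even if the classifier is swapped for a large language model, the set of possible outcomes stays fixed by the routing table, which is what keeps the behaviour testable and auditable.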
2.2 Pattern 2: Conversational, Knowledge, and Coach Agents
The second run time pattern is user‑facing AI. These are experiences that help customers or employees interact with the application more naturally. This includes:
- Conversational agents (application, case, or self‑service agents)
- Knowledge agents (for finding answers from approved sources, such as Pega Knowledge Buddy)
- Coach agents (guided assistance experiences, such as GenAI Coach)
These agents focus on guidance and task completion, not independent decision‑making. At run time, they typically:
- Answer questions using approved knowledge sources
- Guide users through known journeys or tasks
- Explain next steps, requirements, or outcomes
- Help gather information or complete forms
What they do not do is invent new processes or bypass governance. Even when the interaction is conversational, each exchange is anchored to the application context, whether that is a case, a journey, a knowledge base, or a predefined service flow. The AI acts as an interface layer that reduces friction, while the platform controls which actions are allowed. The system of record does not change:
- The case model remains authoritative
- Stages and steps still govern progress
- Rules and decisions still control outcomes
This ensures consistent behaviour across channels. A customer using self‑service, a case worker using a coach, and an automation step inside a case can all rely on the same underlying logic.
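One way to picture "the platform controls what actions are allowed" is an allow-list checked per context. The sketch below is hypothetical Python, not a Pega API; the context and action names are invented. The agent may propose any action, but only actions the current case context permits are ever executed.

```python
# Hypothetical per-context allow-list: the platform, not the agent,
# decides which actions a conversation may trigger.
ALLOWED_ACTIONS = {
    "claim_intake": {"answer_question", "collect_field", "explain_next_step"},
}

def handle_agent_request(context: str, action: str) -> str:
    if action not in ALLOWED_ACTIONS.get(context, set()):
        # The proposal is refused; the system of record is untouched.
        return f"refused: {action} not permitted in this context"
    return f"executed: {action}"
```

Because the same allow-list backs every channel, a self-service chat and a worker-facing coach are constrained to exactly the same set of permitted actions.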
2.3 Decisioning stays explicit
Across both run time patterns, decisioning must remain explicit for audit, compliance, and trust. Business outcomes rely on:
- Rules
- Decision tables
- Predictive and adaptive models
Large language models can inform and accelerate the creation of supporting artifacts, most safely at design time, but run time behaviour should remain inspectable, testable, and explainable through the platform’s decisioning and workflow constructs.
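What "explicit decisioning" buys can be seen in a minimal decision table expressed as data. This Python sketch is illustrative, not Pega's decision table implementation; the conditions and outcomes are invented. Rows are evaluated in order, so every outcome can be traced to a specific row, which is precisely what makes the behaviour inspectable and repeatable.

```python
# A decision table as data: each row is (condition, outcome),
# evaluated top to bottom with an explicit default.
DECISION_TABLE = [
    (lambda c: c["amount"] <= 1000 and c["customer_tier"] == "gold", "auto_approve"),
    (lambda c: c["amount"] <= 500, "auto_approve"),
    (lambda c: c["amount"] > 10000, "senior_review"),
]
DEFAULT_OUTCOME = "standard_review"

def decide(case: dict) -> str:
    for condition, outcome in DECISION_TABLE:
        if condition(case):
            return outcome
    return DEFAULT_OUTCOME
```

A language model might help an author draft rows like these at design time, but at run time the table itself is what executes: same inputs, same row, same outcome, every time.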
3. Building systems that last: why structure matters more than prompts
As AI becomes easier to apply, some platforms promote prompt‑driven systems as a shortcut: write a prompt and let the model decide. In controlled demos, this can look compelling. Under real operational load, it breaks down. Prompts change behaviour across model versions. Small wording changes produce large outcome swings. Debugging becomes opaque. Customer experiences drift.
Pega takes a different approach. Logic lives in cases, steps, rules, and decisions - not in freeform text. Workflows remain explicit. Decision paths stay inspectable. The platform runs fast, scales predictably, and behaves the same way on Monday morning and Friday afternoon.
This is not an anti‑AI stance. It is a disciplined one.
"… you do not want AI to be creative when you are running a nuclear power plant.”
Alan Trefler
That principle applies to enterprise systems. Creativity belongs where teams are designing and learning. At run time, consistency and control matter more than novelty.
This is why the design‑time and run‑time split on the Pega Platform is a strength, not a limitation. Design time invites exploration. AI accelerates creation and learning through Blueprint, GenAI Autopilot, AI Designer, and the Pega Infinity Studios. Variation is safe. Run time demands control. AI performs narrow, well‑bounded tasks inside explicit workflows. Case management and decisioning remain visible, testable, and governed.
That balance explains why Pega systems run large banks, insurers, and public sector organizations and why teams can adopt AI without rewriting their operating model.
Use AI where creativity helps.
Use structure where trust matters.
That distinction is what separates experiments from systems that endure.
How are you separating design time experimentation from run time execution in your Pega applications? Where have you seen this distinction reduce risk or speed adoption? Where do you wish Pega did more? Share your experience below.



