The AI Layer Nobody's Competing At: Why Design-Time Governance Changes the Game

Most of the AI conversation right now — in this community and everywhere else — is about runtime. Which model is smarter. Which agent is more autonomous. Which reasoning engine handles edge cases better. That is important work. But I think we are underweighting a different layer entirely: design-time AI.

Blueprint operates at that layer. And the governance implications are worth examining.

Blueprint is AI under user control at the design phase. It does not generate code autonomously and ship it. It converts a plain-language business problem into a visible, editable, challengeable application design — case types, data models, personas, integration points — and waits for human judgment before anything moves forward. Every suggestion is transparent. Every decision is auditable. The AI proposes. The human disposes.
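To make that propose/dispose loop concrete, here is a minimal sketch in Python of what a design-time approval gate looks like as a pattern. The class names and fields are entirely hypothetical and are not Blueprint's API; the point is only that every AI-suggested element carries its rationale for audit, and nothing reaches the build step without a named human signing off.

```python
# Hypothetical sketch of a design-time governance gate.
# Names and structures are illustrative only -- not Blueprint's API.
from dataclasses import dataclass
from enum import Enum


class ReviewStatus(Enum):
    PROPOSED = "proposed"   # the AI has suggested it
    APPROVED = "approved"   # a named human has signed off
    REJECTED = "rejected"   # sent back for revision


@dataclass
class DesignProposal:
    """One AI-suggested design element (a case type, data object, integration, ...)."""
    element: str                    # e.g. "case_type:LetterOfCredit" (hypothetical)
    rationale: str                  # why the AI proposed it, kept for the audit trail
    status: ReviewStatus = ReviewStatus.PROPOSED
    reviewed_by: str | None = None  # which human made the call


def approve(proposal: DesignProposal, reviewer: str) -> None:
    """A human decision is the only way a proposal changes state."""
    proposal.status = ReviewStatus.APPROVED
    proposal.reviewed_by = reviewer


def ready_to_build(proposals: list[DesignProposal]) -> bool:
    """The build step only ever sees fully human-approved designs."""
    return all(p.status is ReviewStatus.APPROVED for p in proposals)
```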

This maps directly to what Andrzej Lassak described at the Cloud Summit — a reference I keep returning to because it is one of the cleaner articulations of the design-time principle I have heard inside Pega: use AI’s creative reasoning at design time, keep runtime execution governed and predictable. Blueprint maps every path and decision in advance so that what happens at runtime is audited, consistent, and explainable — rather than relying on a large language model to improvise in the moment.

Here is what that looks like in practice: describe a trade finance Letter of Credit workflow in plain language. Blueprint returns a structured case type, a data model with embedded UCP 600 compliance fields, and integration scaffolding for SWIFT connectivity — all visible, editable, and human-approved before a single line of code is committed. That is design-time AI producing a governed foundation, not a runtime model hoping it gets the compliance fields right.
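As a rough illustration of the kind of structured artifact that description could yield, here is a hypothetical sketch. The field names, stages, compliance flags, and SWIFT message types are invented for illustration and are not what Blueprint actually emits; they simply show how compliance and integration concerns can be surfaced as explicit, reviewable design data rather than left to runtime improvisation.

```python
# Hypothetical illustration of a design-time artifact for a Letter of Credit
# workflow. Field names, compliance flags, and endpoints are invented; real
# Blueprint output will differ.
letter_of_credit_design = {
    "case_type": "LetterOfCredit",
    "stages": ["Application", "Issuance", "Document Presentation", "Settlement"],
    "data_model": {
        "applicant": {"type": "Party", "required": True},
        "beneficiary": {"type": "Party", "required": True},
        "credit_amount": {"type": "Money", "required": True},
        # Compliance surfaced at design time, not improvised at runtime
        "ucp600_examination_period_days": {"type": "Integer", "default": 5},
        "ucp600_presentation_complete": {"type": "Boolean", "default": False},
    },
    "integrations": [
        {"system": "SWIFT", "message_types": ["MT700", "MT707"], "direction": "outbound"},
    ],
    "personas": ["Trade Finance Officer", "Compliance Reviewer", "Relationship Manager"],
    "review": {"status": "proposed", "approved_by": None},  # human sign-off gate
}
```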

For enterprise clients in regulated industries, this distinction is not academic. It is the difference between a system you can stand behind and one you cannot.

Why this matters beyond Blueprint specifically: the Ramp AI Index (April 2026) found that AI adoption has crossed 50%, but the dividing line between adopters and non-adopters is access to institutional knowledge — who is telling you what to build and how to build it. I explored this in a LinkedIn article this week: “The Most Expensive Silence in Business Right Now” (https://www.linkedin.com/pulse/most-expensive-silence-business-right-now-pumulo-sikaneta-5male/). When partners encode that design-time expertise into reusable Blueprint templates, they are not just creating delivery accelerators — they are making institutional knowledge accessible at scale.

For the full breakdown — including what Blueprint generates on import, the Partner-Powered Blueprint model, and how partners are using it as a pre-sales discovery tool — I have posted the companion piece in the Blueprint Expert Circle: https://forums.pega.com/t/blueprint-as-equalizer-what-the-ramp-ai-index-tells-us-about-why-partner-blueprints-matter-more-than-we-think/11526/2

For this community, the question I would love to explore is:

If we accept that AI should be governed and transparent at design time — not retrofitted after the fact — why are we still tolerating the opposite pattern in decisioning architecture, data model design, and integration mapping? What’s stopping us from applying the Blueprint principle across the entire delivery lifecycle?

Curious what the AI Expert Circle thinks.

Hello Pumulo,

Great post. I think this will eventually happen, and the gap in what is governed at design time will keep shrinking. But design-time human configuration can't be fully eliminated: there will always be in-house proprietary tools, bespoke apps, or legacy systems that need integrating with. Today's greenfield app will be legacy in a decade. So the gap will always be there; we can work to reduce it, but we can't avoid it.

I see at least three areas where agents can help: design time, runtime, and build time. I agree that the runtime agents discussion takes up most of the air right now. How do you think about build time, i.e. coding agents? They are very powerful, and the use case fits GenAI perfectly: you know what you want to achieve (and you can test it), but the route to the goal is not linear and there are multiple valid solutions, which is exactly where an agent can help. Interested in what others think.


@sikap AI agents are evolving both in the product and in IT client environments. I think builders, designers, and business analysts will eventually use domain expert assistants, coding assistants, and so on, and agent-to-agent communication will close much of the knowledge gap between companies during application development. For example, Grok may offer the best financial analyst agent in the coming months, and if a Pega agent is configured properly by the IT client according to its requirements, that becomes a faster way of producing production-ready artifacts for clients. Pega agents could provide templates for building these communicating AI agents quickly; IT clients deploy them and then tweak the agent-to-agent communication as business demands change. My guess is that even Blueprint will use domain expert agents in the future.