As we move through 2026, the EU AI Act stops being something organisations are vaguely “getting ready for” and starts to shape real delivery decisions. New obligations are coming into force for high‑risk AI systems and for general‑purpose AI and foundation models, and customers are increasingly asking what this means for how they build, deploy, and govern AI in practice.
At a high level, the AI Act introduces a risk‑based framework. Certain AI uses are prohibited. Others, particularly those that affect access to essential services, employment, credit, or safety, are classified as high‑risk and allowed only with strong controls. More recently, the Act has also introduced explicit obligations for general‑purpose AI models, covering documentation, transparency around training data, and the management of systemic risk. These are real requirements, and they matter.
But when you listen carefully to customer questions, they are rarely just asking about model checklists. What they are really asking is something deeper: how do we stay in control of AI‑driven outcomes as these systems become embedded in everyday business decisions?
This is where it helps to shift the conversation away from AI models and towards how decisions actually run inside the organisation.
The AI Act places a strong emphasis on transparency, accountability, human oversight, and the ability to explain outcomes. In practice, those qualities don’t live inside a model. They emerge from the way intelligence, decisions, and processes are connected. A prediction or a generated response only becomes meaningful when it is applied in context: who the customer is, what has already happened, what policies apply, what risks are acceptable, and what outcome the organisation is trying to achieve.
That context usually lives in an operational case or a workflow, not in the model itself. Often that workflow is unstructured and manual, and that is where the problems start.
A useful mental model is to think in terms of three layers working together. Intelligence, including predictive models and GenAI, interprets signals, infers intent, and proposes options. Decisioning arbitrates between those options, applying policy, eligibility, value, and risk to determine what should happen next. Workflow and case management provide orchestration and memory: they maintain state, sequence interactions, involve humans where appropriate, and capture outcomes so learning happens responsibly over time.
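To make that separation concrete, here is a minimal, hypothetical sketch in Python. None of the names below (intelligence, decisioning, Case, run_case) come from any vendor's actual API; they are invented purely to show how proposing options, arbitrating with explicit policy, and recording outcomes can live in distinct layers.

```python
from dataclasses import dataclass, field


@dataclass
class Case:
    """Workflow layer: holds state, history, and outcomes for one interaction."""
    customer_id: str
    history: list = field(default_factory=list)

    def record(self, event: str) -> None:
        self.history.append(event)


def intelligence(signals: dict) -> list[dict]:
    """Intelligence layer: interprets signals and proposes options (stubbed)."""
    # In a real system this would call predictive models or a GenAI service.
    return [
        {"action": "offer_upgrade", "propensity": 0.72},
        {"action": "retention_call", "propensity": 0.41},
    ]


def decisioning(options: list[dict], policy: dict) -> dict:
    """Decisioning layer: arbitrates between options using explicit policy."""
    eligible = [o for o in options if o["propensity"] >= policy["min_propensity"]]
    if not eligible:
        return {"action": "no_action", "propensity": 0.0}
    return max(eligible, key=lambda o: o["propensity"])


def run_case(case: Case, signals: dict, policy: dict) -> dict:
    """Workflow layer: sequences the other layers and captures the outcome."""
    options = intelligence(signals)
    decision = decisioning(options, policy)
    case.record(f"chose {decision['action']} from {len(options)} proposed options")
    return decision


case = Case(customer_id="C-1001")
run_case(case, signals={"recent_complaint": True}, policy={"min_propensity": 0.5})
print(case.history)  # ['chose offer_upgrade from 2 proposed options']
```

The point of the separation is that the policy in the decisioning layer and the history on the case are explicit artefacts you can inspect and govern, independently of whatever model sits behind the intelligence function.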
Seen through this lens, many of the AI Act requirements start to make more sense. Explainability becomes about explaining what happened in a specific case, not opening up a model in isolation. Human oversight becomes something you design into the flow of work, rather than a last‑minute approval step. Accountability becomes clearer because responsibility sits with decisions and processes, not just algorithms.
This also changes how compliance feels. Organisations that treat AI governance as an operational capability tend to move faster, not slower. When decisions, policies, and workflows are already explicit and connected, new regulatory requirements feel like an extension of existing practice. When they aren’t, compliance feels like friction and delay.
This is where Pega is somewhat unusual. Pega’s position has long been that trusted AI at scale requires the combination of case context, workflow orchestration, and decisioning, rather than treating AI as a standalone component. That combination is what allows AI‑driven decisions and experiences to be embedded in real work, managed consistently, audited with confidence, and trusted over time.
So when customers ask “what should we be doing about the AI Act?”, a grounded response is often: focus less on individual models, and more on how AI‑driven decisions are made, governed, and evolved across the business. The Act is not just a legal hurdle. It’s a prompt to build AI systems that organisations can actually explain, control, and stand behind.
Common customer questions on the EU AI Act — and how to respond
“Does this mean we can’t use AI or GenAI anymore?”
No. The AI Act is not a ban on AI. It introduces a risk‑based framework. Some uses are prohibited, but most enterprise use cases are allowed, including high‑risk ones, provided there are appropriate controls around transparency, oversight, and accountability. A helpful way to frame it is that the Act regulates how AI is used, not whether it can be used.
“Is this mainly about controlling foundation models and LLMs?”
Foundation models do have explicit obligations, particularly around documentation and transparency. But for most enterprises, the bigger challenge is demonstrating control over AI‑driven outcomes in real business situations. Regulators care less about which model you use and more about whether you can explain, govern, and stand behind the decisions that model helps produce.
“What does ‘explainability’ actually mean in practice?”
In practice, explainability is rarely about opening up a model. It’s about explaining what happened in a specific case: who was involved, what information was available at the time, what rules or policies applied, what options were considered, and why a particular outcome occurred. Systems that tie AI decisions to case context and workflow make this much easier to demonstrate.
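One way to picture this, as a purely illustrative sketch: capture the decision context as a structured record at the moment the decision is made. The field names below are assumptions, not a prescribed schema; the point is that each question above maps to a field you can retrieve later.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionRecord:
    """Illustrative per-decision audit record; all field names are assumptions."""
    case_id: str
    decided_at: datetime
    decided_by: str              # human role or system component
    inputs: dict                 # what information was available at the time
    policies_applied: list       # which rules or policies applied
    options_considered: list     # what options were on the table
    outcome: str                 # what actually happened
    rationale: str               # why, in plain language


record = DecisionRecord(
    case_id="C-1001",
    decided_at=datetime.now(timezone.utc),
    decided_by="next-best-action-service",
    inputs={"segment": "retail", "existing_products": 2},
    policies_applied=["eligibility-v7", "fair-treatment-check"],
    options_considered=["offer_upgrade", "retention_call", "no_action"],
    outcome="offer_upgrade",
    rationale="Highest propensity among eligible actions above the 0.5 threshold",
)
```

A record like this answers the question from the case outward, without requiring anyone to open the model.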
“Do we need a human to approve every AI decision now?”
No. The AI Act talks about meaningful human oversight, not universal manual approval. Oversight works best when it is designed into the process. Some decisions can be fully automated, some conditional, and some escalated to a human. What matters is that these boundaries are explicit, intentional, and provable.
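As an illustrative sketch of what “explicit, intentional, and provable” can look like in practice, the routing below keeps the oversight boundaries in named, reviewable configuration rather than buried in model logic. The thresholds and names are assumptions for the example, not recommended values.

```python
from enum import Enum


class Route(Enum):
    AUTOMATE = "fully automated"
    CONDITIONAL = "automated, sampled for human review"
    ESCALATE = "routed to a human decision-maker"


# Boundaries live in auditable configuration, not inside the model.
OVERSIGHT_POLICY = {"auto_threshold": 0.90, "review_threshold": 0.70}


def route_decision(confidence: float, high_impact: bool) -> Route:
    """Decide how much human oversight a proposed AI decision receives."""
    if high_impact:
        # e.g. adverse credit or employment outcomes always reach a human
        return Route.ESCALATE
    if confidence >= OVERSIGHT_POLICY["auto_threshold"]:
        return Route.AUTOMATE
    if confidence >= OVERSIGHT_POLICY["review_threshold"]:
        return Route.CONDITIONAL
    return Route.ESCALATE


print(route_decision(confidence=0.95, high_impact=False).value)  # fully automated
print(route_decision(confidence=0.60, high_impact=False).value)  # routed to a human decision-maker
```

Because the boundaries are data rather than code paths inside a model, they can be versioned, reviewed, and shown to a regulator as evidence of deliberate design.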
“Why are you talking so much about workflow and cases? Isn’t this an AI regulation?”
Because AI rarely acts on its own. Decisions happen inside processes. Context, history, policy constraints, and outcomes all live in workflows and cases. The AI Act implicitly assumes that organisations can surface and govern this context. When AI is embedded in workflow, it becomes much easier to show control, accountability, and compliance.
“Will this slow us down?”
It depends on how AI is implemented. Organisations that treat governance as an operational capability often move faster, not slower. When decisions, policies, and workflows are already connected and visible, regulatory requirements feel like an extension of existing practice rather than a disruptive change.
“So what’s the sensible way forward?”
Focus less on individual models and more on how AI‑driven decisions are made, orchestrated, and monitored across the business. Intelligence, decisioning, and workflow need to work together. That’s what allows AI to be used confidently, at scale, in a way that organisations can explain, control, and stand behind as regulations mature.
