Before You Let AI Run Your Workflows, Ask Yourself Two Questions

Quick question for anyone building AI-powered workflows in financial services:

Can you explain how your workflow was designed AND prove it runs the same way every time?

Most firms can answer one of those. Almost nobody’s thinking about both.

That’s because it’s not one spectrum of risk. It’s two. Design time and run time. Different questions, different exposures — and you can have strong governance on one and a dangerous blind spot on the other.

I've built a framework around this that I'm calling the AI Workflow Autonomy Spectrum.

The part that surprised me most? Where the hidden risk actually lives. Not at the extremes. In the middle.

Link to the full LinkedIn article below. I'd welcome hearing whether this matches what you're seeing.

This is a strong framing, especially the separation between design‑time governance and runtime predictability.

One thing worth reinforcing: the highest risk often sits between those layers, either when a well-designed system allows uncontrolled runtime autonomy, or when strong runtime controls are applied to a poorly governed design.

Pega’s approach explicitly treats:

  • Design‑time AI as proposable and reviewable

  • Runtime AI as constrained by workflow, policy, and audit

I’m curious whether others have seen failures caused more by design gaps or by runtime drift, and how they detected them early.