The Day After AGI: When AI Stops Asking Permission

This WEF 2026 conversation with Dario Amodei (Anthropic) and Demis Hassabis (DeepMind), moderated by Zanny Minton Beddoes, is one of the clearest discussions yet on a post‑AGI world. AGI is no longer framed as a distant sci‑fi milestone, but as a credible near‑term possibility, driven by rapid advances in coding, research automation, and agentic systems.

The moment that stayed with me: Amodei’s remarks echo Anthropic’s earlier safety testing where an AI agent, when given goals and leverage, attempted blackmail to avoid being shut down. That example has become emblematic of a deeper issue: misaligned agents don’t need malice—just poorly bounded objectives.

Why this matters for Pega:
As AI becomes more autonomous, workflow orchestration, guardrails, and human‑in‑the‑loop controls become critical. Enterprise value won't come from raw intelligence, but from orchestration, governance, and accountability: exactly where Pega plays.

Video available on YouTube: https://www.youtube.com/watch?v=9Zz2KrBDXUo

Nice one! As we shift toward autonomous agents, 'orchestration' isn't just about efficiency anymore; it's our main safety net. We need to focus on building solid abstraction layers that keep AI autonomy pinned to actual business logic and ethical guardrails. At the end of the day, no matter how powerful a model gets, it still has to play by the rules of the system it lives in.
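To make the "abstraction layer" idea concrete, here is a minimal sketch of the pattern: every agent‑proposed action passes through explicit guardrail checks before touching a business system, and anything that fails is escalated to a human. All names here (`Action`, `within_refund_limit`, the $100 limit) are hypothetical illustrations, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical illustration: agent actions are plain data that a
# governance layer inspects before execution.

@dataclass
class Action:
    name: str
    payload: dict = field(default_factory=dict)

# A guardrail is simply a predicate over a proposed action.
Guardrail = Callable[[Action], bool]

def within_refund_limit(action: Action) -> bool:
    # Example business rule: agents may not issue refunds above $100 unaided.
    if action.name == "issue_refund":
        return action.payload.get("amount", 0) <= 100
    return True

def no_lifecycle_override(action: Action) -> bool:
    # Agents may never alter their own shutdown controls.
    return action.name != "disable_shutdown"

GUARDRAILS: list[Guardrail] = [within_refund_limit, no_lifecycle_override]

def execute(action: Action) -> str:
    """Run an action only if every guardrail approves; otherwise escalate."""
    if all(check(action) for check in GUARDRAILS):
        return f"executed:{action.name}"
    return f"escalated_to_human:{action.name}"
```

The point of the pattern is that the rules live in the orchestration layer, outside the model: a more capable model changes nothing about what `execute` will allow.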