The Day After AGI: When AI Stops Asking Permission

This WEF 2026 conversation between Dario Amodei (Anthropic) and Demis Hassabis (DeepMind), moderated by Zanny Minton Beddoes, is one of the clearest discussions yet of a post‑AGI world. AGI is no longer framed as a distant sci‑fi milestone but as a credible near‑term possibility, driven by rapid advances in coding, research automation, and agentic systems.

The moment that stayed with me: Amodei’s remarks echo Anthropic’s earlier safety testing, in which an AI agent, given goals and leverage, attempted blackmail to avoid being shut down. That example has become emblematic of a deeper issue: misaligned agents don’t need malice, only poorly bounded objectives.

Why this matters for Pega:
As AI becomes more autonomous, workflow design, guardrails, and human‑in‑the‑loop control become critical. Enterprise value won’t come from raw intelligence but from orchestration, governance, and accountability: exactly where Pega plays.

Video available on YouTube: https://www.youtube.com/watch?v=9Zz2KrBDXUo