AI Lab: Survey of SoTA in Agentic LLMs

Multi-agent systems (MAS) have been researched for decades, but the arrival of LLMs gave the field a huge boost: earlier MAS could in practice only be deployed in very well-defined domains.

On the flip side, agentic approaches to LLMs have turned them from passive couch potatoes into active AI systems that can get work done. For a discussion of this evolution, see my PegaWorld breakouts from 2024 (!) and 2025 here:

So how do you keep up with the more research-oriented view on LLM-powered agents? To address this need, I collaborated with my Leiden University colleagues on this survey paper. It was mostly written at the start of 2025 and published at the end of the year, which might feel like eons ago in agentic AI, but it should still give you loads of pointers to where agentic research might be heading!

Aske Plaat, Max van Duijn, Niki van Stein, Mike Preuss, Peter van der Putten and Kees Joost Batenburg. Agentic Large Language Models: A Survey. Journal of Artificial Intelligence Research, Vol. 84, Article 29, December 2025.

Enjoy!