Concerns about using adaptive models, part 2

“GenAI can do personalised targeting now — so why bother with adaptive models?”

Ask a large language model for the next best action for a customer and it will answer without hesitation. It will sound confident, articulate, and reassuringly correct. It can explain an offer, soften its tone, anticipate objections, and present the interaction in a way that feels considerate rather than transactional.

It is easy, looking at that fluency, to wonder whether we still need adaptive models at all.

If GenAI can “do targeting”, why bother with the older machinery of decision strategies, propensity models, and optimisation logic?

To answer that, it helps to separate two very different problems that are often blurred together. One is about how a decision is expressed to a customer. The other is about which decision should be made in the first place, and why. The fact that GenAI is exceptional at the first does not mean it solves the second.

This distinction maps closely onto what AI researchers call the alignment problem. How do we know that an AI system is optimising towards the goals we actually intend it to pursue?

In research, alignment is often discussed in abstract or even existential terms. In an enterprise context it is much more practical. Are the actions being recommended aligned to business objectives, customer outcomes, regulatory constraints, and long‑term value, or are they simply plausible answers to a well‑phrased question?

A large language model, when asked for a next best action, is doing exactly what it was trained to do. It generates a response that fits the prompt, draws on patterns it has seen before, and sounds reasonable to a human reader. It may even sound empathetic and well judged. That is a remarkable capability, and it is genuinely valuable in customer engagement.

Where GenAI adds real power to Customer Decision Hub is in humanising the interaction between the decision engine and the customer. It can take a selected action and present it in language that fits the moment. The same offer can be framed differently for a loyal customer than for one who is frustrated, for a proactive outbound message versus a live service interaction, for reassurance rather than urgency. Tone, phrasing, and emphasis can all be adjusted so the interaction feels natural and appropriate to the context.

In short, GenAI is extremely good at helping us say a plausible thing, in an engaging way, to an individual customer.

But none of that tells us whether it was the right thing to do.

Decisioning, in an enterprise sense, is not primarily a language problem. It is an optimisation problem. It requires an explicit definition of what “best” means, an understanding of trade‑offs, and a mechanism for learning from real outcomes over time. It requires constraints, policies, and feedback loops. Most importantly, it requires evidence.

This is where adaptive decisioning plays a fundamentally different role.

In Customer Decision Hub, next best action is not an opinion generated on demand. It is the result of an explicit optimisation process. Candidate actions are evaluated against predicted customer propensity, business value, and strategic levers. Engagement policies and rules enforce what must not happen, as well as what should. Outcomes are observed, models update themselves, and future decisions change as a result.
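The shape of that process can be sketched in a few lines. This is a deliberately minimal illustration, not CDH's actual arbitration logic: the field names (`propensity`, `value`, `lever`) and the simple eligibility filter are assumptions made for the example, standing in for adaptive model scores, business value, and strategic levers.

```python
def next_best_action(candidates, policies):
    """Pick the highest-priority action among those that pass
    every engagement policy (a crude stand-in for arbitration)."""
    eligible = [a for a in candidates if all(policy(a) for policy in policies)]
    if not eligible:
        return None
    # Prioritisation: predicted propensity x business value x strategic lever
    return max(eligible, key=lambda a: a["propensity"] * a["value"] * a["lever"])

actions = [
    {"name": "cashback_card", "propensity": 0.12, "value": 50, "lever": 1.0},
    {"name": "premium_card",  "propensity": 0.03, "value": 200, "lever": 1.5},
    {"name": "savings_boost", "propensity": 0.20, "value": 20, "lever": 1.0},
]
# Example policy: suppress offers the customer is very unlikely to want
policies = [lambda a: a["propensity"] > 0.05]

best = next_best_action(actions, policies)
```

Note what the structure buys us: the policy removes the high-value but irrelevant offer before prioritisation ever sees it, and because propensities come from adaptive models that learn from outcomes, the same code selects different actions as evidence accumulates.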

That entire structure exists for one reason: alignment.

We do not want a system that merely sounds convincing. We want one that can be shown, over large volumes of interactions, to be optimising towards defined goals. We want to be able to say not just “this sounded right”, but “this improved the outcome we care about, and we can demonstrate that it did”.

A large language model cannot give you that guarantee on its own. It does not have an explicit objective function at runtime. It does not know which business trade‑offs matter more than others unless those trade‑offs are encoded elsewhere. It cannot, by itself, manage control groups, measure lift, or reconcile competing objectives across channels and time horizons.

There are also more conventional, practical concerns that reinforce this point, even if they are not the heart of the argument. Running LLMs at scale is expensive compared to scoring decision logic and adaptive models, which matters when you are making millions of decisions per day. Latency is manageable in a call centre conversation, but becomes problematic when constructing personalised web pages or running large outbound batches. Risk, too, is different in nature. A fluent model making a rare but catastrophic mistake can cause far more damage than a bounded optimisation engine operating within well‑defined constraints.

These issues are real, but they are secondary. The central question is still alignment.

Seen through that lens, GenAI and adaptive decisioning are not alternatives at all. They are complementary layers in a single system. Adaptive models decide what should happen, based on optimisation, evidence, and learning. Generative models decide how that decision is expressed, so that it feels human, contextual, and appropriate to the individual customer.
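The layering can be made concrete with a small sketch. The function names and data shapes here are illustrative, not CDH or LLM APIs, and `render_message` stands in for a generative model call using a plain template, assuming a simple sentiment flag on the customer.

```python
def decide(customer, candidates):
    """Decision layer: optimisation over evidence (heavily simplified)."""
    return max(candidates, key=lambda a: a["propensity"] * a["value"])

def render_message(action, customer):
    """Expression layer: tone and framing adapted to context.
    In practice this is where GenAI would generate the wording."""
    if customer["sentiment"] == "frustrated":
        return f"We know things haven't been easy, so {action['offer']} might help."
    return f"Good news! You qualify for {action['offer']}."

customer = {"id": 42, "sentiment": "frustrated"}
candidates = [
    {"offer": "a fee waiver", "propensity": 0.5, "value": 10},
    {"offer": "a premium upgrade", "propensity": 0.1, "value": 40},
]
action = decide(customer, candidates)
message = render_message(action, customer)
```

The point of the separation is that each layer can change independently: the decision layer learns and is measured against outcomes, while the expression layer can be swapped for a generative model without touching the optimisation.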

This separation also opens up interesting future possibilities. If we accept that the LLM’s role is to shape expression rather than choice, we can imagine guiding that expression deliberately with CDH. A “next best Cialdini principle”, for example, could influence whether an action is framed around social proof, reciprocity, or scarcity for each customer. A “next best brand voice” could ensure that tone and language remain consistent with brand intent, while still adapting to customer context. In both cases, the optimisation remains anchored in the decision engine, while the LLM humanises delivery through these language nudges.

GenAI expands what we can say, and how naturally we can say it. Adaptive decisioning keeps the system aligned to what we are trying to achieve, and allows us to prove that it worked.

Confusing the two leads either to beautifully phrased decisions that drift away from enterprise goals, or to technically optimal decisions delivered with all the warmth and empathy of an API call.

The real opportunity is not to replace adaptive models with GenAI, but to let each do what it does best.

GenAI humanises the interaction with the customer. Adaptive decisioning, grounded in CDH, determines which action to take, when to take it, and provides the alignment and safety that make those decisions trustworthy at scale.


If CDH is the brain, LLMs could be the right side, while adaptive models are the left.
