Concerns about using adaptive models, part 1

“We already have great models, why would we use Pega adaptive models?”

I am planning a short series of posts on common questions and concerns that come up when teams consider using adaptive models. Rather than trying to address every concern in one go, I want to take a more practical approach and focus on one concern at a time: the kind that comes up repeatedly in real conversations with customers, data scientists and marketing teams.

This first post tackles what is probably the most common and, in many ways, the most reasonable concern of all.

“We already have great models, why would we use Pega adaptive models?”

This is usually not said defensively these days. It is said thoughtfully and with ambition. Organisations are investing heavily in data science capability, in carefully engineered features, in purchase, churn and risk models that reflect deep domain knowledge, and in governance processes that exist for very good reasons. When someone hears “adaptive models”, the instinctive reaction is often to assume that they are being asked to throw all of that away for a poorly governed world of automated, inferior black box models.

That is not what this is about.

The most important thing to say upfront is that this is not an either-or decision. Pega Adaptive Models are not designed to replace your existing models, nor to undermine the work of data science teams. They are designed to work alongside them, inside a decisioning framework, each doing a different job.

To understand why that matters, it helps to be very clear about what adaptive models in Pega are actually built to do.

Adaptive models are deliberately narrow in scope. They are binary classifiers whose purpose in CDH is to help rank competing, eligible alternatives at the moment a decision is being made. They predict the likelihood that a customer will respond positively to a particular action or offer. A similar philosophy exists in Pega Process AI, where adaptive models are used not to describe an entire process end to end, but to learn which routing, outcome or path is most likely to succeed in a given situation, based on what actually happens in practice. In both cases, the intent is the same: to let learning happen close to the decision itself. They do not need to be perfect, just good enough, consistently, to say “given everything we know right now, this option is more likely to work than that one”.

That distinction is critical.

In real next best action scenarios, we are rarely choosing between one or two options. We are often ranking tens or hundreds of possible actions, across multiple channels, each with multiple treatments or creative variants, all subject to eligibility rules, suitability constraints and business policies. The decision space is large and constantly changing. Customer behaviour shifts, channels behave differently, treatments fatigue, and yesterday’s best option is not necessarily today’s.

Trying to manage that purely through manually crafted predictive models quickly becomes an operational problem, not a modelling one. Even if it were theoretically possible to build such models by hand, the labour involved in creating, maintaining, refreshing and governing them at that scale would be prohibitive.
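To make the ranking job concrete, here is a deliberately tiny sketch of the idea. This is not Pega's ADM algorithm (real adaptive models are per-action Bayesian classifiers over customer features, with binning and feedback handling); it is just a toy that tracks observed accept rates per action, ranks eligible alternatives by current propensity, and learns from every outcome. All names here are illustrative.

```python
from collections import defaultdict

class ToyAdaptiveRanker:
    """Toy illustration only: one tiny online learner per action.
    Pega's real adaptive models learn from customer features too;
    here we simply track accept rates to show the ranking loop."""

    def __init__(self):
        self.accepts = defaultdict(int)
        self.offers = defaultdict(int)

    def propensity(self, action):
        # Laplace-smoothed accept rate: starts at 0.5 with no data,
        # then converges towards the observed accept rate.
        return (self.accepts[action] + 1) / (self.offers[action] + 2)

    def rank(self, eligible_actions):
        # Rank the eligible, competing alternatives by propensity, best first.
        return sorted(eligible_actions, key=self.propensity, reverse=True)

    def learn(self, action, accepted):
        # The feedback loop: every presented offer updates the model.
        self.offers[action] += 1
        if accepted:
            self.accepts[action] += 1

ranker = ToyAdaptiveRanker()
ranker.learn("credit_card", True)
ranker.learn("credit_card", True)
ranker.learn("mortgage", False)
print(ranker.rank(["mortgage", "credit_card"]))  # credit_card ranked first
```

The point of the sketch is the shape of the loop, not the maths: rank with what you know now, observe the outcome, update, repeat. Doing that by hand for hundreds of actions is exactly the operational problem described above.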

This is where I often use an analogy that has helped in conversations.

Think of a handcrafted predictive model as an Omega Speedmaster. It is a thing of skill and craftsmanship, built by experts, expensive to produce, carefully maintained, and rightly valued. An adaptive model, by contrast, is more like a Swatch. It is mass produced, cheap, easy to replace, and very good at telling the time.

If you need one watch (maybe for an important occasion such as a wedding, an interview or a date), the Omega is a solid choice. If you need many watches to match your outfit, activity or mood, the Swatch suddenly makes a lot more sense. The business interview is a high-stakes situation you need to get right. Playing football with your kids in the park does not depend much on the expense of your watch; you just need to know when it’s time to go home.

Adaptive models are the Swatches of predictive modelling. They are designed to be produced at scale, to be disposable, to learn continuously from real outcomes, and to do one job well: ranking alternatives in context. They are not trying to win a Kaggle competition. They are trying to win the next interaction or decision point.

This also explains why the conversation should not be framed as “whose model is better”. Your existing models can and should continue to play an important role. They can be imported, integrated, and used as inputs into decision strategies. They can express long term risk, lifetime value, churn propensity, affordability, or any other deep signal that matters to your business. Adaptive models then sit closer to the point of decision, learning from what actually happens when customers are presented with choices.
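As a sketch of how the two kinds of model coexist inside a decision strategy: the adaptive propensity says what is likely to work right now, while scores from existing models (lifetime value, churn risk and so on) arrive as inputs that shape the final prioritisation. The field names and the formula below are illustrative, loosely inspired by CDH-style arbitration, not its exact definition.

```python
def prioritise(actions):
    """Illustrative arbitration: combine the adaptive propensity
    (learned at the point of decision) with scores from existing,
    imported models carried on each action. Names are hypothetical."""
    def priority(action):
        # propensity: from the adaptive model, "likely to work now"
        # value: e.g. from an existing lifetime-value model
        # lever: a business weight, e.g. boosted by a churn-risk model
        return action["propensity"] * action["value"] * action["lever"]
    return sorted(actions, key=priority, reverse=True)

offers = [
    # High propensity, but no special business weighting.
    {"name": "upgrade",   "propensity": 0.30, "value": 50, "lever": 1.0},
    # Lower propensity, but a churn model has flagged this customer,
    # so the retention action carries a strong business lever.
    {"name": "retention", "propensity": 0.10, "value": 40, "lever": 4.0},
]
print([a["name"] for a in prioritise(offers)])  # retention wins
```

Note what the example shows: the adaptive model alone would pick the upgrade, but the churn model's signal, expressed as a lever, flips the decision. Neither model replaces the other; they answer different questions in the same strategy.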

Over time, this changes the role of the data science team in a healthy way. Instead of spending disproportionate effort building and maintaining large numbers of similar models, teams can focus on higher value work, designing better decisioning approaches, improving data quality, and asking better questions of customer behaviour.

A brief word on GenAI, because it inevitably comes up. Large language models are incredibly powerful and will absolutely have a role to play in decisioning systems, particularly around content and presentation. However, ranking competing actions based on observed customer outcomes, with clear feedback loops and measurable lift, is a very specific optimisation problem. Adaptive models are built precisely for that. GenAI is complementary here, not a replacement. I will come back to this properly in a later post.

If there is one takeaway from this concern, it is this: when someone says “we already have models”, they are not resisting adaptive models, they are signalling maturity. The conversation then becomes not about replacement, but about fit for purpose, scope and collaboration. What kind of modelling belongs at the moment of decision, and what kind belongs elsewhere in the ecosystem? How can they work together?

Adaptive models exist because decisioning at scale needs something different. Not more craftsmanship, but more learning, more pragmatism, and far less manual effort. Whether in customer decisioning or in process optimisation, the pattern is the same: learn from what actually happens, as close as possible to the point where the decision is made.

In a future post, I will look at some of the concrete coexistence patterns for adaptive models and existing predictive models, how teams actually put this into practice, and where the boundaries tend to sit.