When Decisioning Runs Out of New Things to Say: The NBA Availability Problem in High-Frequency Channels

Inbound digital channels are unforgiving for Next Best Action. Customers open an app because they want something — information, action, clarity, or simply confirmation that nothing needs attention. In banking especially, customers log in daily, sometimes several times a day. These are moments of chosen brand attention. They should not be wasted.

And yet, for many teams going live with their first NBA release, they are.

The Problem: NBA Silence and the Over-Exposure Trap

The pattern is familiar: an initial MLP action library is deliberately small and contains a handful of carefully governed messages chosen for safety and clarity. This is arguably sensible. But when those messages are exposed repeatedly to highly engaged customers, the system quickly hits a fork in the road. Either the same few eligible messages must appear constantly, which may feel jarring and repetitive, or they are suppressed aggressively with contact policies and impression limits, leaving the app with nothing to say.

Both outcomes waste the moment. Critically, this is not primarily a contact policy problem. It is an optionality problem.

Optionality is the average number of viable actions that survive engagement rules long enough to reach NBA arbitration. When optionality is high, AI-driven arbitration can choose well. When it collapses, even a sophisticated decision engine may be reduced to a handful of targeting rules. Internal benchmarks across mature implementations show that engagement increases dramatically as optionality rises. This should not be a surprise: the bigger the pool of eligible messages, the more likely you are to find an engaging one. Once optionality falls below a small handful of choices, engagement drops sharply, not because customers are disengaged but because the system can no longer choose relevant and timely messages.
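As a rough illustration of the metric itself, optionality can be estimated straight from decision logs. This is a minimal sketch with a hypothetical log schema, not any vendor's implementation:

```python
from statistics import mean

def optionality(decision_logs):
    """Average number of actions surviving engagement rules to reach
    arbitration, per inbound decision (hypothetical log schema)."""
    return mean(len(d["eligible_actions"]) for d in decision_logs)

# Toy logs: three app opens, with the actions still eligible at arbitration
logs = [
    {"customer": "A", "eligible_actions": ["rate_alert", "savings_tip", "card_offer"]},
    {"customer": "B", "eligible_actions": ["savings_tip"]},
    {"customer": "C", "eligible_actions": []},  # nothing survived: forced silence
]
print(optionality(logs))  # averages 3, 1 and 0 eligible actions per decision
```

Tracked over time and by channel, a falling value of this number is the early warning that silence is about to become the default outcome.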

In high-frequency inbound channels, a small action library amplifies this effect. Repetition drives suppression. Suppression drives silence. Silence becomes normalised as a “safe” outcome, when in reality it is almost always a system failure rather than a response to any customer signal.

Working Through the Solutions

1. Contact Policies

The instinctive first response: suppress over-exposed actions using engagement rules and contact policies. This controls repetition, but it trades one problem for others. Customers who genuinely need repeated prompting before recognising relevance are cut off too early. The system becomes quieter, but not smarter. As I have discussed in previous posts here, a key aim of early NBA experiences should be “training the customer to say yes”. When you are silent, you are training the customer to expect little empathy from you.

2. Propensity Thresholds

A logical next step, but a blunt one. Thresholds introduce cliffs into what is usually a smooth, time-dependent response curve. An action just below the threshold is treated as worthless, even where repeated exposure might have increased its relevance. More damaging still, thresholds destroy optionality before arbitration can weigh up value, context, and overall priority. The system eliminates choices upstream rather than trading them off intelligently. Thresholds can also cause NBA to miss low-propensity, high-value actions entirely: the equivalent of never taking a shot at the elephant because the odds look long.
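The cliff is easy to see in a toy sketch. With purely illustrative numbers and hypothetical actions, a hard propensity cut-off removes exactly the action that value-weighted arbitration would have picked:

```python
# Hypothetical actions: propensity to engage, and value if engaged
actions = [
    {"name": "card_upgrade", "propensity": 0.04, "value": 400.0},  # long odds, big elephant
    {"name": "app_tip",      "propensity": 0.30, "value": 5.0},
]

THRESHOLD = 0.05  # a typical-looking hard propensity cut-off

# Upstream threshold: card_upgrade is eliminated before arbitration sees it
survivors = [a for a in actions if a["propensity"] >= THRESHOLD]

# Arbitration that trades propensity off against value (simple expected value)
best = max(actions, key=lambda a: a["propensity"] * a["value"])

print([a["name"] for a in survivors])  # ['app_tip']
print(best["name"])                    # 'card_upgrade' (EV 16.0 vs 1.5)
```

The point is not the specific arithmetic but the ordering of operations: filtering on propensity alone decides the trade-off before the component designed to make trade-offs ever runs.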

3. A/B Testing

A tempting route for teams who want evidence before expanding the action library. In practice, it demands significant manual effort, moves slowly, and rarely captures the full nuance of how different customers respond to different actions over time. Useful for isolated hypotheses, but not a scalable solution to a structural optionality gap.

4. Adaptive Learning

This is where the discussion usually lands, and for good reason. Some messages genuinely require repeated exposure before a customer recognises their relevance. Propensity to engage may rise with impressions, peak, and then tail off as the message becomes over‑exposed. That response curve will differ by message, by customer, and by context, and it looks, at first glance, like an ideal problem for adaptive decisioning.
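One way to picture the shape being described is a hypothetical rise-peak-decay curve of engagement propensity against impression count. The parameters here are purely illustrative, not a fitted model:

```python
import math

def exposure_response(n, base=0.02, lift=0.05, peak_at=4.0):
    """Hypothetical propensity after n impressions: rises from a base
    rate to a peak at n == peak_at, then tails off with over-exposure."""
    return base + lift * (n / peak_at) * math.exp(1 - n / peak_at)

curve = [round(exposure_response(n), 4) for n in range(13)]
# Propensity climbs from the base rate, peaks around the fourth
# impression, then decays: a shape that differs by message, customer,
# and context, and that a point-in-time model never observes whole.
```

Suppress the action after two impressions and the system only ever samples the left-hand slope of this curve; it has no evidence the peak exists at all.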

In practice, it may not work as hoped.

Adaptive models are trained to predict a single response, not to model a time-dependent propensity curve explicitly. They optimise for “will the customer engage now?”, not for how engagement evolves with repeated exposure. We implicitly hope that interaction history, impression counts, recency flags, and simple windows contain enough signal for the model to infer this shape indirectly. Often, they do not.

More fundamentally, this approach assumes something that is rarely stated out loud: that we are not simply flogging dead horses. Adaptive learning can only tune exposure when there is genuine latent demand to discover. If an action is fundamentally irrelevant to a customer, no amount of repeated exposure will make it relevant. Adaptive cannot manufacture interest where none exists. It can only optimise among actions that will eventually be taken up by someone.

There is also another structural tension. Adaptive learning depends on exploration. It needs actions to remain available long enough to observe how response changes over time. Contact policies, frequency caps, and propensity thresholds remove actions precisely when learning would need them most. The model never observes the full response curve, never sees the peak, and never learns where over‑exposure actually begins.

We cannot have it both ways. Either we allow adaptive decisioning to learn with relatively few hard constraints, accepting short‑term repetition in exchange for long‑term understanding, or we impose frequency rules and accept that learning will be limited.

When optionality is low, this failure is unavoidable. Adaptive has nothing left to explore. It cannot learn its way out of an empty cupboard.

5. Increase Optionality — The Right Starting Point

The most effective solution is also the most structural: expand what the system has to choose from. This does not mean more sales pressure or more offers. It means more conversational range in the Action Library. Informational actions, reassurance messages, progress updates, service nudges, reminders, and confirmations all count. Variant treatments of the same underlying action, differing in tone or framing, are not a compromise. They are a legitimate and often essential source of optionality in NBA experiences.

Marketers may be reluctant to “pad out” the action library with what can look like low‑value messages. But familiarity matters. When customers regularly encounter relevant, low‑friction messages in the app, higher‑value actions are more likely to land on a receptive user. Optionality is not about noise. It is about ensuring the system has enough range to choose well.

With genuine optionality restored, arbitration can do its job. Silence then becomes a conscious outcome of the decision engine, not the accidental result of over-suppression.

Reframing the Diagnostic

Rather than endlessly tuning contact policies or thresholds, it is often more revealing to measure optionality directly. How many actions typically reach arbitration in the channel? How does engagement change as that number rises or falls? Where does repetition become a problem, and where does silence start to dominate? These questions usually expose structural issues that no amount of threshold tuning will fix. Effort then shifts away from tweaking NBA control parameters and towards making more content available.
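These diagnostic questions can be made concrete with a small cut of decision logs: bucket decisions by how many actions reached arbitration, then compare engagement rates across buckets. The data and schema below are hypothetical:

```python
from collections import defaultdict

def engagement_by_optionality(decisions):
    """decisions: (actions_at_arbitration, customer_engaged) pairs.
    Returns the engagement rate at each optionality level."""
    buckets = defaultdict(lambda: [0, 0])  # level -> [decisions, engagements]
    for n_actions, engaged in decisions:
        buckets[n_actions][0] += 1
        buckets[n_actions][1] += int(engaged)
    return {n: hits / total for n, (total, hits) in sorted(buckets.items())}

# Toy data: silence-prone decisions versus well-stocked ones
sample = [(0, False), (1, False), (1, True), (6, True), (6, True), (6, False)]
print(engagement_by_optionality(sample))  # {0: 0.0, 1: 0.5, 6: 0.666...}
```

If the real curve looks anything like the toy one, the business case for expanding the action library writes itself.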

Inbound apps are places customers return to by choice. If the system repeatedly has nothing to say, customers notice. Stop tuning contact rules. Measure decision health. Act to improve optionality.

I would be very interested in others’ thoughts here. Where has silence been the right decision — and where has it been the symptom of optionality collapsing too early?