In almost every Customer Decision Hub engagement I’ve been involved in, the same conversation surfaces sooner or later. Sometimes it appears early, sometimes only once a programme is already live, but it is there. The question is not whether to use PxV arbitration, but what we actually mean by value, and how we should implement it.
As practitioners, we are fortunate to work with the most sophisticated real-time AI decisioning platform available. Pega’s adaptive machine learning does an exceptional job of understanding customer behaviour and producing propensities to guide empathic interactions. For many organisations, that capability alone is transformative. It replaces static targeting rules with evidenced relevance, and guesswork with AI-driven learning.
But eventually, organisations realise that delivering relevance is not the same as business success. The AI may be good at predicting what customers are likely to accept, while being silent on whether those outcomes are the ones the business should care about. That is usually when PxV enters the conversation, and when opinions start to diverge!
In theory, PxV arbitration sounds straightforward. In practice, it quickly exposes a deeper organisational tension: many businesses are not aligned on what customer value actually means. Marketing, engagement, service, and sales each contribute to value in different ways, over different time horizons, and often with different success measures. When these perspectives collide inside a single arbitration framework, uncertainty about value is inevitable, not because the technology is unclear, but because the organisation itself is still negotiating what it truly values.
There are familiar fears. How do we assign value to non‑revenue generating actions like service, reassurance, or education? How do we value a retention initiative? Won’t high‑value products (mortgages and broadband are the usual suspects) dominate arbitration even when customer readiness is low? Should value reflect finance, strategy, customer satisfaction, or brand intent? And who gets to decide?
What I’ve seen repeatedly is that these debates rarely fail because teams lack sophistication. They fail because V is quietly asked to do too many jobs at once: represent contribution, encode urgency, express brand tone, fix prioritisation, and support reporting — all in a single number.
Different organisations resolve this tension in different ways. Some deliberately stay p‑only longer than feels comfortable. Others lean heavily on weights, or push journeys to do most of the sequencing work. The least successful response is to abandon AI‑driven arbitration altogether and fall back to targeting rules. In all of these cases, the organisation fails to realise the full promise of Customer Decision Hub — an AI‑driven, continuously optimising customer experience engine — reducing it to little more than a marketing campaign tool.
The purpose of this post is not to prescribe a single “correct” way to design V in CDH. Instead, it lays out a set of principles that I’ve found useful when advising clients — principles that help teams disagree productively and evolve safely.
These principles are offered deliberately as conversation starters, not commandments.
If you’ve solved this problem differently — or think some of these principles are wrong — I’d actively encourage you to say so in the comments. PxV is one of those areas where healthy disagreement can be a sign of innovation, not confusion.
The Ten Core Principles of PxV Value Design
The principles below are not configuration tips. They are a “constitution” for PxV arbitration: constraints that protect learning, reporting, and optimisation as sophistication and strategies evolve.
1. Separate customer relevance from business importance
Propensity measures what the customer is likely to do. Value expresses how much the business cares if it happens.
This separation is foundational. When customer relevance and business importance are conflated, decisioning flips between being customer‑led and sales‑driven, with no stable centre. PxV exists precisely because neither perspective is sufficient on its own.
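To make the separation concrete, here is a minimal sketch of PxV scoring. The action names, propensities, and values are entirely hypothetical, and the logic is a simplification for illustration, not Pega’s actual arbitration engine:

```python
# Illustrative PxV arbitration: relevance (propensity) and business
# importance (value) are kept as separate inputs, multiplied only at
# decision time. All names and figures are hypothetical.

actions = [
    {"name": "MortgageOffer",    "propensity": 0.02, "value": 900},
    {"name": "BroadbandUpgrade", "propensity": 0.15, "value": 150},
    {"name": "BillExplainer",    "propensity": 0.60, "value": 25},
]

for a in actions:
    a["score"] = a["propensity"] * a["value"]  # PxV

best = max(actions, key=lambda a: a["score"])
print(best["name"], best["score"])
```

Note that the highest-value action does not automatically win: the low-readiness mortgage (0.02 × 900 = 18) loses to the more relevant broadband upgrade (0.15 × 150 = 22.5). That is exactly the balance PxV exists to strike.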
2. Keep value to one question
“If this action is accepted, how good is that outcome for the business?” Nothing else belongs inside value.
The moment value starts encoding urgency, timing, preference, or journey stage, it loses meaning. Those signals belong elsewhere. Value must stay conceptually clean to remain interpretable.
3. Treat value as a journey contribution, not eventual product payoff
In multi‑step outcomes, action value represents progress toward success, not the end reward in isolation.
Most meaningful business outcomes require multiple experience decisions over time. If all value is assigned to the final step, arbitration becomes structurally biased toward premature closing and undermines experience.
4. Conserve value across the journey
Distributed action values should add up coherently so reporting, simulation, and optimisation remain meaningful.
Tools like Value Finder, Scenario Planner, and Impact Analyzer assume value is additive and comparable. If value is double‑counted or inflated, these tools faithfully optimise the wrong thing.
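One way to picture conservation: distribute the outcome’s total value across the journey steps so the parts sum back to the whole. The step names and shares below are hypothetical judgement calls, not derived from any Pega default:

```python
# Value conservation sketch: a single outcome's value is split across
# journey steps so step values add up to the total, keeping reporting
# and simulation additive. Shares are hypothetical.

TOTAL_VALUE = 500  # illustrative value of the completed mortgage journey

step_shares = {
    "AwarenessContent":  0.10,
    "EligibilityCheck":  0.25,
    "AdvisorBooking":    0.25,
    "ApplicationSubmit": 0.40,
}

step_values = {step: TOTAL_VALUE * share for step, share in step_shares.items()}

# Conservation check: the distributed values reconstruct the total.
assert abs(sum(step_values.values()) - TOTAL_VALUE) < 1e-9
```

If each step instead carried the full 500, four accepted steps would report 2,000 of value for one outcome, and any tool that assumes additivity would faithfully optimise that inflation.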
5. Value must include non‑revenue outcomes
Service, reassurance, risk mitigation, and journey‑enabling actions carry value because they enable long‑term success.
Service is not the opposite of sales. In many journeys, it is a prerequisite. Excluding non‑revenue actions from value design systematically biases decisioning toward short‑term gain.
6. Make value universal and comparable
Value should not vary by customer, moment, or context; it must remain stable enough to compare across populations and time.
Customer‑level variation belongs in outcome measurement, not in action value. Stability is what allows learning and optimisation to mean anything.
7. Use weights to express strategy and brand intent
Weights define emphasis, balance, and timing — they are not a correction for incoherent value.
Weights are where judgement lives. They allow organisations to decide how sales and service should coexist, without corrupting the value signal used for learning.
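A sketch of what that separation looks like, assuming a simple score shape of propensity × value × weight. The 1.3 “service first” boost is a hypothetical business choice, not a Pega default:

```python
# Weights as a separate strategic lever: the value signal used for
# learning stays untouched; strategy is expressed in the multiplier.

def arbitration_score(propensity: float, value: float, weight: float = 1.0) -> float:
    """Weight expresses brand intent; value expresses contribution."""
    return propensity * value * weight

# Same action values throughout; only the weight encodes emphasis.
sales_score = arbitration_score(0.10, 200)                # unweighted: 20.0
service_score = arbitration_score(0.50, 35, weight=1.3)   # boosted service action
```

Without the boost the service action scores 17.5 and loses; with it, it scores 22.75 and wins. The organisation has changed the outcome without ever corrupting what V means.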
8. Use journeys to control when actions should dominate
Journeys manage timing and sequencing, so the right contributions win at the right stage.
Journey stage weighting changes priority without redefining contribution. This is what allows decisioning to respect customer readiness while remaining analytically sound.
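The same idea in miniature: stage changes the weight applied at arbitration time, never the action’s value. Stage names and multipliers here are hypothetical:

```python
# Journey-stage weighting sketch: the action's contribution (value) is
# constant; the stage promotes or suppresses it via the weight.

STAGE_WEIGHTS = {
    "Explore": {"EducationContent": 1.5, "ApplicationSubmit": 0.2},
    "Decide":  {"EducationContent": 0.5, "ApplicationSubmit": 1.5},
}

def stage_score(action: str, propensity: float, value: float, stage: str) -> float:
    return propensity * value * STAGE_WEIGHTS[stage].get(action, 1.0)

# Identical propensity and value; only the stage differs.
early = stage_score("ApplicationSubmit", 0.05, 400, "Explore")  # suppressed
late = stage_score("ApplicationSubmit", 0.05, 400, "Decide")    # promoted
```

The closing action is held back while the customer explores and promoted once they are ready to decide, without anyone ever having to pretend its contribution changed.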
9. Design value for learning: it must be explainable
Value does not need to be precise, but it must be defensible as a measurement currency for optimisation and impact analysis.
Learning fails not when models are wrong, but when meaning becomes blurred and incoherent. Explainable value is more important than “accurate” value.
10. Start simple — then evolve under governance
It is valid to begin with relevance‑led decisioning (p‑only) and refine value deliberately as a governed asset over time.
P‑only decisioning can be a strategic phase. What matters is recognising when it breaks — and evolving value thoughtfully, not reactively.
Closing thought
PxV arbitration is not about aggressively forcing business outcomes onto customers. It is about making customer relevance, value, and strategy explicit — clearly, transparently, and governably — inside an adaptive machine‑learning decisioning system.
A continuously optimising customer experience depends on a clean separation of concerns: propensity to represent customer relevance, value to express business importance, weights to encode brand intent, and journeys to manage time and sequencing. When those responsibilities are blurred, decisioning still functions — but learning, trust, and optimisation inevitably suffer.