Hi All,
I have a requirement to create a separate ADM model for every page on a website. The challenge is that this means 5 models per proposition, as we have 5 pages at the moment. Note: the page name is passed as model context in the ADM configuration.
The other option I was considering is to pass the page name as a predictor to the ADM model. Since we have the same set of predictors for all the ADM models, passing the page name as an extra predictor would reduce the number of models.
Note: in this option, the page name is passed as a predictor of the ADM model.
Which is the best practice in this scenario? If I go with the second option, no new models will be created for new pages, which matters because more pages might be added in the future. But if I choose option 1, the number of models will keep increasing over time.
Regards,
Jasmine
Hi @JasmineM4474,
In a way the choice between a “context key” and a predictor is somewhat arbitrary. In general, if you expect radically different behaviours, if there is enough volume to deal with the “cold start”, and if the combinatorial explosion that results is manageable, choosing context keys is not a bad idea at all: when these conditions hold, it will give you better predictive performance than pulling the (context) data in as predictors. Examples we have seen at other customers include splitting anonymous vs. known customers, and customers that consider a page almost like a channel and therefore added an additional context key for page.
You listed all these arguments already. By the way, instead of additional context keys you could also configure a different adaptive model (rule) for each page, but that is more maintenance; the only reason to do so would be if the pages needed different configurations in terms of (hyper)parameters, or different sets of predictors (governance, etc.).
More context keys give more context key combinations, which gives you more model instances. That is a load on the system, of course; your Pega staff can help with sizing. More context keys also make the models more granular, so every model instance receives fewer responses. If your response volume is high this is not a problem, but if it takes weeks for models to gather sufficient responses to become reliable, this configuration is probably not a good idea.
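To make the granularity trade-off concrete, here is a small back-of-the-envelope sketch. The key names and volumes are hypothetical, not from your configuration: each extra context key multiplies the number of model instances and divides the responses each instance receives.

```python
from math import prod

# Hypothetical context-key cardinalities; each additional key
# multiplies the number of ADM model instances (per proposition).
cardinalities = {"channel": 3, "direction": 2, "page": 5}

model_instances = prod(cardinalities.values())  # 3 * 2 * 5 = 30

# Assumed daily response volume, spread evenly over the instances.
daily_responses = 6000
responses_per_model = daily_responses / model_instances  # 200 per model per day

print(model_instances, responses_per_model)
```

Without the `page` key, the same 6000 responses would feed only 6 instances (1000 each), so each model matures roughly five times faster.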
As always, I would start simple by adding the page context as a predictor. Then, in a controlled experiment, measure whether models with the extra context key really work better before bothering with the split. Consider using the data extract from the models (“Historical Data Capture”) to pull the data and run the experiment off-line.
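Such an off-line experiment can be sketched as follows. This is a minimal, self-contained illustration, not Pega code: the data is synthetic (standing in for a Historical Data Capture export), and it simply compares the hold-out AUC of a logistic model with and without the page as a one-hot predictor.

```python
import math
import random

random.seed(42)

# Synthetic stand-in for a Historical Data Capture export: one behavioural
# predictor (x), the page the action was shown on, and the outcome (y).
page_effect = [-1.5, -0.5, 0.0, 0.5, 1.5]  # assumed page-dependent behaviour
rows = []
for _ in range(4000):
    x = random.gauss(0, 1)
    page = random.randrange(5)
    p = 1 / (1 + math.exp(-(0.8 * x + page_effect[page])))
    rows.append((x, page, 1 if random.random() < p else 0))

def features(row, with_page):
    x, page, _ = row
    f = [1.0, x]  # intercept + behavioural predictor
    if with_page:
        f += [1.0 if page == k else 0.0 for k in range(5)]  # one-hot page
    return f

def fit(train, with_page, epochs=50, lr=0.05):
    """Plain SGD logistic regression - just enough for the comparison."""
    w = [0.0] * len(features(train[0], with_page))
    for _ in range(epochs):
        for row in train:
            f = features(row, with_page)
            z = sum(wi * fi for wi, fi in zip(w, f))
            pred = 1 / (1 + math.exp(-max(-30.0, min(30.0, z))))
            err = row[2] - pred
            w = [wi + lr * err * fi for wi, fi in zip(w, f)]
    return w

def auc(test, w, with_page):
    """Rank-based AUC: fraction of (positive, negative) pairs ranked correctly."""
    scored = [(sum(wi * fi for wi, fi in zip(w, features(r, with_page))), r[2])
              for r in test]
    pos = [s for s, y in scored if y == 1]
    neg = [s for s, y in scored if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

train, test = rows[:3000], rows[3000:]
auc_without = auc(test, fit(train, False), False)
auc_with = auc(test, fit(train, True), True)
print(f"AUC without page: {auc_without:.3f}, with page: {auc_with:.3f}")
```

In the real experiment you would replace the synthetic rows with the captured predictor/outcome records and, if the page predictor lifts AUC materially, that is evidence the extra context key (or page predictor) is worth it.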
-Otto
@Otto_Perdeck Thanks for the response. When you refer to Historical Data Capture, do you mean IH data? Are there any ways to see all the responses that have been passed to the ADM models? I can only see the last 5 responses on the ADM landing page. If this is possible, we can start with an ADM model with page as a predictor and define a context key later.
@JasmineM4474 No, with historical data capture I refer to a capability in ADM to dump the predictor values and outcomes. It is a relatively new feature, enabled on the ADM rule.
See for example
@Otto_Perdeck Thank you. I think this is different from the snapshot model/predictor data of the ADM model.
@JasmineM4474 It is indeed totally different. The datamart data contains model information but no individual customer records; the historical dataset contains the customer data records but no information about the models.