External SageMaker Model Propensity Not Displaying in Prediction Studio Propensity Tab

Category: Pega AI/Machine Learning Services

Question:

Hello Pega Community,

I’m facing an issue with an external AWS SageMaker model integration in Pega Prediction Studio where the model response contains the propensity value, but it’s not displaying in the Propensity tab.

Current Situation:

  1. Successfully created an external model in Prediction Studio
  2. The model calls the SageMaker endpoint and receives responses correctly
  3. The full response displays in the “Result” tab, but the Propensity tab shows “——” (empty)

SageMaker Response Format:


{
  "pyPropensity": 0.24374601781192465,
  "probability": 0.24374601781192465,
  "risk_level": "LOW",
  "recommendation": "Standard engagement",
  "predictions": [0.24374601781192465]
}

What I’ve Observed:

  • The response is stored in the .pyModelRunResult property
  • The Propensity tab UI element has a data-ui-meta attribute with 'clipboardPath':'.pyPropensity'
  • HTML inspection shows the tab expects .pyPropensity property to be populated
  • No automatic field mapping occurs from external model response to .pyPropensity

What I’ve Tried:

  1. Metadata Template Configuration - added a response mapping, but it had no effect
  2. SageMaker Response Modification - tried different field names (pyPropensity, propensity, Propensity)
  3. Data Transform Creation - couldn’t find where to apply response data transforms for external models
  4. Model Configuration Search - found no visible post-processing or field-mapping options

Questions:

  1. How should external model responses be mapped to the .pyPropensity property in Prediction Studio?
  2. Is there a standard configuration for field mapping in external models that I’m missing?
  3. What’s the recommended approach to extract propensity from external model JSON responses?
  4. Are there any specific settings or configurations for external models that differ from built-in Pega models?

Any guidance on the standard approach for this integration would be greatly appreciated!

@Shaikk17193357

The reason you don’t see the propensity value in the Propensity tab is that Pega expects the .pyPropensity field at the top level, but in your case it sits inside .pyModelRunResult from the external SageMaker response. Even though the response is correct, Pega does not automatically move the value to where the UI expects it. To fix this, create a data transform (e.g., ExtractExternalPropensity) that copies the value from .pyModelRunResult.pyPropensity to .pyPropensity. Then configure that data transform to run after model execution, either through the model settings in Prediction Studio or via a wrapper activity or data flow. Once this is in place, Pega will display the value in the Propensity tab as expected.

@Sairohith

Thanks for the suggestion! I investigated this approach and found some key differences in how external models work.

What I discovered: After debugging the PS_ExecuteRuleForSingleRun activity, I found that external SageMaker models don’t store responses in the nested property structure you mentioned. Instead:

  • External model responses are stored as raw strings in pyModelRunResult
  • The response format varies (JSON: {"pyPropensity": 0.24} or CSV: "Yes,0.999,...")

My solution: I implemented a parser in the activity that:

  • Detects response format (JSON/CSV)
  • Extracts propensity values using regex/string parsing
  • Sets the .pyPropensity property for UI display
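
The parsing logic above can be sketched as plain Java. This is a minimal illustration, assuming only the two response shapes shown earlier (JSON with a top-level pyPropensity, or CSV with the propensity in the second field); the class and method names are illustrative and are not Pega APIs:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PropensityParser {

    // Matches "pyPropensity": <number> in a JSON response string
    private static final Pattern JSON_PROPENSITY =
        Pattern.compile("\"pyPropensity\"\\s*:\\s*([0-9]*\\.?[0-9]+)");

    /**
     * Detects the response format and extracts the propensity.
     * Returns -1 when no propensity can be parsed.
     */
    public static double parsePropensity(String raw) {
        if (raw == null) return -1;
        String trimmed = raw.trim();
        if (trimmed.startsWith("{")) {
            // JSON response, e.g. {"pyPropensity": 0.24, ...}
            Matcher m = JSON_PROPENSITY.matcher(trimmed);
            if (m.find()) return Double.parseDouble(m.group(1));
        } else if (trimmed.contains(",")) {
            // CSV response, e.g. "Yes,0.999,..." — propensity assumed in field 2
            String[] fields = trimmed.split(",");
            if (fields.length > 1) {
                try {
                    return Double.parseDouble(fields[1].trim());
                } catch (NumberFormatException ignored) { }
            }
        }
        return -1;
    }
}
```

In an activity, the returned value would then be written to the .pyPropensity property on the clipboard page for the UI to pick up.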

Result: Propensity values now display correctly in Prediction Studio

Challenge: The PS_ExecuteRuleForSingleRun activity is marked as [Final, Internal]. This means:

  • It can’t be modified through normal development practices
  • The change may be overwritten during Pega upgrades
  • It requires monitoring after system updates

Question for the community: Has anyone found an alternative approach for external model propensity extraction that doesn’t require modifying Final artifacts? Or is there a way to override this activity safely?

Thanks for the guidance.

@Sairohith

Thank you for your time and effort in analyzing this issue.

The resolution of this problem is that Pega has specific parsing logic for different modeling techniques and supports only a limited set of models; each model type expects different output properties. Following the service request (SR), Pega updated the documentation on this topic in the “Supported Modeling Techniques with AWS SageMaker” section.

When we use a modeling technique that Pega does not support, the raw output is dumped into the Result tab (in this case as .pyModelRunResult), and we must explicitly parse the properties we want from that result in our strategies.
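
To illustrate what “explicitly parse the properties we want” can look like, here is a minimal Java sketch that pulls individual top-level fields (numeric or quoted string) out of a raw JSON response string such as the one stored in pyModelRunResult. It is a regex-based illustration under the assumption that the response matches the shape shown in the question, not a full JSON parser, and the class name is hypothetical:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ModelResultFields {

    /**
     * Extracts a top-level JSON field value (a number or a quoted string)
     * from a raw response string. Returns null if the field is absent.
     */
    public static String field(String raw, String name) {
        if (raw == null) return null;
        Pattern p = Pattern.compile(
            "\"" + Pattern.quote(name) + "\"\\s*:\\s*(\"([^\"]*)\"|[-0-9.eE+]+)");
        Matcher m = p.matcher(raw);
        if (!m.find()) return null;
        // group(2) is the unquoted string content; group(1) is the raw number
        return m.group(2) != null ? m.group(2) : m.group(1);
    }
}
```

Each extracted value (e.g. the propensity, or risk_level) can then be set on a dedicated strategy property instead of relying on Pega to map it automatically.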