I use Connect REST to retrieve data from an external document processing service during case processing. The service returns a list of matches that I need to map to fields (properties on my case). So far so good - I can achieve all of this with a data page, a response data transform, and a data model.
However, here is the catch. The JSON will change based on my use case. Here, I’m extracting IDs, so I will get matches such as “DateOfBirth”, “DocumentType”, “FirstName”. In another use case, I might process receipts, so I would get invoice dates, total amount, and so on.
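To make the problem concrete, a payload for the ID-extraction use case might look something like this (field names and values are illustrative, based on the matches described above):

```json
{
  "DateOfBirth": "1990-05-01",
  "DocumentType": "Passport",
  "FirstName": "Anna"
}
```

whereas a receipt-processing run would return a different set of keys (invoice date, total amount, and so on) - the keys themselves change per use case, not just the values.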
How can I build this in a reusable way, so that I don't have to re-create properties and data transforms for every single use case?
@mrwolf2 Leverage the JSON data transform feature to auto-map the fields. As long as the data model is defined, Pega can map these fields automatically; no manual mapping is required.
Thank you @byrev1. There is no way around that, though, is there? I know that Pega can automatically map properties when deserializing the JSON, but as you said, that requires a data model to be defined first.
Let me try to understand your problem statement here. You have JSON keys like "DateOfBirth", "DocumentType", "FirstName" that are dynamic in nature, so you wouldn't know them at design time. If that's the use case, Pega does not have an OOTB mechanism to map such keys, as dynamic keys are not a common paradigm in JSON objects. However, you can build a custom solution along the lines stated below.
Option 1: Map the whole JSON object to a property or parameter, then apply string functions to replace the dynamic key with a known property name, and parse the resulting JSON.
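The string-replacement idea above can be sketched as follows. This is a conceptual illustration in Python, not Pega code: in Pega you would map the raw response to a single text property and do the replacement in a data transform or function; the key name `DateOfBirth`, the placeholder `ExtractedValue`, and the variable names are all assumptions for the sketch.

```python
import json

# Raw response held as one string (in Pega: a single text property/parameter).
raw = '{"DateOfBirth": "1990-05-01"}'

# The dynamic key name is known at runtime (e.g. configured per use case),
# so rewrite it to a fixed key that a static data model can map.
dynamic_key = "DateOfBirth"
fixed = raw.replace('"%s"' % dynamic_key, '"ExtractedValue"')

# Now the payload has a predictable shape and can be parsed normally.
parsed = json.loads(fixed)
print(parsed)
```

Note that naive string replacement is fragile if the key name can also appear as a value, so in practice you would want to constrain the replacement (e.g. match the `"key":` pattern rather than the bare string).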
Option 2: Build a custom Java function using Pega engine APIs and Java classes to convert the object into a regular JSON array and parse that. It's achievable: the keys become elements of the array.
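The transformation such a function would perform can be sketched like this (shown in Python for clarity rather than as Pega engine Java; the function name and the `name`/`value` labels are illustrative). Each dynamic key becomes a regular array element, so a fixed data model - e.g. a page list with `Name` and `Value` properties - can map any use case:

```python
import json

def to_name_value_list(raw_json: str) -> str:
    """Convert an object with dynamic keys into an array of
    {"name": ..., "value": ...} entries with a stable shape."""
    obj = json.loads(raw_json)
    arr = [{"name": k, "value": v} for k, v in obj.items()]
    return json.dumps(arr)

raw = '{"DateOfBirth": "1990-05-01", "DocumentType": "Passport"}'
print(to_name_value_list(raw))
```

With this shape, one response data transform and one reusable name/value data model cover every document type; downstream logic looks entries up by `name` instead of relying on hard-coded properties.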