How to Cache LLM Prompts and Responses in Pega to Reduce Token Costs?

Use Data Transforms for token efficiency with Agents and Coach: apply a Data Transform to supply only the essential fields from the Data Page, rather than the entire record. This reduces the amount of data the Large Language Model (LLM) processes, which improves both token efficiency and response speed. The article linked below includes sample token usage when conversing with a Coach.
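To make the idea concrete, here is a minimal sketch of the same projection a Data Transform performs, written in Python rather than Pega. The record shape and field names are hypothetical, not from any Pega API:

```python
# Hypothetical full record returned by a Data Page. Most of these fields
# are irrelevant to the LLM and would only inflate the prompt.
FULL_RECORD = {
    "CustomerID": "C-1001",
    "Name": "Jane Doe",
    "Email": "jane@example.com",
    "InternalAuditTrail": "...hundreds of tokens of change history...",
    "RawSystemFlags": {"sync": True, "replica": 3},
    "OpenCaseSummary": "Billing dispute opened 2024-05-01",
}

# Only the fields the LLM actually needs to answer the question.
ESSENTIAL_FIELDS = ("CustomerID", "Name", "OpenCaseSummary")

def trim_payload(record: dict, fields=ESSENTIAL_FIELDS) -> dict:
    """Project the record down to essential fields, like a Data Transform."""
    return {k: record[k] for k in fields if k in record}

prompt_context = trim_payload(FULL_RECORD)
# prompt_context now carries three fields instead of six, so the prompt
# built from it consumes far fewer tokens.
```

The design choice is the same in either tool: decide up front which fields the model needs and project the data down to them before the prompt is assembled, rather than filtering after the fact.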

For more details, see: Trim the Payload: Use Data Transforms to Minimize Token Consumption