In this session of the Predictable AI Placement series, we take a deep dive into the Assignment Assist Agent—a case‑level conversational agent designed to support subject matter experts during active assignments.
Building on the earlier placement patterns in the series, this session focuses on how an Assignment Assist Agent can be embedded directly into the user experience to enable deeper analysis without disrupting the flow of work. Using a compliance and audit use case, Jay shows how this agent supports analysts after automated analysis has already been performed by document and step agents.
You’ll see how the Assignment Assist Agent:
Appears contextually during a specific assignment (for example, assessment review)
Uses knowledge tools backed by data pages to access audit findings and audit rules
Provides guided, conversational insights into documents and findings
Helps subject matter experts explore details and validate conclusions before finalizing their work
The session also walks through how to design and configure the agent in Dev Studio, including:
Agent instructions, guardrails, and tone
Quick‑select questions for guided exploration
Knowledge tools and example phrases
UI placement options (utilities panel, tab, or case‑level agent icon)
Visibility rules to ensure the agent appears only when it adds value
Importantly, we emphasize best practices for using Assignment Assist Agents responsibly—ensuring they increase efficiency and quality without unintentionally extending cycle time.
This deep dive is part of the broader AI Expert Circle series covering the five AI placement patterns and follows earlier sessions on the Application Agent, Document Agent, and Step Agent, with a final deep dive on GenAI Connect coming next.
Amazing content. I tried a small scenario to create an interview case using an Assessment case type. There is one issue I am facing: the candidate details are to be captured inside an embedded page of the work case.
The work-case-level fields are being filled properly, but the embedded page fields are not getting set.
Am I missing something here?
@RameshSangili Your valuable suggestions are needed here
Hey JC, is your issue with a conversational agent on the assignment, or is that happening with the Application Agent and case type tool? Let's discuss the exact use case for the embedded pages.
I have created an agent powered with an advanced tool whose action is set up as Case Type and refers to the Assessment case type.
When I test this agent from Postman using the agent DX API, the conversation happens properly and the data is being read and parsed by the agent. But only the top-level work page properties are being set; the embedded page fields are not being stamped.
The bold highlighted field values are stamped properly, but the rest of them are not. The case assignments are also moving forward properly.
Below is the sample prompt I am sending from Postman.
{
"Request": "Schedule an assessment for candidate Vinod Kumar with 5 years experience in Java. All 5 years experience is in Java. Set it for April 20th 2026 at 3 PM for 1 hour with John Doe. His email address is [email protected] and mobile number is +919812345678. He is willing to relocate to another location. He is already serving notice period and is available to join within 10 days. Salary expectation is around 20 lakhs. Date of birth is 6th September 1989."
}
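For anyone reproducing this outside Postman, the same body can be built and posted programmatically. A minimal Python sketch follows; note that the endpoint URL and bearer token are placeholders for illustration only, not the documented Pega agent DX API path—substitute the values from your own Pega Infinity instance.

```python
import json
import urllib.request

# The natural-language request the agent is expected to parse into case fields
# (shortened from the full Postman prompt above).
payload = {
    "Request": (
        "Schedule an assessment for candidate Vinod Kumar with 5 years "
        "experience in Java. Set it for April 20th 2026 at 3 PM for 1 hour "
        "with John Doe."
    )
}

# json.dumps handles quoting/escaping, so the prompt text needs no manual escaping.
body = json.dumps(payload).encode("utf-8")

# Placeholder endpoint and token: replace with the real agent DX API URL and an
# OAuth token issued by your Pega instance.
req = urllib.request.Request(
    "https://your-instance.example.com/agent-dx-api-endpoint",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",
    },
    method="POST",
)

# urllib.request.urlopen(req)  # uncomment once the real endpoint is configured
```

Keeping the request body as a plain dict makes it easy to vary the prompt between test runs while leaving the transport code untouched.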
JC, can you provide me with the version of Pega Infinity you are working on? I will verify case processing field type support. Just to confirm: that is a single page of data, not a page list?
Thanks for sharing this deep dive and for continuing the Predictable AI Placement series.
This session does a great job of illustrating where an Assignment Assist Agent adds value by supporting subject matter experts during active work, rather than attempting to automate judgment or decision making.
One of the most important takeaways here is the placement principle itself: this agent appears only when a human is already engaged in an assignment and needs contextual insight, explanation, or validation, not when automation should still be doing the work. That distinction is central to predictable and governed AI design.
The examples showing how knowledge tools, data pages, and guided prompts are combined reinforce that this is not a free form chatbot, but a scoped assistant operating within clear guardrails.
It is also helpful that the session explicitly calls out the risk of unintentionally extending cycle time if the agent is overused or placed too early in the process.
I would be interested to hear from others how they have applied this pattern in real projects.
What criteria have you used to decide when an Assignment Assist Agent should appear, and what signals have helped you avoid placing it where automation or rules were still the better fit?
@JayachandraSiddipeta Can you compare your configuration to these criteria? This doc was just released and is a guide to what is supported in Case Type Tool based Application Agents. If you feel your setup is within these guidelines, let me know and I will look into it further. Pegasystems Documentation