I have created an AI agent and configured its instructions and guardrails. As part of this setup, I have defined three case-type tools, each with specific instructions to trigger the respective case type when certain conditions are met based on the user input (such as email content).
Currently, when I use the agent and ask it to create a new case and pass data from the current case to the newly created case, I am facing inconsistent behavior:
Sometimes the case is created but the data is not passed.
Sometimes the data is passed, but the case gets automatically submitted.
Due to auto-submission, I am unable to view or verify the pre-filled case details.
What I want to achieve is:
When a user explicitly instructs the agent, it should create the appropriate case based on the identified condition.
The agent should extract and pass relevant data from the current case (and user input) to the new case.
The new case should open in a pre-filled, editable state (not submitted), so that the user can review the details before proceeding.
I would like guidance on:
Whether this behavior should be controlled at the Agent Instructions level or within each individual tool’s instructions.
How to ensure consistent data mapping and passing between cases.
How to strictly prevent auto-submission and enforce a review step before submission.
Any suggestions on the correct approach or best practices to achieve this behavior would be very helpful.
The most reliable way to get this behaviour is to treat case creation, data mapping, and review-before-submit as tool/workflow design concerns and not something you try to control only with agent instructions.
Agent instructions can help the model choose the right tool, but the guarantee that the new case opens pre-filled and editable, without auto-submitting, should be enforced in the case-type tool configuration and the target case flow design.
Use the agent instructions for high-level routing, such as “choose the correct case-type tool based on user intent and extracted conditions.”
Use the individual tool instructions for tool-specific extraction guidance, such as which fields to collect for each case type.
But use the case workflow itself to enforce behaviour such as “create draft,” “open review assignment,” and “do not auto-submit.”
For example, you can create the case with a flag such as IsAgentInitiated = true, check that flag in the target case’s create stage or a pre-entry start condition, and then route agent-created cases to a review assignment so the user can inspect and edit the prefilled values before submission.
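To make the routing idea concrete, here is a minimal sketch in Python. The flag name IsAgentInitiated and the assignment names are illustrative assumptions, not actual platform rule names; in practice this check would live in a when rule or start condition in the target case type.

```python
# Hypothetical sketch: routing decision in the target case's create stage.
# "IsAgentInitiated" and the assignment names below are illustrative only.

def next_assignment(case: dict) -> str:
    """Decide where a newly created case should stop after creation."""
    if case.get("IsAgentInitiated"):
        # Agent-created cases always pause at a review assignment so a
        # human can inspect and edit the prefilled values before submission.
        return "ReviewPrefilledDetails"
    # User-created cases follow the standard intake flow.
    return "StandardIntake"

case = {"IsAgentInitiated": True, "CaseType": "Complaint"}
print(next_assignment(case))  # ReviewPrefilledDetails
```

The key design point is that the routing decision is made by the workflow from a data flag, not by the model's judgment at tool-call time.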
For data mapping, extract entities first, then explicitly map them to the case properties that need to be populated. That is more reliable than leaving the mapping implicit in the prompt.
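A small sketch of such an explicit mapping layer, assuming hypothetical entity and property names (the real field names would come from your case data model):

```python
# Hypothetical sketch: a fixed mapping table from extracted entities to
# target-case properties, so nothing depends on the model "remembering"
# field names in the prompt.

FIELD_MAP = {
    # entity name from extraction -> property on the target case
    "customer_name": "CustomerName",
    "email":         "CustomerEmail",
    "order_id":      "OrderID",
}

def map_entities_to_case(entities: dict) -> dict:
    """Return only explicitly mapped, non-empty properties."""
    props = {}
    for entity, prop in FIELD_MAP.items():
        value = entities.get(entity)
        if value:  # skip missing/empty values instead of guessing
            props[prop] = value
    return props

extracted = {"customer_name": "Ana", "email": "ana@example.com", "mood": "angry"}
print(map_entities_to_case(extracted))
# {'CustomerName': 'Ana', 'CustomerEmail': 'ana@example.com'}
```

Anything the extractor produces that is not in the map (like "mood" above) is simply dropped, which keeps the new case's data predictable.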
Agent instructions: decide which case-type tool to invoke.
Tool instructions: describe what data to extract and what arguments to pass.
Target case design: create the case in draft/review mode, prepopulate fields and stop at a review assignment rather than submitting.
Mapping layer: use explicit property mapping from current case + user input into target case fields.
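Pulling the layers together, a hedged sketch of the tool contract itself: the only action exposed to the agent creates a draft and stops at review, while submission is a separate action the agent can never call. All names here are assumptions for illustration.

```python
# Hypothetical sketch: the agent-facing tool is create-only; submission
# is a separate function that is deliberately NOT registered as a tool.

def create_draft_case(case_type: str, properties: dict) -> dict:
    """Tool exposed to the agent: creates a draft, never submits."""
    return {
        "id": f"{case_type}-001",     # placeholder id for illustration
        "type": case_type,
        "status": "Pending-Review",   # stops at the review assignment
        "properties": properties,
    }

def submit_case(case: dict) -> dict:
    """Only the user's explicit action in the review step calls this."""
    case["status"] = "Submitted"
    return case

draft = create_draft_case("Complaint", {"CustomerName": "Ana"})
print(draft["status"])  # Pending-Review
```

Because auto-submission is impossible at the tool-contract level, no amount of prompt drift can cause the agent to submit the case on its own.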
If you’re expecting users to review the information, then a conversational agent may not be a good fit for your requirement. The whole purpose of a conversational agent is to guide the user and automate completion of the assignment rather than require manual review. If you do expect manual review, then the next assignment has to be routed to a different user or workqueue so that user can review and take action.