Added support for "return" handoffs (#1) #869
base: main
Conversation
Could also solve the problem specified in: #858

@rm-openai - Would love your review
I thought about this for a while, and ultimately I think this doesn't belong in handoffs because the definition of a handoff is something that takes over control. Whether it chooses to return control should be up to the new agent.
That said, the problem identified is a real one. I think the right way to do it is via a FunctionTool that also receives the full conversation history. i.e. you do something like

```python
@function_tool
def my_function(context, history, ... other args):
    ...
```

That function could then use an agent or not, but either way it has access to the conversation history.
Thoughts?
EDIT: Also, this should be really easy now because of the ToolContext you added - can just add a `history` field with all the previous items in there.
@rm-openai - Generally, I agree - this was actually my go-to as well. There are, however, two gaps with that implementation:

If you have any good solution for either of these with the function-calls / agent-as-tool approach, I'd love your input :)

EDIT: I also thought about using the existing handoffs mechanism and, as you suggested, giving the new agent the choice of whether to return control or not. But that seemed unnatural to me for two reasons:

EDIT 2: Another option is to support something parallel to Handoffs, like
* Fix function_schema name override bug (openai#872) - ensure `name_override` is always used in `function_schema`; test name override when docstring info is disabled. Testing: `make format`, `make lint`, `make mypy`, `make tests`. Resolves openai#860. https://chatgpt.com/codex/tasks/task_i_684f1cf885b08321b4dd3f4294e24ca2
* Adopted float instead of timedelta for timeout parameters (openai#874) - replaced the `timedelta` parameters for MCP timeouts with `float` values, addressing issue openai#845. Since the official MCP repository incorporated these changes in modelcontextprotocol/python-sdk#941, updating the MCP version in openai-agents and specifying the timeouts as floats should be enough.
* Prompts support (openai#876) - add support for the new OpenAI prompts feature.
* v0.0.18 (openai#878)
* Allow replacing AgentRunner and TraceProvider (openai#720)
* Prepare 0.0.19 release (openai#895)
* Added support for "return" handoffs (#1)
* Fix bug in Reasoning with `store=False`

Co-authored-by: Rohan Mehta <[email protected]>
Co-authored-by: Daniele Morotti <[email protected]>
Co-authored-by: pakrym-oai <[email protected]>
Related to issue: #847