Description
Checks
- I have updated to the latest minor and patch version of Strands
- I have checked the documentation and this is not expected behavior
- I have searched the issues and there are no duplicates of my issue
Strands Version
1.9.1 and 1.7.0
Python Version
3.13.3
Operating System
Windows 11 (but the error is also present when executing in bedrock agentcore runtime, which I assume is Linux)
Installation Method
pip
Steps to Reproduce
Here's what's going on: in the last 48 hours, my agent started being unable to call tools. My tools call a Lambda function and then return the output, simple as that. I didn't change anything about my Lambdas or the way the agent was being provided tools. However, now the agent calls the tool and immediately thinks it didn't get the response, so it calls it again. It ends up calling the tool 7 or 8 times, making up a result for the tool call (even though I've instructed it not to), and then getting the result of the tool call later and correcting itself. What this looks like to the end user is that the agent makes up an incorrect reply to their question, immediately followed by the correct response.
Here's my code for building my agent
agent = Agent(
    model=BedrockModel(
        model_id=constants['gpt_oss_120b_id'],
        guardrail_id=constants['guardrail_id'],
        guardrail_version=constants['guardrail_version'],
        guardrail_trace="enabled",
        top_p=0.5,
        temperature=0.5
    ),
    tools=(lambda_functions + [query_knowledge_base]),
    hooks=[AgentHookProvider(messages_saver)],
    system_prompt=build_instructions()
)
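For context, the agent is invoked with the end user's question, roughly like this (a minimal sketch; the prompt shown is a placeholder, not my actual production call):

# Sketch of the invocation (placeholder prompt; in production the end user's question is passed in)
response = agent("example user question")
print(response)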
Here's my code for the lambda functions
import json
import time
import uuid

import boto3

# Client and in-memory result cache used by the snippet below
lambda_client = boto3.client('lambda')
lambda_cache = []

def call_lambda(function_name, params: dict):
    try:
        execution_id = str(uuid.uuid4())
        print(
            f'Calling lambda {function_name} with params {params}, execution id: {execution_id}')
        payload = {
            "functionName": function_name,
            "params": params
        }
        encoded_payload = json.dumps(payload).encode('utf-8')
        response = lambda_client.invoke(FunctionName='lambda_name',
                                        LogType='Tail', Payload=encoded_payload)
        result = json.loads(response['Payload'].read().decode('utf-8'))
        print(
            f'Result status Code: {result["statusCode"]}, execution id: {execution_id}')
        print(
            f'Result body: {result["body"]}\nexecution id: {execution_id}')
        if result['statusCode'] != 200:
            raise Exception(result['body'])
        lambda_cache.append({
            'functionName': function_name,
            'params': params,
            'body': result['body'],
            'timestamp': time.time()
        })
        return json.loads(result['body'])
    except Exception as e:
        print(
            f'Error calling lambda {function_name} with params {params}: {e}')
        return {'errMsg': str(e), 'statusCode': 500}
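# To sanity-check the Lambda round trip outside the agent, the wrapper above can
# be called directly (a sketch; 'fn_name' and the params dict are placeholders):
result = call_lambda('fn_name', {'example_param': 'example_value'})
print(result)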
@tool
def fn_name(opts: dict):
    try:
        rst = call_lambda('fn_name', opts)
        return rst
    except Exception as e:
        print(f'Error calling fn_name: {e}')
        return {'errMsg': str(e), 'statusCode': 500}

lambda_functions = [
    fn_name,
    ...
]
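To isolate the tools from the agent's tool-calling loop, I can also exercise a decorated function directly (a sketch with a placeholder payload, assuming the @tool wrapper still exposes the underlying function as a plain callable):

# Bypass the agent entirely and invoke the decorated tool as a normal function
print(fn_name({'example_param': 'example_value'}))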
Again, I didn't change anything about the Lambdas or how the agent calls the functions or is given the functions. I've tried both the gpt-oss model (which is the one I was using) and now the Nova Micro model, and both give the same issue. This is definitely an issue with Strands, since I've tested this locally (not running on the AgentCore runtime) and the same issue occurs.
What's confusing to me is that I've tested with both the 1.7.0 (what I was on) and 1.9.1 (latest) Strands SDK versions and had the issue with both.
Expected Behavior
Agent waits for the tool call response before generating more content
Actual Behavior
Agent starts generating more content directly after calling the tool, believing the tool already returned.
Additional Context
No response
Possible Solution
No response
Related Issues
No response