
Conversation

@poshinchen (Contributor) commented on Sep 30, 2025

Description

Follows the OTEL recommendation to update the GenAI semantic conventions to v1.37.

Details

  1. Detect whether gen_ai_latest_experimental is present in the OTEL_SEMCONV_STABILITY_OPT_IN environment variable and switch conventions accordingly (see the sketch after this list)
  2. For the latest conventions:
    2.1 Replace gen_ai.system with gen_ai.provider.name
    2.2 Rename all events that carry inputs / outputs to [gen_ai.client.inference.operation.details](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-events/#event-eventgen_aiclientinferenceoperationdetails)
    2.3 Move all inputs to the event attribute gen_ai.input.messages
    2.4 Move all outputs to the event attribute gen_ai.output.messages
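
A minimal sketch of the opt-in detection and attribute switch, assuming hypothetical helper names (`_use_latest_genai_conventions`, `_provider_attributes`); the actual code in this PR may be structured differently:

```python
import os


def _use_latest_genai_conventions() -> bool:
    """Check whether the experimental GenAI conventions are opted in.

    OTEL_SEMCONV_STABILITY_OPT_IN is a comma-separated list of opt-in values.
    """
    opt_in = os.environ.get("OTEL_SEMCONV_STABILITY_OPT_IN", "")
    return "gen_ai_latest_experimental" in [v.strip() for v in opt_in.split(",")]


def _provider_attributes(provider: str) -> dict:
    """Return the provider attribute under whichever convention is in effect."""
    if _use_latest_genai_conventions():
        return {"gen_ai.provider.name": provider}  # v1.37 experimental conventions
    return {"gen_ai.system": provider}  # previous conventions
```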

[NOTE] The inputs / outputs are currently serialized to strings for gen_ai.input.messages / gen_ai.output.messages. Although the spec example shows these attributes as complex objects, the add_event API does not accept nested objects or lists (see the sketch below).
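
A sketch of how the consolidated event could be emitted with the message payloads serialized to JSON strings; the helper name and call shape are illustrative, not the PR's exact code:

```python
import json

from opentelemetry import trace


def _add_inference_details_event(span: trace.Span, inputs: list, outputs: list) -> None:
    """Emit the consolidated v1.37 event with JSON-serialized message payloads."""
    span.add_event(
        "gen_ai.client.inference.operation.details",
        attributes={
            # Serialized to strings because event attribute values cannot be
            # nested objects or lists of dicts.
            "gen_ai.input.messages": json.dumps(inputs),
            "gen_ai.output.messages": json.dumps(outputs),
        },
    )
```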

Related Issues

#877

Documentation PR

TBD

Type of Change

New feature

Testing

How have you tested the change? Verify that the changes do not break functionality or introduce warnings in consuming repositories: agents-docs, agents-tools, agents-cli

  • I ran `hatch run prepare`

Checklist

  • I have read the CONTRIBUTING document
  • I have added any necessary tests that prove my fix is effective or my feature works
  • [ ] I have updated the documentation accordingly
  • I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@poshinchen poshinchen force-pushed the feat/update-semantic-conventions branch from e4d43de to fad30fd on October 1, 2025 17:38
@poshinchen poshinchen requested a review from a team October 1, 2025 17:38
@poshinchen poshinchen self-assigned this Oct 1, 2025
@poshinchen poshinchen marked this pull request as ready for review October 1, 2025 17:41
@poshinchen poshinchen merged commit 2493545 into strands-agents:main Oct 2, 2025
22 checks passed
JackYPCOnline added a commit that referenced this pull request Oct 10, 2025
* feat: replace kwargs with invocation_state in agent APIs

* fix: handle **kwargs in stream_async.

* feat: add a unit test for the change

* Update src/strands/agent/agent.py

Co-authored-by: Nick Clegg <[email protected]>

* tool - executors - concurrent - remove no-op gather (#954)

* feat(telemetry): updated traces to match OTEL v1.37 semantic conventions (#952)

* event loop - handle model execution (#958)

* feat: implement concurrent message reading for session managers (#897)

Replace sequential message loading with async concurrent reading in both
S3SessionManager and FileSessionManager to improve performance for long
conversations. Uses asyncio.gather() with run_in_executor() to read
multiple messages simultaneously while maintaining proper ordering.

Resolves: #874

Co-authored-by: Vamil Gandhi <[email protected]>
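
A minimal sketch of the concurrent-read pattern described in that commit, assuming a hypothetical file-per-message layout; the real session managers wrap their own S3 / file storage calls:

```python
import asyncio
import json
from pathlib import Path


def _read_message(path: Path) -> dict:
    """Blocking read of a single message file (illustrative only)."""
    return json.loads(path.read_text())


async def _read_messages_concurrently(paths: list[Path]) -> list[dict]:
    """Read all message files concurrently while preserving their order."""
    loop = asyncio.get_running_loop()
    # Offload each blocking read to the default thread pool; asyncio.gather
    # returns results in the same order as the submitted paths.
    tasks = [loop.run_in_executor(None, _read_message, p) for p in paths]
    return await asyncio.gather(*tasks)
```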

* hooks - before tool call event - cancel tool (#964)

* fix(telemetry): removed double serialization for events (#977)

* fix(litellm): map LiteLLM context-window errors to ContextWindowOverflowException (#994)

* feat: add more tests and adjust invocation_state dic structure

* Apply suggestion from @Unshure

Co-authored-by: Nick Clegg <[email protected]>

* fix: adjust **kwargs in multiagent primitives

---------

Co-authored-by: Nick Clegg <[email protected]>
Co-authored-by: Patrick Gray <[email protected]>
Co-authored-by: poshinchen <[email protected]>
Co-authored-by: Vamil Gandhi <[email protected]>
Co-authored-by: Vamil Gandhi <[email protected]>
Co-authored-by: ratish <[email protected]>