⚡️ Speed up function extract_sentrytrace_data by 37% #44

Open: codeflash-ai wants to merge 40 commits into master from codeflash/optimize-extract_sentrytrace_data-mg9m9ul7

Conversation
    
    getsentry#4875) ### Description We cannot directly intercept MCP Tool calls, as they are done remotely by the LLM and not in the Agent itself. However, we see when such a tool call took place, so we can emit a zero-length span with the tool call specifics. It will start at the same time as the parent span. Closes https://linear.app/getsentry/issue/TET-1192/openai-agents-hosted-mcp-calls-cannot-be-wrapped-in-an-execute-tool <!-- CURSOR_SUMMARY --> --- > [!NOTE] > Emit execute_tool spans for MCP tool calls detected in agent results, with tool metadata, input/output (PII-gated), and error status. > > - **Tracing/Spans (openai_agents)**: > - Add `utils._create_mcp_execute_tool_spans` to emit `OP.GEN_AI_EXECUTE_TOOL` spans for MCP tool calls (`McpCall`) found in `result.output`. > - Sets `GEN_AI_TOOL_TYPE=mcp`, `GEN_AI_TOOL_NAME`, propagates input/output when PII allowed, and marks `SPANSTATUS.ERROR` on error. > - Spans start at the parent span's start time (zero-length representation of remote call). > - Wire into `spans/ai_client.update_ai_client_span` to create these tool spans after setting usage/input/output data. > - Update imports to include `SPANSTATUS` and `OP`. > > <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 96df8c1. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup> <!-- /CURSOR_SUMMARY --> --------- Co-authored-by: Ivana Kellyer <[email protected]>
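For orientation, a rough sketch of that approach. The helper name, the `McpCall` field names, and the exact span arguments here are illustrative, not the integration's actual code; only `OP.GEN_AI_EXECUTE_TOOL` and `SPANSTATUS.ERROR` are taken from the summary above.

```python
# Rough sketch only -- names and span arguments are illustrative.
import sentry_sdk
from sentry_sdk.consts import OP, SPANSTATUS

def _emit_mcp_tool_spans(parent_span, mcp_calls, include_pii):
    for call in mcp_calls:
        with sentry_sdk.start_span(
            op=OP.GEN_AI_EXECUTE_TOOL,
            description=f"execute_tool {call.name}",
            # Zero-length representation of a remote call: start at the
            # parent span's start time.
            start_timestamp=parent_span.start_timestamp,
        ) as span:
            span.set_data("gen_ai.tool.type", "mcp")
            span.set_data("gen_ai.tool.name", call.name)
            if include_pii:
                span.set_data("gen_ai.tool.input", call.arguments)
                span.set_data("gen_ai.tool.output", call.output)
            if getattr(call, "error", None):
                span.set_status(SPANSTATUS.ERROR)
```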
Update our test matrix with new releases of integrated frameworks and libraries. ## How it works - Scan PyPI for all supported releases of all frameworks we have a dedicated test suite for. - Pick a representative sample of releases to run our test suite against. We always test the latest and oldest supported version. - Update [tox.ini](https://github.com/getsentry/sentry-python/blob/master/tox.ini) with the new releases. ## Action required - If CI passes on this PR, it's safe to approve and merge. It means our integrations can handle new versions of frameworks that got pulled in. - If CI doesn't pass on this PR, this points to an incompatibility of either our integration or our test setup with a new version of a framework. - Check what the failures look like and either fix them, or update the [test config](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/config.py) and rerun [scripts/generate-test-files.sh](https://github.com/getsentry/sentry-python/blob/master/scripts/generate-test-files.sh). See [scripts/populate_tox/README.md](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/README.md) for what configuration options are available. _____________________ _🤖 This PR was automatically created using [a GitHub action](https://github.com/getsentry/sentry-python/blob/master/.github/workflows/update-tox.yml)._ Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
The optimization adds length checks before expensive string formatting operations. Specifically:
**Key Changes:**
- Added `len(trace_id) != 32` check before `"{:032x}".format(int(trace_id, 16))`
- Added `len(parent_span_id) != 16` check before `"{:016x}".format(int(parent_span_id, 16))`
**Why It's Faster:**
The original code always performed string-to-int conversion and formatting, even when the trace_id/span_id were already properly formatted. The optimization skips these expensive operations when the strings are already the correct length (32 hex chars for trace_id, 16 for span_id).
The `int(trace_id, 16)` and `"{:032x}".format()` operations are computationally expensive, involving:
- Hexadecimal string parsing
- Integer conversion 
- String formatting with zero-padding
**Performance Impact:**
Test results show the optimization is most effective when trace IDs and span IDs are already properly formatted (which is common in production). Cases like `test_valid_full_header` show 51.6% speedup, and `test_missing_trace_id` shows 65.9% speedup. The optimization has minimal overhead for cases where formatting is still needed, with only small gains (1-7%) for malformed inputs.
This is particularly valuable for high-throughput tracing scenarios where most headers contain well-formatted trace data.
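As a concrete illustration, here is a minimal sketch of the added guards. The helper name is hypothetical; in the PR the checks sit inline in `extract_sentrytrace_data`.

```python
# Hypothetical helper illustrating the added guards; the real checks live
# inline in extract_sentrytrace_data.
def _normalize_trace_ids(trace_id, parent_span_id):
    # Skip the costly parse-and-reformat when the IDs already have the
    # canonical width (32 hex chars for trace_id, 16 for span_id).
    if trace_id is not None and len(trace_id) != 32:
        trace_id = "{:032x}".format(int(trace_id, 16))
    if parent_span_id is not None and len(parent_span_id) != 16:
        parent_span_id = "{:016x}".format(int(parent_span_id, 16))
    return trace_id, parent_span_id
```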
    
              
misrasaurabh1 approved these changes on Oct 2, 2025
### Description Even though we try to figure out the current release automatically if it's not provided, it can still end up being `None`. If that's the case, it won't be attached to logs. The `test_logs_attributes` test assumes there always is a release, which is incorrect. I opted for conditionally checking for `sentry.release` in the test instead of removing the check altogether, even though the test itself is supposed to test custom user provided attributes. The reason is that there is no other generic logs test testing `sentry.release`. #### Issues Closes getsentry#4878 #### Reminders - Please add tests to validate your changes, and lint your code using `tox -e linters`. - Add GH Issue ID _&_ Linear ID (if applicable) - PR title should use [conventional commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type) style (`feat:`, `fix:`, `ref:`, `meta:`) - For external contributors: [CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md), [Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord community](https://discord.gg/Ww9hbqr)
Adds tracing support to DramatiqIntegration getsentry#3454 --------- Co-authored-by: igorek <[email protected]> Co-authored-by: Anton Pirker <[email protected]> Co-authored-by: Ivana Kellyer <[email protected]>
Add a first implementation of the litellm integration, supporting completion and embeddings Closes https://linear.app/getsentry/issue/PY-1828/add-agent-monitoring-support-for-litellm Closes https://linear.app/getsentry/issue/TET-1218/litellm-testing <!-- CURSOR_SUMMARY --> --- > [!NOTE] > Introduce `LiteLLMIntegration` that instruments LiteLLM chat/embeddings calls with spans, token usage, optional prompt logging, and exception capture. > > - **Integrations**: > - Add `sentry_sdk/integrations/litellm.py` with `LiteLLMIntegration` registering LiteLLM `input/success/failure` callbacks. > - Start spans for `chat`/`embeddings`, set `gen_ai.*` metadata (provider/system, operation, model, params like `max_tokens`, `temperature`, `top_p`, `stream`). > - Record LiteLLM-specific fields: `api_base`, `api_version`, `custom_llm_provider`. > - Optionally capture request/response messages when `include_prompts` and PII are enabled. > - Track token usage from response `usage` and capture exceptions; always finish spans. > > <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 1ecd559. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup> <!-- /CURSOR_SUMMARY --> --------- Co-authored-by: Ivana Kellyer <[email protected]>
### Description huggingface_hub has a release candidate out and our test suite doesn't work with it. Two changes necessary: - 1.0 uses `httpx`, so our `responses` mocks don't work, we also need `pytest_httpx`. - With httpx we get additional `http.client` spans in the transaction, while before we were assuming the transaction only contains exactly one `gen_ai.*` span and nothing else. #### Issues Closes getsentry#4802 #### Reminders - Please add tests to validate your changes, and lint your code using `tox -e linters`. - Add GH Issue ID _&_ Linear ID (if applicable) - PR title should use [conventional commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type) style (`feat:`, `fix:`, `ref:`, `meta:`) - For external contributors: [CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md), [Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord community](https://discord.gg/Ww9hbqr)
Update our test matrix with new releases of integrated frameworks and libraries. ## How it works - Scan PyPI for all supported releases of all frameworks we have a dedicated test suite for. - Pick a representative sample of releases to run our test suite against. We always test the latest and oldest supported version. - Update [tox.ini](https://github.com/getsentry/sentry-python/blob/master/tox.ini) with the new releases. ## Action required - If CI passes on this PR, it's safe to approve and merge. It means our integrations can handle new versions of frameworks that got pulled in. - If CI doesn't pass on this PR, this points to an incompatibility of either our integration or our test setup with a new version of a framework. - Check what the failures look like and either fix them, or update the [test config](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/config.py) and rerun [scripts/generate-test-files.sh](https://github.com/getsentry/sentry-python/blob/master/scripts/generate-test-files.sh). See [scripts/populate_tox/README.md](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/README.md) for what configuration options are available. _____________________ _🤖 This PR was automatically created using [a GitHub action](https://github.com/getsentry/sentry-python/blob/master/.github/workflows/update-tox.yml)._ --------- Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Ivana Kellyer <[email protected]>
Update our test matrix with new releases of integrated frameworks and libraries. ## How it works - Scan PyPI for all supported releases of all frameworks we have a dedicated test suite for. - Pick a representative sample of releases to run our test suite against. We always test the latest and oldest supported version. - Update [tox.ini](https://github.com/getsentry/sentry-python/blob/master/tox.ini) with the new releases. ## Action required - If CI passes on this PR, it's safe to approve and merge. It means our integrations can handle new versions of frameworks that got pulled in. - If CI doesn't pass on this PR, this points to an incompatibility of either our integration or our test setup with a new version of a framework. - Check what the failures look like and either fix them, or update the [test config](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/config.py) and rerun [scripts/generate-test-files.sh](https://github.com/getsentry/sentry-python/blob/master/scripts/generate-test-files.sh). See [scripts/populate_tox/README.md](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/README.md) for what configuration options are available. _____________________ _🤖 This PR was automatically created using [a GitHub action](https://github.com/getsentry/sentry-python/blob/master/.github/workflows/update-tox.yml)._ --------- Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Ivana Kellyer <[email protected]>
…ry#4883) Prevent mutating cookies on incoming HTTP requests if the cookie name is in the scrubbers denylist. Cookies like `token=...` were replaced with `AnnotatedValue` because a shallow reference of the request information was held by the client. A deep copy is introduced so scrubbing does not interfere with Litestar, and in particular does not break `JWTCookieAuth`. Closes getsentry#4882 --------- Co-authored-by: Ivana Kellyer <[email protected]>
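A generic sketch of the fix described above; the names and data layout are illustrative, the point being that the scrubber operates on a deep copy so denylisted cookie values held by the framework are never mutated.

```python
# Illustrative only -- generic names, not the SDK's internal structures.
import copy

DENYLIST = {"token", "session"}

def scrubbed_request_info(request_info):
    # Deep-copy first so scrubbing cannot mutate cookie values that the
    # framework (e.g. Litestar's JWTCookieAuth) still references.
    safe = copy.deepcopy(request_info)
    cookies = safe.get("cookies", {})
    for name in cookies:
        if name.lower() in DENYLIST:
            cookies[name] = "[Filtered]"
    return safe
```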
### Description Removing the Check CI Config step altogether as well as associated parts of the toxgen script (`fail_on_changes`). Added a BIG ALL CAPS WARNING to `tox.ini` instead. Also updated the toxgen readme a bit. Removing the check should be fine because we haven't actually seen cases of people trying to edit `tox.ini` directly -- if this happens in the future it's easy to notice in the PR. If we don't notice it then, we can notice it during the weekly toxgen update. And if don't notice it then, the file simply gets overwritten. 🤷🏻♀️ ### The Problem With Checking `tox.ini`: The Long Read In order to check manual changes to `tox.ini` on a PR, we hash the committed file, then run toxgen, hash the result, and compare. If the hashes differ, we fail the check. This works fine as long as there have been no new releases between the two points in time when `tox.ini` was last committed and when we ran the check. This is usually not the case. There are new releases all the time. When we then rerun toxgen, the resulting `tox.ini` is different from the committed one because it contains the new releases. So the hashes are different without any manual changes to the file. One solution to this is always saving the timestamp of the last time `tox.ini` was generated, and then when rerunning toxgen for the purposes of the check, ignoring all new releases past the timestamp. This means any changes we detect were actually made by the user. However, the explicit timestamp is prone to merge conflicts. Anytime `master` has had a toxgen update, and a PR is made that also ran toxgen, the PR will have a merge conflict on the timestamp field that needs to be sorted out manually. This is annoying and unnecessary. (An attempt was made to use an implicit timestamp instead in the form of the commit timestamp, but this doesn't work since we squash commits on master, so the timestamp of the last commit that touched `tox.ini` is actually much later than the change was made. There are also other problems, like someone running toxgen but committing the change much later, etc.) ### Solutions considered - using a custom merge driver to resolve the timestamp conflict automatically (doesn't work on GH PRs) - running toxgen in CI on each PR and committing the change (would work but we're essentially already doing this with the cron job every week) - not checking in `tox.ini` at all, but running toxgen on each PR (introduces new package releases unrelated to the PR, no test setup committed -- contributors and package index maintainers also need to run our tests) - finding a different commit to use as the implicit timestamp (doesn't work because we squash commits on `master`) - ... In the end I decided to just get rid of the check. If people modifying `tox.ini` manually becomes a problem, we can deal with it then. I've added a big warning to `tox.ini` to discourage this. #### Issues Closes getsentry#4886 #### Reminders - Please add tests to validate your changes, and lint your code using `tox -e linters`. - Add GH Issue ID _&_ Linear ID (if applicable) - PR title should use [conventional commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type) style (`feat:`, `fix:`, `ref:`, `meta:`) - For external contributors: [CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md), [Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord community](https://discord.gg/Ww9hbqr)
- Add a constant that contains the allowed message roles according to OTEL and a mapping - Apply that mapping to all gen_ai integrations - We will track input roles that do not conform to expectations via a Sentry issue in agent monitoring to make sure we continually update the mappings --------- Co-authored-by: Ivana Kellyer <[email protected]>
…y#4770) Automatically fork isolation and current scopes when running tasks with `concurrent.futures`. Packages the implementation from getsentry#4508 (comment) as an integration. Closes getsentry#4565 --------- Co-authored-by: Anton Pirker <[email protected]>
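Conceptually the integration does something like the following sketch, written against sentry_sdk's public scope helpers; the actual wrapper in the PR handles more cases.

```python
# Conceptual sketch only; the integration's real wrapper is more involved.
import sentry_sdk
from sentry_sdk.scope import use_isolation_scope, use_scope

def _wrap_submitted_fn(fn):
    # Fork both scopes at submit time so the worker thread sees a copy of
    # the caller's Sentry state instead of sharing (or missing) it.
    isolation = sentry_sdk.get_isolation_scope().fork()
    current = sentry_sdk.get_current_scope().fork()

    def wrapped(*args, **kwargs):
        with use_isolation_scope(isolation), use_scope(current):
            return fn(*args, **kwargs)

    return wrapped
```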
### Description Python 3.14 is out, let's use it for linting. #### Issues Ref getsentry#4895 #### Reminders - Please add tests to validate your changes, and lint your code using `tox -e linters`. - Add GH Issue ID _&_ Linear ID (if applicable) - PR title should use [conventional commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type) style (`feat:`, `fix:`, `ref:`, `meta:`) - For external contributors: [CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md), [Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord community](https://discord.gg/Ww9hbqr)
### Description Remove old metrics code to make way for getsentry#4898 Metrics was always an experimental feature and Sentry stopped accepting metrics a year ago. #### Issues <!-- * resolves: getsentry#1234 * resolves: LIN-1234 --> #### Reminders - Please add tests to validate your changes, and lint your code using `tox -e linters`. - Add GH Issue ID _&_ Linear ID (if applicable) - PR title should use [conventional commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type) style (`feat:`, `fix:`, `ref:`, `meta:`) - For external contributors: [CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md), [Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord community](https://discord.gg/Ww9hbqr)
### Description Logs are not experimental anymore, but one of the internal log-related functions still had "experimental" in the name. #### Issues <!-- * resolves: getsentry#1234 * resolves: LIN-1234 --> #### Reminders - Please add tests to validate your changes, and lint your code using `tox -e linters`. - Add GH Issue ID _&_ Linear ID (if applicable) - PR title should use [conventional commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type) style (`feat:`, `fix:`, `ref:`, `meta:`) - For external contributors: [CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md), [Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord community](https://discord.gg/Ww9hbqr)
…y#4898) ### Summary Similar to getsentry/sentry-javascript#17883, this allows the py sdk to send in new trace metric protocol items, although this code is experimental since the schema may still change. Most of this code has been copied from logs (eg. log batcher -> metrics batcher) in order to dogfood, once we're more sure of our approach we can refactor. Closes LOGS-367 --------- Co-authored-by: Ivana Kellyer <[email protected]>
Adds support for `python-genai` integrations. It supports both sync and async clients, and both regular and streaming modes for interacting with models and building agents. Closes [PY-1733: Add agent monitoring support for `google-genai`](https://linear.app/getsentry/issue/PY-1733/add-agent-monitoring-support-for-google-genai)
…etsentry#4858) ### Description Without "@functools.wraps" added, Ray exposes Prometheus metrics with all tasks named "new_func" #### Issues * Follow up to [!4430](getsentry#4430) comments #### Reminders - Please add tests to validate your changes, and lint your code using `tox -e linters`. - Add GH Issue ID _&_ Linear ID (if applicable) - PR title should use [conventional commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type) style (`feat:`, `fix:`, `ref:`, `meta:`) - For external contributors: [CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md), [Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord community](https://discord.gg/Ww9hbqr)
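For context, a generic illustration of the `functools.wraps` fix (not the Ray integration's actual wrapper): without it, every wrapped task reports the wrapper's name.

```python
# Generic illustration: functools.wraps preserves the wrapped function's
# metadata, so Ray/Prometheus see the real task name instead of "new_func".
import functools

def _instrument(f):
    @functools.wraps(f)  # copies f.__name__, __doc__, etc. onto the wrapper
    def new_func(*args, **kwargs):
        return f(*args, **kwargs)
    return new_func

@_instrument
def my_task():
    pass

assert my_task.__name__ == "my_task"  # would be "new_func" without wraps
```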
### Description Updating tox + reorg the AI group alphabetically. New openai release doesn't work on 3.8, explicitly testing on 3.9+ from there Doing this now to unblock getsentry#4906 (comment) #### Issues <!-- * resolves: getsentry#1234 * resolves: LIN-1234 --> #### Reminders - Please add tests to validate your changes, and lint your code using `tox -e linters`. - Add GH Issue ID _&_ Linear ID (if applicable) - PR title should use [conventional commit](https://develop.sentry.dev/engineering-practices/commit-messages/#type) style (`feat:`, `fix:`, `ref:`, `meta:`) - For external contributors: [CONTRIBUTING.md](https://github.com/getsentry/sentry-python/blob/master/CONTRIBUTING.md), [Sentry SDK development docs](https://develop.sentry.dev/sdk/), [Discord community](https://discord.gg/Ww9hbqr)
…ry#4902) Add code source attributes to outgoing HTTP requests as described in getsentry/sentry-docs#15161. The attributes are only added if the time to receive a response to an HTTP request exceeds a configurable threshold value. Factors out functionality from SQL query source and tests that it works in the HTTP request setting. Closes getsentry#4881
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3 to 4. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/github/codeql-action/releases">github/codeql-action's releases</a>.</em></p> <blockquote> <h2>v3.30.8</h2> <h1>CodeQL Action Changelog</h1> <p>See the <a href="https://github.com/github/codeql-action/releases">releases page</a> for the relevant changes to the CodeQL CLI and language packs.</p> <h2>3.30.8 - 10 Oct 2025</h2> <p>No user facing changes.</p> <p>See the full <a href="https://github.com/github/codeql-action/blob/v3.30.8/CHANGELOG.md">CHANGELOG.md</a> for more information.</p> <h2>v3.30.7</h2> <h1>CodeQL Action Changelog</h1> <p>See the <a href="https://github.com/github/codeql-action/releases">releases page</a> for the relevant changes to the CodeQL CLI and language packs.</p> <h2>3.30.7 - 06 Oct 2025</h2> <p>No user facing changes.</p> <p>See the full <a href="https://github.com/github/codeql-action/blob/v3.30.7/CHANGELOG.md">CHANGELOG.md</a> for more information.</p> <h2>v3.30.6</h2> <h1>CodeQL Action Changelog</h1> <p>See the <a href="https://github.com/github/codeql-action/releases">releases page</a> for the relevant changes to the CodeQL CLI and language packs.</p> <h2>3.30.6 - 02 Oct 2025</h2> <ul> <li>Update default CodeQL bundle version to 2.23.2. <a href="https://redirect.github.com/github/codeql-action/pull/3168">#3168</a></li> </ul> <p>See the full <a href="https://github.com/github/codeql-action/blob/v3.30.6/CHANGELOG.md">CHANGELOG.md</a> for more information.</p> <h2>v3.30.5</h2> <h1>CodeQL Action Changelog</h1> <p>See the <a href="https://github.com/github/codeql-action/releases">releases page</a> for the relevant changes to the CodeQL CLI and language packs.</p> <h2>3.30.5 - 26 Sep 2025</h2> <ul> <li>We fixed a bug that was introduced in <code>3.30.4</code> with <code>upload-sarif</code> which resulted in files without a <code>.sarif</code> extension not getting uploaded. <a href="https://redirect.github.com/github/codeql-action/pull/3160">#3160</a></li> </ul> <p>See the full <a href="https://github.com/github/codeql-action/blob/v3.30.5/CHANGELOG.md">CHANGELOG.md</a> for more information.</p> <h2>v3.30.4</h2> <h1>CodeQL Action Changelog</h1> <p>See the <a href="https://github.com/github/codeql-action/releases">releases page</a> for the relevant changes to the CodeQL CLI and language packs.</p> <h2>3.30.4 - 25 Sep 2025</h2> <!-- raw HTML omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/github/codeql-action/blob/main/CHANGELOG.md">github/codeql-action's changelog</a>.</em></p> <blockquote> <h2>3.29.4 - 23 Jul 2025</h2> <p>No user facing changes.</p> <h2>3.29.3 - 21 Jul 2025</h2> <p>No user facing changes.</p> <h2>3.29.2 - 30 Jun 2025</h2> <ul> <li>Experimental: When the <code>quality-queries</code> input for the <code>init</code> action is provided with an argument, separate <code>.quality.sarif</code> files are produced and uploaded for each language with the results of the specified queries. Do not use this in production as it is part of an internal experiment and subject to change at any time. 
<a href="https://redirect.github.com/github/codeql-action/pull/2935">#2935</a></li> </ul> <h2>3.29.1 - 27 Jun 2025</h2> <ul> <li>Fix bug in PR analysis where user-provided <code>include</code> query filter fails to exclude non-included queries. <a href="https://redirect.github.com/github/codeql-action/pull/2938">#2938</a></li> <li>Update default CodeQL bundle version to 2.22.1. <a href="https://redirect.github.com/github/codeql-action/pull/2950">#2950</a></li> </ul> <h2>3.29.0 - 11 Jun 2025</h2> <ul> <li>Update default CodeQL bundle version to 2.22.0. <a href="https://redirect.github.com/github/codeql-action/pull/2925">#2925</a></li> <li>Bump minimum CodeQL bundle version to 2.16.6. <a href="https://redirect.github.com/github/codeql-action/pull/2912">#2912</a></li> </ul> <h2>3.28.21 - 28 July 2025</h2> <p>No user facing changes.</p> <h2>3.28.20 - 21 July 2025</h2> <ul> <li>Remove support for combining SARIF files from a single upload for GHES 3.18, see <a href="https://github.blog/changelog/2024-05-06-code-scanning-will-stop-combining-runs-from-a-single-upload/">the changelog post</a>. <a href="https://redirect.github.com/github/codeql-action/pull/2959">#2959</a></li> </ul> <h2>3.28.19 - 03 Jun 2025</h2> <ul> <li>The CodeQL Action no longer includes its own copy of the extractor for the <code>actions</code> language, which is currently in public preview. The <code>actions</code> extractor has been included in the CodeQL CLI since v2.20.6. If your workflow has enabled the <code>actions</code> language <em>and</em> you have pinned your <code>tools:</code> property to a specific version of the CodeQL CLI earlier than v2.20.6, you will need to update to at least CodeQL v2.20.6 or disable <code>actions</code> analysis.</li> <li>Update default CodeQL bundle version to 2.21.4. <a href="https://redirect.github.com/github/codeql-action/pull/2910">#2910</a></li> </ul> <h2>3.28.18 - 16 May 2025</h2> <ul> <li>Update default CodeQL bundle version to 2.21.3. <a href="https://redirect.github.com/github/codeql-action/pull/2893">#2893</a></li> <li>Skip validating SARIF produced by CodeQL for improved performance. <a href="https://redirect.github.com/github/codeql-action/pull/2894">#2894</a></li> <li>The number of threads and amount of RAM used by CodeQL can now be set via the <code>CODEQL_THREADS</code> and <code>CODEQL_RAM</code> runner environment variables. If set, these environment variables override the <code>threads</code> and <code>ram</code> inputs respectively. <a href="https://redirect.github.com/github/codeql-action/pull/2891">#2891</a></li> </ul> <h2>3.28.17 - 02 May 2025</h2> <ul> <li>Update default CodeQL bundle version to 2.21.2. <a href="https://redirect.github.com/github/codeql-action/pull/2872">#2872</a></li> </ul> <h2>3.28.16 - 23 Apr 2025</h2> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/github/codeql-action/commit/a841c540b73bac7685691a2f930006ba52db3645"><code>a841c54</code></a> Scratch <code>uploadSpecifiedFiles</code> tests, make <code>uploadPayload</code> tests instead</li> <li><a href="https://github.com/github/codeql-action/commit/aeb12f6eaaa7419b7170f27dc3e2b5710203ff2d"><code>aeb12f6</code></a> Merge branch 'main' into redsun82/skip-sarif-upload-tests</li> <li><a href="https://github.com/github/codeql-action/commit/6fd4ceb7bbb8ec2746fd4d3a64b77787dffd9afc"><code>6fd4ceb</code></a> Merge pull request <a href="https://redirect.github.com/github/codeql-action/issues/3189">#3189</a> from github/henrymercer/download-codeql-rate-limit</li> <li><a href="https://github.com/github/codeql-action/commit/196a3e577b477ffb129cb35c7ed3ba72e6e2dbe7"><code>196a3e5</code></a> Merge pull request <a href="https://redirect.github.com/github/codeql-action/issues/3188">#3188</a> from github/mbg/telemetry/partial-config</li> <li><a href="https://github.com/github/codeql-action/commit/98abb870dcd6421594724ae220643e13baf90298"><code>98abb87</code></a> Add configuration error for rate limited CodeQL download</li> <li><a href="https://github.com/github/codeql-action/commit/bdd2cdf891a0a89c6680bd54c9ba63c80e440f75"><code>bdd2cdf</code></a> Also include <code>language</code> in error status report for <code>start-proxy</code>, if available</li> <li><a href="https://github.com/github/codeql-action/commit/fb148789ab863424b005147b4b018fe5691e5ccc"><code>fb14878</code></a> Include <code>languages</code> in <code>start-proxy</code> telemetry</li> <li><a href="https://github.com/github/codeql-action/commit/2ff418f28a66dd71cd80701e95ec26db12875f15"><code>2ff418f</code></a> Parse <code>language</code> before calling <code>getCredentials</code></li> <li>See full diff in <a href="https://github.com/github/codeql-action/compare/v3...v4">compare view</a></li> </ul> </details> <br /> [](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details> Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Update our test matrix with new releases of integrated frameworks and libraries. ## How it works - Scan PyPI for all supported releases of all frameworks we have a dedicated test suite for. - Pick a representative sample of releases to run our test suite against. We always test the latest and oldest supported version. - Update [tox.ini](https://github.com/getsentry/sentry-python/blob/master/tox.ini) with the new releases. ## Action required - If CI passes on this PR, it's safe to approve and merge. It means our integrations can handle new versions of frameworks that got pulled in. - If CI doesn't pass on this PR, this points to an incompatibility of either our integration or our test setup with a new version of a framework. - Check what the failures look like and either fix them, or update the [test config](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/config.py) and rerun [scripts/generate-test-files.sh](https://github.com/getsentry/sentry-python/blob/master/scripts/generate-test-files.sh). See [scripts/populate_tox/README.md](https://github.com/getsentry/sentry-python/blob/master/scripts/populate_tox/README.md) for what configuration options are available. _____________________ _🤖 This PR was automatically created using [a GitHub action](https://github.com/getsentry/sentry-python/blob/master/.github/workflows/update-tox.yml)._ --------- Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Ivana Kellyer <[email protected]>
Hi all, I am building Codeflash.ai, an automated performance optimizer for Python codebases. I tried optimizing Sentry and found a bunch of great optimizations that I would like to contribute. I would love to collaborate with your team to get them reviewed and merged. Let me know the best way to get in touch.
<!-- CODEFLASH_OPTIMIZATION:
{"function":"_get_db_span_description","file":"sentry_sdk/integrations/redis/modules/queries.py","speedup_pct":"44%","speedup_x":"0.44x","original_runtime":"586
microseconds","best_runtime":"408
microseconds","optimization_type":"loop","timestamp":"2025-10-02T20:47:52.016Z","version":"1.0"}
-->
#### 📄 44% (0.44x) speedup for ***`_get_db_span_description` in `sentry_sdk/integrations/redis/modules/queries.py`***
⏱️ Runtime: **`586 microseconds`** **→** **`408 microseconds`** (best of `269` runs)
#### 📝 Explanation and details
The optimization achieves a **43% speedup** by eliminating redundant
function calls inside the loop in `_get_safe_command()`.
**Key optimizations applied:**
1. **Cached `should_send_default_pii()` call**: The original code called
this function inside the loop for every non-key argument (up to 146
times in profiling). The optimized version calls it once before the loop
and stores the result in `send_default_pii`, reducing expensive function
calls from O(n) to O(1).
2. **Pre-computed `name.lower()`**: The original code computed
`name.lower()` inside the loop for every argument (204 times in
profiling). The optimized version computes it once before the loop and
reuses the `name_low` variable.
**Performance impact from profiling:**
- The `should_send_default_pii()` calls dropped from 1.40ms (65.2% of
total time) to 625μs (45.9% of total time)
- The `name.lower()` calls were eliminated from the loop entirely,
removing 99ms of redundant computation
- Overall `_get_safe_command` execution time improved from 2.14ms to
1.36ms (36% faster)
**Test case patterns where this optimization excels:**
- **Multiple arguments**: Commands with many arguments see dramatic
improvements (up to 262% faster for large arg lists)
- **Large-scale operations**: Tests with 1000+ arguments show 171-223%
speedups
- **Frequent Redis commands**: Any command processing multiple values
benefits significantly
The optimization is most effective when processing Redis commands with
multiple arguments, which is common in batch operations and complex data
manipulations.
✅ **Correctness verification report:**
| Test                        | Status            |
| --------------------------- | ----------------- |
| ⚙️ Existing Unit Tests | 🔘 **None Found** |
| 🌀 Generated Regression Tests | ✅ **48 Passed** |
| ⏪ Replay Tests | 🔘 **None Found** |
| 🔎 Concolic Coverage Tests | 🔘 **None Found** |
| 📊 Tests Coverage | 100.0% |
<details>
<summary>🌀 Generated Regression Tests and Runtime</summary>
```python
import pytest
from sentry_sdk.integrations.redis.modules.queries import \
    _get_db_span_description
_MAX_NUM_ARGS = 10
# Dummy RedisIntegration class for testing
class RedisIntegration:
    def __init__(self, max_data_size=None):
        self.max_data_size = max_data_size
# Dummy should_send_default_pii function for testing
_send_pii = False
from sentry_sdk.integrations.redis.modules.queries import \
    _get_db_span_description
# --- Basic Test Cases ---
def test_basic_no_args():
    """Test command with no arguments."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "PING", ()); desc = codeflash_output # 2.55μs -> 7.76μs (67.2% slower)
def test_basic_single_arg_pii_false():
    """Test command with one argument, PII off."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey",)); desc = codeflash_output # 3.62μs -> 7.86μs (54.0% slower)
def test_basic_single_arg_pii_true():
    """Test command with one argument, PII on."""
    global _send_pii
    _send_pii = True
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey",)); desc = codeflash_output # 3.28μs -> 7.40μs (55.7% slower)
def test_basic_multiple_args_pii_false():
    """Test command with multiple args, PII off."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey", "value1", "value2")); desc = codeflash_output # 12.6μs -> 8.24μs (52.8% faster)
def test_basic_multiple_args_pii_true():
    """Test command with multiple args, PII on."""
    global _send_pii
    _send_pii = True
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey", "value1", "value2")); desc = codeflash_output # 9.92μs -> 8.47μs (17.0% faster)
def test_basic_sensitive_command():
    """Test sensitive command: should always filter after command name."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "SET", ("mykey", "secret")); desc = codeflash_output # 7.96μs -> 7.56μs (5.33% faster)
def test_basic_sensitive_command_case_insensitive():
    """Test sensitive command with different casing."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "set", ("mykey", "secret")); desc = codeflash_output # 7.77μs -> 7.84μs (0.881% slower)
def test_basic_max_num_args():
    """Test that args beyond _MAX_NUM_ARGS are ignored."""
    integration = RedisIntegration()
    args = tuple(f"arg{i}" for i in range(_MAX_NUM_ARGS + 2))
    codeflash_output = _get_db_span_description(integration, "GET", args); desc = codeflash_output # 28.0μs -> 9.43μs (197% faster)
    # Only up to _MAX_NUM_ARGS+1 args are processed (the first arg is key)
    expected = "GET 'arg0'" + " [Filtered]" * _MAX_NUM_ARGS
# --- Edge Test Cases ---
def test_edge_empty_command_name():
    """Test with empty command name."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "", ("key",)); desc = codeflash_output # 3.22μs -> 7.46μs (56.9% slower)
def test_edge_empty_args():
    """Test with empty args tuple."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "DEL", ()); desc = codeflash_output # 2.09μs -> 6.73μs (69.0% slower)
def test_edge_none_arg():
    """Test with None argument."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", (None,)); desc = codeflash_output # 3.37μs -> 7.57μs (55.5% slower)
def test_edge_mixed_types_args():
    """Test with mixed argument types."""
    integration = RedisIntegration()
    args = ("key", 123, 45.6, True, None, ["a", "b"], {"x": 1})
    codeflash_output = _get_db_span_description(integration, "GET", args); desc = codeflash_output # 19.9μs -> 8.46μs (136% faster)
def test_edge_sensitive_command_with_pii_true():
    """Sensitive commands should always filter, even if PII is on."""
    global _send_pii
    _send_pii = True
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "AUTH", ("user", "pass")); desc = codeflash_output # 3.40μs -> 7.50μs (54.7% slower)
def test_edge_max_data_size_truncation():
    """Test truncation when description exceeds max_data_size."""
    integration = RedisIntegration(max_data_size=15)
    codeflash_output = _get_db_span_description(integration, "GET", ("verylongkeyname", "value")); desc = codeflash_output # 9.20μs -> 8.72μs (5.57% faster)
    # "GET 'verylongkeyname' [Filtered]" is longer than 15
    # Truncate to 15-len("...") = 12, then add "..."
    expected = "GET 'verylo..."
def test_edge_max_data_size_exact_length():
    """Test truncation when description is exactly max_data_size."""
    integration = RedisIntegration(max_data_size=23)
    codeflash_output = _get_db_span_description(integration, "GET", ("shortkey",)); desc = codeflash_output # 3.33μs -> 7.63μs (56.4% slower)
def test_edge_max_data_size_less_than_ellipsis():
    """Test when max_data_size is less than length of ellipsis."""
    integration = RedisIntegration(max_data_size=2)
    codeflash_output = _get_db_span_description(integration, "GET", ("key",)); desc = codeflash_output # 4.07μs -> 8.65μs (52.9% slower)
def test_edge_args_are_empty_strings():
    """Test when args are empty strings."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("", "")); desc = codeflash_output # 8.52μs -> 7.74μs (10.1% faster)
def test_edge_command_name_is_space():
    """Test when command name is a space."""
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, " ", ("key",)); desc = codeflash_output # 3.09μs -> 7.34μs (57.9% slower)
# --- Large Scale Test Cases ---
def test_large_many_args_pii_false():
    """Test with a large number of arguments, PII off."""
    integration = RedisIntegration()
    args = tuple(f"arg{i}" for i in range(1000))
    codeflash_output = _get_db_span_description(integration, "GET", args); desc = codeflash_output # 32.3μs -> 10.3μs (213% faster)
    # Only first arg shown, rest are filtered, up to _MAX_NUM_ARGS
    expected = "GET 'arg0'" + " [Filtered]" * min(len(args)-1, _MAX_NUM_ARGS)
def test_large_many_args_pii_true():
    """Test with a large number of arguments, PII on."""
    global _send_pii
    _send_pii = True
    integration = RedisIntegration()
    args = tuple(f"arg{i}" for i in range(1000))
    # Only up to _MAX_NUM_ARGS are processed
    expected = "GET " + " ".join([repr(f"arg{i}") for i in range(_MAX_NUM_ARGS+1)])
    codeflash_output = _get_db_span_description(integration, "GET", args); desc = codeflash_output # 28.1μs -> 9.55μs (194% faster)
def test_large_long_command_name_and_args():
    """Test with very long command name and args."""
    integration = RedisIntegration()
    cmd = "LONGCOMMAND" * 10
    args = tuple("X"*100 for _ in range(_MAX_NUM_ARGS+1))
    expected = cmd + " " + " ".join([repr("X"*100) if i == 0 else "[Filtered]" for i in range(_MAX_NUM_ARGS+1)])
    codeflash_output = _get_db_span_description(integration, cmd, args); desc = codeflash_output # 34.2μs -> 9.45μs (262% faster)
def test_large_truncation():
    """Test truncation with very large description."""
    integration = RedisIntegration(max_data_size=50)
    args = tuple("X"*20 for _ in range(_MAX_NUM_ARGS+1))
    codeflash_output = _get_db_span_description(integration, "GET", args); desc = codeflash_output # 28.3μs -> 10.0μs (182% faster)
def test_large_sensitive_command():
    """Test large sensitive command, all args filtered."""
    integration = RedisIntegration()
    args = tuple(f"secret{i}" for i in range(1000))
    codeflash_output = _get_db_span_description(integration, "SET", args); desc = codeflash_output # 28.0μs -> 10.1μs (178% faster)
    # Only up to _MAX_NUM_ARGS+1 args are processed, all filtered
    expected = "SET" + " [Filtered]" * (_MAX_NUM_ARGS+1)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
#------------------------------------------------
import pytest  # used for our unit tests
from sentry_sdk.integrations.redis.modules.queries import \
    _get_db_span_description
_MAX_NUM_ARGS = 10
# Minimal RedisIntegration stub for testing
class RedisIntegration:
    def __init__(self, max_data_size=None):
        self.max_data_size = max_data_size
# Minimal Scope and client stub for should_send_default_pii
class ClientStub:
    def __init__(self, send_pii):
        self._send_pii = send_pii
    def should_send_default_pii(self):
        return self._send_pii
class Scope:
    _client = ClientStub(send_pii=False)
    @classmethod
    def get_client(cls):
        return cls._client
def should_send_default_pii():
    return Scope.get_client().should_send_default_pii()
from sentry_sdk.integrations.redis.modules.queries import \
    _get_db_span_description
# --- Begin: Unit Tests ---
# 1. Basic Test Cases
def test_basic_single_arg_no_pii():
    # Test a simple command with one argument, PII disabled
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey",)); result = codeflash_output # 3.46μs -> 7.84μs (55.9% slower)
def test_basic_multiple_args_no_pii():
    # Test a command with multiple arguments, PII disabled
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "SET", ("mykey", "myvalue")); result = codeflash_output # 8.35μs -> 8.05μs (3.70% faster)
def test_basic_multiple_args_with_pii():
    # Test a command with multiple arguments, PII enabled
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "SET", ("mykey", "myvalue")); result = codeflash_output # 7.97μs -> 7.63μs (4.39% faster)
def test_basic_sensitive_command():
    # Test a sensitive command, should always be filtered
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "AUTH", ("user", "password")); result = codeflash_output # 3.40μs -> 7.46μs (54.4% slower)
def test_basic_no_args():
    # Test a command with no arguments
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "PING", ()); result = codeflash_output # 2.16μs -> 6.63μs (67.4% slower)
# 2. Edge Test Cases
def test_edge_max_num_args():
    # Test with more than _MAX_NUM_ARGS arguments, should truncate at _MAX_NUM_ARGS
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = tuple(f"arg{i}" for i in range(_MAX_NUM_ARGS + 2))
    codeflash_output = _get_db_span_description(integration, "SET", args); result = codeflash_output # 32.4μs -> 9.05μs (258% faster)
    # Only up to _MAX_NUM_ARGS should be included
    expected = "SET " + " ".join(
        [repr(args[0])] + [repr(arg) for arg in args[1:_MAX_NUM_ARGS+1]]
    )
def test_edge_empty_string_key():
    # Test with an empty string as key
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", ("",)); result = codeflash_output # 3.42μs -> 7.51μs (54.5% slower)
def test_edge_none_key():
    # Test with None as key
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", (None,)); result = codeflash_output # 3.25μs -> 7.42μs (56.2% slower)
def test_edge_non_string_key():
    # Test with integer as key
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", (12345,)); result = codeflash_output # 3.24μs -> 7.62μs (57.5% slower)
def test_edge_sensitive_command_case_insensitive():
    # Test sensitive command with mixed case
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "AuTh", ("user", "password")); result = codeflash_output # 3.57μs -> 7.72μs (53.8% slower)
def test_edge_truncation_exact():
    # Test truncation where description is exactly max_data_size
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration(max_data_size=13)
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey",)); result = codeflash_output # 3.61μs -> 8.05μs (55.1% slower)
def test_edge_truncation_needed():
    # Test truncation where description exceeds max_data_size
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration(max_data_size=10)
    codeflash_output = _get_db_span_description(integration, "GET", ("mykey",)); result = codeflash_output # 4.32μs -> 7.96μs (45.8% slower)
def test_edge_truncation_with_filtered():
    # Truncation with filtered data
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration(max_data_size=10)
    codeflash_output = _get_db_span_description(integration, "SET", ("mykey", "myvalue")); result = codeflash_output # 10.3μs -> 8.92μs (15.7% faster)
def test_edge_args_are_bytes():
    # Test arguments are bytes
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "GET", (b"mykey",)); result = codeflash_output # 3.42μs -> 7.54μs (54.7% slower)
def test_edge_args_are_mixed_types():
    # Test arguments are mixed types
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = ("key", 123, None, b"bytes")
    codeflash_output = _get_db_span_description(integration, "SET", args); result = codeflash_output # 13.7μs -> 8.31μs (65.1% faster)
    expected = "SET 'key' 123 None b'bytes'"
def test_edge_args_are_empty_tuple():
    # Test arguments is empty tuple
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "PING", ()); result = codeflash_output # 2.14μs -> 6.67μs (67.9% slower)
def test_edge_args_are_list():
    # Test arguments as a list (should still work as sequence)
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    codeflash_output = _get_db_span_description(integration, "SET", ["key", "val"]); result = codeflash_output # 8.54μs -> 7.96μs (7.30% faster)
def test_edge_args_are_dict():
    # Test arguments as a dict (should treat as sequence of keys)
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = {"a": 1, "b": 2}
    codeflash_output = _get_db_span_description(integration, "SET", args); result = codeflash_output # 7.87μs -> 7.86μs (0.102% faster)
def test_edge_args_are_long_string():
    # Test argument is a very long string (truncation)
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration(max_data_size=20)
    long_str = "x" * 100
    codeflash_output = _get_db_span_description(integration, "SET", (long_str,)); result = codeflash_output # 4.46μs -> 8.43μs (47.1% slower)
# 3. Large Scale Test Cases
def test_large_many_args_no_pii():
    # Test with large number of arguments, PII disabled
    Scope._client = ClientStub(send_pii=False)
    integration = RedisIntegration()
    args = tuple(f"key{i}" for i in range(999))
    codeflash_output = _get_db_span_description(integration, "MGET", args); result = codeflash_output # 28.6μs -> 10.6μs (171% faster)
    # Only first is shown, rest are filtered (up to _MAX_NUM_ARGS)
    expected = "MGET 'key0'" + " [Filtered]" * _MAX_NUM_ARGS
def test_large_many_args_with_pii():
    # Test with large number of arguments, PII enabled
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = tuple(f"key{i}" for i in range(999))
    codeflash_output = _get_db_span_description(integration, "MGET", args); result = codeflash_output # 30.9μs -> 9.87μs (213% faster)
    # Only up to _MAX_NUM_ARGS are shown
    expected = "MGET " + " ".join([repr(arg) for arg in args[:_MAX_NUM_ARGS+1]])
def test_large_truncation():
    # Test truncation with large description
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration(max_data_size=50)
    args = tuple("x" * 10 for _ in range(20))
    codeflash_output = _get_db_span_description(integration, "MGET", args); result = codeflash_output # 31.0μs -> 10.4μs (198% faster)
def test_large_sensitive_command():
    # Test large sensitive command, should always be filtered
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = tuple("x" * 10 for _ in range(20))
    codeflash_output = _get_db_span_description(integration, "AUTH", args); result = codeflash_output # 5.42μs -> 9.30μs (41.8% slower)
def test_large_args_are_large_numbers():
    # Test with large integer arguments
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = tuple(10**6 + i for i in range(_MAX_NUM_ARGS + 1))
    codeflash_output = _get_db_span_description(integration, "MGET", args); result = codeflash_output # 27.6μs -> 9.38μs (194% faster)
    expected = "MGET " + " ".join([repr(arg) for arg in args[:_MAX_NUM_ARGS+1]])
def test_large_args_are_large_bytes():
    # Test with large bytes arguments
    Scope._client = ClientStub(send_pii=True)
    integration = RedisIntegration()
    args = tuple(b"x" * 100 for _ in range(_MAX_NUM_ARGS + 1))
    codeflash_output = _get_db_span_description(integration, "MGET", args); result = codeflash_output # 30.2μs -> 9.35μs (223% faster)
    expected = "MGET " + " ".join([repr(arg) for arg in args[:_MAX_NUM_ARGS+1]])
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
```
</details>
To edit these changes, `git checkout codeflash/optimize-_get_db_span_description-mg9vzvxu` and push.
Co-authored-by: codeflash-ai[bot] <148906541+codeflash-ai[bot]@users.noreply.github.com>
    ### Description openai uses `Omit` now instead of `NotGiven` openai/openai-python@8260288 #### Issues * resolves: getsentry#4923 * resolves: PY-1885
Check the `call_type` value to distinguish embeddings from chats. The `client` decorator sets `call_type` by introspecting the function name and wraps all of the top-level `litellm` functions. If users import from `litellm.llms`, embedding calls still may appear as chats, but the input callback we provide does not have enough information in that case. Closes getsentry#4908
### Description when async generators throw a `GeneratorExit` we end up with ``` ValueError: <Token var=<ContextVar name='current_scope' default=None at 0x7f04cf05fb50> at 0x7f04ceb17340> was created in a different Context ``` so just catch that and rely on GC to cleanup the contextvar since we can't be smarter than that anyway for this case. #### Issues * resolves: getsentry#4925 * resolves: PY-1886
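A generic illustration of the failure mode and the fix, using plain `contextvars` rather than the SDK's scope code: `ContextVar.reset()` raises `ValueError` when the token was created in a different `Context`, which is exactly what happens when an async generator is closed via `GeneratorExit`.

```python
# Generic illustration -- not the SDK's code.
import contextvars

current_scope = contextvars.ContextVar("current_scope", default=None)

def reset_scope(token):
    try:
        current_scope.reset(token)
    except ValueError:
        # The generator was closed (GeneratorExit) in a different Context
        # than the one that set the token; the ContextVar is cleaned up
        # together with that Context, so it is safe to ignore.
        pass
```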
  
📄 37% (0.37x) speedup for `extract_sentrytrace_data` in `sentry_sdk/tracing_utils.py`

⏱️ Runtime: 3.30 milliseconds → 2.41 milliseconds (best of 124 runs)
✅ Correctness verification report:

⚙️ Existing Unit Tests and Runtime: `tracing/test_http_headers.py::test_sentrytrace_extraction`
🌀 Generated Regression Tests and Runtime

To edit these changes, `git checkout codeflash/optimize-extract_sentrytrace_data-mg9m9ul7` and push.