From 7c9ba02539cd95caafdb913f41841d54e1c51f36 Mon Sep 17 00:00:00 2001 From: Colin Chartier Date: Wed, 24 Apr 2024 11:04:49 -0400 Subject: [PATCH 1/4] Langchain integration docs --- .../python/integrations/langchain/index.mdx | 92 +++++++++++++++++++ 1 file changed, 92 insertions(+) create mode 100644 docs/platforms/python/integrations/langchain/index.mdx diff --git a/docs/platforms/python/integrations/langchain/index.mdx b/docs/platforms/python/integrations/langchain/index.mdx new file mode 100644 index 0000000000000..56fab773b3f0e --- /dev/null +++ b/docs/platforms/python/integrations/langchain/index.mdx @@ -0,0 +1,92 @@ +--- +title: Langchain +description: "Learn about using Sentry for Langchain." +--- + +This integration connects Sentry with [Langchain](https://github.com/langchain-ai/langchain). +The integration has been confirmed to work with Langchain 0.1.11. + +## Install + +Install `sentry-sdk` from PyPI and the appropriate langchain packages: + +```bash +pip install --upgrade 'sentry-sdk' 'langchain-openai' 'langchain-core' +``` + +## Configure + +If you have the `langchain` package in your dependencies, the Langchain integration will be enabled automatically when you initialize the Sentry SDK. + +An additional dependency, `tiktoken`, is required if you want to calculate token usage for streaming chat responses. + + + +```python +from langchain_openai import ChatOpenAI +import sentry_sdk + +sentry_sdk.init( + dsn="___PUBLIC_DSN___", + enable_tracing=True, + traces_sample_rate=1.0, + send_default_pii=True, # send personally-identifiable information like LLM responses to sentry +) + +llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0) +``` + +## Verify + +Verify that the integration works by inducing an error. The error and performance transaction should appear in your Sentry project. + +```python +from langchain_openai import ChatOpenAI +import sentry_sdk + +sentry_sdk.init(...) 
# same as above

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0, api_key="bad API key")
with sentry_sdk.start_transaction(op="ai-inference", name="The result of the AI inference"):
    response = llm.invoke([("system", "What is the capital of France?")])
    print(response)
```

After running this script, a transaction will be created in the Performance section of [sentry.io](https://sentry.io). Additionally, an error event (about the bad API key) will be sent to [sentry.io](https://sentry.io) and will be connected to the transaction.

It may take a couple of moments for the data to appear in [sentry.io](https://sentry.io).

## Behavior

- The Langchain integration will connect Sentry with Langchain and automatically monitor all LLM, tool, and function calls.

- All exceptions in the execution of the chain are reported.

- Sentry is configured not to consider LLM and tokenizer inputs/outputs as PII. If you want to include them, set `send_default_pii=True` in the `sentry_sdk.init()` call. To explicitly exclude prompts despite `send_default_pii=True`, configure the integration with `include_prompts=False` like in the Options section.
+
## Options

By adding `LangchainIntegration` to your `sentry_sdk.init()` call explicitly, you can set options for `LangchainIntegration` to change its behavior:

```python
import sentry_sdk
from sentry_sdk.integrations.langchain import LangchainIntegration

sentry_sdk.init(
    dsn="___PUBLIC_DSN___",
    enable_tracing=True,
    send_default_pii=True,
    traces_sample_rate=1.0,
    integrations=[
        LangchainIntegration(
            include_prompts=False, # LLM/tokenizer inputs/outputs will not be sent to Sentry, despite send_default_pii=True
        ),
    ],
)
```

## Supported Versions

- Langchain: 0.1.11+
- tiktoken: 0.6.0+
- Python: 3.9+

From eec9b4b2e615646c2833a71ccfbebef2b6ce24a0 Mon Sep 17 00:00:00 2001
From: Colin Chartier
Date: Wed, 24 Apr 2024 16:57:33 -0400
Subject: [PATCH 2/4] Add to python integrations

---
 docs/platforms/python/integrations/index.mdx | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/platforms/python/integrations/index.mdx b/docs/platforms/python/integrations/index.mdx
index a0e9ce7bd0565..7b21cbadc22ec 100644
--- a/docs/platforms/python/integrations/index.mdx
+++ b/docs/platforms/python/integrations/index.mdx
@@ -35,9 +35,10 @@ The Sentry SDK uses integrations to hook into the functionality of popular libra

 ## AI

-| | **Auto enabled** |
-| ------------------------------------------------------------------------------------------------------------------ | :--------------: |
-| | ✓ |
+| | **Auto enabled** |
+|-----------------------------------------------------------------------------------------------------------------------|:----------------:|
+| | ✓ |
+| | ✓ |

 ## Data Processing

From 9288ae8d835e432b70c08d3c39c51cd57b281395 Mon Sep 17 00:00:00 2001
From: Colin Chartier
Date: Wed, 24 Apr 2024 17:02:03 -0400
Subject: [PATCH 3/4] Review comments

---
 docs/platforms/python/integrations/langchain/index.mdx | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git 
a/docs/platforms/python/integrations/langchain/index.mdx b/docs/platforms/python/integrations/langchain/index.mdx
index 56fab773b3f0e..1f7a3d6649b8e 100644
--- a/docs/platforms/python/integrations/langchain/index.mdx
+++ b/docs/platforms/python/integrations/langchain/index.mdx
@@ -18,7 +18,7 @@ pip install --upgrade 'sentry-sdk' 'langchain-openai' 'langchain-core'

 If you have the `langchain` package in your dependencies, the Langchain integration will be enabled automatically when you initialize the Sentry SDK.

-An additional dependency, `tiktoken`, is required if you want to calculate token usage for streaming chat responses.
+An additional dependency, `tiktoken`, must be installed if you want to calculate token usage for streaming chat responses.

@@ -38,7 +38,7 @@ llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

 ## Verify

-Verify that the integration works by inducing an error. The error and performance transaction should appear in your Sentry project.
+Verify that the integration works by inducing an error:

 ```python
 from langchain_openai import ChatOpenAI

@@ -62,7 +62,7 @@ It may take a couple of moments for the data to appear in [sentry.io](https://se

 - All exceptions in the execution of the chain are reported.

-- Sentry is configured not to consider LLM and tokenizer inputs/outputs as PII. If you want to include them, set `send_default_pii=True` in the `sentry_sdk.init()` call. To explicitly exclude prompts despite `send_default_pii=True`, configure the integration with `include_prompts=False` like in the Options section.
+- Sentry by default considers LLM and tokenizer inputs/outputs as PII. If you want to include them, set `send_default_pii=True` in the `sentry_sdk.init()` call. To explicitly exclude prompts and outputs despite `send_default_pii=True`, configure the integration with `include_prompts=False` as shown in the [Options section](#options) below.
## Options

From 8e8a68c4b75734e622609a9e70c97d829f32b274 Mon Sep 17 00:00:00 2001
From: colin-sentry <161344340+colin-sentry@users.noreply.github.com>
Date: Thu, 25 Apr 2024 12:56:30 -0400
Subject: [PATCH 4/4] Update docs/platforms/python/integrations/langchain/index.mdx

Co-authored-by: vivianyentran <20403606+vivianyentran@users.noreply.github.com>
---
 docs/platforms/python/integrations/langchain/index.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/platforms/python/integrations/langchain/index.mdx b/docs/platforms/python/integrations/langchain/index.mdx
index 1f7a3d6649b8e..a4a53630e99ea 100644
--- a/docs/platforms/python/integrations/langchain/index.mdx
+++ b/docs/platforms/python/integrations/langchain/index.mdx
@@ -62,7 +62,7 @@ It may take a couple of moments for the data to appear in [sentry.io](https://se

 - All exceptions in the execution of the chain are reported.

-- Sentry by default considers LLM and tokenizer inputs/outputs as PII. If you want to include them, set `send_default_pii=True` in the `sentry_sdk.init()` call. To explicitly exclude prompts and outputs despite `send_default_pii=True`, configure the integration with `include_prompts=False` as shown in the [Options section](#options) below.
+- Sentry considers LLM and tokenizer inputs/outputs as PII and, by default, does not include PII data. If you want to include the data, set `send_default_pii=True` in the `sentry_sdk.init()` call. To explicitly exclude prompts and outputs despite `send_default_pii=True`, configure the integration with `include_prompts=False` as shown in the [Options section](#options) below.

 ## Options
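The interaction between `send_default_pii` and `include_prompts` that these patches document can be sketched as a simple predicate. Note this is a hypothetical helper written for illustration only, not part of the Sentry SDK's actual API; it just mirrors the gating rule the final revision describes:

```python
def prompts_are_sent(send_default_pii: bool, include_prompts: bool = True) -> bool:
    """Mirror the documented behavior: LLM/tokenizer inputs and outputs
    reach Sentry only when PII sending is enabled in sentry_sdk.init()
    AND the integration has not opted out via include_prompts=False."""
    return send_default_pii and include_prompts

# Default: send_default_pii is False, so prompts are excluded.
assert prompts_are_sent(send_default_pii=False) is False
# Opting in with send_default_pii=True includes prompts...
assert prompts_are_sent(send_default_pii=True) is True
# ...unless LangchainIntegration(include_prompts=False) explicitly excludes them.
assert prompts_are_sent(send_default_pii=True, include_prompts=False) is False
```

In other words, `include_prompts=False` only ever narrows what is sent; it cannot force prompts to be sent when `send_default_pii` is off.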