
Base LangSmith content migration #134


Merged · 10 commits · Aug 12, 2025
455 changes: 442 additions & 13 deletions src/docs.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion src/langgraph-platform/datasets-studio.mdx
@@ -11,4 +11,4 @@ This guide shows how to add examples to [LangSmith datasets](https://docs.smith.
5. Edit the example's input/output as needed before adding it to the dataset.
6. Select "Add to dataset" at the bottom of the page to add all selected nodes to their respective datasets.

-See [Evaluating intermediate steps](https://docs.smith.langchain.com/evaluation/how_to_guides/langgraph#evaluating-intermediate-steps) for more details on how to evaluate intermediate steps.
+See [Evaluating intermediate steps](https://docs.smith.langchain.com/langsmith/langgraph#evaluating-intermediate-steps) for more details on how to evaluate intermediate steps.
@@ -2,7 +2,7 @@
title: How to deploy self-hosted full platform
sidebarTitle: Deploy self-hosted full platform
---
-Before deploying, review the [conceptual guide for the Self-Hosted Full Platform](/langgraph-platform/self-hosted-full-platform) deployment option.
+Before deploying, review the [conceptual guide for the Self-Hosted Full Platform](/langgraph-platform/self-hosted) deployment option.

<Info>
**Important**
2 changes: 1 addition & 1 deletion src/langgraph-platform/iterate-graph-studio.mdx
@@ -125,7 +125,7 @@ class Configuration:

## LangSmith Playground

-The [LangSmith Playground](https://docs.smith.langchain.com/prompt_engineering/how_to_guides#playground) interface allows testing individual LLM calls without running the full graph:
+The [LangSmith Playground](https://docs.smith.langchain.com/langsmith/create-a-prompt) interface allows testing individual LLM calls without running the full graph:

1. Select a thread
2. Click "View LLM Runs" on a node. This lists all the LLM calls (if any) made inside the node.
4 changes: 2 additions & 2 deletions src/langgraph-platform/run-evals-studio.mdx
@@ -12,9 +12,9 @@ Before running an experiment, ensure you have the following:

1. **A LangSmith dataset**: Your dataset should contain the inputs you want to test and optionally, reference outputs for comparison.
* The schema for the inputs must match the required input schema for the assistant. For more information on schemas, see [here](https://langchain-ai.github.io/langgraph/concepts/low_level/#schema).
-* For more on creating datasets, see [How to Manage Datasets](https://docs.smith.langchain.com/evaluation/how_to_guides/manage_datasets_in_application#set-up-your-dataset).
+* For more on creating datasets, see [How to Manage Datasets](https://docs.smith.langchain.com/langsmith/manage-datasets-in-application#set-up-your-dataset).
2. **(Optional) Evaluators**: You can attach evaluators (e.g., LLM-as-a-Judge, heuristics, or custom functions) to your dataset in LangSmith; a sketch of a custom function appears after this list. These will run automatically after the graph has processed all inputs.
-* To learn more, read about [Evaluation Concepts](https://docs.smith.langchain.com/evaluation/concepts#evaluators).
+* To learn more, read about [Evaluation Concepts](https://docs.smith.langchain.com/langsmith/evaluation-overview#evaluators).
3. **A running application**: The experiment can be run against:
* An application deployed on [LangGraph Platform](/langgraph-platform/deployment-quickstart).
* A locally running application started via the [langgraph-cli](/langgraph-platform/local-server).
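
For illustration, a custom-function evaluator in the LangSmith SDK is simply a function that receives the traced run and the dataset example it ran against, and returns a feedback score. A minimal sketch — the `exact_match` name and the `output` key are assumptions about your dataset schema, not part of this guide:

```python
# Minimal sketch of a custom evaluator.
# `run` is the traced execution; `example` is the dataset example it ran against.
# The "output" key is an assumed field in both the run outputs and the reference outputs.
def exact_match(run, example):
    return {
        "key": "exact_match",
        "score": run.outputs["output"] == example.outputs["output"],
    }
```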
134 changes: 134 additions & 0 deletions src/langsmith/access-current-span.mdx
@@ -0,0 +1,134 @@
---
title: Access the current run (span) within a traced function
sidebarTitle: Access the current run (span) within a traced function
---

In some cases you will want to access the current run (span) within a traced function. This can be useful for extracting UUIDs, tags, or other information from the current run.

You can access the current run by calling the `get_current_run_tree`/`getCurrentRunTree` function in the Python or TypeScript SDK, respectively.

For a full list of available properties on the `RunTree` object, see [this reference](/langsmith/run-data-format).

<Tabs>
<Tab title="Python">
```python
from langsmith import traceable
from langsmith.run_helpers import get_current_run_tree
from openai import Client

openai = Client()

@traceable
def format_prompt(subject):
    run = get_current_run_tree()
    print(f"format_prompt Run Id: {run.id}")
    print(f"format_prompt Trace Id: {run.trace_id}")
    print(f"format_prompt Parent Run Id: {run.parent_run.id}")
    return [
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": f"What's a good name for a store that sells {subject}?"
        }
    ]

@traceable(run_type="llm")
def invoke_llm(messages):
    run = get_current_run_tree()
    print(f"invoke_llm Run Id: {run.id}")
    print(f"invoke_llm Trace Id: {run.trace_id}")
    print(f"invoke_llm Parent Run Id: {run.parent_run.id}")
    return openai.chat.completions.create(
        messages=messages, model="gpt-4o-mini", temperature=0
    )

@traceable
def parse_output(response):
    run = get_current_run_tree()
    print(f"parse_output Run Id: {run.id}")
    print(f"parse_output Trace Id: {run.trace_id}")
    print(f"parse_output Parent Run Id: {run.parent_run.id}")
    return response.choices[0].message.content

@traceable
def run_pipeline():
    run = get_current_run_tree()
    print(f"run_pipeline Run Id: {run.id}")
    print(f"run_pipeline Trace Id: {run.trace_id}")
    messages = format_prompt("colorful socks")
    response = invoke_llm(messages)
    return parse_output(response)

run_pipeline()
```
</Tab>
<Tab title="TypeScript">
```typescript
import { traceable, getCurrentRunTree } from "langsmith/traceable";
import OpenAI from "openai";

const openai = new OpenAI();

const formatPrompt = traceable((subject: string) => {
  const run = getCurrentRunTree();
  console.log("formatPrompt Run ID", run.id);
  console.log("formatPrompt Trace ID", run.trace_id);
  console.log("formatPrompt Parent Run ID", run.parent_run.id);
  return [
    {
      role: "system" as const,
      content: "You are a helpful assistant.",
    },
    {
      role: "user" as const,
      content: `What's a good name for a store that sells ${subject}?`,
    },
  ];
}, { name: "formatPrompt" });

const invokeLLM = traceable(
  async (messages: { role: string; content: string }[]) => {
    const run = getCurrentRunTree();
    console.log("invokeLLM Run ID", run.id);
    console.log("invokeLLM Trace ID", run.trace_id);
    console.log("invokeLLM Parent Run ID", run.parent_run.id);
    return openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: messages,
      temperature: 0,
    });
  },
  { run_type: "llm", name: "invokeLLM" }
);

const parseOutput = traceable(
  (response: any) => {
    const run = getCurrentRunTree();
    console.log("parseOutput Run ID", run.id);
    console.log("parseOutput Trace ID", run.trace_id);
    console.log("parseOutput Parent Run ID", run.parent_run.id);
    return response.choices[0].message.content;
  },
  { name: "parseOutput" }
);

const runPipeline = traceable(
  async () => {
    const run = getCurrentRunTree();
    console.log("runPipeline Run ID", run.id);
    console.log("runPipeline Trace ID", run.trace_id);
    console.log("runPipeline Parent Run ID", run.parent_run?.id);
    const messages = await formatPrompt("colorful socks");
    const response = await invokeLLM(messages);
    return parseOutput(response);
  },
  { name: "runPipeline" }
);

await runPipeline();
```
</Tab>
</Tabs>
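
Note that the root run of a trace has no parent, so guard `parent_run` before dereferencing it when a traced function may be invoked at the top level (as the TypeScript example does with `run.parent_run?.id`). A minimal sketch of that check in Python:

```python
from langsmith import traceable
from langsmith.run_helpers import get_current_run_tree

@traceable
def maybe_root():
    run = get_current_run_tree()
    # parent_run is None when this run is the root of the trace
    parent_id = run.parent_run.id if run.parent_run else None
    print(f"Run Id: {run.id}, Parent Run Id: {parent_id}")

maybe_root()
```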
120 changes: 120 additions & 0 deletions src/langsmith/add-metadata-tags.mdx
@@ -0,0 +1,120 @@
---
title: Add metadata and tags to traces
sidebarTitle: Add metadata and tags to traces
---

LangSmith supports sending arbitrary metadata and tags along with traces.

Tags are strings that can be used to categorize or label a trace. Metadata is a dictionary of key-value pairs that can be used to store additional information about a trace.

Both are useful for associating additional information with a trace, such as the environment in which it was executed, the user who initiated it, or an internal correlation ID. For more information on tags and metadata, see the [Concepts](/langsmith/observability-overview#tags) page. For information on how to query traces and runs by metadata and tags, see the [Filter traces in the application](/langsmith/filter-traces-in-application) page.

<Tabs>
<Tab title="Python">
```python
import openai
import langsmith as ls
from langsmith.wrappers import wrap_openai

client = openai.Client()
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
]

# You can set metadata & tags **statically** when decorating a function.
# Use the @traceable decorator with tags and metadata.
# Ensure that the LANGSMITH_TRACING environment variable is set for @traceable to work.
@ls.traceable(
    run_type="llm",
    name="OpenAI Call Decorator",
    tags=["my-tag"],
    metadata={"my-key": "my-value"}
)
def call_openai(
    messages: list[dict], model: str = "gpt-4o-mini"
) -> str:
    # You can also dynamically set metadata on the parent run:
    rt = ls.get_current_run_tree()
    rt.metadata["some-conditional-key"] = "some-val"
    rt.tags.extend(["another-tag"])
    return client.chat.completions.create(
        model=model,
        messages=messages,
    ).choices[0].message.content

call_openai(
    messages,
    # To add at **invocation time**, when calling the function,
    # pass the langsmith_extra parameter:
    langsmith_extra={"tags": ["my-other-tag"], "metadata": {"my-other-key": "my-value"}}
)

# Alternatively, you can use the context manager
with ls.trace(
    name="OpenAI Call Trace",
    run_type="llm",
    inputs={"messages": messages},
    tags=["my-tag"],
    metadata={"my-key": "my-value"},
) as rt:
    chat_completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    rt.metadata["some-conditional-key"] = "some-val"
    rt.end(outputs={"output": chat_completion})

# You can use the same techniques with the wrapped client
patched_client = wrap_openai(
    client, tracing_extra={"metadata": {"my-key": "my-value"}, "tags": ["a-tag"]}
)
chat_completion = patched_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
    langsmith_extra={
        "tags": ["my-other-tag"],
        "metadata": {"my-other-key": "my-value"},
    },
)
```
</Tab>
<Tab title="TypeScript">
```typescript
import OpenAI from "openai";
import { traceable, getCurrentRunTree } from "langsmith/traceable";
import { wrapOpenAI } from "langsmith/wrappers";

const client = wrapOpenAI(new OpenAI());
const messages: OpenAI.Chat.ChatCompletionMessageParam[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" },
];

const traceableCallOpenAI = traceable(
  async (messages: OpenAI.Chat.ChatCompletionMessageParam[]) => {
    const completion = await client.chat.completions.create({
      model: "gpt-4o-mini",
      messages,
    });
    const runTree = getCurrentRunTree();
    runTree.extra.metadata = {
      ...runTree.extra.metadata,
      someKey: "someValue",
    };
    runTree.tags = [...(runTree.tags ?? []), "runtime-tag"];
    return completion.choices[0].message.content;
  },
  {
    run_type: "llm",
    name: "OpenAI Call Traceable",
    tags: ["my-tag"],
    metadata: { "my-key": "my-value" },
  }
);

// Call the traceable function
await traceableCallOpenAI(messages);
```
</Tab>
</Tabs>
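
Once tags and metadata are attached, you can also retrieve matching runs programmatically. A sketch using the SDK's run filter syntax — the project name here is a placeholder, not from this guide:

```python
from langsmith import Client

client = Client()
# List runs in a project that carry a given tag; "my-project" is hypothetical.
runs = client.list_runs(
    project_name="my-project",
    filter='has(tags, "my-tag")',
)
for run in runs:
    print(run.id, run.tags)
```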