diff --git a/docs.json b/docs.json
index 53b4713030..c3521b18e7 100644
--- a/docs.json
+++ b/docs.json
@@ -706,18 +706,46 @@
"pages": [
"weave/concepts/what-is-weave",
{
- "group": "Track your application",
+ "group": "Trace your application",
"pages": [
- "weave/guides/tracking/tracing",
- "weave/tutorial-tracing_2",
+ {
+ "group": "Tracing basics",
+ "pages": [
+ "weave/guides/tracking/tracing",
+ "weave/guides/tracking/create-call",
+ "weave/guides/tracking/trace-tree",
+ "weave/guides/tracking/querying-calls"
+ ]
+ },
+ {
+ "group": "Advanced tracing",
+ "pages": [
+ "weave/guides/tracking/trace-generator-func",
+ "weave/tutorial-tracing_2",
+ "weave/guides/tracking/threads",
+ "weave/guides/tracking/ops",
+ "weave/guides/tools/attributes",
+ "weave/guides/core-types/media",
+ "weave/guides/tracking/view-call"
+ ]
+ },
+ {
+ "group": "Work with Calls",
+ "pages": [
+ "weave/guides/tracking/update-call",
+ "weave/guides/tracking/call-schema-reference",
+ "weave/guides/tracking/get-call-object",
+ "weave/guides/tracking/set-call-display"
+ ]
+ },
+ "weave/guides/tracking/trace-disable",
"weave/guides/tracking/costs",
- "weave/guides/core-types/media",
+
"weave/guides/tools/saved-views",
"weave/guides/tools/comparison",
- "weave/guides/tracking/trace-tree",
- "weave/guides/tracking/threads",
"weave/guides/tracking/trace-plots",
"weave/guides/tools/attributes",
+ "weave/guides/tracking/trace-to-run",
"weave/guides/tools/weave-in-workspaces"
]
},
@@ -731,7 +759,6 @@
"weave/guides/evaluation/weave_local_scorers",
"weave/guides/evaluation/evaluation_logger",
"weave/guides/core-types/leaderboards",
- "weave/guides/tools/attributes",
"weave/guides/tools/column-mapping",
"weave/guides/evaluation/dynamic_leaderboards"
]
@@ -750,8 +777,7 @@
"weave/tutorial-weave_models",
"weave/guides/core-types/models",
"weave/guides/core-types/prompts",
- "weave/guides/tracking/objects",
- "weave/guides/tracking/ops"
+ "weave/guides/tracking/objects"
]
},
{
@@ -768,6 +794,7 @@
"group": "Integrate with your LLM provider and frameworks",
"pages": [
"weave/guides/integrations",
+ "weave/guides/integrations/autopatching",
{
"group": "LLM Providers",
"pages": [
diff --git a/images/export_modal.png b/images/export_modal.png
deleted file mode 100644
index b9db34254c..0000000000
Binary files a/images/export_modal.png and /dev/null differ
diff --git a/images/screenshots/basic_call.png b/images/screenshots/basic_call.png
deleted file mode 100644
index 04a4b6e693..0000000000
Binary files a/images/screenshots/basic_call.png and /dev/null differ
diff --git a/images/screenshots/calls_filter.png b/images/screenshots/calls_filter.png
deleted file mode 100644
index 153515c7f2..0000000000
Binary files a/images/screenshots/calls_filter.png and /dev/null differ
diff --git a/images/screenshots/calls_macro.png b/images/screenshots/calls_macro.png
deleted file mode 100644
index 94597acbc1..0000000000
Binary files a/images/screenshots/calls_macro.png and /dev/null differ
diff --git a/ja/weave/guides/tracking/tracing.mdx b/ja/weave/guides/tracking/tracing.mdx
index b64d066a53..79cffdca56 100644
--- a/ja/weave/guides/tracking/tracing.mdx
+++ b/ja/weave/guides/tracking/tracing.mdx
@@ -3,18 +3,7 @@ title: トレースの基本
description: Weave のトレース機能を使用して、 AI アプリケーション の実行を追跡・モニタリングします
---
-
-
-
-
-
-
-
-
-
-
-
-
+---
Call は Weave における基本的な構成要素です。これらは関数の単一の実行を表し、以下を含みます。
- Inputs (引数)
@@ -535,7 +524,6 @@ API を直接使用して手動で Call を作成することもできます。
詳細ページには、Call の入力、出力、実行時間、および追加のメタデータが表示されます。
-
Weave Python SDK を使用して Call を表示するには、[`get_call`](/weave/reference/python-sdk/trace/weave_client#method-get_call) メソッドを使用できます。
@@ -805,19 +793,11 @@ curl -L 'https://trace.wandb.ai/calls/delete' \
## Querying and exporting Calls
-
-
-
-
プロジェクトの `/calls` ページ ("Traces" タブ) には、プロジェクト内のすべての Call のテーブルビューが表示されます。そこでは以下のことが可能です。
* ソート
* フィルタリング
* エクスポート
-
-
-
-
エクスポートモーダル(上記)では、データをさまざまな形式でエクスポートできるほか、選択した Call に対応する Python および CURL のコードスニペットも表示されます。
UI でビューを作成してから、生成されたコードスニペットを通じてエクスポート API について学ぶのが最も簡単な方法です。
diff --git a/ko/weave/guides/tracking/tracing.mdx b/ko/weave/guides/tracking/tracing.mdx
index 214a5988f1..9b5eb51024 100644
--- a/ko/weave/guides/tracking/tracing.mdx
+++ b/ko/weave/guides/tracking/tracing.mdx
@@ -3,18 +3,7 @@ title: Tracing 기초
description: Weave tracing을 사용하여 AI 애플리케이션의 실행을 추적하고 모니터링하세요.
---
-
-
-
-
-
-
-
-
-
-
-
-
+---
Calls는 Weave 의 핵심 빌드 블록입니다. 이는 다음을 포함한 단일 함수 실행을 나타냅니다:
- Inputs (인수)
@@ -534,7 +523,6 @@ API를 직접 사용하여 수동으로 Calls를 생성할 수도 있습니다.
상세 페이지에는 호출의 입력, 출력, 런타임 및 추가 메타데이터가 표시됩니다.
-
Weave Python SDK를 사용하여 호출을 보려면 [`get_call`](/weave/reference/python-sdk/trace/weave_client#method-get_call) 메소드를 사용할 수 있습니다:
@@ -804,19 +792,11 @@ curl -L 'https://trace.wandb.ai/calls/delete' \
## Querying and exporting Calls
-
-
-
-
프로젝트의 `/calls` 페이지("Traces" 탭)에는 프로젝트의 모든 Calls에 대한 테이블 뷰가 포함되어 있습니다. 여기에서 다음을 수행할 수 있습니다:
* 정렬
* 필터링
* 내보내기
-
-
-
-
내보내기 모달(위 그림 참조)을 사용하면 다양한 형식으로 데이터를 내보낼 수 있을 뿐만 아니라, 선택한 호출에 해당하는 Python 및 CURL 코드를 보여줍니다!
가장 쉽게 시작하는 방법은 UI에서 뷰를 구성한 다음, 생성된 코드 조각을 통해 내보내기 API에 대해 자세히 알아보는 것입니다.
diff --git a/weave/guides/integrations.mdx b/weave/guides/integrations.mdx
index 8dd648a4ef..3a79bc9262 100644
--- a/weave/guides/integrations.mdx
+++ b/weave/guides/integrations.mdx
@@ -3,50 +3,15 @@ title: Integrations overview
description: "Seamlessly trace and monitor LLM calls across 30+ providers and frameworks with Weave's automatic patching, supporting OpenAI, Anthropic, Google AI, and major orchestration tools without code changes."
---
-# Integrations
-Weave provides **automatic implicit patching** for all supported integrations by default:
-**Implicit Patching (Automatic):** Libraries are automatically patched regardless of when they are imported.
-```python lines
-# Option 1: Import before weave.init()
-import openai
-import weave
-weave.init('my-project') # OpenAI is automatically patched!
-
-# Option 2: Import after weave.init()
-import weave
-weave.init('my-project')
-import anthropic # Automatically patched via import hook!
-```
-
-**Disabling Implicit Patching:** You can disable automatic patching if you prefer explicit control.
-
-```python lines
-import weave
-
-# Option 1: Via settings parameter
-weave.init('my-project', settings={'implicitly_patch_integrations': False})
-
-# Option 2: Via environment variable
-# Set WEAVE_IMPLICITLY_PATCH_INTEGRATIONS=false before running your script
-
-# With implicit patching disabled, you must explicitly patch integrations
-import openai
-weave.patch_openai() # Now required for OpenAI tracing
-```
+W&B Weave provides logging integrations for popular LLM providers and orchestration frameworks. These integrations allow you to seamlessly trace calls made through various libraries, enhancing your ability to monitor and analyze your AI applications.
-**Explicit Patching (Manual):** You can explicitly patch integrations for fine-grained control.
+If you use LLM provider libraries (such as OpenAI, Anthropic, Cohere, or Mistral) in your application, you want those API calls to show up in W&B Weave as traced Calls: inputs, outputs, latency, token usage, and cost. Without help, you would have to wrap every `client.chat.completions.create()` (or equivalent) in `@weave.op` or manual instrumentation, which is tedious and makes it easy to miss calls.
-```python lines
-import weave
-weave.init('my-project')
-weave.integrations.patch_openai() # Enable OpenAI tracing
-weave.integrations.patch_anthropic() # Enable Anthropic tracing
-```
+Weave automatically intercepts (patches) supported LLM client libraries. Your application code stays unchanged: you use the provider SDK as usual, and each request is recorded as a Weave Call. You get full tracing with minimal setup.
-W&B Weave provides logging integrations for popular LLM providers and orchestration frameworks. These integrations allow you to seamlessly trace calls made through various libraries, enhancing your ability to monitor and analyze your AI applications.
## LLM Providers
diff --git a/weave/guides/integrations/autopatching.mdx b/weave/guides/integrations/autopatching.mdx
new file mode 100644
index 0000000000..651c504e64
--- /dev/null
+++ b/weave/guides/integrations/autopatching.mdx
@@ -0,0 +1,110 @@
+---
+title: "Control automatic LLM call tracking"
+description: "Control how W&B Weave automatically records calls to OpenAI, Anthropic, and other LLM libraries"
+---
+
+
+
+If you use LLM provider libraries (such as OpenAI, Anthropic, Cohere, or Mistral) in your application, autopatching lets Weave trace all of your LLM calls for you. When you call `weave.init()`, Weave automatically intercepts (patches) supported LLM client libraries. Your application code stays unchanged: you use the provider SDK as usual, and each request is recorded as a Weave Call. You get full tracing with minimal setup.
+
+This page describes when and how to change that behavior: turning automatic tracking off, limiting it to specific providers, or post-processing inputs and outputs (for example, to redact PII).
+
+## Default behavior
+
+By default, Weave automatically patches and tracks calls to common LLM libraries such as `openai` and `anthropic`. Call `weave.init(...)` at the start of your program and use those libraries normally. Their calls will appear in your project’s Traces.
+
+## Configure autopatching
+
+
+
+
+The `autopatch_settings` argument is deprecated. Use `implicitly_patch_integrations=False` to disable implicit patching, or call specific patch functions like `patch_openai(settings={...})` to configure settings per integration.
+
+
+Weave provides **automatic implicit patching** for all supported integrations by default:
+
+**Implicit Patching (Automatic):** Libraries are automatically patched regardless of when they are imported.
+
+```python lines
+# Option 1: Import before weave.init()
+import openai
+import weave
+weave.init('your-team-name/your-project-name') # OpenAI is automatically patched!
+
+# Option 2: Import after weave.init()
+import weave
+weave.init('your-team-name/your-project-name')
+import anthropic # Automatically patched via import hook!
+```
+
+**Disabling Implicit Patching:** You can disable automatic patching if you prefer explicit control.
+
+```python lines
+import weave
+
+# Option 1: Via settings parameter
+weave.init('your-team-name/your-project-name', settings={'implicitly_patch_integrations': False})
+
+# Option 2: Via environment variable
+# Set WEAVE_IMPLICITLY_PATCH_INTEGRATIONS=false before running your script
+
+# With implicit patching disabled, you must explicitly patch integrations
+import openai
+weave.patch_openai() # Now required for OpenAI tracing
+```
+
+**Explicit Patching (Manual):** You can explicitly patch integrations for fine-grained control.
+
+```python lines
+import weave
+weave.init('your-team-name/your-project-name')
+weave.integrations.patch_openai() # Enable OpenAI tracing
+weave.integrations.patch_anthropic() # Enable Anthropic tracing
+```
+
+
+### Post-process inputs and outputs
+
+You can customize how inputs and outputs are recorded (for example, to redact PII or secrets) by passing settings to the patch function:
+
+```python lines
+import weave.integrations
+
+def redact_inputs(inputs: dict) -> dict:
+ if "email" in inputs:
+ inputs["email"] = "[REDACTED]"
+ return inputs
+
+weave.init(...)
+weave.integrations.patch_openai(
+ settings={
+ "op_settings": {"postprocess_inputs": redact_inputs}
+ }
+)
+```
+For more on handling sensitive data, see [How to use Weave with PII data](/weave/cookbooks/pii).
+
+
+
+The TypeScript SDK only supports autopatching for OpenAI and Anthropic. OpenAI is automatically patched when you import Weave and does not require any additional configuration.
+
+Additionally, the TypeScript SDK does not support:
+- Configuring or disabling autopatching.
+- Input/output post-processing.
+
+For edge cases where automatic patching does not work (ESM, bundlers like Next.js), use explicit wrapping:
+
+```typescript
+import OpenAI from 'openai'
+import * as weave from 'weave'
+import { wrapOpenAI } from 'weave'
+
+const client = wrapOpenAI(new OpenAI())
+await weave.init('your-team-name/your-project-name')
+```
+
+For more details on ESM setup and troubleshooting, see the [TypeScript SDK Integration Guide](/weave/guides/integrations/js).
+
+
+
+
diff --git a/weave/guides/tracking/call-schema-reference.mdx b/weave/guides/tracking/call-schema-reference.mdx
new file mode 100644
index 0000000000..8a24716c8c
--- /dev/null
+++ b/weave/guides/tracking/call-schema-reference.mdx
@@ -0,0 +1,48 @@
+---
+title: "Call schema reference"
+description: "Reference for the Call object structure and properties"
+---
+
+This page provides a reference for the Call object schema in W&B Weave. For information on querying calls, see [Query and export calls](/weave/guides/tracking/querying-calls).
+
+## Call properties
+
+The table below outlines the key properties of a Call in Weave. For the complete implementation, see:
+- [class: CallSchema](/weave/reference/python-sdk/trace_server/trace_server_interface#class-callschema) in the Python SDK.
+- [Interface: CallSchema](/weave/reference/typescript-sdk/interfaces/callschema) in the TypeScript SDK.
+
+| Property | Type | Description |
+|----------|------|-------------|
+| `id` | string (uuid) | Unique identifier for the call |
+| `project_id` | string (optional) | Associated project identifier |
+| `op_name` | string | Name of the operation (can be a reference) |
+| `display_name` | string (optional) | User-friendly name for the call |
+| `trace_id` | string (uuid) | Identifier for the trace this call belongs to |
+| `parent_id` | string (uuid) | Identifier of the parent call |
+| `started_at` | datetime | Timestamp when the call started |
+| `attributes` | Dict[str, Any] | User-defined metadata about the call *(read-only during execution)* |
+| `inputs` | Dict[str, Any] | Input parameters for the call |
+| `ended_at` | datetime (optional) | Timestamp when the call ended |
+| `exception` | string (optional) | Error message if the call failed |
+| `output` | Any (optional) | Result of the call |
+| `summary` | Optional[SummaryMap] | Post-execution summary information. You can modify this during execution to record custom metrics. |
+| `wb_user_id` | Optional[str] | Associated W&B user ID |
+| `wb_run_id` | Optional[str] | Associated W&B run ID |
+| `deleted_at` | datetime (optional) | Timestamp of call deletion, if applicable |
+
+## Property details
+
+`CallSchema` properties play an important role in tracking and managing function calls:
+- The `id`, `trace_id`, and `parent_id` properties help organize and relate calls within the system.
+- Timing information (`started_at`, `ended_at`) supports performance analysis.
+- The `attributes` and `inputs` properties provide context for the call. Attributes are frozen once the call starts, so set them before invocation with `weave.attributes`. `output` and `summary` capture the results.
+- You can store metrics or other post-call values in the `summary` property. Modify `call.summary` during execution; any values you add are merged with Weave's computed summary data when the Call finishes.
+ - Weave's computed summary data:
+ - `costs`: The total cost of the call based on LLM model usage data and token pricing data. For more information on cost calculation, see [Track costs](/weave/guides/tracking/costs).
+ - `latency_ms`: The duration, in milliseconds, elapsed between `started_at` and `ended_at`. `null` if `status` is `RUNNING`.
+ - `status`: The execution status: `SUCCESS`, `ERROR`, `RUNNING`, `DESCENDANT_ERROR` (meaning the call itself succeeded but a descendant call errored). {/* [empty ref](/weave/reference/python-sdk/trace_server/trace_server_interface#class-tracestatus)*/}
+
+- Integration with W&B is facilitated through `wb_user_id` and `wb_run_id`.
+
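+As a minimal sketch of recording custom summary values (this assumes the `weave.require_current_call` helper is available in your SDK version; check your Weave release):
+
+```python lines
+import weave
+
+weave.init("intro-example")
+
+@weave.op
+def classify(text: str) -> str:
+    # Hypothetical example: grab the in-flight Call and record a custom metric.
+    # Values added here merge with Weave's computed summary when the Call finishes.
+    call = weave.require_current_call()
+    call.summary["input_chars"] = len(text)
+    return "positive"
+
+classify("great product")
+```
+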
+This comprehensive set of properties enables detailed tracking and analysis of function calls throughout your project.
+
diff --git a/weave/guides/tracking/create-call.mdx b/weave/guides/tracking/create-call.mdx
new file mode 100644
index 0000000000..6f78a43b35
--- /dev/null
+++ b/weave/guides/tracking/create-call.mdx
@@ -0,0 +1,291 @@
+---
+title: "Trace your code"
+description: "Instrument your running code so its execution appears as detailed traces in W&B Weave."
+---
+
+To see your running code as detailed traces in Weave, you create Calls. You can do that in three main ways:
+
+## 1. Automatic tracking of LLM library calls
+Weave integrates automatically with many popular LLM libraries and frameworks, such as `openai`, `anthropic`, `cohere`, `mistral`, and `LangChain`.
+Import the LLM or framework library and initialize your Weave project; Weave then automatically traces all Calls made through that library to your project
+without any additional code changes. For a complete list of supported library integrations, see [Integrations overview](/weave/guides/integrations/).
+
+
+
+
+
+ ```python lines
+ import weave
+
+ from openai import OpenAI
+ client = OpenAI()
+
+ # Initialize Weave Tracing
+ weave.init('intro-example')
+
+ response = client.chat.completions.create(
+ model="gpt-4",
+ messages=[
+ {
+ "role": "user",
+ "content": "How are you?"
+ }
+ ],
+ temperature=0.8,
+ max_tokens=64,
+ top_p=1,
+ )
+ ```
+
+
+
+
+ ```typescript lines
+ import OpenAI from 'openai'
+ import * as weave from 'weave'
+
+ const client = new OpenAI()
+
+ // Initialize Weave Tracing
+ await weave.init('intro-example')
+
+ const response = await client.chat.completions.create({
+ model: 'gpt-4',
+ messages: [
+ {
+ role: 'user',
+ content: 'How are you?',
+ },
+ ],
+ temperature: 0.8,
+ max_tokens: 64,
+ top_p: 1,
+ });
+ ```
+
+ For a complete setup guide for JS / TS projects, see the [TypeScript SDK: Third-Party Integration Guide](/weave/guides/integrations/js).
+
+
+
+
+If you want more control over automatic behavior, see [Control automatic LLM call tracking](/weave/guides/integrations/autopatching).
+
+## 2. Tracking of custom functions
+
+Often LLM applications have additional logic (such as pre/post processing, prompts, and more) that you want to track.
+
+
+
+ Weave allows you to manually track these Calls using the [`@weave.op`](/weave/reference/python-sdk/#function-op) decorator. For example:
+
+ ```python lines
+ import weave
+
+ # Initialize Weave Tracing
+ weave.init('intro-example')
+
+ # Decorate your function
+ @weave.op
+ def my_function(name: str):
+ return f"Hello, {name}!"
+
+ # Call your function -- Weave will automatically track inputs and outputs
+ print(my_function("World"))
+ ```
+
+ You can also track [methods on classes](#track-class-and-object-methods).
+
+
+ Weave allows you to manually track these Calls by wrapping your function with [`weave.op`](/weave/reference/typescript-sdk/functions/op). For example:
+
+ ```typescript lines
+ import * as weave from 'weave'
+
+ await weave.init('intro-example')
+
+ function myFunction(name: string) {
+ return `Hello, ${name}!`
+ }
+
+ const myFunctionOp = weave.op(myFunction)
+ ```
+
+ You can also define the wrapping inline:
+
+ ```typescript lines
+ const myFunctionOp = weave.op((name: string) => `Hello, ${name}!`)
+ ```
+
+ This works for both functions as well as methods on classes:
+
+ ```typescript lines
+ class MyClass {
+ constructor() {
+ this.myMethod = weave.op(this.myMethod)
+ }
+
+ myMethod(name: string) {
+ return `Hello, ${name}!`
+ }
+ }
+ ```
+
+
+
+
+### Track class and object methods
+
+You can track any class or object method by decorating it with `weave.op`.
+
+
+
+
+
+ ```python lines
+ import weave
+
+ # Initialize Weave Tracing
+ weave.init("intro-example")
+
+ class MyClass:
+ # Decorate your method
+ @weave.op
+ def my_method(self, name: str):
+ return f"Hello, {name}!"
+
+ instance = MyClass()
+
+ # Call your method -- Weave will automatically track inputs and outputs
+ print(instance.my_method("World"))
+ ```
+
+
+
+ **Using decorators in TypeScript**
+
+ To use the `@weave.op` decorator with your TypeScript code, make sure your environment is properly configured:
+
+ - **TypeScript v5.0 or newer**: Decorators are supported out of the box and no additional configuration is required.
+ - **TypeScript older than v5.0**: Enable experimental support for decorators. For more details, see the [official TypeScript documentation on decorators](https://www.typescriptlang.org/docs/handbook/decorators.html).
+
+
+ You can apply `@weave.op` to instance methods for tracing.
+
+ ```typescript lines
+ class Foo {
+ @weave.op
+ async predict(prompt: string) {
+ return "bar"
+ }
+ }
+ ```
+
+ You can also apply `@weave.op` to static methods to monitor utility functions within a class.
+
+ ```typescript lines
+ class MathOps {
+ @weave.op
+ static square(n: number): number {
+ return n * n;
+ }
+ }
+ ```
+
+
+
+
+### Trace parallel (multi-threaded) function calls
+By default, parallel Calls all show up in Weave as separate root Calls. To get correct nesting under the same parent Op, use `weave.ThreadPoolExecutor`.
+
+
+
+
+ The following code sample demonstrates the use of [`ThreadPoolExecutor`](/weave/reference/python-sdk/trace/util#class-contextawarethreadpoolexecutor).
+ The first function, `func`, is a simple Op that takes `x` and returns `x+1`. The second function, `outer`, is another Op that accepts a list of inputs.
+ Inside `outer`, the use of `ThreadPoolExecutor` and `exc.map(func, inputs)` means that each call to `func` still carries the same parent trace context.
+
+ ```python lines
+ import weave
+
+ @weave.op
+ def func(x):
+ return x+1
+
+ @weave.op
+ def outer(inputs):
+ with weave.ThreadPoolExecutor() as exc:
+ exc.map(func, inputs)
+
+ # Update your Weave project name
+ client = weave.init('my-weave-project')
+ outer([1,2,3,4,5])
+ ```
+
+
+
+
+ ```plaintext
+ This feature is not available in the TypeScript SDK yet.
+ ```
+
+
+In the Weave UI, this produces a single parent Call with five nested child Calls, so that you get a fully hierarchical trace even though the increments run in parallel.
+
+
+
+## 3. Manual Call tracking
+
+You can also manually create Calls using the API directly.
+
+
+
+ ```python lines
+ import weave
+
+ # Initialize Weave Tracing
+ client = weave.init('intro-example')
+
+ def my_function(name: str):
+ # Start a Call
+ call = client.create_call(op="my_function", inputs={"name": name})
+
+ # ... your function code ...
+
+ # End a Call
+ client.finish_call(call, output="Hello, World!")
+
+ # Call your function
+ print(my_function("World"))
+ ```
+
+
+
+
+ ```plaintext
+ This feature is not available in the TypeScript SDK yet.
+ ```
+
+
+ * Start a Call: [POST `/call/start`](https://docs.wandb.ai/weave/reference/service-api/calls/call-start).
+ * End a Call: [POST `/call/end`](https://docs.wandb.ai/weave/reference/service-api/calls/call-end).
+ ```bash lines
+ curl -L 'https://trace.wandb.ai/call/start' \
+ -H 'Content-Type: application/json' \
+ -H 'Accept: application/json' \
+ -d '{
+ "start": {
+ "project_id": "string",
+ "id": "string",
+ "op_name": "string",
+ "display_name": "string",
+ "trace_id": "string",
+ "parent_id": "string",
+ "started_at": "2024-09-08T20:07:34.849Z",
+ "attributes": {},
+ "inputs": {},
+ "wb_run_id": "string"
+ }
+  }'
+ ```
+
+
diff --git a/weave/guides/tracking/get-call-object.mdx b/weave/guides/tracking/get-call-object.mdx
new file mode 100644
index 0000000000..0bdbd7d1de
--- /dev/null
+++ b/weave/guides/tracking/get-call-object.mdx
@@ -0,0 +1,62 @@
+---
+title: "Get a handle to the Call object during execution"
+description: "Access the W&B Weave Call object at runtime for feedback, display names, and other metadata"
+---
+In Weave, you can call an Op directly, as you would any function:
+
+```python Python lines
+@weave.op
+def my_op():
+ ...
+
+my_op()
+```
+```typescript Typescript lines
+function myFunction() {
+ ...
+}
+
+const myFunctionOp = weave.op(myFunction)
+```
+
+
+To get a handle to the `Call` object, invoke the `op.call` method instead, which returns both the result and the `Call` object.
+
+
+
+```python lines
+@weave.op
+def my_op():
+    ...
+
+output, call = my_op.call()
+```
+From here, the `call` object contains all the information about the Call, including the inputs, outputs, and other metadata. You can use `call` to set or update properties, fetch additional metadata, or add feedback.
+
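+For example, a short sketch of attaching feedback through the handle (this assumes the `call.feedback.add_note` and `call.feedback.add_reaction` helpers in your SDK version):
+
+```python lines
+output, call = my_op.call()
+
+# Attach feedback to the recorded Call
+call.feedback.add_note("Output looked correct")
+call.feedback.add_reaction("👍")
+```
+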
+If your Op is a method on a class, you need to pass the instance of the class as the first argument to `call`. The following example shows getting a handle to the Call object for an Op defined as a method on a class:
+ ```python lines
+ import weave
+
+ # Initialize Weave Tracing
+ weave.init("intro-example")
+
+ class MyClass:
+ # Decorate your method
+ @weave.op
+ def my_method(self, name: str):
+ return f"Hello, {name}!"
+
+ instance = MyClass()
+
+ # Pass `instance` as the first argument to `call`.
+ result, call = instance.my_method.call(instance, "World")
+ ```
+
+
+ ```plaintext
+ This feature is not available in the TypeScript SDK yet.
+ ```
+
+
+
+
diff --git a/weave/guides/tracking/imgs/trace_export_modal.png b/weave/guides/tracking/imgs/trace_export_modal.png
new file mode 100644
index 0000000000..3e682aba2b
Binary files /dev/null and b/weave/guides/tracking/imgs/trace_export_modal.png differ
diff --git a/weave/guides/tracking/objects.mdx b/weave/guides/tracking/objects.mdx
index 02e18913df..674ab6ac6a 100644
--- a/weave/guides/tracking/objects.mdx
+++ b/weave/guides/tracking/objects.mdx
@@ -1,16 +1,16 @@
---
title: "Track and version objects"
-description: "Track and version any JSON-serializable object in Weave"
+description: "Track and version any JSON-serializable object in W&B Weave"
---
## Objects
-An **Object** is versioned, serializable data. Weave automatically versions objects when they change, creating an immutable history. Objects include:
+An **Object** is versioned, serializable data. Weave automatically versions objects when they change and creates an immutable history. Objects include:
- **Datasets**: Collections of examples for evaluation
- **Models**: Configurations and parameters for your LLM logic
- **Prompts**: Versioned prompt templates
-```python
+```python lines
dataset = weave.Dataset(
name="test-cases",
rows=[
@@ -53,9 +53,9 @@ Weave's serialization layer saves and versions objects.
-Saving an object with a name will create the first version of that object if it doesn't exist.
+When you save an object with a name, Weave creates the first version of that object if it does not exist.
-## Getting an object back
+## Get an object back
@@ -70,13 +70,13 @@ Saving an object with a name will create the first version of that object if it
- ```plaintext
+ ```plaintext lines
This feature is not available in TypeScript yet.
```
-## Deleting an object
+## Delete an object
@@ -88,7 +88,7 @@ Saving an object with a name will create the first version of that object if it
cat_names_ref.delete()
```
- Trying to access a deleted object will result in an error. Resolving an object that has a reference to a deleted object will return a `DeletedRef` object in place of the deleted object.
+ Accessing a deleted object returns an error. Resolving an object that has a reference to a deleted object returns a `DeletedRef` in place of the deleted object.
@@ -111,8 +111,8 @@ weave:////object/:
- _object_name_: object name
- _object_version_: either a version hash, a string like v0, v1..., or an alias like ":latest". All objects have the ":latest" alias.
-Refs can be constructed with a few different styles
+You can construct refs with a few different styles.
-- `weave.ref(<name>)`: requires `weave.init()` to have been called. Refers to the ":latest" version
+- `weave.ref(<name>)`: requires `weave.init()` to have been called. Refers to the ":latest" version.
 - `weave.ref(<name>:<version>)`: requires `weave.init()` to have been called.
-- `weave.ref(<uri>)`: can be constructed without calling weave.init
+- `weave.ref(<uri>)`: can be constructed without calling `weave.init()`.
diff --git a/weave/guides/tracking/ops.mdx b/weave/guides/tracking/ops.mdx
index 4ffb1911fe..a4676ed819 100644
--- a/weave/guides/tracking/ops.mdx
+++ b/weave/guides/tracking/ops.mdx
@@ -1,15 +1,15 @@
---
-title: "Automatically track function calls using Ops"
-description: "Versioned functions that automatically log all calls in Weave"
+title: "Customize Ops"
+description: "Learn how to color your Ops for better visibility, how to modify what's logged, and how to control the sampling rate"
---
-A Weave op is a versioned function that automatically logs all calls.
+A Weave Op is a versioned function that automatically logs all Calls.
- To create an op, decorate a python function with `weave.op()`
+ To create an Op, decorate a python function with `weave.op()`
- ```python lines lines
+ ```python lines
import weave
@weave.op()
@@ -20,17 +20,17 @@ A Weave op is a versioned function that automatically logs all calls.
track_me(15)
```
- Calling an op creates a new op version if the code has changed from the last call, and log the inputs and outputs of the function.
+ Calling an Op creates a new Op version if the code has changed from the last call, and logs the inputs and outputs of the function.
- Functions decorated with `@weave.op()` will behave normally (without code versioning and tracking), if you don't call `weave.init('your-project-name')` before calling them.
+ Functions that you decorate with `@weave.op()` behave normally (without code versioning and tracking) if you do not call `weave.init('your-project-name')` before calling them.
Ops can be [served](/weave/guides/tools/serve) or [deployed](/weave/guides/tools/deploy) using the Weave toolbelt.
- To create an op, wrap a typescript function with `weave.op`
+ To create an Op, wrap a typescript function with `weave.op`
```typescript lines
import * as weave from 'weave'
@@ -54,7 +54,7 @@ A Weave op is a versioned function that automatically logs all calls.
- You can customize the op's display name by setting the `name` parameter in the `@weave.op` decorator:
+ You can customize the Op's display name by setting the `name` parameter in the `@weave.op` decorator:
```python lines
@weave.op(name="custom_name")
@@ -72,7 +72,7 @@ A Weave op is a versioned function that automatically logs all calls.
## Apply kinds and colors
-To better organize your ops in the Weave UI, you can apply custom kinds and colors to them by adding the `kind` and `color` arguments to the `@weave.op` decorators in your code. For example, the following code applies an `LLM` `kind` and a `blue` `color` to the parent function, and a `tool` `kind` and a `red` `color` to a nested function:
+To better organize your Ops in the Weave UI, you can apply custom kinds and colors to them by adding the `kind` and `color` arguments to the `@weave.op` decorators in your code. For example, the following code applies an `LLM` `kind` and a `blue` `color` to the parent function, and a `tool` `kind` and a `red` `color` to a nested function:
@@ -101,7 +101,7 @@ This feature is not available in TypeScript yet.
-This applies the colors and kinds to your ops in the Weave UI, like this:
+This applies the colors and kinds to your Ops in the Weave UI, like this:

@@ -128,7 +128,7 @@ The available `color` values are:
- If you want to change the data that is logged to weave without modifying the original function (e.g. to hide sensitive data), you can pass `postprocess_inputs` and `postprocess_output` to the op decorator.
+ If you want to change the data that Weave logs without modifying the original function (for example, to hide sensitive data), you can pass `postprocess_inputs` and `postprocess_output` to the Op decorator.
`postprocess_inputs` takes in a dict where the keys are the argument names and the values are the argument values, and returns a dict with the transformed inputs.
@@ -173,9 +173,9 @@ The available `color` values are:
- You can control how frequently an op's calls are traced by setting the `tracing_sample_rate` parameter in the `@weave.op` decorator. This is useful for high-frequency ops where you only need to trace a subset of calls.
+ You can control how frequently an Op's calls are traced by setting the `tracing_sample_rate` parameter in the `@weave.op` decorator. This is useful for high-frequency Ops where you only need to trace a subset of calls.
- Note that sampling rates are only applied to root calls. If an op has a sample rate, but is called by another op first, then that sampling rate will be ignored.
+ Note that Weave applies sampling rates only to root calls. If an Op has a sample rate but another Op calls it first, that sampling rate is ignored.
```python lines
@weave.op(tracing_sample_rate=0.1) # Only trace ~10% of calls
@@ -187,10 +187,10 @@ The available `color` values are:
return x + 1
```
- When an op's call is not sampled:
+ When an Op's call is not sampled:
- The function executes normally
- No trace data is sent to Weave
- - Child ops are also not traced for that call
+ - Child Ops are also not traced for that call
The sampling rate must be between 0.0 and 1.0 inclusive.
@@ -206,15 +206,15 @@ The available `color` values are:
If you want to suppress the printing of call links during logging, you can set the `WEAVE_PRINT_CALL_LINK` environment variable to `false`. This can be useful if you want to reduce output verbosity and reduce clutter in your logs.
-```bash
+```bash lines
export WEAVE_PRINT_CALL_LINK=false
```
-## Deleting an op
+## Deleting an Op
- To delete a version of an op, call `.delete()` on the op ref.
+ To delete a version of an Op, call `.delete()` on the Op ref.
```python lines
weave.init('intro-example')
@@ -222,7 +222,7 @@ export WEAVE_PRINT_CALL_LINK=false
my_op_ref.delete()
```
- Trying to access a deleted op will result in an error.
+ Accessing a deleted Op raises an error.
diff --git a/weave/guides/tracking/querying-calls.mdx b/weave/guides/tracking/querying-calls.mdx
new file mode 100644
index 0000000000..6bff85ff98
--- /dev/null
+++ b/weave/guides/tracking/querying-calls.mdx
@@ -0,0 +1,111 @@
+---
+title: "Query and export Calls"
+description: "Filter, sort, and export your W&B Weave Call data"
+---
+
+In the Weave UI, you can export your data in multiple formats. The UI also shows the Python and cURL code that you can use to export the rows programmatically.
+To export Calls:
+1. Navigate to [wandb.ai](https://wandb.ai/) and select your project.
+1. In the Weave project sidebar, click **Traces**.
+1. Select multiple Calls that you want to export by checking the row.
+1. In the **Traces** table toolbar, click the export/download button.
+1. In the **Export** modal, choose **Selected rows** or **All rows**, then click **Export**.
+
+
+
+
+
+
+
+## Fetch Calls programmatically
+
+
+
+ To fetch Calls using the Python API, you can use the [`client.get_calls`](/weave/reference/python-sdk/trace/weave_client#method-get_calls) method:
+
+ ```python lines
+ import weave
+
+ # Initialize the client
+ client = weave.init("your-project-name")
+
+ # Fetch calls
+ calls = client.get_calls(filter=...)
+ ```
+
+
+
+ To fetch Calls using the TypeScript API, you can use the [`client.getCalls`](/weave/reference/typescript-sdk/classes/weaveclient#getcalls) method:
+ ```typescript lines
+ import * as weave from 'weave'
+
+ // Initialize the client
+ const client = await weave.init('intro-example')
+
+ // Fetch calls
+ const calls = await client.getCalls({ /* filter options */ })
+ ```
+
+
+ The Service API provides the most powerful query layer. To fetch Calls using the Service API, make a request to the [`/calls/stream_query`](https://docs.wandb.ai/weave/reference/service-api/calls/calls-query-stream) endpoint.
+
+ ```bash
+ curl -L 'https://trace.wandb.ai/calls/stream_query' \
+ -H 'Content-Type: application/json' \
+ -H 'Accept: application/json' \
+ -d '{
+ "project_id": "string",
+ "filter": {
+ "op_names": [
+ "string"
+ ],
+ "input_refs": [
+ "string"
+ ],
+ "output_refs": [
+ "string"
+ ],
+ "parent_ids": [
+ "string"
+ ],
+ "trace_ids": [
+ "string"
+ ],
+ "call_ids": [
+ "string"
+ ],
+ "trace_roots_only": true,
+ "wb_user_ids": [
+ "string"
+ ],
+ "wb_run_ids": [
+ "string"
+ ]
+ },
+ "limit": 100,
+ "offset": 0,
+ "sort_by": [
+ {
+ "field": "string",
+ "direction": "asc"
+ }
+ ],
+ "query": {
+ "$expr": {}
+ },
+ "include_costs": true,
+ "include_feedback": true,
+ "columns": [
+ "string"
+ ],
+ "expand_columns": [
+ "string"
+ ]
+ }'
+ ```
+
+
+
+For complete details on call properties and fields, see the [Call schema reference](/weave/guides/tracking/call-schema-reference).
+
+
diff --git a/weave/guides/tracking/set-call-display.mdx b/weave/guides/tracking/set-call-display.mdx
new file mode 100644
index 0000000000..2ef0ce094a
--- /dev/null
+++ b/weave/guides/tracking/set-call-display.mdx
@@ -0,0 +1,112 @@
+---
+title: "Set Call display name"
+description: "Set or override the display name for a Call in W&B Weave tracing"
+---
+
+
+Ops produce Calls. An Op is a function or method that you decorate with `@weave.op`. By default, the Op's name is the function name, and the associated Calls have the same display name.
+
+You can override the display name for all Calls of a given Op in several ways.
+
+
+
+
+
+ 1. Change the display name at the time of calling the Op.
+ The following example uses the `__weave` dictionary to set a Call display name, which takes precedence over the Op display name:
+ ```python lines
+ result = my_function("World", __weave={"display_name": "My Custom Display Name"})
+ ```
+
+
+
+
+ 2. Change the display name on a per-Call basis.
+ The following example uses the [`Op.call`](/weave/reference/python-sdk/trace/op#function-call) method to return a `call` object, which you can then use to set the display name using [`call.set_display_name`](/weave/reference/python-sdk/trace/weave_client#method-set_display_name):
+ ```python lines
+ result, call = my_function.call("World")
+ call.set_display_name("My Custom Display Name")
+ ```
+
+ 3. Change the display name for all Calls of a given Op.
+ The following example sets the new display name in the `@weave.op` function decorator itself to affect all Calls for the Op:
+
+ ```python lines
+ @weave.op(call_display_name="My Custom Display Name")
+ def my_function(name: str):
+ return f"Hello, {name}!"
+ ```
+
+ The `call_display_name` can also be a function that takes in a `call` object and returns a string. Weave passes the `call` object automatically when the function runs, so you can use it to dynamically generate names based on the function's name, Call inputs, fields, and so on.
+
+ One common use case is appending a timestamp to the function's name.
+
+ ```python lines
+ from datetime import datetime
+
+ @weave.op(call_display_name=lambda call: f"{call.func_name}__{datetime.now()}")
+ def func():
+ return ...
+ ```
+
+ You can also log custom metadata using `.attributes`.
+
+ ```python lines
+ def custom_attribute_name(call):
+ model = call.attributes["model"]
+ revision = call.attributes["revision"]
+ now = call.attributes["date"]
+
+ return f"{model}__{revision}__{now}"
+
+ @weave.op(call_display_name=custom_attribute_name)
+ def func():
+ return ...
+
+ with weave.attributes(
+ {
+ "model": "finetuned-llama-3.1-8b",
+ "revision": "v0.1.2",
+ "date": "2024-08-01",
+ }
+ ):
+ func() # the display name will be "finetuned-llama-3.1-8b__v0.1.2__2024-08-01"
+
+ with weave.attributes(
+ {
+ "model": "finetuned-gpt-4o",
+ "revision": "v0.1.3",
+ "date": "2024-08-02",
+ }
+ ):
+ func() # the display name will be "finetuned-gpt-4o__v0.1.3__2024-08-02"
+ ```
+
+
+ 4. Change the display name of the Op itself.
+ Calls associated with an Op have the same display name. If you override the name of the Op itself, the display name of the Call also changes. You can do this in two ways:
+
+ - Set the `name` property of the Op before any Calls are logged:
+ ```python lines
+ my_function.name = "My Custom Op Name"
+ ```
+
+ - Set the `name` option on the Op decorator:
+ ```python lines
+ @weave.op(name="My Custom Op Name")
+ ```
+
+
+ To override the default display name of a Call, use the `callDisplayName` option when calling `weave.op()`.
+
+ ```typescript lines {2}
+ const extractDinosOp = weave.op(extractDinos, {
+ callDisplayName: (input: string) => `Your New Display Name`
+ });
+ ```
+
+
+
+
+
+You can also [update a Call's display name](/weave/guides/tracking/update-call) after execution.
\ No newline at end of file
diff --git a/weave/guides/tracking/threads.mdx b/weave/guides/tracking/threads.mdx
index 9488b207ff..5438d4773e 100644
--- a/weave/guides/tracking/threads.mdx
+++ b/weave/guides/tracking/threads.mdx
@@ -1,9 +1,9 @@
---
-title: "Track threads"
-description: "Track and analyze multi-turn conversations in your LLM applications using threads."
+title: "Trace threads"
+description: "Trace and analyze multi-turn conversations in your LLM applications using threads."
---
-With W&B Weave _Threads_, you can track and analyze multi-turn conversations in your LLM applications. Threads group related calls under a shared `thread_id`, allowing you to visualize complete sessions and track conversation-level metrics across turns. You can create threads programmatically, and visualize them in the Weave UI.
+With W&B Weave _Threads_, you can track and analyze multi-turn conversations in your LLM applications. Threads group related Calls under a shared `thread_id`, so you can visualize complete sessions and track conversation-level metrics across turns. You can create threads programmatically and visualize them in the Weave UI.
To get started with Threads, do the following:
@@ -24,20 +24,20 @@ Threads are useful when you want to organize and analyze:
- Session-based workflows
- Any sequence of related operations.
-Threads let you group calls by context, making it easier to understand how your system responds across multiple steps. For example, you can track a single user session, an agent's chain of decisions, or a complex request that spans infrastructure and business logic layers.
+Threads let you group Calls by context, making it easier to understand how your system responds across multiple steps. For example, you can track a single user session, an agent's chain of decisions, or a complex request that spans infrastructure and business logic layers.
-By structuring your application with threads and turns, you get cleaner metrics and better visibility in the Weave UI. Instead of seeing every low-level operation, you can focus on the high-level steps that matter.
+By structuring your application with threads and turns, you get cleaner metrics and better visibility in the Weave UI. Instead of seeing every low-level Op, you can focus on the high-level steps that matter.
## Definitions
### Thread
-A _Thread_ is a logical grouping of related calls that share a common conversational context. A Thread:
+A _Thread_ is a logical grouping of related Calls that share a common conversational context. A Thread:
- Has a unique `thread_id`
- Contains one or more _turns_
-- Maintains context across calls
-- Represent complete user sessions or interaction flows
+- Maintains context across Calls
+- Represents complete user sessions or interaction flows
### Turn
@@ -50,12 +50,12 @@ A _Turn_ is a high-level operation within a Thread, displayed in the UI as indiv
A _Call_ is any `@weave.op`-decorated function execution in your application.
-- _Turn calls_ are top-level operations that start new turns
-- _Nested calls_ are lower-level operations within a turn
+- _Turn Calls_ are top-level operations that start new turns
+- _Nested Calls_ are lower-level operations within a turn
### Trace
-A _Trace_ captures the full call stack for a single operation. Threads group traces together that are part of the same logical conversation or session. In other words, a thread is made up of multiple turns, each representing one part in the conversation. For more information on Traces, see the [Tracing overview](/weave/guides/tracking/tracing).
+A _Trace_ captures the full Call stack for a single operation. Threads group traces together that are part of the same logical conversation or session. In other words, a thread is made up of multiple turns, each representing one part in the conversation. For more information on Traces, see the [Tracing overview](/weave/guides/tracking/tracing).
## UI overview
@@ -77,17 +77,17 @@ In the Weave sidebar, select **Threads** to access the [Threads list view](#thre
### Threads detail drawer
-- Click any row to open the detail drawer for that row
+- Click any row to open the detail drawer for that row.
- Shows all turns within a thread.
- Turns are listed in the order they started (based on their start time, not by duration or end time).
-- Includes call-level metadata (latency, inputs, outputs)
-- Optionally shows message content or structured data if logged
+- Includes Call-level metadata (latency, inputs, outputs).
+- Optionally shows message content or structured data if logged.
- To view the full execution of a turn, you can open it from the thread detail drawer. This lets you drill into all nested operations that occurred during that specific turn.
-- If a turn includes messages extracted from LLM calls, they will appear in the right-hand chat pane. These messages typically come from calls made by supported integrations (e.g., `openai.ChatCompletion.create`) and must meet specific criteria to display. For more information, see [Chat view behavior](#chat-view-behavior).
+- If a turn includes messages extracted from LLM calls, they will appear in the right-hand chat pane. These messages typically come from calls made by supported integrations (for example, `openai.ChatCompletion.create`) and must meet specific criteria to display. For more information, see [Chat view behavior](#chat-view-behavior).
### Chat view behavior
-The chat pane displays structured message data extracted from LLM calls made during each turn. This view gives you a conversational-style rendering of the interaction.
+The chat pane displays structured message data extracted from LLM Calls made during each turn. This view gives you a conversational-style rendering of the interaction.

@@ -95,15 +95,15 @@ The chat pane displays structured message data extracted from LLM calls made dur
#### What qualifies as a message?
-Messages are extracted from calls within a turn that represent direct interactions with LLM providers (e.g., sending a prompt and receiving a response). Only calls that are not further nested inside other calls are shown as messages. This avoids duplicating intermediate steps or aggregated internal logic.
+Weave extracts messages from Calls within a turn that represent direct interactions with LLM providers (for example, sending a prompt and receiving a response). Only Calls that are not further nested inside other Calls appear as messages. This avoids duplicating intermediate steps or aggregated internal logic.
-Typically, messages are emitted by automatically patched third-party SDKs like:
+Typically, messages come from automatically patched third-party SDKs, for example:
- `openai.ChatCompletion.create`
- `anthropic.Anthropic.completion`
#### What happens if no messages are present?
-If a turn doesn't emit any messages, the chat pane will show an empty message section for that turn. However, the chat pane may still include messages from other turns in the same thread.
+If a turn does not emit any messages, the chat pane shows an empty message section for that turn. The chat pane may still include messages from other turns in the same thread.
#### Turn and Chat interactions
@@ -114,7 +114,7 @@ If a turn doesn't emit any messages, the chat pane will show an empty message se
You can open the full trace for a turn by clicking into it.
-A back button appears in the upper left corner to return to the thread detail view. UI state (like scroll position) is not preserved across the transition.
+A back button appears in the upper left corner to return to the thread detail view. Weave does not preserve UI state (such as scroll position) across the transition.

@@ -125,12 +125,12 @@ A back button appears in the upper left corner to return to the thread detail vi
Each example in this section demonstrates a different strategy for organizing turns and threads in your application. For most examples, you should provide your own LLM call or system behavior inside the stub functions.
- To track a session or conversation, use the `weave.thread()` context manager.
-- Decorate logical operations with `@weave.op` to track them as turns or nested calls.
+- Decorate logical operations with `@weave.op` to track them as turns or nested Calls.
- If you pass a `thread_id`, Weave uses it to group all operations in that block under the same thread. If you omit the `thread_id`, Weave auto-generates a unique one for you.
The return value from `weave.thread()` is a `ThreadContext` object with a `thread_id` property, which you can log, reuse, or pass to other systems.
-Nested `weave.thread()` contexts always start a new thread unless the same `thread_id` is reused. Ending a child context does not interrupt or overwrite the parent context. This allows for forked thread structures or layered thread orchestration, depending on your app logic.
+Nested `weave.thread()` contexts always start a new thread unless you reuse the same `thread_id`. Ending a child context does not interrupt or overwrite the parent context. This allows for forked thread structures or layered thread orchestration, depending on your app logic.
### Basic thread creation
@@ -354,7 +354,7 @@ if __name__ == "__main__":
### Resume a previous session
-Sometimes you need to resume a previously started session and continue adding calls to the same thread. In other cases, you may not be able to resume an existing session and must start a new thread instead.
+Sometimes you need to resume a previously started session and continue adding Calls to the same thread. In other cases, you may not be able to resume an existing session and must start a new thread instead.
When implementing optional thread resumption, **never** leave the `thread_id` parameter as `None`, as this will disable thread grouping entirely. Instead, always provide a valid thread ID. If you need to create a new thread, generate a unique identifier using a function like `generate_id()`.
diff --git a/weave/guides/tracking/trace-disable.mdx b/weave/guides/tracking/trace-disable.mdx
new file mode 100644
index 0000000000..0bad853de7
--- /dev/null
+++ b/weave/guides/tracking/trace-disable.mdx
@@ -0,0 +1,78 @@
+---
+title: "Disable tracing"
+description: "Learn options to disable or conditionally turn off W&B Weave tracing"
+---
+
+Depending on your environment and needs, Weave offers several options to control the level of tracing in your application.
+
+## Environment variable
+
+In situations where you want to unconditionally disable tracing for the entire program, you can set the environment variable `WEAVE_DISABLED=true`.
+
+`WEAVE_DISABLED` is read only once, at function-definition time. This variable cannot be used to toggle tracing at runtime.
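For example, set the variable in the shell that launches your application (before the process that imports `weave` starts):

```shell
# Disable all Weave tracing for this process and any child processes.
export WEAVE_DISABLED=true
```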
+
+## Client initialization
+
+Sometimes you may want to enable or disable tracing for a specific initialization based on a runtime condition. In that case, initialize the client with the `disabled` flag in the init settings.
+
+
+
+ ```python lines
+ import weave
+
+ # Initialize the client
+ client = weave.init(..., settings={"disabled": True})
+ ```
+
+
+ ```plaintext
+ This feature is not available for the TypeScript SDK yet.
+ ```
+
+
+
+## Context manager
+
+To conditionally disable tracing for a specific block of code, you can use a tracing context manager. Use `with tracing_disabled()` to suppress tracing **only for the function calls executed inside the `with` block**. Use it in application code to scope which calls should not be logged.
+
+
+
+```python lines
+import weave
+from weave.trace.context.call_context import tracing_disabled
+
+client = weave.init('your-team/your-project-name')
+
+@weave.op
+def my_op():
+ ...
+
+with tracing_disabled():
+ my_op()
+```
+
+
+ ```plaintext
+ This feature is not available for the TypeScript SDK yet.
+ ```
+
+
+
+Although tracing behavior is fixed when functions are defined, you can combine the context manager with application logic for runtime control. For example, wrap the context manager in a conditional to enable or disable tracing based on a runtime value:
+
+
+
+```python lines
+if should_trace:
+ my_op()
+else:
+ with tracing_disabled():
+ my_op()
+```
+
+
+ ```plaintext
+ This feature is not available for the TypeScript SDK yet.
+ ```
+
+
\ No newline at end of file
diff --git a/weave/guides/tracking/trace-generator-func.mdx b/weave/guides/tracking/trace-generator-func.mdx
new file mode 100644
index 0000000000..b945b432c4
--- /dev/null
+++ b/weave/guides/tracking/trace-generator-func.mdx
@@ -0,0 +1,73 @@
+---
+title: "Trace generator functions"
+description: "Track sync and async generator functions with W&B Weave tracing"
+---
+
+W&B Weave supports tracing both sync and async generator functions, including deeply nested patterns.
+
+
+Because generators yield values lazily, Weave logs outputs only when the generator is fully consumed. To ensure Weave captures outputs in the trace, fully consume the generator (for example, with `list()`).
+
+
+
+
+
+ ```python lines
+ from typing import Generator
+ import weave
+
+ weave.init("my-project")
+
+ # This function uses a simple sync generator.
+ # Weave will trace the call and its input (`x`),
+ # but output values are only captured once the generator is consumed (for example, with `list()`).
+ @weave.op
+ def basic_gen(x: int) -> Generator[int, None, None]:
+ yield from range(x)
+
+ # A normal sync function used within the generator pipeline.
+ # Its calls are also traced independently by Weave.
+ @weave.op
+ def inner(x: int) -> int:
+ return x + 1
+
+ # A sync generator that calls another traced function (`inner`).
+ # Each yielded value comes from a separate traced call to `inner`.
+ @weave.op
+ def nested_generator(x: int) -> Generator[int, None, None]:
+ for i in range(x):
+ yield inner(i)
+
+ # A more complex generator that composes the above generator.
+ # Tracing here produces a hierarchical call tree:
+ # - `deeply_nested_generator` (parent)
+ # - `nested_generator` (child)
+ # - `inner` (grandchild)
+ @weave.op
+ def deeply_nested_generator(x: int) -> Generator[int, None, None]:
+ for i in range(x):
+ for j in nested_generator(i):
+ yield j
+
+ # The generator must be *consumed* for Weave to capture outputs.
+ # This is true for both sync and async generators.
+ res = deeply_nested_generator(4)
+ list(res) # Triggers tracing of all nested calls and yields
+ ```
+
+
+
+ ```plaintext
+ This feature is not available in the TypeScript SDK yet.
+ ```
+
+
+The following screenshot shows the **Traces** page with a trace of the preceding code selected. The center panel displays the trace tree, with the `deeply_nested_generator`, `nested_generator`, and `inner` Ops in its hierarchy.
+
+
+## Consuming generators
+
+Weave captures generator outputs only after you fully consume the generator. Consume the generator by iterating over it (for example, with `list()`, a `for` loop, or `next()` until exhaustion). The same applies to async generators when you use `async for` or equivalent consumption.
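These consumption patterns are plain Python. For illustration, each of the following fully drains a generator:

```python
def gen():
    yield from range(3)

values = list(gen())        # 1. Convert to a list

collected = []
for v in gen():             # 2. Iterate with a for loop
    collected.append(v)

it = gen()                  # 3. Call next() until StopIteration
drained = []
try:
    while True:
        drained.append(next(it))
except StopIteration:
    pass
```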
+
+For more on decorating functions and methods with `@weave.op`, see [Create calls](/weave/guides/tracking/create-call).
diff --git a/weave/guides/tracking/trace-to-run.mdx b/weave/guides/tracking/trace-to-run.mdx
new file mode 100644
index 0000000000..d805bdf586
--- /dev/null
+++ b/weave/guides/tracking/trace-to-run.mdx
@@ -0,0 +1,91 @@
+---
+title: "Link a W&B run to trace function calls"
+description: "Associate Weave traces with W&B runs for experiment tracking"
+---
+
+## View a W&B run in the Traces table
+
+With W&B Weave, you can trace function calls in your code and link them directly to the [W&B runs](https://docs.wandb.ai/models/runs/) in which they were executed.
+When you trace a function with `@weave.op()` and call it inside a `wandb.init()` context, Weave automatically associates the trace with the W&B run.
+Links to any associated runs are shown in the Traces table.
+
+
+
+The following Python code shows how traced Ops are linked to W&B
+runs when executed inside a `wandb.init()` context. These traces appear in the
+Weave UI and are associated with the corresponding run.
+
+
+To view a W&B run as a Weave trace:
+
+1. In the terminal, install dependencies.
+
+```bash lines
+pip install wandb weave
+```
+
+2. Log in to W&B.
+
+```bash lines
+wandb login
+```
+
+3. In the following script, replace `your-team-name/your-project-name` with your actual W&B entity/project:
+```python lines
+import wandb
+import weave
+
+def example_wandb(projname):
+ # Split projname into entity and project
+ entity, project = projname.split("/", 1)
+
+ # Initialize Weave context for tracing
+ weave.init(projname)
+
+ # Define a traceable Op
+ @weave.op()
+ def say(message: str) -> str:
+ return f"I said: {message}"
+
+ # First W&B run
+ with wandb.init(
+ entity=entity,
+ project=project,
+ notes="Experiment 1",
+ tags=["baseline", "paper1"],
+ ) as run:
+ say("Hello, world!")
+ say("How are you!")
+ run.log({"messages": 2})
+
+ # Second W&B run
+ with wandb.init(
+ entity=entity,
+ project=project,
+ notes="Experiment 2",
+ tags=["baseline", "paper1"],
+ ) as run:
+ say("Hello, world from experiment 2!")
+ say("How are you!")
+ run.log({"messages": 2})
+
+if __name__ == "__main__":
+ # Replace this with your actual W&B username/project
+ example_wandb("your-team-name/your-project-name")
+```
+
+4. Run the script.
+
+```bash lines
+python weave_trace_with_wandb.py
+```
+
+5. Navigate to [https://weave.wandb.ai](https://weave.wandb.ai) and select your project.
+6. In the **Weave project sidebar**, click **Traces**. Links to any associated runs are displayed in the Traces table.
+
+
+```plaintext
+This feature is not available for the TypeScript SDK yet.
+```
+
+
diff --git a/weave/guides/tracking/tracing.mdx b/weave/guides/tracking/tracing.mdx
index dee5a50f00..d1008b1975 100644
--- a/weave/guides/tracking/tracing.mdx
+++ b/weave/guides/tracking/tracing.mdx
@@ -1,21 +1,9 @@
---
-title: "Tracing Basics"
-description: "Track and monitor your AI application's execution with Weave tracing"
+title: "Understand Ops and Calls"
+description: "Learn how Ops and Calls create the foundation of W&B Weave's tracing system."
---
-
-
-
-
-
-
-
-
-
-
-
-
## Ops
@@ -49,7 +37,7 @@ A **Call** is a logged execution of an Op. Every time an Op runs, Weave creates
- Parent-child relationships (for nested calls)
- Any errors that occurred
-Calls form the backbone of Weave's tracing system and provide the data for debugging, analysis, and evaluation.
+Calls show up as **Traces** in the Weave UI and provide the data for debugging, analysis, and evaluation. For the full Call object structure and properties, see the [Call schema reference](/weave/guides/tracking/call-schema-reference).
Calls are similar to spans in the [OpenTelemetry](https://opentelemetry.io) data model. A Call can:
@@ -57,1168 +45,4 @@ Calls are similar to spans in the [OpenTelemetry](https://opentelemetry.io) data
- Have parent and child Calls, forming a tree structure
-## Creating Calls
-
-There are three main ways to create Calls in Weave:
-
-### 1. Automatic tracking of LLM libraries
-
-
-
- Weave automatically tracks [calls to common LLM libraries](/weave/guides/integrations) like `openai`, `anthropic`, `cohere`, and `mistral`. Simply call [`weave.init('project_name')`](/weave/reference/python-sdk#function-init) at the start of your program:
-
-
- You can control Weave's default tracking behavior [using the `autopatch_settings` argument in `weave.init`](#configure-autopatching).
-
-
- ```python lines
- import weave
-
- from openai import OpenAI
- client = OpenAI()
-
- # Initialize Weave Tracing
- weave.init('intro-example')
-
- response = client.chat.completions.create(
- model="gpt-4",
- messages=[
- {
- "role": "user",
- "content": "How are you?"
- }
- ],
- temperature=0.8,
- max_tokens=64,
- top_p=1,
- )
- ```
-
-
-
- Weave automatically tracks [calls to common LLM libraries](/weave/guides/integrations), such as `openai`.
-
- ```typescript lines
- import OpenAI from 'openai'
- import * as weave from 'weave'
-
- const client = new OpenAI()
-
- // Initialize Weave Tracing
- await weave.init('intro-example')
-
- const response = await client.chat.completions.create({
- model: 'gpt-4',
- messages: [
- {
- role: 'user',
- content: 'How are you?',
- },
- ],
- temperature: 0.8,
- max_tokens: 64,
- top_p: 1,
- });
- ```
-
- For a complete setup guide for JS / TS projects, see the [TypeScript SDK: Third-Party Integration Guide](/weave/guides/integrations/js).
-
-
-
-
-You can store metrics or other post-call values in the `summary` dictionary of a Call. Modify `call.summary` during execution and any values you add will be merged with Weave's computed summary data when the call finishes.
-
-### 2. Decorating and wrapping functions
-
-However, often LLM applications have additional logic (such as pre/post processing, prompts, etc.) that you want to track.
-
-
-
- Weave allows you to manually track these calls using the [`@weave.op`](/weave/reference/python-sdk/#function-op) decorator. For example:
-
- ```python lines lines
- import weave
-
- # Initialize Weave Tracing
- weave.init('intro-example')
-
- # Decorate your function
- @weave.op
- def my_function(name: str):
- return f"Hello, {name}!"
-
- # Call your function -- Weave will automatically track inputs and outputs
- print(my_function("World"))
- ```
-
- You can also track [methods on classes](#4-track-class-and-object-methods).
-
- #### Trace sync & async generator functions
-
- Weave supports tracing both sync and async generator functions, including deeply nested patterns.
-
-
- Since generators yield values lazily, the outputs are only logged when the generator is fully consumed (e.g., by converting it to a list).
- To ensure outputs are captured in the trace, fully consume the generator (e.g., by using `list()`).
-
-
- ```python lines lines
- from typing import Generator
- import weave
-
- weave.init("my-project")
-
- # This function uses a simple sync generator.
- # Weave will trace the call and its input (`x`),
- # but output values are only captured once the generator is consumed (e.g., via `list()`).
- @weave.op
- def basic_gen(x: int) -> Generator[int, None, None]:
- yield from range(x)
-
- # A normal sync function used within the generator pipeline.
- # Its calls are also traced independently by Weave.
- @weave.op
- def inner(x: int) -> int:
- return x + 1
-
- # A sync generator that calls another traced function (`inner`).
- # Each yielded value comes from a separate traced call to `inner`.
- @weave.op
- def nested_generator(x: int) -> Generator[int, None, None]:
- for i in range(x):
- yield inner(i)
-
- # A more complex generator that composes the above generator.
- # Tracing here produces a hierarchical call tree:
- # - `deeply_nested_generator` (parent)
- # - `nested_generator` (child)
- # - `inner` (grandchild)
- @weave.op
- def deeply_nested_generator(x: int) -> Generator[int, None, None]:
- for i in range(x):
- for j in nested_generator(i):
- yield j
-
- # The generator must be *consumed* for Weave to capture outputs.
- # This is true for both sync and async generators.
- res = deeply_nested_generator(4)
- list(res) # Triggers tracing of all nested calls and yields
- ```
-
- 
-
-
-
- Weave allows you to manually track these calls by wrapping your function with [`weave.op`](/weave/reference/typescript-sdk/functions/op). For example:
-
- ```typescript lines
- import * as weave from 'weave'
-
- await weave.init('intro-example')
-
- function myFunction(name: string) {
- return `Hello, ${name}!`
- }
-
- const myFunctionOp = weave.op(myFunction)
- ```
-
- You can also define the wrapping inline:
-
- ```typescript
- const myFunctionOp = weave.op((name: string) => `Hello, ${name}!`)
- ```
-
- This works for both functions as well as methods on classes:
-
- ```typescript
- class MyClass {
- constructor() {
- this.myMethod = weave.op(this.myMethod)
- }
-
- myMethod(name: string) {
- return `Hello, ${name}!`
- }
- }
- ```
-
-
-
-#### Getting a handle to the call object during execution
-
-
-
- Sometimes it is useful to get a handle to the `Call` object itself. You can do this by calling the `op.call` method, which returns both the result and the `Call` object. For example:
-
- ```python lines
- result, call = my_function.call("World")
- ```
-
- Then you can use `call` to set, update, or fetch additional properties (most commonly used to get the ID of the call to be used for feedback).
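-
- For example, once you have the `Call` handle you can attach feedback directly to that call. This is a minimal sketch; it assumes `my_function` is an op defined as above and that Weave has already been initialized, and the reaction and note values are illustrative:
-
- ```python lines
- result, call = my_function.call("World")
-
- # Attach feedback to this specific call
- call.feedback.add_reaction("👍")
- call.feedback.add_note("Great response!")
-
- # The call ID can also be stored for later use
- print(call.id)
- ```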
-
-
- If your op is a method on a class, you need to pass the instance as the first argument to the op (see example below).
-
- ```python lines
- # Notice that we pass the `instance` as the first argument.
- print(instance.my_method.call(instance, "World"))
- ```
-
- ```python lines
- import weave
-
- # Initialize Weave Tracing
- weave.init("intro-example")
-
- class MyClass:
- # Decorate your method
- @weave.op
- def my_method(self, name: str):
- return f"Hello, {name}!"
-
- instance = MyClass()
-
- # Call your method -- Weave will automatically track inputs and outputs
- instance.my_method.call(instance, "World")
- ```
-
-
- ```plaintext
- This feature is not available in the TypeScript SDK yet.
- ```
-
-
-
-#### Set call display name at execution
-
-
-
- Sometimes you may want to override the display name of a call. You can achieve this in one of four ways:
-
- 1. Change the display name at the time of calling the op:
-
- ```python lines
- result = my_function("World", __weave={"display_name": "My Custom Display Name"})
- ```
-
-
- Using the `__weave` dictionary sets the call display name, which takes precedence over the Op display name.
-
-
- 2. Change the display name on a per-call basis. This uses the [`Op.call`](/weave/reference/python-sdk/trace/op#function-call) method to return a `Call` object, which you can then use to set the display name using [`Call.set_display_name`](/weave/reference/python-sdk/trace/weave_client#method-set_display_name).
- ```python lines
- result, call = my_function.call("World")
- call.set_display_name("My Custom Display Name")
- ```
-
- 3. Change the display name for all Calls of a given Op:
-
- ```python lines
- @weave.op(call_display_name="My Custom Display Name")
- def my_function(name: str):
- return f"Hello, {name}!"
- ```
-
- 4. The `call_display_name` can also be a function that takes in a `Call` object and returns a string. The `Call` object will be passed automatically when the function is called, so you can use it to dynamically generate names based on the function's name, call inputs, fields, etc.
-
- 1. One common use case is just appending a timestamp to the function's name.
-
- ```py
- from datetime import datetime
-
- @weave.op(call_display_name=lambda call: f"{call.func_name}__{datetime.now()}")
- def func():
- return ...
- ```
-
- 2. You can also log custom metadata using `.attributes`
-
- ```py
- def custom_attribute_name(call):
- model = call.attributes["model"]
- revision = call.attributes["revision"]
- now = call.attributes["date"]
-
- return f"{model}__{revision}__{now}"
-
- @weave.op(call_display_name=custom_attribute_name)
- def func():
- return ...
-
- with weave.attributes(
- {
- "model": "finetuned-llama-3.1-8b",
- "revision": "v0.1.2",
- "date": "2024-08-01",
- }
- ):
- func() # the display name will be "finetuned-llama-3.1-8b__v0.1.2__2024-08-01"
-
- with weave.attributes(
- {
- "model": "finetuned-gpt-4o",
- "revision": "v0.1.3",
- "date": "2024-08-02",
- }
- ):
- func() # the display name will be "finetuned-gpt-4o__v0.1.3__2024-08-02"
- ```
-
- **Technical Note:** "Calls" are produced by "Ops". An Op is a function or method that is decorated with `@weave.op`.
- By default, the Op's name is the function name, and the associated calls will have the same display name. The above example shows how to override the display name for all Calls of a given Op. Sometimes, users wish to override the name of the Op itself. This can be achieved in one of two ways:
-
- 1. Set the `name` property of the Op before any calls are logged
- ```python lines
- my_function.name = "My Custom Op Name"
- ```
-
- 2. Set the `name` option on the op decorator
- ```python lines
- @weave.op(name="My Custom Op Name")
- def my_function(name: str):
-     return f"Hello, {name}!"
- ```
-
-
- To override the default name of a call, use the `callDisplayName` option when calling `weave.op()`.
-
- ```typescript lines {2}
- const extractDinosOp = weave.op(extractDinos, {
- callDisplayName: (input: string) => `Your New Display Name`
- });
- ```
-
- You can also [update a call's display name](/weave/guides/tracking/tracing#set-display-name) after execution.
-
-
-
-#### Trace parallel (multi-threaded) function calls
-
-
-
- By default, parallel calls all show up in Weave as separate root calls. To get correct nesting under the same parent `op`, use [`ThreadPoolExecutor`](/weave/reference/python-sdk/trace/util#class-contextawarethreadpoolexecutor).
-
- The following code sample demonstrates the use of `ThreadPoolExecutor`.
- The first function, `func`, is a simple `op` that takes `x` and returns `x+1`. The second function, `outer`, is another `op` that accepts a list of inputs.
- Inside `outer`, the use of `ThreadPoolExecutor` and `exc.map(func, inputs)` means that each call to `func` still carries the same parent trace context.
-
- ```python lines
- import weave
-
- @weave.op
- def func(x):
- return x+1
-
- @weave.op
- def outer(inputs):
- with weave.ThreadPoolExecutor() as exc:
- exc.map(func, inputs)
-
- # Update your Weave project name
- client = weave.init('my-weave-project')
- outer([1,2,3,4,5])
- ```
-
- In the Weave UI, this produces a single parent call with five nested child calls, so that you get a fully hierarchical trace even though the increments run in parallel.
-
- 
-
-
- ```plaintext
- This feature is not available in the TypeScript SDK yet.
- ```
-
-
-
-### 3. Manual Call tracking
-
-You can also manually create Calls using the API directly.
-
-
-
-
- ```python lines
- import weave
-
- # Initialize Weave Tracing
- client = weave.init('intro-example')
-
- def my_function(name: str):
-     # Start a call
-     call = client.create_call(op="my_function", inputs={"name": name})
-
-     # ... your function code ...
-     output = f"Hello, {name}!"
-
-     # End the call, recording the output
-     client.finish_call(call, output=output)
-     return output
-
- # Call your function
- print(my_function("World"))
- ```
-
-
-
-
- ```plaintext
- This feature is not available in the TypeScript SDK yet.
- ```
-
-
-
-
- * Start a call: [POST `/call/start`](https://docs.wandb.ai/weave/reference/service-api/calls/call-start)
- * End a call: [POST `/call/end`](https://docs.wandb.ai/weave/reference/service-api/calls/call-end)
- ```bash
- curl -L 'https://trace.wandb.ai/call/start' \
- -H 'Content-Type: application/json' \
- -H 'Accept: application/json' \
- -d '{
- "start": {
- "project_id": "string",
- "id": "string",
- "op_name": "string",
- "display_name": "string",
- "trace_id": "string",
- "parent_id": "string",
- "started_at": "2024-09-08T20:07:34.849Z",
- "attributes": {},
- "inputs": {},
- "wb_run_id": "string"
- }
- }'
- ```
-
-
-
-### 4. Track class and object methods
-
-You can also track class and object methods.
-
-
-
- Track any method on a class using `weave.op`.
-
- ```python lines
- import weave
-
- # Initialize Weave Tracing
- weave.init("intro-example")
-
- class MyClass:
- # Decorate your method
- @weave.op
- def my_method(self, name: str):
- return f"Hello, {name}!"
-
- instance = MyClass()
-
- # Call your method -- Weave will automatically track inputs and outputs
- print(instance.my_method("World"))
- ```
-
-
-
-
-
- **Using decorators in TypeScript**
-
- To use the `@weave.op` decorator with your TypeScript code, make sure your environment is properly configured:
-
- - **TypeScript v5.0 or newer**: Decorators are supported out of the box and no additional configuration is required.
- - **TypeScript older than v5.0**: Enable experimental support for decorators. For more details, see the [official TypeScript documentation on decorators](https://www.typescriptlang.org/docs/handbook/decorators.html).
-
- #### Decorate a class method
-
- Use `@weave.op` to trace instance methods.
-
- ```typescript
- class Foo {
- @weave.op
- async predict(prompt: string) {
- return "bar"
- }
- }
- ```
-
- #### Decorate a static class method
-
- Apply `@weave.op` to static methods to monitor utility functions within a class.
-
- ```typescript
- class MathOps {
- @weave.op
- static square(n: number): number {
- return n * n;
- }
- }
- ```
-
-
-
-
-## Viewing Calls
-
-
-
-To view a call in the web app:
-1. Navigate to your project's **Traces** tab.
-2. Find the call you want to view in the list.
-3. Click on the call to open its details page.
-
-The details page will show the call's inputs, outputs, runtime, and any additional metadata.
-
-
-
-
-To view a call using the Weave Python SDK, you can use the [`get_call`](/weave/reference/python-sdk/trace/weave_client#method-get_call) method:
-
-```python lines
-import weave
-
-# Initialize the client
-client = weave.init("your-project-name")
-
-# Get a specific call by its ID
-call = client.get_call("call-uuid-here")
-
-print(call)
-```
-
-
-```typescript lines
-import * as weave from 'weave'
-
-// Initialize the client
-const client = await weave.init('intro-example')
-
-// Get a specific call by its ID
-const call = await client.getCall('call-uuid-here')
-
-console.log(call)
-```
-
-
-
-To view a call using the Service API, you can make a request to the [`/call/read`](https://docs.wandb.ai/weave/reference/service-api/calls/call-read) endpoint.
-
-```bash
-curl -L 'https://trace.wandb.ai/call/read' \
--H 'Content-Type: application/json' \
--H 'Accept: application/json' \
--d '{
- "project_id": "string",
- "id": "string"
-}'
-```
-
-
-
-### Customize rendered traces with `weave.Markdown`
-
-You can use `weave.Markdown` to customize how your trace information is displayed without losing the original data. This allows you to render your inputs and outputs as readable blocks of formatted content while preserving the underlying data structure.
-
-
-
-Use `postprocess_inputs` and `postprocess_output` functions in your `@weave.op` decorator to format your trace data. The following code sample uses postprocessors to render a call in Weave with more readable formatting:
-
-```python lines
-import weave
-
-def postprocess_inputs(query) -> dict:
-    search_box = f"""
-**Search Query:** `{query}`
-"""
-    return {"search_box": weave.Markdown(search_box), "query": query}
-
-def postprocess_output(docs) -> weave.Markdown:
- formatted_docs = f"""
-# {docs[0]["title"]}
-
-{docs[0]["content"]}
-
-[Read more]({docs[0]["url"]})
-
----
-
-# {docs[1]["title"]}
-
-{docs[1]["content"]}
-
-[Read more]({docs[1]["url"]})
-"""
- return weave.Markdown(formatted_docs)
-
-@weave.op(
- postprocess_inputs=postprocess_inputs,
- postprocess_output=postprocess_output,
-)
-def rag_step(query):
-    # Example articles about AI companies
- docs = [
- {
- "title": "OpenAI",
- "content": "OpenAI is a company that makes AI models.",
- "url": "https://www.openai.com",
- },
- {
- "title": "Google",
- "content": "Google is a company that makes search engines.",
- "url": "https://www.google.com",
- },
- ]
- return docs
-
-if __name__ == "__main__":
- weave.init('markdown_renderers')
- rag_step("Tell me about OpenAI")
-```
-
-
-```plaintext
-This feature is not available in the TypeScript SDK yet.
-```
-
-
-In the following screenshot, you can see the difference between the unformatted and formatted outputs.
-
-
-
-## Updating Calls
-
-Calls are mostly immutable once created; however, a few mutations are supported:
-* [Set Display Name](#set-display-name)
-* [Add Feedback](#add-feedback)
-* [Delete a Call](#delete-a-call)
-
-You can perform all of these mutations from the UI by navigating to the call detail page:
-
-
-
-
-
-### Set display name
-
-
-
-In order to set the display name of a call, you can use the [`Call.set_display_name()`](/weave/reference/python-sdk/trace/weave_client#method-set-display-name) method.
-
-```python lines
-import weave
-
-# Initialize the client
-client = weave.init("your-project-name")
-
-# Get a specific call by its ID
-call = client.get_call("call-uuid-here")
-
-# Set the display name of the call
-call.set_display_name("My Custom Display Name")
-```
-
-
-To set the display name of a call, use [`client.updateCall`](/weave/reference/typescript-sdk/classes/weaveclient#updatecall) to update by call ID directly:
-
-```typescript lines
-import * as weave from 'weave'
-
-// Initialize the client
-const client = await weave.init('your-project-name')
-
-// Update the display name of a call by its ID
-await client.updateCall('call-uuid-here', 'My Custom Display Name')
-```
-
-
-
-To set the display name of a call using the Service API, you can make a request to the [`/call/update`](https://docs.wandb.ai/weave/reference/service-api/calls/call-update) endpoint.
-
-```bash
-curl -L 'https://trace.wandb.ai/call/update' \
--H 'Content-Type: application/json' \
--H 'Accept: application/json' \
--d '{
- "project_id": "string",
- "call_id": "string",
- "display_name": "string"
-}'
-```
-
-
-
-You can also [set a call's display name at execution](#set-call-display-name-at-execution).
-
-### Add feedback
-
-Please see the [Feedback Documentation](/weave/guides/tracking/feedback) for more details.
-
-### Delete a Call
-
-
-
-To delete a Call using the Python API, you can use the [`Call.delete`](/weave/reference/python-sdk/trace/weave_client#method-delete) method.
-
-```python lines
-import weave
-
-# Initialize the client
-client = weave.init("your-project-name")
-
-# Get a specific call by its ID
-call = client.get_call("call-uuid-here")
-
-# Delete the call
-call.delete()
-```
-
-
-
-```plaintext
-This feature is not available in the TypeScript SDK yet.
-```
-
-
-To delete a call using the Service API, you can make a request to the [`/calls/delete`](https://docs.wandb.ai/weave/reference/service-api/calls/calls-delete) endpoint.
-
-```bash
-curl -L 'https://trace.wandb.ai/calls/delete' \
--H 'Content-Type: application/json' \
--H 'Accept: application/json' \
--d '{
- "project_id": "string",
- "call_ids": [
- "string"
- ]
-}'
-```
-
-
-
-### Delete multiple Calls
-
-
-
- To delete batches of Calls using the Python API, pass a list of Call IDs to `delete_calls()`.
-
-
- - A maximum of `1000` Calls can be deleted in a single request.
- - Deleting a Call also deletes all of its children.
-
-
- ```python lines
- import weave
-
- # Initialize the client
- client = weave.init("my-project")
-
- # Get all calls from client
- all_calls = client.get_calls()
-
- # Get list of first 1000 Call objects
- first_1000_calls = all_calls[:1000]
-
- # Get list of first 1000 Call IDs
- first_1000_calls_ids = [c.id for c in first_1000_calls]
-
- # Delete first 1000 Call objects by ID
- client.delete_calls(call_ids=first_1000_calls_ids)
- ```
-
-
-
- ```plaintext
- This feature is not available in the TypeScript SDK yet.
- ```
-
-
-
-## Querying and exporting Calls
-
-
-
-
-
-The `/calls` page of your project ("Traces" tab) contains a table view of all the Calls in your project. From there, you can:
-* Sort
-* Filter
-* Export
-
-
-
-
-
-The Export Modal (shown above) allows you to export your data in a number of formats, and also shows the Python and cURL equivalents for the selected calls.
-The easiest way to get started is to construct a view in the UI, then learn more about the export API via the generated code snippets.
-
-
-
- To fetch calls using the Python API, you can use the [`client.get_calls`](/weave/reference/python-sdk/trace/weave_client#method-get_calls) method:
-
- ```python lines
- import weave
-
- # Initialize the client
- client = weave.init("your-project-name")
-
- # Fetch calls
- calls = client.get_calls(filter=...)
- ```
-
-
-
- To fetch calls using the TypeScript API, you can use the [`client.getCalls`](/weave/reference/typescript-sdk/classes/weaveclient#getcalls) method.
- ```typescript
- import * as weave from 'weave'
-
- // Initialize the client
- const client = await weave.init('intro-example')
-
- // Fetch calls
- const calls = await client.getCalls({ filter: ... })
- ```
-
-
- The most powerful query layer is at the Service API. To fetch calls using the Service API, you can make a request to the [`/calls/stream_query`](https://docs.wandb.ai/weave/reference/service-api/calls/calls-query-stream) endpoint.
-
- ```bash
- curl -L 'https://trace.wandb.ai/calls/stream_query' \
- -H 'Content-Type: application/json' \
- -H 'Accept: application/json' \
- -d '{
- "project_id": "string",
- "filter": {
- "op_names": [
- "string"
- ],
- "input_refs": [
- "string"
- ],
- "output_refs": [
- "string"
- ],
- "parent_ids": [
- "string"
- ],
- "trace_ids": [
- "string"
- ],
- "call_ids": [
- "string"
- ],
- "trace_roots_only": true,
- "wb_user_ids": [
- "string"
- ],
- "wb_run_ids": [
- "string"
- ]
- },
- "limit": 100,
- "offset": 0,
- "sort_by": [
- {
- "field": "string",
- "direction": "asc"
- }
- ],
- "query": {
- "$expr": {}
- },
- "include_costs": true,
- "include_feedback": true,
- "columns": [
- "string"
- ],
- "expand_columns": [
- "string"
- ]
- }'
- ```
-
-
-
-### Call schema
-
-Please see the [schema](/weave/reference/python-sdk/trace_server/trace_server_interface#class-callschema) for a complete list of fields.
-
-| Property | Type | Description |
-|----------|------|-------------|
-| `id` | string (uuid) | Unique identifier for the call |
-| `project_id` | string (optional) | Associated project identifier |
-| `op_name` | string | Name of the operation (can be a reference) |
-| `display_name` | string (optional) | User-friendly name for the call |
-| `trace_id` | string (uuid) | Identifier for the trace this call belongs to |
-| `parent_id` | string (uuid) | Identifier of the parent call |
-| `started_at` | datetime | Timestamp when the call started |
-| `attributes` | Dict[str, Any] | User-defined metadata about the call *(read-only during execution)* |
-| `inputs` | Dict[str, Any] | Input parameters for the call |
-| `ended_at` | datetime (optional) | Timestamp when the call ended |
-| `exception` | string (optional) | Error message if the call failed |
-| `output` | Any (optional) | Result of the call |
-| `summary` | Optional[SummaryMap] | Post-execution summary information. You can modify this during execution to record custom metrics. |
-| `wb_user_id` | Optional[str] | Associated Weights & Biases user ID |
-| `wb_run_id` | Optional[str] | Associated Weights & Biases run ID |
-| `deleted_at` | datetime (optional) | Timestamp of call deletion, if applicable |
-
-The table above outlines the key properties of a Call in Weave. Each property plays a crucial role in tracking and managing function calls:
-
-- The `id`, `trace_id`, and `parent_id` fields help in organizing and relating calls within the system.
-- Timing information (`started_at`, `ended_at`) allows for performance analysis.
-- The `attributes` and `inputs` fields provide context for the call. Attributes are frozen once the call starts, so set them before invocation with `weave.attributes`. `output` and `summary` capture the results, and you can update `summary` during execution to log additional metrics.
-- Integration with Weights & Biases is facilitated through `wb_user_id` and `wb_run_id`.
-
-This comprehensive set of properties enables detailed tracking and analysis of function calls throughout your project.
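-
-For example, you can record a custom metric in `summary` while an op is executing. This is a hedged sketch; it assumes your SDK version provides `weave.get_current_call()` and that a Weave client has been initialized:
-
-```python lines
-import weave
-
-weave.init("your-project-name")
-
-@weave.op
-def my_op(x: int) -> int:
-    # Get a handle to the currently executing call
-    call = weave.get_current_call()
-    # Record a custom metric in the call's summary
-    call.summary.update({"custom_metric": x * 2})
-    return x + 1
-
-my_op(3)
-```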
-
-Calculated Fields:
- * Cost
- * Duration
- * Status
-
-## Saved views
-
-You can save your Trace table configurations, filters, and sorts as _saved views_ for quick access to your preferred setup. You can configure and access saved views via the UI and the Python SDK. For more information, see [Saved Views](/weave/guides/tools/saved-views).
-
-## View a W&B run in the Traces table
-
-With Weave, you can trace function calls in your code and link them directly to the [W&B runs](https://docs.wandb.ai/models/runs/) in which they were executed.
-When you trace a function with `@weave.op()` and call it inside a `wandb.init()` context, Weave automatically associates the trace with the W&B run.
-Links to any associated runs are shown in the Traces table.
-
-
-
-The following Python code shows how traced operations are linked to W&B
-runs when executed inside a `wandb.init()` context. These traces appear in the
-Weave UI and are associated with the corresponding run.
-
-```python lines
-import wandb
-import weave
-
-def example_wandb(projname):
- # Split projname into entity and project
- entity, project = projname.split("/", 1)
-
- # Initialize Weave context for tracing
- weave.init(projname)
-
- # Define a traceable operation
- @weave.op()
- def say(message: str) -> str:
- return f"I said: {message}"
-
- # First W&B run
- with wandb.init(
- entity=entity,
- project=project,
- notes="Experiment 1",
- tags=["baseline", "paper1"],
- ) as run:
- say("Hello, world!")
- say("How are you!")
- run.log({"messages": 2})
-
- # Second W&B run
- with wandb.init(
- entity=entity,
- project=project,
- notes="Experiment 2",
- tags=["baseline", "paper1"],
- ) as run:
- say("Hello, world from experiment 2!")
- say("How are you!")
- run.log({"messages": 2})
-
-if __name__ == "__main__":
- # Replace this with your actual W&B username/project
- example_wandb("your-username/your-project")
-```
-
-To use the code sample:
-
-1. In the terminal, install dependencies:
-
-```bash
-pip install wandb weave
-```
-
-2. Log in to W&B:
-
-```bash
-wandb login
-```
-
-3. In the script, replace `your-username/your-project` with your actual W&B entity/project.
-4. Run the script:
-
-```bash
-python weave_trace_with_wandb.py
-```
-5. Visit [https://weave.wandb.ai](https://weave.wandb.ai) and select your project.
-6. In the **Traces** tab, view the trace output. Links to any associated runs are shown in the Traces table.
-
-
-```plaintext
-This feature is not available for the TypeScript SDK yet.
-```
-
-
-
-## Configure autopatching
-
-By default, Weave automatically patches and tracks calls to common LLM libraries like `openai`, `anthropic`, `cohere`, and `mistral`.
-
-
-
-
-The `autopatch_settings` argument is deprecated. Use `implicitly_patch_integrations=False` to disable implicit patching, or call specific patch functions like `patch_openai(settings={...})` to configure settings per integration.
-
-
-### Disable all autopatching
-
-```python lines
-weave.init(..., implicitly_patch_integrations=False)
-```
-
-### Enable specific integrations
-
-```python lines
-import weave
-
-weave.init(..., implicitly_patch_integrations=False)
-
-# Then manually patch only the integrations you want
-weave.integrations.patch_anthropic()
-weave.integrations.patch_cohere()
-```
-
-### Post-process inputs and outputs
-
-You can customize how inputs and outputs (such as for PII data) are handled by passing settings to the patch function:
-
-```python lines
-import weave.integrations
-
-def redact_inputs(inputs: dict) -> dict:
- if "email" in inputs:
- inputs["email"] = "[REDACTED]"
- return inputs
-
-weave.init(...)
-weave.integrations.patch_openai(
- settings={
- "op_settings": {"postprocess_inputs": redact_inputs}
- }
-)
-```
-
-
-
-The TypeScript SDK only supports autopatching for OpenAI and Anthropic. OpenAI is automatically patched when you import Weave and doesn't require any additional configuration.
-
-Additionally, the TypeScript SDK doesn't support:
-- Configuring or disabling autopatching
-- Input/output post-processing
-
-For edge cases where automatic patching doesn't work (ESM, bundlers like Next.js), use explicit wrapping:
-
-```typescript
-import OpenAI from 'openai'
-import * as weave from 'weave'
-import { wrapOpenAI } from 'weave'
-
-const client = wrapOpenAI(new OpenAI())
-await weave.init('your-team/my-project')
-```
-
-For more details on ESM setup and troubleshooting, see the [TypeScript SDK Integration Guide](/weave/guides/integrations/js).
-
-
-For more details, see [How to use Weave with PII data](/weave/cookbooks/pii).
-
-## FAQs
-
-### How do I stop large traces from being truncated?
-
-For more information, see [Trace data is truncated](/weave/guides/troubleshooting#trace-data-is-truncated) in the [Troubleshooting guide](/weave/guides/troubleshooting).
-
-### How do I disable tracing?
-
-#### Environment variable
-
-In situations where you want to unconditionally disable tracing for the entire program, you can set the environment variable `WEAVE_DISABLED=true`.
-
-`WEAVE_DISABLED` is read only once, at function-definition time. This variable cannot be used to toggle tracing at runtime.
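-
-For example, to disable tracing for a single run of a script without changing any code (the script name here is illustrative):
-
-```bash
-export WEAVE_DISABLED=true
-python my_script.py  # runs normally, but no traces are logged
-```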
-
-#### Client initialization
-
-Sometimes, you may want to conditionally enable tracing for a specific initialization based on some condition. In this case, you can initialize the client with the `disabled` flag in init settings.
-
-
-
-```python lines
-import weave
-
-# Initialize the client
-client = weave.init(..., settings={"disabled": True})
-```
-
-
-```plaintext
-This feature is not available for the TypeScript SDK yet.
-```
-
-
-
-#### Context manager
-
-To conditionally disable tracing for a specific block of code, you can use a tracing context manager. Use `with tracing_disabled()` to suppress tracing **only for the function calls executed inside the `with` block**. It is intended to be used in application code to scope which calls should not be logged.
-
-```python lines
-import weave
-from weave.trace.context.call_context import tracing_disabled
-
-client = weave.init('your-team/your-project-name')
-
-@weave.op
-def my_op():
- ...
-
-with tracing_disabled():
- my_op()
-```
-
-Although tracing behavior is fixed when functions are defined, this can be used for runtime control when combined with application logic. For example, you can wrap the context manager in a conditional to dynamically enable or disable tracing based on a runtime value:
-
-```python lines
-if should_trace:
- my_op()
-else:
- with tracing_disabled():
- my_op()
-```
-
-### How do I capture information about a Call?
-
-Typically you would call an op directly:
-
-```python lines
-@weave.op
-def my_op():
- ...
-
-my_op()
-```
-
-However, you can also get access to the call object directly by invoking the `call` method on the op:
-
-```python lines
-@weave.op
-def my_op():
- ...
-
-output, call = my_op.call()
-```
-From here, the `call` object contains all the information about the call, including the inputs, outputs, and other metadata.
diff --git a/weave/guides/tracking/update-call.mdx b/weave/guides/tracking/update-call.mdx
new file mode 100644
index 0000000000..0995db55e5
--- /dev/null
+++ b/weave/guides/tracking/update-call.mdx
@@ -0,0 +1,158 @@
+---
+title: "Update and delete Calls"
+description: "Modify display names, add feedback, and delete Calls in W&B Weave"
+---
+
+
+Most Call properties are immutable after creation. The following mutations are supported:
+* [Set display name](#set-display-name)
+* [Add feedback](#add-feedback)
+* [Delete a Call](#delete-a-call)
+
+You can perform all of these mutations from the UI. To update a Call in the web app:
+1. Navigate to [wandb.ai](https://wandb.ai/) and select your project.
+1. In the Weave project sidebar, click **Traces**.
+1. Find the Call you want to view in the table.
+1. Click on the Call to open its details page.
+1. Click the **Feedback** tab on the Call details page.
+
+Here you can edit the display name of the Call, add feedback, or delete the Call.
+
+
+
+
+### Set display name
+
+
+
+To set the display name of a Call, use the [`Call.set_display_name()`](/weave/reference/python-sdk/trace/weave_client#method-set-display-name) method.
+
+```python lines
+import weave
+
+# Initialize the client
+client = weave.init("your-project-name")
+
+# Get a specific Call by its ID
+call = client.get_call("call-uuid-here")
+
+# Set the display name of the Call
+call.set_display_name("My Custom Display Name")
+```
+
+
+To set the display name of a Call, use [`client.updateCall`](/weave/reference/typescript-sdk/classes/weaveclient#updatecall) to update by call ID directly:
+
+```typescript lines
+import * as weave from 'weave'
+
+// Initialize the client
+const client = await weave.init('your-project-name')
+
+// Update the display name of a Call by its ID
+await client.updateCall('call-uuid-here', 'My Custom Display Name')
+```
+
+
+
+To set the display name of a Call using the Service API, make a request to the [`/call/update`](https://docs.wandb.ai/weave/reference/service-api/calls/call-update) endpoint.
+
+```bash lines
+curl -L 'https://trace.wandb.ai/call/update' \
+-H 'Content-Type: application/json' \
+-H 'Accept: application/json' \
+-d '{
+ "project_id": "string",
+ "call_id": "string",
+ "display_name": "string"
+}'
+```
+
+
+
+You can also [set a Call's display name at execution](/weave/guides/tracking/set-call-display).
+
+### Add feedback
+
+Please see the [Feedback Documentation](/weave/guides/tracking/feedback) for more details.
+
+### Delete a Call
+
+
+
+To delete a Call using the Python API, use the [`Call.delete`](/weave/reference/python-sdk/trace/weave_client#method-delete) method.
+
+```python lines
+import weave
+
+# Initialize the client
+client = weave.init("your-project-name")
+
+# Get a specific Call by its ID
+call = client.get_call("call-uuid-here")
+
+# Delete the Call
+call.delete()
+```
+
+
+
+```plaintext lines
+This feature is not available in the TypeScript SDK yet.
+```
+
+
+To delete a Call using the Service API, make a request to the [`/calls/delete`](https://docs.wandb.ai/weave/reference/service-api/calls/calls-delete) endpoint.
+
+```bash lines
+curl -L 'https://trace.wandb.ai/calls/delete' \
+-H 'Content-Type: application/json' \
+-H 'Accept: application/json' \
+-d '{
+ "project_id": "string",
+ "call_ids": [
+ "string"
+ ]
+}'
+```
+
+
+
+### Delete multiple Calls
+
+
+
+ To delete batches of Calls using the Python API, pass a list of Call IDs to `delete_calls()`.
+
+
+ - A maximum of `1000` Calls can be deleted in a single request.
+ - Deleting a Call also deletes all of its children.
+
+
+ ```python lines
+ import weave
+
+ # Initialize the client
+ client = weave.init("my-project")
+
+ # Get all Calls from client
+ all_calls = client.get_calls()
+
+ # Get list of first 1000 Call objects
+ first_1000_calls = all_calls[:1000]
+
+ # Get list of first 1000 Call IDs
+ first_1000_calls_ids = [c.id for c in first_1000_calls]
+
+ # Delete first 1000 Calls by ID
+ client.delete_calls(call_ids=first_1000_calls_ids)
+ ```
+
+
+
+ ```plaintext lines
+ This feature is not available in the TypeScript SDK yet.
+ ```
+
+
diff --git a/weave/guides/tracking/view-call.mdx b/weave/guides/tracking/view-call.mdx
new file mode 100644
index 0000000000..33cdf83cd1
--- /dev/null
+++ b/weave/guides/tracking/view-call.mdx
@@ -0,0 +1,136 @@
+---
+title: "View and customize trace display"
+description: "View calls in the UI and customize how trace data is displayed"
+---
+
+After you create Calls in W&B Weave, you often want to open a single call to inspect its inputs, outputs, and metadata. This page shows how to view a call in the UI or in the SDK, and how to customize how trace data is rendered in the UI using `weave.Markdown`.
+
+
+
+To view a Call in the UI:
+1. Navigate to [wandb.ai](https://wandb.ai/) and select your project.
+1. In the Weave project sidebar, click **Traces**.
+1. Find the Call you want to view in the table.
+1. Click on the Call to open its details page.
+
+For details on the Trace view, see [Navigate the Weave Trace view](/weave/guides/tracking/trace-tree).
+
+
+
+
+To view a call using the W&B Weave Python SDK, use the [`get_call`](/weave/reference/python-sdk/trace/weave_client#method-get_call) method:
+
+```python lines
+import weave
+
+# Initialize the client
+client = weave.init("your-project-name")
+
+# Get a specific call by its ID
+call = client.get_call("call-uuid-here")
+
+print(call)
+```
+
+
+```typescript lines
+import * as weave from 'weave'
+
+// Initialize the client
+const client = await weave.init('intro-example')
+
+// Get a specific call by its ID
+const call = await client.getCall('call-uuid-here')
+
+console.log(call)
+```
+
+
+
+To view a call using the Service API, make a request to the [`/call/read`](https://docs.wandb.ai/weave/reference/service-api/calls/call-read) endpoint.
+
+```bash lines
+curl -L 'https://trace.wandb.ai/call/read' \
+-H 'Content-Type: application/json' \
+-H 'Accept: application/json' \
+-d '{
+ "project_id": "string",
+  "id": "string"
+}'
+```
+
+
+
+## Customize rendered traces with `weave.Markdown`
+
+Use `weave.Markdown` to customize how your trace information is displayed without losing the original data. This allows you to render your inputs and outputs as readable blocks of formatted content while preserving the underlying data structure.
+
+
+
+Use the `postprocess_inputs` and `postprocess_output` parameters of the `@weave.op` decorator to format your trace data. The following code sample uses postprocessors to render a call in Weave with more readable formatting:
+
+```python lines
+import weave
+
+def postprocess_inputs(inputs: dict) -> dict:
+    query = inputs["query"]
+    search_box = f"""
+**Search Query:**
+
+> {query}
+"""
+    return {"search_box": weave.Markdown(search_box), "query": query}
+
+def postprocess_output(docs) -> weave.Markdown:
+ formatted_docs = f"""
+# {docs[0]["title"]}
+
+{docs[0]["content"]}
+
+[Read more]({docs[0]["url"]})
+
+---
+
+# {docs[1]["title"]}
+
+{docs[1]["content"]}
+
+[Read more]({docs[1]["url"]})
+"""
+ return weave.Markdown(formatted_docs)
+
+@weave.op(
+ postprocess_inputs=postprocess_inputs,
+ postprocess_output=postprocess_output,
+)
+def rag_step(query):
+    # Example documents returned by a mock retrieval step
+ docs = [
+ {
+ "title": "OpenAI",
+ "content": "OpenAI is a company that makes AI models.",
+ "url": "https://www.openai.com",
+ },
+ {
+ "title": "Google",
+ "content": "Google is a company that makes search engines.",
+ "url": "https://www.google.com",
+ },
+ ]
+ return docs
+
+if __name__ == "__main__":
+ weave.init('markdown_renderers')
+ rag_step("Tell me about OpenAI")
+```
+
+
+```plaintext lines
+This feature is not available in the TypeScript SDK yet.
+```
+
+
+In the following screenshot, you can see the unformatted and formatted outputs side by side.
+
+
diff --git a/weave/tutorial-tracing_2.mdx b/weave/tutorial-tracing_2.mdx
index 9d61d12c39..0a8e965363 100644
--- a/weave/tutorial-tracing_2.mdx
+++ b/weave/tutorial-tracing_2.mdx
@@ -1,20 +1,11 @@
---
-title: "Track Application Logic"
-description: "Learn how to track data flow and metadata in your LLM applications"
+title: "Trace nested functions"
+description: "Learn how to track deeply nested Call structures with W&B tracing"
---
-In the [Track LLM inputs & outputs](/weave/quickstart) tutorial, the basics of tracking the inputs and outputs of your LLMs was covered.
+LLM-powered applications can contain multiple LLM calls, along with data processing and validation logic that is important to monitor. Even in the deeply nested call structures common in many applications, Weave tracks parent-child relationships across nested functions, as long as `weave.op()` is added to each function you want to trace.
-In this tutorial you will learn how to:
-
-- **Track data** as it flows through your application
-- **Track metadata** at call time
-
-## Tracking nested function calls
-
-LLM-powered applications can contain multiple LLMs calls and additional data processing and validation logic that is important to monitor. Even deep nested call structures common in many apps, Weave will keep track of the parent-child relationships in nested functions as long as `weave.op()` is added to every function you'd like to track.
-
-Building on the [quickstart example](/weave/quickstart), the following code adds additional logic to count the returned items from the LLM and wrap them all in a higher level function. Additionally, the example uses `weave.op()` to trace every function, its call order, and its parent-child relationship:
+The following code builds on the [quickstart example](/weave/quickstart), adding logic to count the items returned from the LLM and wrapping both steps in a higher-level function. The example uses `weave.op()` to trace every function, its call order, and its parent-child relationships:
@@ -72,9 +63,9 @@ Building on the [quickstart example](/weave/quickstart), the following code adds
```
**Nested functions**
- When you run the above code, you see the the inputs and outputs from the two nested functions (`extract_dinos` and `count_dinos`), as well as the automatically-logged OpenAI trace.
+ When you run the preceding code, the **Traces** page shows the inputs and outputs from the two nested functions (`extract_dinos` and `count_dinos`), as well as the automatically-logged OpenAI trace.
- 
+ 
@@ -129,10 +120,10 @@ Building on the [quickstart example](/weave/quickstart), the following code adds
**Nested functions**
- When you run the above code, you see the the inputs and outputs from the two nested functions (`extractDinos` and `countDinos`), as well as the automatically-logged OpenAI trace.
+  When you run the preceding code, the **Traces** page shows the inputs and outputs from the two nested functions (`extractDinos` and `countDinos`), as well as the automatically logged OpenAI trace.
{/* TODO: Update to TS screenshot */}
- 
+ 
@@ -162,17 +153,19 @@ Continuing our example from above:
```plaintext
- This feature is not available in TypeScript yet. Stay tuned!
+ This feature is not available in TypeScript yet.
```
-We recommend tracking metadata at run time, such as your user IDs and your code's environment status (development, staging, or production).
+We recommend that you track metadata at run time, such as your user IDs and your code's environment status (development, staging, or production).
-To track system settings, such as a system prompt, we recommend using [Weave Models](/weave/guides/core-types/models)
+To track system settings, such as a system prompt, we recommend using [Weave Models](/weave/guides/core-types/models).
+For more information on using attributes, see [Define and log attributes](/weave/guides/tools/attributes).
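
As a sketch of run-time metadata tracking (assuming the `weave.attributes` context manager in the Python SDK), metadata can be attached to every Call created inside a `with` block. The metadata dict is ordinary data you control; the Weave-specific calls are commented out here because they require an initialized client:

```python lines
# Run-time metadata to attach to Calls; the keys shown are examples.
run_metadata = {"user_id": "user-123", "env": "production"}

# Assumes an initialized client and a traced op, for example:
# import weave
# client = weave.init("my-project")
#
# @weave.op()
# def greet(name: str) -> str:
#     return f"Hello, {name}!"
#
# Every Call created inside this block carries run_metadata as attributes:
# with weave.attributes(run_metadata):
#     greet("dinosaur")
```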
+
## What's next?
- Follow the [App Versioning tutorial](/weave/tutorial-weave_models) to capture, version, and organize ad-hoc prompt, model, and application changes.