Merged
45 changes: 36 additions & 9 deletions docs.json
Original file line number Diff line number Diff line change
@@ -706,18 +706,46 @@
"pages": [
"weave/concepts/what-is-weave",
{
"group": "Track your application",
"group": "Trace your application",
"pages": [
"weave/guides/tracking/tracing",
"weave/tutorial-tracing_2",
{
"group": "Tracing basics",
"pages": [
"weave/guides/tracking/tracing",
"weave/guides/tracking/create-call",
"weave/guides/tracking/trace-tree",
"weave/guides/tracking/querying-calls"
]
},
{
"group": "Advanced tracing",
"pages": [
"weave/guides/tracking/trace-generator-func",
"weave/tutorial-tracing_2",
"weave/guides/tracking/threads",
"weave/guides/tracking/ops",
"weave/guides/tools/attributes",
"weave/guides/core-types/media",
"weave/guides/tracking/view-call"
]
},
{
"group": "Work with Calls",
"pages": [
"weave/guides/tracking/update-call",
"weave/guides/tracking/call-schema-reference",
"weave/guides/tracking/get-call-object",
"weave/guides/tracking/set-call-display"
]
},
"weave/guides/tracking/trace-disable",
"weave/guides/tracking/costs",
"weave/guides/core-types/media",

"weave/guides/tools/saved-views",
"weave/guides/tools/comparison",
"weave/guides/tracking/trace-tree",
"weave/guides/tracking/threads",
"weave/guides/tracking/trace-plots",
"weave/guides/tools/attributes",
"weave/guides/tracking/trace-to-run",
"weave/guides/tools/weave-in-workspaces"
]
},
@@ -731,7 +759,6 @@
"weave/guides/evaluation/weave_local_scorers",
"weave/guides/evaluation/evaluation_logger",
"weave/guides/core-types/leaderboards",
"weave/guides/tools/attributes",
"weave/guides/tools/column-mapping",
"weave/guides/evaluation/dynamic_leaderboards"
]
@@ -750,8 +777,7 @@
"weave/tutorial-weave_models",
"weave/guides/core-types/models",
"weave/guides/core-types/prompts",
"weave/guides/tracking/objects",
"weave/guides/tracking/ops"
"weave/guides/tracking/objects"
]
},
{
@@ -768,6 +794,7 @@
"group": "Integrate with your LLM provider and frameworks",
"pages": [
"weave/guides/integrations",
"weave/guides/integrations/autopatching",
{
"group": "LLM Providers",
"pages": [
Binary file removed images/export_modal.png
Binary file not shown.
Binary file removed images/screenshots/basic_call.png
Binary file not shown.
Binary file removed images/screenshots/calls_filter.png
Binary file not shown.
Binary file removed images/screenshots/calls_macro.png
Binary file not shown.
22 changes: 1 addition & 21 deletions ja/weave/guides/tracking/tracing.mdx
@@ -3,18 +3,7 @@ title: トレースの基本
description: Weave のトレース機能を使用して、 AI アプリケーション の実行を追跡・モニタリングします
---

<Frame>
![Weave Calls Screenshot](/images/screenshots/calls_macro.png)
</Frame>

<Frame>
![Weave Calls Screenshot](/images/screenshots/basic_call.png)
</Frame>

<Frame>
![Weave Calls Screenshot](/images/screenshots/calls_filter.png)
</Frame>

---

Call は Weave における基本的な構成要素です。これらは関数の単一の実行を表し、以下を含みます。
- Inputs (引数)
@@ -535,7 +524,6 @@ API を直接使用して手動で Call を作成することもできます。

詳細ページには、Call の入力、出力、実行時間、および追加のメタデータが表示されます。

![ウェブアプリでの Call 表示](/images/screenshots/basic_call.png)
</Tab>
<Tab title="Python">
Weave Python SDK を使用して Call を表示するには、[`get_call`](/weave/reference/python-sdk/trace/weave_client#method-get_call) メソッドを使用できます。
@@ -805,19 +793,11 @@

## Querying and exporting Calls

<Frame>
![多くの Call のスクリーンショット](/images/screenshots/calls_filter.png)
</Frame>

プロジェクトの `/calls` ページ ("Traces" タブ) には、プロジェクト内のすべての Call のテーブルビューが表示されます。そこでは以下のことが可能です。
* ソート
* フィルタリング
* エクスポート

<Frame>
![Calls テーブルビュー](/images/export_modal.png)
</Frame>

エクスポートモーダル(上記)では、データをさまざまな形式でエクスポートできるほか、選択した Call に対応する Python および CURL のコードスニペットも表示されます。
UI でビューを作成してから、生成されたコードスニペットを通じてエクスポート API について学ぶのが最も簡単な方法です。

22 changes: 1 addition & 21 deletions ko/weave/guides/tracking/tracing.mdx
@@ -3,18 +3,7 @@ title: Tracing 기초
description: Weave tracing을 사용하여 AI 애플리케이션의 실행을 추적하고 모니터링하세요.
---

<Frame>
![Weave Calls Screenshot](/images/screenshots/calls_macro.png)
</Frame>

<Frame>
![Weave Calls Screenshot](/images/screenshots/basic_call.png)
</Frame>

<Frame>
![Weave Calls Screenshot](/images/screenshots/calls_filter.png)
</Frame>

---

Calls는 Weave 의 핵심 빌드 블록입니다. 이는 다음을 포함한 단일 함수 실행을 나타냅니다:
- Inputs (인수)
@@ -534,7 +523,6 @@ API를 직접 사용하여 수동으로 Calls를 생성할 수도 있습니다.

상세 페이지에는 호출의 입력, 출력, 런타임 및 추가 메타데이터가 표시됩니다.

![Web App에서 Call 보기](/images/screenshots/basic_call.png)
</Tab>
<Tab title="Python">
Weave Python SDK를 사용하여 호출을 보려면 [`get_call`](/weave/reference/python-sdk/trace/weave_client#method-get_call) 메소드를 사용할 수 있습니다:
@@ -804,19 +792,11 @@

## Querying and exporting Calls

<Frame>
![많은 호출들의 스크린샷](/images/screenshots/calls_filter.png)
</Frame>

프로젝트의 `/calls` 페이지("Traces" 탭)에는 프로젝트의 모든 Calls에 대한 테이블 뷰가 포함되어 있습니다. 여기에서 다음을 수행할 수 있습니다:
* 정렬
* 필터링
* 내보내기

<Frame>
![Calls Table View](/images/export_modal.png)
</Frame>

내보내기 모달(위 그림 참조)을 사용하면 다양한 형식으로 데이터를 내보낼 수 있을 뿐만 아니라, 선택한 호출에 해당하는 Python 및 CURL 코드를 보여줍니다!
가장 쉽게 시작하는 방법은 UI에서 뷰를 구성한 다음, 생성된 코드 조각을 통해 내보내기 API에 대해 자세히 알아보는 것입니다.

41 changes: 3 additions & 38 deletions weave/guides/integrations.mdx
@@ -3,50 +3,15 @@ title: Integrations overview
description: "Seamlessly trace and monitor LLM calls across 30+ providers and frameworks with Weave's automatic patching, supporting OpenAI, Anthropic, Google AI, and major orchestration tools without code changes."
---

# Integrations

Weave provides **automatic implicit patching** for all supported integrations by default:

**Implicit Patching (Automatic):** Libraries are automatically patched regardless of when they are imported.

```python lines
# Option 1: Import before weave.init()
import openai
import weave
weave.init('my-project') # OpenAI is automatically patched!

# Option 2: Import after weave.init()
import weave
weave.init('my-project')
import anthropic # Automatically patched via import hook!
```

**Disabling Implicit Patching:** You can disable automatic patching if you prefer explicit control.

```python lines
import weave

# Option 1: Via settings parameter
weave.init('my-project', settings={'implicitly_patch_integrations': False})

# Option 2: Via environment variable
# Set WEAVE_IMPLICITLY_PATCH_INTEGRATIONS=false before running your script

# With implicit patching disabled, you must explicitly patch integrations
import openai
weave.patch_openai() # Now required for OpenAI tracing
```
W&B Weave provides logging integrations for popular LLM providers and orchestration frameworks. These integrations allow you to seamlessly trace calls made through various libraries, enhancing your ability to monitor and analyze your AI applications.

**Explicit Patching (Manual):** You can explicitly patch integrations for fine-grained control.
If you use LLM provider libraries (such as OpenAI, Anthropic, Cohere, or Mistral) in your application, you want those API calls to show up in W&B Weave as traced Calls: inputs, outputs, latency, token usage, and cost. Without help, you would have to wrap every `client.chat.completions.create()` (or equivalent) in `@weave.op` or add manual instrumentation, which is tedious and easy to get wrong.

```python lines
import weave
weave.init('my-project')
weave.integrations.patch_openai() # Enable OpenAI tracing
weave.integrations.patch_anthropic() # Enable Anthropic tracing
```
Weave automatically intercepts (patches) supported LLM client libraries. Your application code stays unchanged: you use the provider SDK as usual, and each request is recorded as a Weave Call. You get full tracing with minimal setup.


## LLM Providers

110 changes: 110 additions & 0 deletions weave/guides/integrations/autopatching.mdx
@@ -0,0 +1,110 @@
---
title: "Control automatic LLM call tracking"
description: "Control how W&B Weave automatically records calls to OpenAI, Anthropic, and other LLM libraries"
---



If you use LLM provider libraries (such as OpenAI, Anthropic, Cohere, or Mistral) in your application, autopatching lets Weave trace all of your LLM calls for you. When you call `weave.init()`, Weave automatically intercepts (patches) supported LLM client libraries. Your application code stays unchanged: you use the provider SDK as usual, and each request is recorded as a Weave Call. You get full tracing with minimal setup.

This page describes when and how to change that behavior: turning automatic tracking off, limiting it to specific providers, or post-processing inputs and outputs (for example, to redact PII).

## Default behavior

By default, Weave automatically patches and tracks calls to common LLM libraries such as `openai` and `anthropic`. Call `weave.init(...)` at the start of your program and use those libraries normally. Their calls will appear in your project’s Traces.

## Configure autopatching

<Tabs>
<Tab title="Python">
<Warning>
The `autopatch_settings` argument is deprecated. Use `implicitly_patch_integrations=False` to disable implicit patching, or call specific patch functions like `patch_openai(settings={...})` to configure settings per integration.
</Warning>

Weave provides **automatic implicit patching** for all supported integrations by default:

**Implicit Patching (Automatic):** Libraries are automatically patched regardless of when they are imported.

```python lines
# Option 1: Import before weave.init()
import openai
import weave
weave.init('your-team-name/your-project-name') # OpenAI is automatically patched!

# Option 2: Import after weave.init()
import weave
weave.init('your-team-name/your-project-name')
import anthropic # Automatically patched via import hook!
```
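Under the hood, "patched via import hook" means a finder is registered on Python's import machinery so a library can be wrapped the moment it finishes importing. The following is a minimal, illustrative sketch of that general mechanism using the standard `importlib` machinery, with a stand-in `patch_library` function; it is not Weave's actual implementation:

```python
import importlib.abc
import importlib.util
import sys

PATCHED = []  # names of modules the hook has "patched"

def patch_library(module):
    # Stand-in for real patching (e.g. wrapping client methods for tracing).
    PATCHED.append(module.__name__)

class PatchingLoader(importlib.abc.Loader):
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def create_module(self, spec):
        return self.wrapped.create_module(spec)

    def exec_module(self, module):
        self.wrapped.exec_module(module)  # run the real module code
        patch_library(module)             # then patch it immediately

class PatchingFinder(importlib.abc.MetaPathFinder):
    def __init__(self, targets):
        self.targets = set(targets)

    def find_spec(self, fullname, path, target=None):
        if fullname not in self.targets:
            return None
        # Drop the target so the recursive find_spec below skips this finder.
        self.targets.discard(fullname)
        spec = importlib.util.find_spec(fullname)
        if spec is None or spec.loader is None:
            return None
        spec.loader = PatchingLoader(spec.loader)
        return spec

sys.modules.pop("colorsys", None)              # ensure a fresh import
sys.meta_path.insert(0, PatchingFinder({"colorsys"}))
import colorsys  # triggers the hook

print(PATCHED)  # → ['colorsys']
```

This is why import order does not matter with implicit patching: the finder sees the import whenever it happens.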

**Disabling Implicit Patching:** You can disable automatic patching if you prefer explicit control.

```python lines
import weave

# Option 1: Via settings parameter
weave.init('your-team-name/your-project-name', settings={'implicitly_patch_integrations': False})

# Option 2: Via environment variable
# Set WEAVE_IMPLICITLY_PATCH_INTEGRATIONS=false before running your script

# With implicit patching disabled, you must explicitly patch integrations
import openai
weave.patch_openai() # Now required for OpenAI tracing
```

**Explicit Patching (Manual):** You can explicitly patch integrations for fine-grained control.

```python lines
import weave
weave.init('your-team-name/your-project-name')
weave.integrations.patch_openai() # Enable OpenAI tracing
weave.integrations.patch_anthropic() # Enable Anthropic tracing
```


### Post-process inputs and outputs

You can customize how inputs and outputs are recorded (for example, to redact PII or secrets) by passing settings to the patch function:

```python lines
import weave.integrations

def redact_inputs(inputs: dict) -> dict:
if "email" in inputs:
inputs["email"] = "[REDACTED]"
return inputs

weave.init(...)
weave.integrations.patch_openai(
settings={
"op_settings": {"postprocess_inputs": redact_inputs}
}
)
```
For more on handling sensitive data, see [How to use Weave with PII data](/weave/cookbooks/pii).
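Output post-processing follows the same pattern. The sketch below assumes an `op_settings` key named `postprocess_output` that mirrors `postprocess_inputs` (verify the exact key against your Weave version); the scrubbing function itself is plain Python, so you can unit-test it before wiring it in:

```python
import re

# Hypothetical scrubber: masks OpenAI-style secret keys in string outputs.
SECRET_KEY = re.compile(r"sk-[A-Za-z0-9]+")

def redact_output(output):
    if isinstance(output, str):
        return SECRET_KEY.sub("[REDACTED]", output)
    return output  # leave non-string outputs untouched

# Wired in the same way as the inputs example above:
# weave.integrations.patch_openai(
#     settings={"op_settings": {"postprocess_output": redact_output}}
# )

print(redact_output("leaked key: sk-Abc123xyz"))
# → leaked key: [REDACTED]
```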
</Tab>
<Tab title="TypeScript">

The TypeScript SDK only supports autopatching for OpenAI and Anthropic. OpenAI is automatically patched when you import Weave and does not require any additional configuration.

Additionally, the TypeScript SDK does not support:
- Configuring or disabling autopatching.
- Input/output post-processing.

For edge cases where automatic patching does not work (ESM, bundlers like Next.js), use explicit wrapping:

```typescript
import OpenAI from 'openai'
import * as weave from 'weave'
import { wrapOpenAI } from 'weave'

const client = wrapOpenAI(new OpenAI())
await weave.init('your-team-name/your-project-name')
```

For more details on ESM setup and troubleshooting, see the [TypeScript SDK Integration Guide](/weave/guides/integrations/js).
</Tab>
</Tabs>


48 changes: 48 additions & 0 deletions weave/guides/tracking/call-schema-reference.mdx
@@ -0,0 +1,48 @@
---
title: "Call schema reference"
description: "Reference for the Call object structure and properties"
---

This page provides a reference for the Call object schema in W&B Weave. For information on querying calls, see [Query and export calls](/weave/guides/tracking/querying-calls).

## Call properties

The table below outlines the key properties of a Call in Weave. For the complete implementation, see:
- [class: CallSchema](/weave/reference/python-sdk/trace_server/trace_server_interface#class-callschema) in the Python SDK.
- [Interface: CallSchema](/weave/reference/typescript-sdk/interfaces/callschema) in the TypeScript SDK.

| Property | Type | Description |
|----------|------|-------------|
| `id` | string (uuid) | Unique identifier for the call |
| `project_id` | string (optional) | Associated project identifier |
| `op_name` | string | Name of the operation (can be a reference) |
| `display_name` | string (optional) | User-friendly name for the call |
| `trace_id` | string (uuid) | Identifier for the trace this call belongs to |
| `parent_id` | string (uuid) | Identifier of the parent call |
| `started_at` | datetime | Timestamp when the call started |
| `attributes` | Dict[str, Any] | User-defined metadata about the call *(read-only during execution)* |
| `inputs` | Dict[str, Any] | Input parameters for the call |
| `ended_at` | datetime (optional) | Timestamp when the call ended |
| `exception` | string (optional) | Error message if the call failed |
| `output` | Any (optional) | Result of the call |
| `summary` | Optional[SummaryMap] | Post-execution summary information. You can modify this during execution to record custom metrics. |
| `wb_user_id` | Optional[str] | Associated W&B user ID |
| `wb_run_id` | Optional[str] | Associated W&B run ID |
| `deleted_at` | datetime (optional) | Timestamp of call deletion, if applicable |

## Property details

`CallSchema` properties play an important role in tracking and managing function calls:
- The `id`, `trace_id`, and `parent_id` properties help organize and relate calls within the system.
- Timing information (`started_at`, `ended_at`) supports performance analysis.
- The `attributes` and `inputs` properties provide context for the call. Attributes are frozen once the call starts, so set them before invocation with `weave.attributes`. `output` and `summary` capture the results.
- You can store metrics or other post-call values in the `summary` property by modifying `call.summary` during execution. Any values you add are merged with Weave's computed summary data when the Call finishes.
- Weave's computed summary data:
- `costs`: The total cost of the call based on LLM model usage data and token pricing data. For more information on cost calculation, see [Track costs](/weave/guides/tracking/costs).
- `latency_ms`: The duration, in milliseconds, elapsed between `started_at` and `ended_at`. `null` if `status` is `RUNNING`.
- `status`: The execution status: `SUCCESS`, `ERROR`, `RUNNING`, `DESCENDANT_ERROR` (meaning the call itself succeeded but a descendant call errored). {/* [empty ref](/weave/reference/python-sdk/trace_server/trace_server_interface#class-tracestatus)*/}

- Integration with W&B is facilitated through `wb_user_id` and `wb_run_id`.

This comprehensive set of properties enables detailed tracking and analysis of function calls throughout your project.
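To make the computed-summary semantics concrete, here is a small, self-contained sketch of how `latency_ms` and `status` can be derived from a call record's timing and error fields. This is illustrative only, not Weave's implementation, and it omits `DESCENDANT_ERROR`, which requires inspecting child calls:

```python
from datetime import datetime, timezone

def computed_summary(call: dict) -> dict:
    """Illustrative only: derives the documented `latency_ms` and
    `status` fields from a call record."""
    started = call["started_at"]
    ended = call.get("ended_at")

    if ended is None:
        # Call has not finished yet.
        return {"status": "RUNNING", "latency_ms": None}

    latency_ms = (ended - started).total_seconds() * 1000
    status = "ERROR" if call.get("exception") is not None else "SUCCESS"
    return {"status": status, "latency_ms": latency_ms}

start = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
end = datetime(2024, 1, 1, 12, 0, 2, tzinfo=timezone.utc)
print(computed_summary({"started_at": start, "ended_at": end}))
# → {'status': 'SUCCESS', 'latency_ms': 2000.0}
```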
