examples/otel-traces/README.md
# OTEL Traces Example

This example demonstrates how to push OpenTelemetry traces from your AI application to Highflame Workbench for analysis and monitoring.

## Overview

The `generate_traces.py` script shows how to:

- Configure OpenTelemetry to send traces to Highflame's OTEL endpoint
- Create spans for LLM operations (OpenAI in this example)
- Add custom attributes to track model, prompts, responses, and usage metrics
- View and analyze traces in the Highflame Workbench UI

## Prerequisites

- Python 3.9 or higher
- OpenAI API key (for this example)
- Highflame authorization credentials

## Installation

1. Install the required dependencies:

```bash
pip install -r requirements.txt
```

2. Set up your environment variables (see the Configuration section below)

## Configuration

### Required Environment Variables

**OTEL_EXPORTER_OTLP_HEADERS** (Required)

- Authorization header for Highflame Workbench
- Example:
```bash
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=<your-token>"
```
**OPENAI_API_KEY** (Required for this example)

- Your OpenAI API key
- Example:
```bash
export OPENAI_API_KEY="sk-your-openai-api-key"
```

### Optional Environment Variables

**OTLP_ENDPOINT**

- OTEL endpoint URL
- Example:
```bash
export OTLP_ENDPOINT="https://cerberus-http.api-dev.highflame.dev/v1/traces"
```
**OTEL_SERVICE_NAME** (Optional)

- Name of your service for identification in Workbench
- Default: `"trace-generator"`
- Example:
```bash
export OTEL_SERVICE_NAME="my-ai-application"
```

## Usage

1. Set your environment variables:

```bash
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <your-credentials>"
export OPENAI_API_KEY="sk-your-openai-api-key"
```

2. Run the script:

```bash
python generate_traces.py
```

3. Expected output:

```
2
```

The script will:

- Make an OpenAI API call (calculating "1 + 1")
- Create an OpenTelemetry span for the operation
- Add attributes including model, prompt, response, and token usage
- Send the trace to Highflame Workbench
- Print the answer

## Viewing Traces in Workbench

After running the script:

1. **Access Workbench UI**: Navigate to your Highflame Workbench dashboard
2. **View Traces**: Look for traces with:
- Service name: `trace-generator` (or your custom `OTEL_SERVICE_NAME`)
- Span name: `openai.chat.completions.create`
3. **Analyze Data**: You can view:
- Trace timeline and duration
- Model information (`llm.model`)
- Prompt and response data (`input`, `output`, `prompt.user_question`)
- Token usage metrics (`llm.usage.prompt_tokens`, `llm.usage.completion_tokens`, `llm.usage.total_tokens`)
- Response ID and preview

## Trace Attributes

The script adds the following attributes to each span:

| Attribute | Description | Example |
| ----------------------------- | --------------------------- | ---------------- |
| `llm.model` | LLM model used | `"gpt-4o"` |
| `prompt.user_question` | User's question/prompt | `"1 + 1 = "` |
| `response.id` | Unique response ID | `"chatcmpl-..."` |
| `response.preview` | Response preview | `"2"` |
| `input` | Full input text | `"1 + 1 = "` |
| `output` | Full output text | `"2"` |
| `llm.usage.prompt_tokens` | Number of prompt tokens | `10` |
| `llm.usage.completion_tokens` | Number of completion tokens | `1` |
| `llm.usage.total_tokens` | Total tokens used | `11` |

## Customization

### Using a Different LLM Provider

You can adapt this script for other LLM providers:

1. Replace the OpenAI client with your provider's SDK
2. Update the span attributes to match your provider's response format
3. Keep the OTEL configuration the same
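The three steps above can be sketched with a stand-in SDK. `FakeProviderClient` and its response shape are invented for illustration (no real provider SDK is assumed); only the client call and the attribute mapping change, while the span name and attribute conventions stay the same as in `generate_traces.py`:

```python
# Hypothetical sketch of adapting the tracing pattern to another provider.
# FakeProviderClient is NOT a real library -- replace it with your SDK.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class FakeResponse:
    id: str = "resp-123"
    text: str = "2"
    usage: Dict[str, int] = field(default_factory=lambda: {
        "prompt_tokens": 10, "completion_tokens": 1, "total_tokens": 11,
    })


class FakeProviderClient:
    def complete(self, model: str, prompt: str) -> FakeResponse:
        return FakeResponse()


def trace_provider_call(tracer, client, prompt: str) -> str:
    # Same span/attribute conventions as the OpenAI example; only the
    # client call and the response field names differ per provider.
    with tracer.start_as_current_span("provider.completions.create") as span:
        response = client.complete(model="some-model", prompt=prompt)
        span.set_attribute("llm.model", "some-model")
        span.set_attribute("response.id", response.id)
        span.set_attribute("input", prompt)
        span.set_attribute("output", response.text)
        for key, value in response.usage.items():
            span.set_attribute(f"llm.usage.{key}", value)
        return response.text
```

Because the OTEL configuration lives entirely in `init_tracer()`, swapping the provider never touches the exporter setup.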

### Custom Service Name

Set a custom service name to identify your application:

```bash
export OTEL_SERVICE_NAME="my-custom-service"
python generate_traces.py
```

### Custom Endpoint

Use a different OTEL endpoint:

```bash
export OTLP_ENDPOINT="https://your-custom-endpoint.com/v1/traces"
python generate_traces.py
```

## Troubleshooting

### Connection Errors

If you get connection errors when sending traces:

1. **Verify endpoint URL**: Check that `OTLP_ENDPOINT` is correct
2. **Check authorization header**: Ensure `OTEL_EXPORTER_OTLP_HEADERS` is properly formatted
3. **Network connectivity**: Verify you can reach the endpoint
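A quick way to catch a malformed `OTEL_EXPORTER_OTLP_HEADERS` value is to parse it the way OTLP exporters broadly do: comma-separated `key=value` pairs. This is a rough sanity check, not the exporter's exact implementation:

```python
# Sanity-check OTEL_EXPORTER_OTLP_HEADERS formatting. The value should be
# comma-separated key=value pairs, e.g. "Authorization=Basic <credentials>".
# Approximates (does not replicate) the exporter's own parsing.
def parse_otlp_headers(raw: str) -> dict:
    headers = {}
    for pair in raw.split(","):
        if not pair.strip():
            continue
        if "=" not in pair:
            raise ValueError(f"Malformed header entry (missing '='): {pair!r}")
        key, _, value = pair.partition("=")
        headers[key.strip()] = value.strip()
    return headers
```

For example, `parse_otlp_headers("Authorization=Basic abc123")` returns `{"Authorization": "Basic abc123"}`, while a value with no `=` raises a `ValueError`.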

### OpenAI API Errors

If you encounter OpenAI API errors:

1. **Check API key**: Verify `OPENAI_API_KEY` is set and valid
2. **Check quota**: Ensure your OpenAI account has available quota
3. **Verify model**: Ensure the model name (`gpt-4o`) is correct and available

### Traces Not Appearing in Workbench

If traces don't appear in Workbench:

1. **Check authorization**: Verify your `OTEL_EXPORTER_OTLP_HEADERS` credentials are correct
2. **Wait a few seconds**: Traces may take a moment to appear
3. **Check service name**: Look for traces with your `OTEL_SERVICE_NAME` (default: `trace-generator`)
4. **Verify endpoint**: Ensure you're using the correct endpoint for your environment

## Related Documentation

- [Highflame Workbench Documentation](https://docs.highflame.ai/)
- [OpenTelemetry Python Documentation](https://opentelemetry.io/docs/instrumentation/python/)
- [OpenTelemetry OTLP Protocol](https://opentelemetry.io/docs/specs/otlp/)
examples/otel-traces/generate_traces.py
import os
from typing import Any, Dict

from openai import OpenAI
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def init_tracer() -> trace.Tracer:
    """Configure an OTLP exporter that talks directly to the collector."""
    resource = Resource.create(
        {
            "service.name": os.getenv("OTEL_SERVICE_NAME", "trace-generator"),
        }
    )
    provider = TracerProvider(resource=resource)
    # Falls back to the exporter's default endpoint when OTLP_ENDPOINT is unset.
    exporter = OTLPSpanExporter(endpoint=os.getenv("OTLP_ENDPOINT"))
    provider.add_span_processor(BatchSpanProcessor(exporter))
    trace.set_tracer_provider(provider)
    return trace.get_tracer(__name__)


tracer = init_tracer()
api_key = os.getenv("OPENAI_API_KEY")
if not api_key:
    raise ValueError("The OPENAI_API_KEY environment variable is not set.")
client = OpenAI(api_key=api_key)



def record_completion_attributes(span: trace.Span, usage: Any) -> None:
    if not usage:
        return

    for key in ("prompt_tokens", "completion_tokens", "total_tokens"):
        value: Any = getattr(usage, key, None)
        if value is None and isinstance(usage, dict):
            value = usage.get(key)
        if value is not None:
            span.set_attribute(f"llm.usage.{key}", value)


def generate_trace() -> None:
    with tracer.start_as_current_span("openai.chat.completions.create") as span:
        completion = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {
                    "role": "system",
                    "content": "You are a very accurate calculator. Output only the result.",
                },
                {"role": "user", "content": "1 + 1 = "},
            ],
            metadata={"someMetadataKey": "someValue"},
        )

        span.set_attribute("llm.model", "gpt-4o")
        span.set_attribute("prompt.user_question", "1 + 1 = ")
        span.set_attribute("response.id", completion.id)
        # record_completion_attributes already handles a missing/None usage.
        record_completion_attributes(span, completion.usage)

        answer = completion.choices[0].message.content
        span.set_attribute("response.preview", answer)
        span.set_attribute("input", "1 + 1 = ")
        span.set_attribute("output", answer)
        print(answer)


if __name__ == "__main__":
    generate_trace()
examples/otel-traces/requirements.txt
# OpenTelemetry dependencies for sending traces to Highflame Workbench
opentelemetry-api>=1.32.1
opentelemetry-sdk>=1.32.1
opentelemetry-exporter-otlp-proto-http>=1.32.1

# OpenAI SDK for this example
# Note: You can replace this with your preferred LLM provider's SDK
openai>=1.0.0