3 changes: 2 additions & 1 deletion docs.json
@@ -66,7 +66,8 @@
"tracing/structure/tags",
"tracing/structure/image",
"tracing/structure/continuing-traces",
"tracing/structure/providers"
"tracing/structure/providers",
"tracing/structure/flushing"
]
},
"tracing/realtime",
20 changes: 18 additions & 2 deletions tracing/introduction.mdx
@@ -35,9 +35,25 @@ Each span has:
- **Name**: The name of the span (e.g., `gemini.generate_content`)
- **Input**: The input of the function representing a span. In case of an LLM call, this is the prompt.
- **Output**: The output of the function representing a span. In case of an LLM call, this is the response from the model.
- **Duration**: How long the operation took to execute
- **Duration**: How long the operation took to execute.
- **Path**: Hierarchical path of the span in the trace (e.g., `get_user.validate.api_call`).
- **Attributes**: Input parameters, return values, and other metadata
- **Attributes**: Input parameters, return values, and other metadata.
- **Parent span ID**: A reference to the parent span. If the span is the root span of the trace, the parent span ID is not specified.

#### Span lifecycle

Spans are created, optionally activated, and must be ended.

**Span creation**, also known as **starting a span**, is the process of creating a new span object with a start timestamp.
**Span activation** means setting the span as the current active span, so that spans created afterwards become its children.
**Span ending** is the process of setting the end timestamp on the span and closing it.

<Tip>
A span that is not ended will never be visible in the UI.
</Tip>

Note that it is not required to activate a span. For example, if you know a small operation will not have child spans, you can create it without activating it.
Ending such a span is still required. Spans without children are sometimes called **leaf spans**. A common example of a leaf span is a simple LLM call.

### Trace

3 changes: 2 additions & 1 deletion tracing/langgraph-visualization.mdx
@@ -103,5 +103,6 @@ async def create_and_run_graph():
```

<Frame>
<img src="/images/traces/langgraph-visualization.png" />
<img src="/images/traces/langgraph-visualization.png"
alt="Example trace with LangGraph visualization and arrow indicating the location of the button to enable trace visualization" />
</Frame>
37 changes: 19 additions & 18 deletions tracing/structure/continuing-traces.mdx
@@ -46,7 +46,7 @@ const generateResponse = async (span: Span, processedData: any) => {
// Main orchestrator
const handleRequest = async (userInput: string) => {
const rootSpan = Laminar.startSpan('handleUserRequest');

try {
const processedData = await processData(rootSpan, userInput);
const response = await generateResponse(rootSpan, processedData);
@@ -59,7 +59,8 @@ const handleRequest = async (userInput: string) => {
```

<Warning>
Remember to call `span.end()` to complete the trace. You can also pass `true` as the third argument (`endOnExit`) to the last `Laminar.withSpan()` call to automatically end the span.
Remember to call `span.end()` to complete the trace.
You can also pass `true` as the third argument (`endOnExit`) to the last `Laminar.withSpan()` call to automatically end the span.
</Warning>

</Tab>
@@ -86,7 +87,7 @@ def generate_response(span: Span, processed_data: dict):

def handle_request(user_input: str):
root_span = Laminar.start_span(name="handle_user_request")

try:
processed_data = process_data(root_span, user_input)
response = generate_response(root_span, processed_data)
@@ -122,12 +123,12 @@ let spanContext: string | null = null;

const firstHandler = async () => {
const span = Laminar.startSpan({ name: 'firstHandler' });

// Serialize the span context
spanContext = Laminar.serializeLaminarSpanContext(span);

// Store spanContext in database, send via HTTP header, etc.

span.end();
};

@@ -137,10 +138,10 @@ const secondHandler = async () => {
name: 'secondHandler',
parentSpanContext: spanContext ?? undefined,
});

// This span will be a child of the first handler's span
// Your code here...

span.end();
};
```
@@ -158,12 +159,12 @@ const response = await fetch('/api/service-b', {
// Service B - deserialize and continue
app.post('/api/service-b', (req, res) => {
const spanContext = req.headers['x-laminar-span-context'];

const span = Laminar.startSpan({
name: 'serviceBHandler',
parentSpanContext: spanContext
parentSpanContext: spanContext,
});

// Handle request...
span.end();
});
@@ -180,24 +181,24 @@ def first_handler(request: str) -> str:
with Laminar.start_as_current_span(name="first_handler") as span:
# Serialize the span context
span_context = Laminar.serialize_span_context(span)

# Store in database, pass via message queue, etc.
save_context_to_db(span_context)

return "Processing started"

# Deserialize and continue trace (e.g., in second service)
def second_handler(context_id: str) -> str:
# Retrieve serialized context
serialized_context = get_context_from_db(context_id)

# Deserialize span context
parent_span_context = (
Laminar.deserialize_span_context(serialized_context)
if serialized_context
else None
)

with Laminar.start_as_current_span(
name="second_handler",
parent_span_context=parent_span_context
@@ -218,7 +219,7 @@ app = Flask(__name__)
def call_service_b():
with Laminar.start_as_current_span(name="service_a_call"):
span_context = Laminar.serialize_span_context()

response = requests.post('http://service-b/api/process',
headers={'X-Laminar-Span-Context': span_context},
json={'data': 'some data'}
@@ -229,13 +230,13 @@ def call_service_b():
@app.route('/api/process', methods=['POST'])
def process_request():
span_context = request.headers.get('X-Laminar-Span-Context')

parent_span_context = (
Laminar.deserialize_span_context(span_context)
if span_context
else None
)

with Laminar.start_as_current_span(
name="service_b_handler",
parent_span_context=parent_span_context
103 changes: 103 additions & 0 deletions tracing/structure/flushing.mdx
@@ -0,0 +1,103 @@
---
sidebarTitle: Flushing Spans and Shutting Down
title: Flushing Spans and Shutting Down Laminar Tracing
description: Learn how to flush spans to the Laminar backend and shut down Laminar tracing
---

## Background

The Laminar span processor batches spans in memory before sending them to the backend.
This is done to improve performance and reduce the number of network requests.

However, there are cases when you need to flush spans immediately.
Most often, this happens at the end of the program when you want to ensure that all spans are sent to the backend before the program exits.
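The batching behavior can be sketched with a toy processor. This is an illustration of the concept only, not the Laminar SDK; the `BatchSpanProcessor` class below is hypothetical.

```python
class BatchSpanProcessor:
    """Toy illustration: ended spans are buffered in memory and exported
    in batches to reduce the number of network requests."""

    def __init__(self, export, batch_size=3):
        self.export = export          # callback that sends one batch
        self.batch_size = batch_size
        self.buffer = []

    def on_end(self, span):
        # Buffer the span; export only once a full batch accumulates.
        self.buffer.append(span)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # Send whatever is buffered, even a partial batch.
        if self.buffer:
            self.export(list(self.buffer))
            self.buffer.clear()


exported = []
processor = BatchSpanProcessor(exported.append, batch_size=3)
processor.on_end("span_a")
processor.on_end("span_b")
assert exported == []              # batch not full: nothing sent yet
processor.flush()                  # explicit flush sends the remainder
assert exported == [["span_a", "span_b"]]
```

This is why an explicit flush matters at program exit: spans sitting in a partial batch are never sent on their own.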

## Flushing spans

You can flush spans manually using the `Laminar.flush()` method.

```javascript
import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize();

// your code here

// at the end of your script, when you are done tracing
await Laminar.flush(); // NOTE: don't forget to await
```

```python
from lmnr import Laminar

Laminar.initialize()

# your code here

# at the end of your script, when you are done tracing
Laminar.flush()
```

## Shutting down Laminar tracing

You can shut down Laminar tracing manually using the `Laminar.shutdown()` method.

```javascript
import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize();

// your code here

// at the end of your script, when you are done tracing.
// This will also flush any remaining spans.
await Laminar.shutdown(); // NOTE: don't forget to await

// IMPORTANT: you CAN re-initialize Laminar after shutting it down.
```

```python
from lmnr import Laminar

Laminar.initialize()

# your code here

# at the end of your script, when you are done tracing
# This will also flush any remaining spans.
Laminar.shutdown()

# IMPORTANT: Laminar CANNOT be re-initialized after shutting it down inside
# the same process. To force flush spans instead, refer to
# the section below on edge runtimes.
```

### Edge runtimes and AWS Lambda

In Python, to avoid losing spans at the end of your Lambda handler or another edge runtime, use `Laminar.force_flush()` to force flush the spans immediately.

```python
# Inside your Lambda handler
from lmnr import Laminar

Laminar.initialize()

# your code here

Laminar.force_flush()
```

#### Technical explanation

In Python, `Laminar.flush()` is a synchronous function, but the actual flush is performed in a background daemon thread.
This means that if the process exits before the flush is complete, the spans are lost. This is a common
failure scenario for edge runtimes like AWS Lambda.
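The failure mode can be sketched with plain threading. This is an illustration only; the `export_batch` function below is hypothetical, not part of any SDK.

```python
import threading
import time

sent = []


def export_batch(batch):
    # Simulate a slow network request to the backend.
    time.sleep(0.05)
    sent.extend(batch)


buffer = ["span-1", "span-2"]

# Daemon-style flush: the export runs on a background daemon thread.
worker = threading.Thread(target=export_batch, args=(buffer,), daemon=True)
worker.start()

# If the process exited right here, the daemon thread would be killed
# mid-export and the spans lost -- the Lambda failure scenario above.

# Blocking until the export finishes (what a force flush effectively does)
# guarantees the spans reach the backend before the process exits.
worker.join()
assert sent == ["span-1", "span-2"]
```

Joining (blocking on) the export is the key difference between fire-and-forget flushing and `force_flush`.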

This is NOT a problem in JavaScript, because the JavaScript OpenTelemetry SDK exposes `shutdown()` and `flush()` as asynchronous functions.
Lambda will wait for the function to complete, as long as `await` is used on `Laminar.shutdown()` or `Laminar.flush()`.

One possible way to avoid this issue is to completely shut down Laminar at the end of your Lambda handler, since in Python `shutdown()` is a synchronous function that blocks the main thread until the flush is complete.
The problem with this approach is that if Lambda reuses the same container for the next request, the Laminar SDK will not be usable again.

Therefore, we expose `Laminar.force_flush()` as a way to force flush the spans and block on the flush operation until it is complete.
Behind the scenes, this shuts down and re-initializes the OpenTelemetry SDK, ensuring that the spans are flushed and the SDK remains usable.
10 changes: 8 additions & 2 deletions tracing/structure/image.mdx
@@ -69,7 +69,10 @@ const analyzeImage = async (imagePath, userQuestion) =>
});

// Example usage
const result = await analyzeImage('eiffel_tower.jpg', 'What information is shown in this image?');
const result = await analyzeImage(
'eiffel_tower.jpg',
'What information is shown in this image?',
);
console.log(result);
```
</Tab>
@@ -123,7 +126,10 @@ def analyze_image(image_path: str, user_question: str):
return response.choices[0].message.content

# Example usage
result = analyze_image("eiffel_tower.jpg", "What information is shown in this image?")
result = analyze_image(
"eiffel_tower.jpg",
"What information is shown in this image?",
)
print(result)
```
</Tab>