
Conversation

@nora-shap (Member)

Problem

The LLM issue detection task was fetching full span data for every trace in Sentry, then sending pieces of that telemetry to Seer in individual requests. We want to use EAPTrace instead, which includes much more data in a format better optimized for LLM analysis. This requires a significant restructuring of the request/response formats between this task and its Seer endpoint.

There was also a small bug in how we were selecting traces for each transaction; that's fixed here, along with a small amount of randomization added to the trace selection logic.

Solution

Changed the request/response flow so Sentry sends only trace IDs to Seer in a single bundled request. Seer now fetches the full EAPTrace data itself via Sentry's existing get_trace_waterfall RPC endpoint and uses that as the input for LLM detection.
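A minimal sketch of the new Seer-side flow described above. Only get_trace_waterfall is a real endpoint named in this PR; call_sentry_rpc and run_llm_detection are illustrative toy stubs, not actual Seer APIs:

```python
def call_sentry_rpc(method: str, trace_id: str) -> dict:
    # Toy stand-in for Seer's RPC client calling Sentry's get_trace_waterfall
    return {"trace_id": trace_id, "spans": []}

def run_llm_detection(eap_trace: dict) -> list[dict]:
    # Toy stand-in for the actual LLM analysis step
    return [{"issue_type": "example"}]

def detect_issues(request_payload: dict) -> list[dict]:
    detected = []
    for trace_meta in request_payload["traces"]:
        trace_id = trace_meta["trace_id"]
        # Fetch the full trace instead of receiving truncated spans inline
        eap_trace = call_sentry_rpc("get_trace_waterfall", trace_id=trace_id)
        for issue in run_llm_detection(eap_trace):
            # Echo the request's identifying fields back in each detected issue
            issue["trace_id"] = trace_id
            issue["transaction_name"] = trace_meta["transaction_name"]
            detected.append(issue)
    return detected
```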

Changes to Sentry → Seer Request

Before:

  • Sentry sent truncated trace telemetry
  • Multiple fields: trace_id, project_id, transaction_name, total_spans, spans: list[Span]
  • Sent one trace at a time

After:

  • Sentry sends only trace metadata: trace_id and normalized transaction_name
  • Sends up to 50 traces in a single request
  • Seer fetches full EAPTrace data via RPC
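The new request could be built roughly like this; the exact wire-format key names ("traces", etc.) are assumptions based on this description, not the verbatim payload:

```python
# Sketch of the bundled Sentry -> Seer request: only trace metadata, capped at
# 50 traces per request. Key names are illustrative.
MAX_TRACES_PER_REQUEST = 50

def build_seer_request(selected_traces: list[dict]) -> dict:
    return {
        "traces": [
            {
                "trace_id": t["trace_id"],
                "transaction_name": t["transaction_name"],  # normalized
            }
            for t in selected_traces[:MAX_TRACES_PER_REQUEST]
        ]
    }
```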

Changes to Seer → Sentry Response

Updated DetectedIssue model to include context fields:

  • Added trace_id: str - which trace the issue was found in
  • Added transaction_name: str - normalized transaction name
  • These are pass-through fields Seer must return from the request
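The shape of the updated model, roughly. The real code uses Pydantic (IssueDetectionResponse.parse_obj appears in the diff); a dataclass is used here only to convey the shape, and fields other than trace_id/transaction_name are illustrative:

```python
from dataclasses import dataclass

@dataclass
class DetectedIssue:
    title: str             # illustrative detection field
    explanation: str       # illustrative detection field
    trace_id: str          # new: which trace the issue was found in
    transaction_name: str  # new: normalized name, passed through from the request
```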

Trace Selection Logic

  • Query top transactions by sum(span.duration) over 30-minute window
  • Deduplicate by normalized transaction name
  • For each unique transaction, select one representative trace using a randomized time sub-window (1-8 minute offset)
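The dedup-plus-randomized-window step above can be sketched as follows; the input/output shapes and the helper name are assumptions, but the 1-8 minute offset matches the bullet:

```python
import random
from datetime import datetime, timedelta

def select_representative_traces(transactions: list[dict], now: datetime) -> list[dict]:
    # Deduplicate by normalized transaction name, keeping the first
    # (highest-ranked by sum(span.duration)) occurrence
    seen: set[str] = set()
    selected = []
    for txn in transactions:
        name = txn["normalized_name"]
        if name in seen:
            continue
        seen.add(name)
        # Randomize the end of the sub-window inside the 30-minute query window
        # so we don't always sample a trace from the same instant
        offset = timedelta(minutes=random.randint(1, 8))
        selected.append({"transaction_name": name, "window_end": now - offset})
    return selected
```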

Breaking Changes

This is a breaking change to the Seer integration. Deployment requires:

  1. Stop the task (issue-detection.llm-detection.enabled = false)
  2. Deploy Seer changes to handle the new request format and fetch traces via RPC
  3. Deploy this Sentry change
  4. Re-enable the task

This will not impact any customers.


linear bot commented Dec 5, 2025

@github-actions github-actions bot added the Scope: Backend Automatically applied to PRs that change backend components label Dec 5, 2025

NUM_TRANSACTIONS_TO_PROCESS = 20
LOWER_SPAN_LIMIT = 20
UPPER_SPAN_LIMIT = 500
Member Author

these will be handled on the seer side

@nora-shap nora-shap force-pushed the nora/ID-1121 branch 2 times, most recently from 52a714f to d0ece3c Compare December 5, 2025 23:09
@nora-shap nora-shap marked this pull request as ready for review December 5, 2025 23:18
@nora-shap nora-shap requested review from a team as code owners December 5, 2025 23:18
if processed_count >= NUM_TRANSACTIONS_TO_PROCESS:
    break
seer_request = {
    "telemetry": [{**trace.dict(), "kind": "trace"} for trace in evidence_traces],
Member

feels like we could use better variable names here since it's just the id/name instead of an actual trace now

Member Author

agree - cleaned it up on the seer side, updating this pr to match

@codecov

codecov bot commented Dec 5, 2025

Codecov Report

❌ Patch coverage is 85.07463% with 10 lines in your changes missing coverage. Please review.
✅ All tests successful. No failed tests found.

Files with missing lines Patch % Lines
src/sentry/tasks/llm_issue_detection/detection.py 81.81% 6 Missing ⚠️
src/sentry/tasks/llm_issue_detection/trace_data.py 87.87% 4 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff            @@
##           master   #104485   +/-   ##
========================================
  Coverage   80.52%    80.52%           
========================================
  Files        9330      9330           
  Lines      400645    400699   +54     
  Branches    25689     25689           
========================================
+ Hits       322624    322669   +45     
- Misses      77555     77564    +9     
  Partials      466       466           

Comment on lines 267 to 269
except (ValueError, TypeError) as e:
    raise LLMIssueDetectionError(
        message="Seer response parsing error",

This comment was marked as outdated.

@nora-shap nora-shap requested a review from a team December 9, 2025 17:58
Comment on lines +265 to +275
    raw_response_data = response.json()
    response_data = IssueDetectionResponse.parse_obj(raw_response_data)
except (ValueError, TypeError, ValidationError) as e:
    raise LLMIssueDetectionError(
        message="Seer response parsing error",
        status=response.status,
        project_id=project_id,
        organization_id=organization_id,
        response_data=response.data.decode("utf-8"),
        error_message=str(e),
    )

Bug: Task fails due to unhandled LLMIssueDetectionError from Seer API.
Severity: CRITICAL | Confidence: High

🔍 Detailed Analysis

The detect_llm_issues_for_project task lacks proper error handling for failures originating from the Seer API. If make_signed_seer_api_request (line 249) fails, or if the response status is not 2xx (lines 255-262), or if JSON parsing fails (lines 264-275), an LLMIssueDetectionError is raised uncaught. This causes the entire task to fail, preventing any issue detection for the project, unlike the previous implementation which handled such errors gracefully.

💡 Suggested Fix

Implement a try-except block around the Seer API request and response processing to catch LLMIssueDetectionError and log it, allowing the task to continue or retry gracefully.
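A sketch of that suggested fix, assuming a wrapper around the Seer call inside the task; the exception class here is a stand-in for the real one in detection.py, and the helper and logger names are illustrative:

```python
import logging

logger = logging.getLogger(__name__)

class LLMIssueDetectionError(Exception):
    # Stand-in for the real exception defined in detection.py
    def __init__(self, message: str, **context):
        super().__init__(message)
        self.context = context

def run_detection_safely(call_seer, project_id: int) -> list:
    # Catch Seer request/response failures and log them so a single bad call
    # doesn't crash the whole task, matching the previous graceful behavior
    try:
        return call_seer()
    except LLMIssueDetectionError as e:
        logger.warning(
            "llm_issue_detection.seer_request_failed",
            extra={"project_id": project_id, "error": str(e), **e.context},
        )
        return []
```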

