Replace custom Sentry HTTP client with standard sentry SDK #902
Conversation
Since the project has moved to daemon mode with a long-lived process, replace the hand-rolled `SentryClient` (manual DSN parsing, auth headers, `minreq` HTTP POSTs) with the official sentry Rust SDK.

Changes:
- Add sentry crate (0.37) with ureq transport, backtrace, contexts
- Replace `SentryClient` struct with `sentry::Client` + `sentry::Hub`
- Build proper `sentry::protocol::Event` structs instead of raw JSON
- Use `sentry::Hub::capture_event` for dual DSN (OSS + Enterprise) support
- Extract DSN resolution, client building, and tag helpers into functions
- Keep PostHog, metrics, CAS, and `TelemetryEnvelope` IPC unchanged

Co-Authored-By: Sasha Varlamov <sasha@sashavarlamov.com>
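As a rough illustration of the "extract DSN resolution into functions" item, a helper like the one below could pick between an explicit override and a configured default. The function name, argument shapes, and precedence order here are assumptions for illustration, not the PR's actual logic:

```rust
/// Hypothetical sketch of a DSN-resolution helper like the ones the commit
/// extracts. Precedence (explicit override beats configured default) is an
/// assumption, not necessarily what the PR implements.
fn resolve_dsn(env_override: Option<&str>, config_default: Option<&str>) -> Option<String> {
    env_override
        .filter(|s| !s.trim().is_empty()) // treat an empty/blank override as unset
        .or(config_default)
        .map(str::to_owned)
}
```

Returning `Option<String>` lets the caller skip building a `sentry::Client` entirely when no DSN is configured.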
…icit flush

- Hoist `sentry::Client` instances to `telemetry_flush_loop` scope so they are created once and reused across all flush cycles (avoids spinning up transport worker threads every 3s).
- Preserve original event timestamps by parsing RFC 3339 strings from the `TelemetryEnvelope` into `SystemTime`, instead of relying on sentry SDK auto-timestamping at flush time.
- Add explicit `sentry_clients.flush()` after capturing events to ensure the async transport delivers all enqueued events before the batch is considered complete.
- Remove `#[allow(dead_code)]` from `ErrorEvent`/`PerformanceEvent` timestamp fields since they are now actively used.

Co-Authored-By: Sasha Varlamov <sasha@sashavarlamov.com>
Guard against a negative `i64` from `dt.timestamp()` being cast to `u64`, which would wrap to `u64::MAX` and panic in the `SystemTime` addition. Now falls back to `SystemTime::now()` for pre-epoch or unparseable timestamps.

Co-Authored-By: Sasha Varlamov <sasha@sashavarlamov.com>
```rust
let secs = dt.timestamp();
if secs >= 0 {
    Some(SystemTime::UNIX_EPOCH + std::time::Duration::from_secs(secs as u64))
```
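Wrapped into a complete function, the guarded conversion in that excerpt might look like the sketch below. The function name, and taking a bare `i64` instead of a parsed `chrono` `DateTime`, are assumptions to keep the example dependency-free:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Hedged sketch of the guard described in the commit: non-negative seconds
// convert normally; pre-epoch values fall back to "now" rather than letting
// `secs as u64` wrap to u64::MAX and panic on the SystemTime addition.
fn system_time_from_unix_secs(secs: i64) -> SystemTime {
    if secs >= 0 {
        // Safe: a non-negative i64 always fits in u64.
        UNIX_EPOCH + Duration::from_secs(secs as u64)
    } else {
        // Pre-epoch (or unparseable upstream): use the current time instead.
        SystemTime::now()
    }
}
```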
🟡 parse_timestamp truncates sub-second precision from event timestamps
The `parse_timestamp` function uses `dt.timestamp()`, which returns only whole seconds, discarding all sub-second precision from the RFC 3339 timestamp string. The old code passed the timestamp string directly into the JSON payload (e.g., `"timestamp": error.timestamp`), preserving full millisecond/nanosecond precision. Now every Sentry event timestamp is rounded down to the nearest second.
Fix: include subsecond nanoseconds
Use `dt.timestamp_subsec_nanos()` to reconstruct a full-precision `SystemTime`:
```diff
 let secs = dt.timestamp();
 if secs >= 0 {
-    Some(SystemTime::UNIX_EPOCH + std::time::Duration::from_secs(secs as u64))
+    Some(SystemTime::UNIX_EPOCH + std::time::Duration::new(secs as u64, dt.timestamp_subsec_nanos()))
```
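As a self-contained version of that suggestion, the sketch below carries the subsecond nanoseconds alongside the whole seconds. Taking the two values as plain integers (rather than a `chrono` `DateTime`) is an assumption to keep the example dependency-free:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Full-precision variant of the timestamp conversion: keep the negative-seconds
// guard, but also carry the subsecond nanoseconds into the Duration so the
// Sentry event timestamp is no longer rounded down to whole seconds.
fn system_time_full_precision(secs: i64, subsec_nanos: u32) -> Option<SystemTime> {
    if secs >= 0 {
        // Duration::new normalizes nanos >= 1_000_000_000 into extra seconds.
        Some(UNIX_EPOCH + Duration::new(secs as u64, subsec_nanos))
    } else {
        None // caller falls back to SystemTime::now()
    }
}
```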
Summary
Replaces the hand-rolled `SentryClient` (manual DSN parsing, `X-Sentry-Auth` header construction, raw JSON payloads via `minreq`) with the official `sentry` Rust SDK (v0.37, `ureq` transport).

What changed:
- Add the `sentry` crate with `backtrace`, `contexts`, `ureq`, `rustls` features
- Replace the `SentryClient` struct and its `from_dsn`/`send_event` methods with `sentry::Client` + `sentry::Hub`
- Build typed `sentry::protocol::Event` structs instead of raw `serde_json::Value` blobs
- New helper functions: `resolve_enterprise_dsn`, `resolve_oss_dsn`, `build_sentry_client`, `base_sentry_tags`, `context_to_extra`, `parse_sentry_level`, `parse_timestamp`, `send_event_to_client`
- A `SentryClients` struct holds long-lived `Arc<sentry::Client>` instances, created once in the flush loop and reused across all flush cycles
- Original event timestamps are preserved (parsed into `SystemTime`) instead of relying on SDK auto-timestamping
- An explicit `sentry_clients.flush(5s)` call after each batch ensures the async transport delivers events before the batch is considered complete
- PostHog, metrics, CAS, and `TelemetryEnvelope` IPC are unchanged

Updates since last revision
`parse_timestamp` now guards against negative `i64` values from `dt.timestamp()`: pre-epoch or malformed timestamps fall back to `SystemTime::now()` instead of panicking from unsigned wraparound.

Review & Testing Checklist for Human
- `sentry_clients.flush(Duration::from_secs(5))` blocks the `spawn_blocking` task for up to 5s if the network is slow. `FLUSH_INTERVAL` is 3s, so a slow flush could cause batches to accumulate. Additionally, the `flush()` return value (a bool indicating whether all events were sent) is currently ignored: if the timeout is reached, events are silently dropped with no logging. Verify this is acceptable or add a `debug_log` on timeout.
- The `sentry` SDK pulls in `hyper`, `hyper-util`, `tokio-rustls`, `http`, `http-body`, etc. Confirm this is acceptable for binary size and compile time, and that the `ureq` transport is actually being used (not the default `reqwest`/`curl` transport pulling in hyper transitively via some other path).
- `parse_timestamp` uses `dt.timestamp()`, which truncates sub-second precision from the original RFC 3339 timestamps. The old code passed the full string through to Sentry's JSON API. Verify whether sub-second precision matters for your Sentry dashboards.
- `SentryClients` is constructed once when the flush loop starts. If DSN configuration changes at runtime (e.g., a config file update or env var change), the clients won't pick up the new DSNs until the daemon restarts. Confirm this matches the expected behavior.

Suggested test plan: Deploy to a test environment with valid OSS/Enterprise DSNs configured. Trigger error, performance, and message telemetry events, then verify they appear in both Sentry dashboards with correct tags, extra data, level, release version, and timestamps that match event creation time (not flush time).
Notes
- `minreq` is still a dependency (used by the PostHog and CAS flush paths).
- The old `SentryClient::send_event()` was fully synchronous (blocking HTTP call per event, with status code check). The new code enqueues events via `hub.capture_event()` (non-blocking) and then calls `client.flush()` once at the end of the batch. This should be more efficient when sending multiple events per cycle, but individual per-event error responses are no longer surfaced.

Link to Devin session: https://app.devin.ai/sessions/32562d5f120d422fb3a6fa92035193f1
Requested by: @svarlamov