
fix(spv): clear stale SPV state and banner on RPC mode switch#668

Open
thepastaclaw wants to merge 2 commits into dashpay:v1.0-dev from thepastaclaw:fix/spv-state-leak-banner-cleanup

Conversation

@thepastaclaw
Collaborator

@thepastaclaw thepastaclaw commented Feb 27, 2026

Issue Being Fixed

After switching from SPV to RPC backend mode, two pieces of stale SPV state could linger:

  1. spv_connected_peers and spv_no_peers_since in ConnectionStatus were not cleared when entering the RPC branch of trigger_refresh(). If a user switched backend modes without a full reset(), the degraded peer timer could still be active, potentially causing spv_peer_degraded() to return true even though SPV was no longer in use.

  2. The SPV degraded banner (SPV_DEGRADED_BANNER) was only cleared inside the if backend_mode == CoreBackendMode::Spv block. After switching to RPC, the banner would persist on screen since the clear path was never reached.

Solution

  • connection_status.rs: Clear spv_connected_peers (→ 0) and spv_no_peers_since (→ None) at the top of the CoreBackendMode::Rpc branch in trigger_refresh().
  • app.rs: Add an else branch to clear the SPV degraded banner when not in SPV mode.
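A minimal sketch of the first fix, using simplified stand-ins for the real types. The field names `spv_connected_peers` and `spv_no_peers_since` come from the PR diff; the constructor, `spv_peer_degraded()` body, and everything else here are illustrative assumptions, not the crate's actual API:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Mutex;
use std::time::Instant;

/// Simplified stand-in for the real ConnectionStatus; only the two
/// fields touched by this PR are modeled.
struct ConnectionStatus {
    spv_connected_peers: AtomicU32,
    spv_no_peers_since: Mutex<Option<Instant>>,
}

impl ConnectionStatus {
    /// Simulate state left over from a previous SPV session.
    fn with_stale_spv_state() -> Self {
        Self {
            spv_connected_peers: AtomicU32::new(3),
            spv_no_peers_since: Mutex::new(Some(Instant::now())),
        }
    }

    /// Mirrors the cleanup added at the top of the RPC branch of
    /// trigger_refresh(): zero the peer count and drop the degraded timer.
    fn clear_spv_peer_state(&self) {
        self.spv_connected_peers.store(0, Ordering::Relaxed);
        *self
            .spv_no_peers_since
            .lock()
            .unwrap_or_else(|e| e.into_inner()) = None;
    }

    /// The degraded check keys off the no-peers timer, so a stale timer
    /// reports degradation even when SPV is no longer in use.
    fn spv_peer_degraded(&self) -> bool {
        self.spv_no_peers_since
            .lock()
            .unwrap_or_else(|e| e.into_inner())
            .is_some()
    }
}

fn main() {
    let status = ConnectionStatus::with_stale_spv_state();
    assert!(status.spv_peer_degraded()); // the bug: timer still active in RPC mode
    status.clear_spv_peer_state(); // the RPC-branch cleanup from this PR
    assert!(!status.spv_peer_degraded());
    assert_eq!(status.spv_connected_peers.load(Ordering::Relaxed), 0);
}
```

The `unwrap_or_else(|e| e.into_inner())` pattern recovers the guard from a poisoned mutex rather than panicking, matching the style visible in the diff below.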

Validation

  • Both fixes are additive — no existing behavior is altered for SPV mode
  • RPC mode now always starts with clean SPV state, matching the behavior of reset()
  • The banner clear is idempotent (no-op if already cleared)

Follow-up from #658 which introduced the degraded peer detection and banner.

Summary by CodeRabbit

  • Bug Fixes
    • Fixed an issue where stale peer connection information was not properly cleared when switching between SPV and RPC connection modes, preventing the application from maintaining accurate connection status.

@coderabbitai
Contributor

coderabbitai bot commented Feb 27, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 730d571a-2c17-40e3-9aab-dc733808eca2

📥 Commits

Reviewing files that changed from the base of the PR and between b16afeb and 2b7f722.

📒 Files selected for processing (1)
  • src/context/connection_status.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/context/connection_status.rs

📝 Walkthrough

Walkthrough

Added logic in the RPC branch of refresh_zmq_and_spv to clear stale SPV state when switching from SPV to RPC mode. The change sets spv_connected_peers to 0 and resets spv_no_peers_since to None, ensuring SPV state doesn't persist when operating in RPC mode.

Changes

Cohort / File(s) Summary
SPV State Cleanup
src/context/connection_status.rs
Added logic to clear stale SPV peer state (spv_connected_peers = 0, spv_no_peers_since = None) when switching from SPV to RPC mode.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

🐰 A hop and a skip, stale SPV must go,
When RPC takes the stage, we reset and flow,
Peers cleared to zero, no lingering past,
Fresh connection state—clean and fast! ✨

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check: skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: the title accurately reflects the main change, clearing stale SPV state when switching to RPC mode, which is the primary focus of the changeset in connection_status.rs.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/context/connection_status.rs`:
- Around line 444-450: Reset the SPV status in the same RPC-branch cleanup where
you clear spv_connected_peers and spv_no_peers_since: set self.spv_status to a
non-Stopping state (e.g., SpvStatus::Stopped or the equivalent default) so that
code checking `spv_status == Stopping` (used to enforce the 200ms fast refresh)
won't remain true after switching to RPC; update the same block that touches
spv_connected_peers and spv_no_peers_since to also update spv_status via the
same atomic/mutex accessors used elsewhere.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 011319d and b16afeb.

📒 Files selected for processing (2)
  • src/app.rs
  • src/context/connection_status.rs

Comment on lines +444 to +450
```rust
// Clear stale SPV state that may linger if the backend mode
// switched from SPV to RPC without a full `reset()`.
self.spv_connected_peers.store(0, Ordering::Relaxed);
*self
    .spv_no_peers_since
    .lock()
    .unwrap_or_else(|e| e.into_inner()) = None;
```
Contributor


⚠️ Potential issue | 🟠 Major

Reset spv_status in RPC branch to prevent stale fast-refresh cadence.

At lines 444-450, peer-related SPV fields are cleared, but spv_status is not. Since line 394 uses spv_status == Stopping to force a 200ms refresh interval, switching to RPC during SPV shutdown can leave RPC polling stuck at the fast cadence and spam GetBestChainLocks.

Proposed minimal fix
```diff
             CoreBackendMode::Rpc => {
                 // Clear stale SPV state that may linger if the backend mode
                 // switched from SPV to RPC without a full `reset()`.
+                self.set_spv_status(SpvStatus::Idle);
                 self.spv_connected_peers.store(0, Ordering::Relaxed);
                 *self
                     .spv_no_peers_since
                     .lock()
                     .unwrap_or_else(|e| e.into_inner()) = None;
```
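The cadence concern above can be sketched in isolation. `SpvStatus` and the 200ms-vs-5s intervals come from the review comment; `refresh_interval` and the exact variant names are assumptions for illustration, not the crate's real API:

```rust
use std::time::Duration;

// Illustrative only: per the review, trigger_refresh() derives its poll
// interval from spv_status (line 394), so a stale `Stopping` status
// pins RPC polling at the fast shutdown cadence indefinitely.
#[derive(PartialEq, Clone, Copy)]
enum SpvStatus {
    Idle,
    Stopping,
}

fn refresh_interval(status: SpvStatus) -> Duration {
    if status == SpvStatus::Stopping {
        Duration::from_millis(200) // fast polling while SPV shuts down
    } else {
        Duration::from_secs(5) // normal cadence
    }
}

fn main() {
    // Stale status after an SPV→RPC switch: still on the fast cadence.
    assert_eq!(
        refresh_interval(SpvStatus::Stopping),
        Duration::from_millis(200)
    );
    // With the reviewer's proposed reset to Idle, polling returns to normal.
    assert_eq!(refresh_interval(SpvStatus::Idle), Duration::from_secs(5));
}
```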

@thepastaclaw
Collaborator Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Feb 27, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
src/context/connection_status.rs (1)

444-450: ⚠️ Potential issue | 🟠 Major

Reset spv_status in RPC cleanup to avoid stale 200ms polling.

In lines 444-450, peer/timer SPV fields are cleared, but spv_status is not. Because line 394 forces a 200ms refresh when status is Stopping, an SPV→RPC switch can keep RPC in fast polling mode.

Proposed minimal fix
```diff
         CoreBackendMode::Rpc => {
             // Clear stale SPV state that may linger if the backend mode
             // switched from SPV to RPC without a full `reset()`.
+            self.set_spv_status(SpvStatus::Idle);
             self.spv_connected_peers.store(0, Ordering::Relaxed);
             *self
                 .spv_no_peers_since
                 .lock()
                 .unwrap_or_else(|e| e.into_inner()) = None;
```

Based on learnings: "when reviewing Rust files, avoid proposing extensive changes to RPC-related code paths... keep scope small and document the deprecation context."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/context/connection_status.rs` around lines 444 - 450, The RPC cleanup
currently clears spv_connected_peers and spv_no_peers_since but leaves
spv_status intact, which can leave the status as Stopping and force 200ms
polling; update the same cleanup block to reset spv_status as well (e.g., set
the spv_status holder to None or its neutral/idle state) so that spv_status no
longer triggers fast polling after an SPV→RPC switch; locate the fields
spv_connected_peers, spv_no_peers_since and spv_status in the same struct and
add the spv_status reset alongside the two existing clears.

ℹ️ Review info

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 011319d and b16afeb.

📒 Files selected for processing (2)
  • src/app.rs
  • src/context/connection_status.rs

@lklimek
Contributor

lklimek commented Mar 3, 2026

@thepastaclaw please rebase

When the backend switches from SPV to RPC without a full reset(),
stale spv_no_peers_since and spv_connected_peers values could persist,
causing the degraded timer to remain active incorrectly.

Additionally, the SPV degraded banner was only managed inside the
SPV mode branch, so switching away from SPV while the banner was
showing would leave it stuck on screen.

Fixes:
1. Clear SPV peer fields at the top of the RPC branch in
   refresh_zmq_and_spv().
2. Clear the SPV degraded banner unconditionally when not in SPV mode.
@thepastaclaw thepastaclaw force-pushed the fix/spv-state-leak-banner-cleanup branch from b16afeb to 675c518 Compare March 4, 2026 05:18
@thepastaclaw
Collaborator Author

Checked — branch is already up to date with v1.0-dev (no rebase needed, no conflicts). Let me know if you're seeing something specific that needs updating.

@lklimek lklimek marked this pull request as ready for review March 6, 2026 13:40
@thepastaclaw
Collaborator Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Mar 19, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Collaborator Author

@thepastaclaw thepastaclaw left a comment


Code Review

Small, focused fix that clears stale SPV peer-tracking state when in RPC mode. The core fix is correct but the cleanup is incomplete — spv_status is not cleared, which can leave the connection poll loop on fast 200ms cadence indefinitely after SPV shutdown.

Reviewed commit: 2b7f722

🟡 2 suggestion(s)

1 additional finding

🟡 suggestion: Banner not updated when backend mode changes but connection state stays the same

src/app.rs (line 914)

update_connection_banner() only compares previous vs current overall state. If SPV Disconnected → RPC Disconnected, the function returns early and leaves the old SPV banner. The PR description mentions banner cleanup but the diff doesn't touch app.rs.
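The early-return this comment describes can be illustrated with a simplified comparison. All names here (`BackendMode`, `ConnState`, both functions) are hypothetical stand-ins, not the actual app.rs API:

```rust
#[derive(PartialEq, Clone, Copy)]
enum BackendMode {
    Spv,
    Rpc,
}

#[derive(PartialEq, Clone, Copy)]
enum ConnState {
    Connected,
    Disconnected,
}

// Comparing only the overall connection state returns early when SPV
// Disconnected becomes RPC Disconnected, leaving the SPV banner up.
fn needs_update_state_only(prev: ConnState, curr: ConnState) -> bool {
    prev != curr
}

// Including the backend mode in the comparison catches the mode switch
// even when the connection state is unchanged.
fn needs_update_with_mode(
    prev: (BackendMode, ConnState),
    curr: (BackendMode, ConnState),
) -> bool {
    prev != curr
}

fn main() {
    // State-only comparison: no change detected, stale banner persists.
    assert!(!needs_update_state_only(
        ConnState::Disconnected,
        ConnState::Disconnected
    ));

    // Mode-aware comparison: the SPV→RPC switch triggers a banner refresh.
    let prev = (BackendMode::Spv, ConnState::Disconnected);
    let curr = (BackendMode::Rpc, ConnState::Disconnected);
    assert!(needs_update_with_mode(prev, curr));
}
```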


Comment on lines 461 to +468
```rust
CoreBackendMode::Rpc => {
    // Clear stale SPV state that may linger if the backend mode
    // switched from SPV to RPC without a full `reset()`.
    self.spv_connected_peers.store(0, Ordering::Relaxed);
    *self
        .spv_no_peers_since
        .lock()
        .unwrap_or_else(|e| e.into_inner()) = None;
```
Collaborator Author


🟡 Suggestion: RPC cleanup doesn't reset spv_status — stale Stopping state affects poll cadence

The new RPC branch clears spv_connected_peers and spv_no_peers_since, but doesn't reset spv_status. After stop_spv() sets status to Stopping, trigger_refresh() continues using the 200ms shutdown poll cadence instead of the normal 5s interval. Also, spv_last_error is not cleared, which reset() does handle (lines 99-100, 113-114).

Consider adding self.spv_status = SpvStatus::Idle; and self.spv_last_error = None; to match the full cleanup that reset() performs.

source: ['codex-general', 'claude-general']

