feat(spv): enable mempool transaction detection via bloom filters#776
Conversation
Bump dash-sdk to platform commit 6d9efa97b, which includes rust-dashcore mempool support (MempoolManager with bloom filter monitoring) and the SDK RPITIT refactor. Key changes:

- Configure the SPV client with the BloomFilter mempool strategy
- Update the process_mempool_transaction() call signature
- Adapt to the Network::Dash -> Network::Mainnet rename
- Add DB migration 29 to update stored "dash" network strings
- Add Send-asserting wrappers for backend task futures to work around RPITIT HRTB Send inference limitations in the SDK

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
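The Send-asserting wrapper mentioned in this commit can be sketched as a zero-cost generic function that only constrains the type at compile time. This is an illustrative sketch, not the PR's actual code; `assert_send` and `backend_task` are assumed names:

```rust
use std::future::Future;

// Returns the future unchanged, but fails to compile unless it is Send.
// Useful when RPITIT/HRTB inference cannot prove Send on its own.
fn assert_send<F: Future + Send>(fut: F) -> F {
    fut
}

// A future is Send when everything it holds across .await points is Send.
async fn backend_task() -> u32 {
    42
}

fn main() {
    // Compile-time proof only; there is no runtime cost or behavior change.
    let _fut = assert_send(backend_task());
    println!("backend_task future is Send");
}
```

If the compiler can later prove Send natively (as a subsequent commit in this PR notes), the wrapper can be deleted with no behavioral change.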
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the CodeRabbit settings.
📝 Walkthrough

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant SPV as SpvManager
    participant CS as ConnectionStatus
    participant UI as UI Frame Loop
    participant Backend as Backend/Chainlocks
    SPV->>CS: set_spv_status(status)\nset_spv_connected_peers(count)\nset_spv_last_error(opt_err)
    CS->>CS: update atomics/mutexes\nupdate spv_no_peers_since
    CS->>CS: refresh_state()
    UI->>CS: read overall_state
    CS-->>UI: overall_state
    Backend->>CS: update_from_chainlocks(rpc_online)
    CS-->>Backend: reported connection state
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
Rebase the platform dependency onto aa86b74f (before PR #3376, which paradoxically broke Send inference) so the compiler can natively prove the backend task future is Send. Changes:

- Update the dash-sdk rev to 2414799b7 (mempool bump on a clean base)
- Remove the assert_send() transmute and all _send() wrapper methods
- Revert app.rs to call run_backend_task() directly

This commit is self-contained; revert it to restore the workaround if a future platform update reintroduces the HRTB regression.

See: dashpay/platform#3376
See: rust-lang/rust#100013

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add structured tracing to wait helpers and create_funded_test_wallet() for diagnosing IS lock timing issues. All new logs are trace-level (enable with RUST_LOG=backend_e2e=trace) except the final setup summary which logs at info. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Update dash-sdk to platform commit d9de0fd2 which includes rust-dashcore c451a1c6 adding notify_wallet_addresses_changed() for triggering mempool bloom filter rebuilds. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Call notify_wallet_addresses_changed() after loading a new wallet into SPV and after registering DashPay contact addresses. This ensures the mempool bloom filter includes newly added addresses so incoming transactions are detected immediately rather than waiting for the next block or SPV reconnect. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
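For context on why the filter must be rebuilt when addresses are added: a bloom filter answers "possibly contains" membership queries, and peers only relay transactions matching the filter they were sent. The following is a toy, std-only sketch of that mechanic (real SPV clients use the BIP 37 filter format with murmur3 hashes, not Rust's DefaultHasher, and all names here are illustrative):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy bloom filter: a bit vector plus k seeded hash functions.
struct ToyBloom {
    bits: Vec<bool>,
    k: u64, // number of hash functions
}

impl ToyBloom {
    fn new(m: usize, k: u64) -> Self {
        Self { bits: vec![false; m], k }
    }

    fn indexes<'a, T: Hash>(&'a self, item: &'a T) -> impl Iterator<Item = usize> + 'a {
        (0..self.k).map(move |seed| {
            let mut h = DefaultHasher::new();
            seed.hash(&mut h);
            item.hash(&mut h);
            (h.finish() as usize) % self.bits.len()
        })
    }

    fn insert<T: Hash>(&mut self, item: &T) {
        let idx: Vec<usize> = self.indexes(item).collect();
        for i in idx {
            self.bits[i] = true;
        }
    }

    /// May return false positives, never false negatives.
    fn may_contain<T: Hash>(&self, item: &T) -> bool {
        self.indexes(item).all(|i| self.bits[i])
    }
}

fn main() {
    let mut filter = ToyBloom::new(1024, 3);
    // A wallet address added after the filter was sent to peers would be
    // invisible to them; this is why a rebuild/notification hook is needed.
    filter.insert(&"XnewWalletAddress");
    assert!(filter.may_contain(&"XnewWalletAddress"));
    println!("bloom membership ok");
}
```

The key point matching the commit above: inserting into a local filter is not enough; the updated filter must also be re-announced to peers, which is what notify_wallet_addresses_changed() triggers.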
The hardcoded .add_directive("backend_e2e=info") was overriding
RUST_LOG=backend_e2e=trace. Change to use RUST_LOG as the primary
source, with info as the fallback when the env var is not set.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
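The precedence fix reduces to "env var wins, default otherwise". A minimal sketch of that shape, assuming the real code builds a tracing_subscriber EnvFilter from this string (shown here with the Result that std::env::var returns, so no environment mutation is needed):

```rust
use std::env::VarError;

// Choose the log filter directive: RUST_LOG takes precedence, and the
// hardcoded directive is only a fallback, never an override.
fn choose_filter(rust_log: Result<String, VarError>) -> String {
    rust_log.unwrap_or_else(|_| "backend_e2e=info".to_string())
}

fn main() {
    // Env var unset: fall back to info.
    assert_eq!(
        choose_filter(Err(VarError::NotPresent)),
        "backend_e2e=info"
    );
    // Env var set: it wins, so trace-level logs are no longer hidden.
    assert_eq!(
        choose_filter(Ok("backend_e2e=trace".to_string())),
        "backend_e2e=trace"
    );
    println!("filter precedence ok");
}
```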
…itions

Reverts the bloom filter rebuild API additions (notify_wallet_addresses_changed) that were added in commit eb948ae. The upstream MempoolManager has bugs (a masternode sync cascade blocking activation) that make these calls ineffective. Failing tests proving these bugs are tracked in dashpay/rust-dashcore#567. The platform rev is pinned to 2af8cac6, which uses rust-dashcore 1af96dd (PR #558).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Updates platform to be2082b3c which pins rust-dashcore b03252af, bringing bloom filter staleness detection via monitor_revision polling. Adapts imports for key-wallet-manager → key-wallet crate merge (upstream rust-dashcore #503). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…rness

The E2E harness was broadcasting funding transactions before the MempoolManager's bloom filter was loaded to peers. Peers never relayed IS locks for those txs because the filter didn't exist yet.

- Add wait_for_spv_running(), which polls ConnectionStatus::spv_status() until SpvStatus::Running (fires after SyncComplete + bloom filter built)
- Call it in BackendTestContext::init() after the spendable balance is confirmed
- Add a 200ms sleep in create_funded_test_wallet() after the wallet appears in SPV, giving the mempool manager tick time to rebuild the bloom filter

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
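A wait helper like the one this commit adds typically follows a poll-with-deadline shape. The sketch below uses simplified stand-ins (the SpvStatus enum and function names are assumed, not the harness's real code) and also fails fast if the status reaches an Error state rather than burning the full timeout:

```rust
use std::time::{Duration, Instant};

#[derive(Clone, Copy, PartialEq)]
enum SpvStatus {
    Syncing,
    Running,
    Error,
}

// Poll a status source until Running; fail fast on Error; time out otherwise.
fn wait_for_running(
    mut poll: impl FnMut() -> SpvStatus,
    timeout: Duration,
) -> Result<(), String> {
    let start = Instant::now();
    loop {
        match poll() {
            SpvStatus::Running => return Ok(()),
            SpvStatus::Error => return Err("SPV entered Error state".to_string()),
            SpvStatus::Syncing if start.elapsed() >= timeout => {
                return Err("timed out waiting for SPV Running".to_string())
            }
            SpvStatus::Syncing => std::thread::sleep(Duration::from_millis(1)),
        }
    }
}

fn main() {
    // Simulated status source: Syncing twice, then Running.
    let mut calls = 0;
    let ok = wait_for_running(
        || {
            calls += 1;
            if calls < 3 { SpvStatus::Syncing } else { SpvStatus::Running }
        },
        Duration::from_secs(1),
    );
    assert!(ok.is_ok());

    // An Error status returns immediately instead of waiting out the timeout.
    let err = wait_for_running(|| SpvStatus::Error, Duration::from_secs(1));
    assert!(err.is_err());
    println!("wait helper ok");
}
```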
ConnectionStatus.spv_status is only updated from the UI frame loop, which doesn't run in the E2E test harness. Read from SpvManager's internal RwLock instead. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pull request overview
Integrates upstream rust-dashcore SPV mempool support into dash-evo-tool and updates the codebase for the upstream Network::Dash → Network::Mainnet rename, including DB migration and E2E timing adjustments so bloom-filter mempool tracking is active before broadcasts.
Changes:
- Enable SPV mempool tracking via MempoolStrategy::BloomFilter and adapt to updated wallet-manager APIs.
- Migrate persisted network strings from "dash" to "mainnet" and update UI/backend logic/tests to use Network::Mainnet.
- Stabilize backend E2E tests by waiting for SPV Running and adding bloom-filter rebuild timing.
Reviewed changes
Copilot reviewed 32 out of 33 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/backend-e2e/framework/wait.rs | Adds tracing/telemetry to polling helpers and introduces wait_for_spv_running() gating. |
| tests/backend-e2e/framework/harness.rs | Waits for SPV Running before tests broadcast; adds bloom filter rebuild delay; adjusts tracing env filter init. |
| src/ui/wallets/wallets_screen/dialogs.rs | Updates mainnet address hint check to Network::Mainnet. |
| src/ui/tools/masternode_list_diff_screen.rs | Updates network matching to Network::Mainnet for mainnet-specific MN list defaults. |
| src/ui/tokens/tokens_screen/mod.rs | Updates test fixtures to Network::Mainnet. |
| src/ui/theme.rs | Updates mainnet color mapping match arm to Network::Mainnet. |
| src/ui/network_chooser_screen.rs | Switches mainnet handling from Network::Dash to Network::Mainnet across chooser logic. |
| src/ui/identities/withdraw_screen.rs | Updates mainnet address hint check to Network::Mainnet. |
| src/ui/identities/transfer_screen.rs | Updates platform-address hint check to Network::Mainnet. |
| src/ui/contracts_documents/contracts_documents_screen.rs | Updates mainnet gating check to Network::Mainnet. |
| src/ui/components/left_panel.rs | Updates “mainnet vs other networks” UI logic to use Network::Mainnet. |
| src/spv/manager.rs | Enables mempool tracking and updates wallet-manager imports and mempool tx processing signature usage. |
| src/model/wallet/mod.rs | Updates network comparisons and derivation helpers to use Network::Mainnet. |
| src/model/settings.rs | Sets default settings network to Network::Mainnet. |
| src/model/qualified_identity/qualified_identity_public_key.rs | Updates mainnet checks to Network::Mainnet for address rendering. |
| src/model/qualified_identity/mod.rs | Updates default decoded identity network placeholder to Network::Mainnet. |
| src/database/utxo.rs | Updates tests to use "mainnet"/Network::Mainnet for mainnet filtering. |
| src/database/settings.rs | Adds legacy "dash" → Network::Mainnet parsing for older DBs; updates tests. |
| src/database/initialization.rs | Bumps DB version and adds migration to rename "dash" to "mainnet" in stored network fields. |
| src/database/contested_names.rs | Updates mainnet contest duration check to Network::Mainnet. |
| src/context/wallet_lifecycle.rs | Updates wallet-network mapping to WalletNetwork::Mainnet. |
| src/context/mod.rs | Updates default platform version selection match to Network::Mainnet. |
| src/context/connection_status.rs | Updates mainnet online detection match to Network::Mainnet. |
| src/config.rs | Updates config routing for mainnet to use Network::Mainnet. |
| src/components/core_p2p_handler.rs | Updates mainnet port/timeout matching to Network::Mainnet. |
| src/backend_task/platform_info.rs | Updates mainnet epoch timing constants match to Network::Mainnet. |
| src/backend_task/mnlist.rs | Updates mainnet seed height match to Network::Mainnet. |
| src/backend_task/dashpay/hd_derivation.rs | Updates tests to derive mainnet xpubs using Network::Mainnet. |
| src/backend_task/core/mod.rs | Updates wallet-manager import path and mainnet config/chainlock selection to Network::Mainnet. |
| src/app_dir.rs | Updates mainnet Core dir/config selection to Network::Mainnet. |
| src/app.rs | Updates application default/chosen network handling and routing from Network::Dash to Network::Mainnet. |
| Cargo.toml | Updates dash-sdk git dependency revision to include mempool-support changes. |
| Cargo.lock | Locks updated transitive dependencies for new platform/rust-dashcore revisions. |
🔍 Re-review: All 3 previous findings remain unresolved

I re-ran a full parallel review (security, project consistency, Rust code quality — 3 specialist agents). The good news: the Network::Dash → Network::Mainnet rename is thorough across 33 files, the mempool bloom filter integration is clean, and the E2E harness improvements are solid. The bad news: all three findings from the previous review remain unaddressed.

Blockers (unchanged)
Additional findings (new, lower severity)
What's good
Fix priority
Full report (9 findings, 3 agents, redundancy ratio 68%) is available as a build artifact.

🤖 Co-authored by Claudius the Magnificent AI Agent
Eliminate polling of SpvManager.status() from the UI frame loop. SpvManager event handlers now push status, peer count, and error updates directly to ConnectionStatus via new setter methods, making SPV status visible to headless/test code without a UI polling cycle. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ion health Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Pull request overview
Copilot reviewed 33 out of 34 changed files in this pull request and generated 3 comments.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
Cargo.toml (1)

Lines 21-29: ⚠️ Potential issue | 🔴 Critical

The dash-sdk commit hash 93556aa4b cannot be verified in the dashpay/platform repository. This commit does not appear to exist in any public Dashpay repositories. Update the dependency to use a valid commit hash from https://github.com/dashpay/platform. This will cause build failures if left as-is.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Cargo.toml` around lines 21-29: the dash-sdk dependency references the non-existent commit rev "93556aa4b", which prevents fetching the crate; update the dash-sdk git dependency entry (dash-sdk = { git = "...", rev = "93556aa4b", ... }) to a valid commit hash or tag from the https://github.com/dashpay/platform repository (or switch to a published crate/version) so Cargo can resolve it; verify the commit exists on the upstream repo, replace the rev value, then re-run cargo update/build to confirm resolution.

src/ui/tools/masternode_list_diff_screen.rs (1)
Lines 1287-1295: ⚠️ Potential issue | 🟠 Major

Use a saturating lower bound for historical chain-lock loads.

When the requested block_height is below max_blocks, block_height - max_blocks underflows here. In debug builds that can panic; in release builds it wraps, and the range below becomes empty instead of starting at height 0.

♻️ Proposed fix

```diff
- block_height - max_blocks
+ block_height.saturating_sub(max_blocks)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/tools/masternode_list_diff_screen.rs` around lines 1287-1295: the start_height calculation can underflow when block_height < max_blocks; replace the raw subtraction (block_height - max_blocks) with a saturating lower bound such as block_height.saturating_sub(max_blocks), or an explicit conditional that yields 0 when block_height < max_blocks; keep the rest of the branch (the loaded_list_height check and variables like base_block_height, block_height, max_blocks, loaded_list_height) unchanged.
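The difference between the raw and saturating subtraction is easy to demonstrate with plain integers:

```rust
fn main() {
    // Requested height is below the window size: the raw subtraction
    // 50 - 200 would panic in debug builds and wrap in release builds.
    let (block_height, max_blocks): (u32, u32) = (50, 200);

    // saturating_sub clamps at zero instead, so the range starts at height 0.
    let start = block_height.saturating_sub(max_blocks);
    assert_eq!(start, 0);

    // When no underflow occurs, the result is unchanged.
    assert_eq!(1_000u32.saturating_sub(200), 800);

    println!("start = {start}");
}
```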
♻️ Duplicate comments (1)
src/database/initialization.rs (1)
Lines 7-9: ⚠️ Potential issue | 🔴 Critical

Migration 29 still targets the wrong network-scoped table.

Line 913 still lists wallet_addresses, but that table's schema in this file (lines 339-349) has no network column. Because lines 7-9 and 54-56 make migration 29 run automatically, UPDATE wallet_addresses ... will fail with "no such column: network", roll back the migration, and block startup on upgrade. identity_token_balances is the omitted network-scoped table instead (src/database/tokens.rs:177-190), so its mainnet rows would otherwise stay stranded under "dash".

Suggested fix

```diff
 let tables = [
     "settings",
     "wallet",
-    "wallet_addresses",
+    "identity_token_balances",
     "platform_address_balances",
     "utxos",
     "asset_lock_transaction",
```

Also applies to: 52-56, 903-935
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/database/initialization.rs` around lines 7 - 9, Migration 29 in initialization.rs incorrectly targets wallet_addresses (which has no network column) causing the UPDATE to fail; locate the migration entry for version 29 (the block that runs when DEFAULT_DB_VERSION/DEFAULT_NETWORK cause auto-migration) and change the UPDATE/WHERE to operate on identity_token_balances instead of wallet_addresses, referencing the identity_token_balances table defined in tokens.rs (which contains the network column); keep the same update logic but target identity_token_balances and ensure the migration identifier/version remains 29 so the migration will apply to mainnet rows as intended.
🧹 Nitpick comments (4)
tests/backend-e2e/framework/wait.rs (1)
Lines 160-184: Fail fast if SPV enters Error.

Line 171 only exits on Running. Once the status has moved to Error, this loop can never succeed, so the helper just burns the full timeout and hides the real startup failure behind a generic timeout message.
Verify each finding against the current code and only fix it if needed. In `@tests/backend-e2e/framework/wait.rs` around lines 160 - 184, In wait_for_spv_running, detect SpvStatus::Error and return immediately with a descriptive Err instead of waiting the full timeout; inside the polling loop (where you call app_context.connection_status().spv_status()), add a branch that if status == SpvStatus::Error you return Err(...) with a message including the error state (and any available diagnostic), otherwise continue waiting for SpvStatus::Running; keep the existing timeout logic but fail fast on Error to avoid masking startup failures.src/backend_task/core/mod.rs (1)
Lines 43-51: Consider sharing networks_address_compatible() with wallet code.

This module and src/model/wallet/mod.rs both needed the same Mainnet update in this PR. Pulling the rule into a single helper would keep payment validation and wallet logic from drifting on the next network/address-format change.

Also applies to: 181-195
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend_task/core/mod.rs` around lines 43 - 51, Duplicate network-address compatibility logic exists in networks_address_compatible (in backend_task/core/mod.rs) and in src/model/wallet/mod.rs; extract that rule into a single shared helper (e.g., a new function named networks_address_compatible or networks_address_compatible_internals in a common/lib module) and replace both local implementations to call the shared helper so Mainnet/Testnet/Devnet/Regtest rules remain consistent; update imports in backend_task::core and model::wallet to use the new helper and remove the duplicated code blocks.src/model/wallet/mod.rs (1)
Lines 152-156: Centralize the coin-type mapping instead of repeating Mainnet => 5.

These helpers now encode the same Mainnet => 5 / testnet-like => 1 branch in several places, even though this module already has DASH_COIN_TYPE, DASH_TESTNET_COIN_TYPE, and coin_type(). Reusing one source of truth will make future derivation updates safer.

Also applies to: 189-193, 217-241, 1499-1503
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/model/wallet/mod.rs` around lines 152 - 156, Several functions in this module hardcode the coin-type mapping (e.g., using `Mainnet => 5` / testnet => 1) instead of using the existing canonical values; update callers (including `is_bip44_path`, the blocks around lines referenced, and any other helpers) to use the module's `coin_type(network)` function or the constants `DASH_COIN_TYPE` / `DASH_TESTNET_COIN_TYPE` as the single source of truth; replace the `match network { Network::Mainnet => 5, _ => 1 }` logic with a call to `coin_type(network)` (or compare against the constants) so all derivation checks consistently reference the centralized coin-type mapping.tests/backend-e2e/framework/harness.rs (1)
Lines 261-264: Consider making the bloom filter rebuild delay configurable or polling-based.

The fixed 200ms sleep assumes the mempool manager tick (100ms) will always complete the bloom filter rebuild within that window. This is fragile if the tick interval changes upstream or under load.
A more robust approach would poll until the bloom filter includes the wallet's addresses, but given this is test harness code and the 200ms provides a 2× safety margin over the 100ms tick, this is acceptable for now.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/backend-e2e/framework/harness.rs` around lines 261 - 264, The fixed tokio::time::sleep(Duration::from_millis(200)).await in create_funded_test_wallet is fragile—replace it with a configurable delay or, preferably, a polling loop that waits until the mempool manager's bloom filter has been rebuilt and includes the wallet's addresses (instead of assuming two tick intervals); make the wait configurable (e.g., a timeout constant or parameter) and poll at small intervals checking the bloom filter state, falling back to an error after the timeout so tests don't hang.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/context/connection_status.rs`:
- Around line 150-171: spv_no_peers_since can become stale because it's only
updated in set_spv_connected_peers; update set_spv_status (the method that
changes SPV state) to clear or recompute spv_no_peers_since consistent with the
new status: when transitioning out of an active/connected state set
spv_no_peers_since = None, and when transitioning into a no-peers active state
initialize it to Some(Instant::now()) (use the same locking pattern as in
set_spv_connected_peers); ensure tooltip_text() and spv_status() semantics
remain consistent by keeping the same Ordering/lock usage and touching the
spv_no_peers_since field only within set_spv_status and set_spv_connected_peers.
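The invariant this prompt describes can be sketched with plain std types. Field and method names mirror the comment, but the surrounding ConnectionStatus implementation (atomics, mutexes, refresh_state) is assumed and elided:

```rust
use std::time::Instant;

#[derive(Clone, Copy, PartialEq)]
enum SpvStatus {
    Running,
    Stopped,
}

// "No peers since" is set only while SPV is active with zero peers, and
// cleared on any transition out of an active state, so the tooltip never
// shows a stale warning.
struct ConnectionStatus {
    spv_status: SpvStatus,
    spv_connected_peers: usize,
    spv_no_peers_since: Option<Instant>,
}

impl ConnectionStatus {
    fn set_spv_connected_peers(&mut self, peers: usize) {
        self.spv_connected_peers = peers;
        self.recompute_no_peers_since();
    }

    fn set_spv_status(&mut self, status: SpvStatus) {
        self.spv_status = status;
        // The key fix: status changes also recompute the timestamp.
        self.recompute_no_peers_since();
    }

    fn recompute_no_peers_since(&mut self) {
        let active_without_peers =
            self.spv_status == SpvStatus::Running && self.spv_connected_peers == 0;
        match (active_without_peers, self.spv_no_peers_since) {
            (true, None) => self.spv_no_peers_since = Some(Instant::now()),
            (false, Some(_)) => self.spv_no_peers_since = None,
            _ => {}
        }
    }
}

fn main() {
    let mut cs = ConnectionStatus {
        spv_status: SpvStatus::Running,
        spv_connected_peers: 0,
        spv_no_peers_since: None,
    };
    cs.set_spv_connected_peers(0);
    assert!(cs.spv_no_peers_since.is_some());

    // Transitioning out of the active state clears the timestamp.
    cs.set_spv_status(SpvStatus::Stopped);
    assert!(cs.spv_no_peers_since.is_none());
    println!("no-peers timestamp cleared on stop");
}
```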
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b9315ad4-97ae-4180-81c5-143e4293d52a
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (33)
- CLAUDE.md
- Cargo.toml
- src/app.rs
- src/app_dir.rs
- src/backend_task/core/mod.rs
- src/backend_task/dashpay/hd_derivation.rs
- src/backend_task/mnlist.rs
- src/backend_task/platform_info.rs
- src/components/core_p2p_handler.rs
- src/config.rs
- src/context/connection_status.rs
- src/context/mod.rs
- src/context/wallet_lifecycle.rs
- src/database/contested_names.rs
- src/database/initialization.rs
- src/database/settings.rs
- src/database/utxo.rs
- src/model/qualified_identity/mod.rs
- src/model/qualified_identity/qualified_identity_public_key.rs
- src/model/settings.rs
- src/model/wallet/mod.rs
- src/spv/manager.rs
- src/ui/components/left_panel.rs
- src/ui/contracts_documents/contracts_documents_screen.rs
- src/ui/identities/transfer_screen.rs
- src/ui/identities/withdraw_screen.rs
- src/ui/network_chooser_screen.rs
- src/ui/theme.rs
- src/ui/tokens/tokens_screen/mod.rs
- src/ui/tools/masternode_list_diff_screen.rs
- src/ui/wallets/wallets_screen/dialogs.rs
- tests/backend-e2e/framework/harness.rs
- tests/backend-e2e/framework/wait.rs
…ration 29

The wallet_addresses table has no `network` column, causing migration 29 (rename "dash" → "mainnet") to fail with "no such column: network". Replace it with identity_token_balances, which does have the column.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
If a table update fails during the dash→mainnet rename, the error now identifies which table caused the failure. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🧹 Nitpick comments (1)
src/database/initialization.rs (1)
Lines 909-941: Past critical issue resolved — wallet_addresses correctly excluded.

The table list now correctly omits wallet_addresses (which lacks a network column) and includes identity_token_balances (which has one). The migration logic is sound.
Consider a defensive table/column existence check.

If an edge case arises where a table doesn't exist (e.g., partial corruption), the migration will fail and roll back. For extra resilience, you could skip tables that lack the network column:

🛡️ Optional: defensive check before UPDATE

```diff
 for table in tables {
+    let has_network_col: bool = conn
+        .query_row(
+            &format!(
+                "SELECT COUNT(*) FROM pragma_table_info('{table}') WHERE name='network'"
+            ),
+            [],
+            |row| row.get::<_, i32>(0).map(|c| c > 0),
+        )
+        .unwrap_or(false);
+    if !has_network_col {
+        tracing::debug!("migration 29: skipping table `{table}` (no network column)");
+        continue;
+    }
     conn.execute(
         &format!("UPDATE {table} SET network = 'mainnet' WHERE network = 'dash'"),
         [],
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/database/initialization.rs` around lines 909 - 941, In rename_network_dash_to_mainnet, make the migration resilient by checking each table actually exists and has a "network" column before running the UPDATE: for each table in the tables array use the Connection (conn) to query its schema (e.g., PRAGMA table_info(table) or an equivalent SELECT) to detect a "network" column and skip the update if absent; if the table itself is missing, treat that as non-fatal and continue rather than mapping the error to rusqlite::Error::InvalidParameterName — only call conn.execute(...) when the column is present and propagate real SQL errors from the UPDATE otherwise.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 77a101ba-0b1f-472c-859c-566869411e79
📒 Files selected for processing (1)
src/database/initialization.rs
write_status() updated the internal RwLock but did not push to ConnectionStatus. The UI reads ConnectionStatus.spv_status() to show the SPV Sync Status section, so it stayed hidden until spawn_progress_watcher fired its first async tick. Now start() and stop() push status to ConnectionStatus immediately, matching the pattern used in all other status transitions. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Move the ConnectionStatus push into write_status() so every status transition automatically propagates to the UI. Remove redundant manual pushes from start(), stop(), run_spv_loop(), and run_client(). Async event handlers (spawn_progress_watcher, spawn_sync_event_handler) still push directly because they write to the RwLock without going through write_status() (they hold Arc<RwLock>, not &self). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- fix(ui): testnet MNListDiff fallback now uses Network::Testnet (was incorrectly Network::Mainnet in file-not-found path) - fix(spv): progress watcher no longer clears last_error when a second error arrives with error_msg=None (preserves first error) - fix(spv): clear spv_no_peers_since when SPV transitions to non-active state, preventing stale "no peers" tooltip warnings - docs: clarify ConnectionStatus scope — health is push-based, detailed sync progress may still be read from SpvManager Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
These crates were renamed/added upstream in rust-dashcore 450f72f and were missing from the default EnvFilter, hiding debug logs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/spv/manager.rs (1)
Lines 1107-1158: ⚠️ Potential issue | 🟠 Major

Do not let late SPV callbacks overwrite Stopping/Stopped.

After stop() requests shutdown, both handlers can still promote the state from a late callback. spawn_progress_watcher() unconditionally maps is_synced to Running, and spawn_sync_event_handler() does the same for SyncComplete (and can also flip to Error on a late ManagerError). If one of those lands before the channels close, the UI jumps out of Stopping, and trigger_refresh() loses its fast 200 ms shutdown polling path.

Possible fix

```diff
-    if let Ok(mut status_guard) = status.write() {
-        if is_synced {
+    if let Ok(mut status_guard) = status.write() {
+        if matches!(*status_guard, SpvStatus::Stopping | SpvStatus::Stopped) {
+            new_status = None;
+        } else if is_synced {
             *status_guard = SpvStatus::Running;
             new_status = Some(SpvStatus::Running);
         } else if is_error {
             *status_guard = SpvStatus::Error;
             new_status = Some(SpvStatus::Error);
@@
-    if matches!(event, SyncEvent::SyncComplete { .. })
-        && let Ok(mut guard) = status.write()
-    {
+    if matches!(event, SyncEvent::SyncComplete { .. })
+        && let Ok(mut guard) = status.write()
+        && !matches!(*guard, SpvStatus::Stopping | SpvStatus::Stopped)
+    {
         *guard = SpvStatus::Running;
         drop(guard);
         if let Some(cs) = &connection_status {
             cs.set_spv_status(SpvStatus::Running);
             cs.refresh_state();
         }
     }
```

Also applies to: 1211-1247
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spv/manager.rs` around lines 1107 - 1158, Late SPV callbacks can overwrite Stopping/Stopped; change the logic in spawn_progress_watcher()/spawn_sync_event_handler() around writing to status (the block that sets SpvStatus::Running/Syncing/Error) to first inspect the current *status_guard and refuse to promote/overwrite if it is SpvStatus::Stopping or SpvStatus::Stopped. Concretely, when acquiring status.write() in the code that computes new_status, add a guard like if matches!(*status_guard, SpvStatus::Stopping | SpvStatus::Stopped) { new_status = None; } before assigning Running/Syncing/Error so late callbacks no longer flip the state; apply the same guard before calling cs.set_spv_status(...) so ConnectionStatus is not updated from Stopping/Stopped either.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/spv/manager.rs`:
- Around line 449-452: The code currently pushes raw backend/SPV errors into
ConnectionStatus via manager.connection_status_snapshot() and
cs.set_spv_last_error(Some(err)); instead, stop passing the raw err string into
ConnectionStatus::spv_last_error — replace it with a short, user-facing
action-oriented message (e.g. "Sync failed — please check your network and
retry") and record the raw error in SpvManager::last_error or the application
log; update all occurrences that call cs.set_spv_last_error(Some(...))
(including the blocks around connection_status_snapshot() shown and the other
referenced locations) to pass the sanitized message and ensure the original err
is preserved only in non-user-facing storage (SpvManager::last_error or process
logger).
---
Outside diff comments:
In `@src/spv/manager.rs`:
- Around line 1107-1158: Late SPV callbacks can overwrite Stopping/Stopped;
change the logic in spawn_progress_watcher()/spawn_sync_event_handler() around
writing to status (the block that sets SpvStatus::Running/Syncing/Error) to
first inspect the current *status_guard and refuse to promote/overwrite if it is
SpvStatus::Stopping or SpvStatus::Stopped. Concretely, when acquiring
status.write() in the code that computes new_status, add a guard like if
matches!(*status_guard, SpvStatus::Stopping | SpvStatus::Stopped) { new_status =
None; } before assigning Running/Syncing/Error so late callbacks no longer flip
the state; apply the same guard before calling cs.set_spv_status(...) so
ConnectionStatus is not updated from Stopping/Stopped either.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fb563dca-d3b1-41b1-bc28-5ff5e0884122
📒 Files selected for processing (4)
- CLAUDE.md
- src/context/connection_status.rs
- src/spv/manager.rs
- src/ui/tools/masternode_list_diff_screen.rs
✅ Files skipped from review due to trivial changes (1)
- CLAUDE.md
Re-review complete. See inline comments and summary above.
📊 View full HTML review report
- Error banner now says "Go to Settings" and attaches the actual error message via with_details() for the collapsible details panel
- Settings SPV status label shows the error message instead of just "Error"
- Add public spv_last_error() getter on ConnectionStatus (DRY)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
thepastaclaw
left a comment
Code Review
Validated 23 findings from 4 review agents against PR #776 source code. Both 'blocking' findings from codex-general are false positives: (1) shared ConnectionStatus across networks is by design with reset() handling switches, and (2) backend-e2e tests compile fine under documented --all-features commands. Remaining validated findings are suggestions and nitpicks around ConnectionStatus push consistency, lock patterns, and test robustness. No blocking issues found.
Reviewed commit: 9abc327
🟡 7 suggestion(s)
1 additional finding
🟡 suggestion: write_last_error() does not push to ConnectionStatus, leaving stale state after clear_data_and_reset()
src/spv/manager.rs (lines 232-238)
write_status() centralizes pushes to ConnectionStatus (lines 216-221) but write_last_error() does not — callers must push separately. In clear_data_and_reset(), write_last_error(None) at line 607 does not clear ConnectionStatus.spv_last_error, and connected_peers is reset locally (line 610-612) but not pushed to ConnectionStatus.spv_connected_peers. While overall_state correctly resolves to Disconnected (via write_status(Idle) → refresh_state()), tooltip display code reads spv_connected_peers and spv_last_error directly — so stale values may appear until SPV restarts. Consider either making write_last_error() push like write_status(), or adding explicit ConnectionStatus pushes in clear_data_and_reset().
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/spv/manager.rs`:
- [SUGGESTION] lines 232-238: write_last_error() does not push to ConnectionStatus, leaving stale state after clear_data_and_reset()
write_status() centralizes pushes to ConnectionStatus (lines 216-221) but write_last_error() does not — callers must push separately. In clear_data_and_reset(), write_last_error(None) at line 607 does not clear ConnectionStatus.spv_last_error, and connected_peers is reset locally (line 610-612) but not pushed to ConnectionStatus.spv_connected_peers. While overall_state correctly resolves to Disconnected (via write_status(Idle) → refresh_state()), tooltip display code reads spv_connected_peers and spv_last_error directly — so stale values may appear until SPV restarts. Consider either making write_last_error() push like write_status(), or adding explicit ConnectionStatus pushes in clear_data_and_reset().
- [SUGGESTION] lines 1107-1158: Progress watcher bypasses write_status() centralization
spawn_progress_watcher() writes directly to the status RwLock (lines 1107-1123) instead of calling write_status(), then separately pushes to ConnectionStatus (lines 1143-1158). This duplicates the logic from write_status(). The bypass is architecturally forced — the watcher runs in an async move closure that captures Arc<RwLock<SpvStatus>> and Option<Arc<ConnectionStatus>> separately because &self is not available. However, future changes to write_status() (e.g., adding logging or additional side-effects) must be manually replicated here. A comment noting this dependency would reduce maintenance risk.
- [SUGGESTION] line 178: connection_status field uses Mutex for set-once-read-many pattern
SpvManager.connection_status is Mutex<Option<Arc<ConnectionStatus>>> — set once via set_connection_status() during initialization, then only read via connection_status_snapshot(). Every status push acquires and releases this Mutex. OnceLock<Arc<ConnectionStatus>> (or std::sync::OnceLock) would be more idiomatic for this set-once-read-many pattern, eliminating repeated lock acquisition on the hot path.
In `src/context/connection_status.rs`:
- [SUGGESTION] lines 146-177: Inconsistent lock-poison handling in ConnectionStatus setters
set_spv_connected_peers() (line 172) uses unwrap_or_else(|e| e.into_inner()) to recover from poisoned Mutex locks, while set_spv_status() (line 151) and set_spv_last_error() (line 159) use if let Ok(...) which silently drops the write on poison. These are called from the same SPV event handlers, so the inconsistency is unnecessary. Pick one strategy — recover-from-poison is generally preferred for status/display fields since losing an update is worse than using slightly stale data.
In `src/database/initialization.rs`:
- [SUGGESTION] lines 933-938: Migration 29 uses InvalidParameterName for error wrapping
rename_network_dash_to_mainnet() wraps migration failures in rusqlite::Error::InvalidParameterName(format!(...)), which is semantically wrong — that variant is for SQLite named-parameter errors, not general migration failures. The same pattern exists in the outer apply_migrations wrapper (line 41-44), so this is a pre-existing convention. A custom error type or rusqlite::Error::QueryReturnedNoRows-style marker would be more correct, but this is cosmetic since the formatted string carries the real diagnostic.
In `tests/backend-e2e/framework/harness.rs`:
- [SUGGESTION] lines 261-263: Bloom-filter readiness gated by hard-coded sleep in tests
tokio::time::sleep(Duration::from_millis(200)) assumes the mempool manager tick interval remains 100ms. If the tick interval changes, this sleep may be too short, causing flaky test failures where the bloom filter hasn't been rebuilt before broadcasting. A poll-based approach (checking bloom filter state or wallet address registration) would be more robust, though the current 2x safety margin is reasonable for now. Test-only concern.
In `src/context/wallet_lifecycle.rs`:
- [SUGGESTION] line 384: wallet_network_key() wildcard maps unknown Network variants to Mainnet
The _ => WalletNetwork::Mainnet wildcard catches any future Network variants (the enum comes from dashcore and could gain new variants). Silently mapping an unknown network to Mainnet could cause wallet data to be stored/loaded under the wrong network key. Consider logging a warning in the wildcard arm, or returning an error/panic to surface the unhandled variant early during development.
With mempool support enabled, upstream wallet_transaction_history() can return the same txid twice — as both a mempool entry (no height) and a confirmed entry (with height). This caused a UNIQUE constraint violation on (seed_hash, txid, network). Deduplicate in replace_wallet_transactions(), preferring the confirmed version over unconfirmed when both exist for the same txid. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…acking

Add TransactionStatus (Unconfirmed/InstantSendLocked/Confirmed/ChainLocked) to WalletTransaction model and database schema.

- New TransactionStatus enum with from_height() heuristic for inferring status from block height (confirmed vs unconfirmed)
- Migration 30: adds `status` column to wallet_transactions table (default 2=Confirmed for existing rows)
- INSERT OR REPLACE handles duplicate txids from mempool + block
- UI shows status labels: "Pending", "⚡ InstantSend", "Confirmed @n", "🔒 ChainLocked @n"

InstantSendLocked and ChainLocked require upstream support (rust-dashcore#569) — currently only Unconfirmed/Confirmed are inferred.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/context/wallet_lifecycle.rs (1)
825-866: ⚠️ Potential issue | 🟠 Major: Deduplicate SPV history by `txid` before writing it back.

`replace_wallet_transactions()` now relies on confirmed entries coming after unconfirmed duplicates, but this code forwards `wallet_transaction_history()` unchanged and then stores the same raw vector in memory. When SPV returns both mempool and confirmed rows for one `txid`, the persisted winner depends on iterator order, and `wallet.set_transactions(...)` can still keep both copies. Normalize the history here first, with a deterministic status precedence, before both calls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/context/wallet_lifecycle.rs` around lines 825 - 866, The SPV history must be deduplicated by txid with deterministic status precedence before persisting or storing in memory: after building wallet_transactions (from history and TransactionStatus::from_height), collapse entries with the same txid by choosing the single representative according to a fixed precedence (e.g., confirmed > unconfirmed/mempool) and preserving a deterministic order; then pass the deduped vector to self.db.replace_wallet_transactions(...) and wallet.set_transactions(...). Update the wallet_transactions variable to the deduped list and ensure the same collection is used for both replace_wallet_transactions and set_transactions so DB and in-memory state match.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/database/initialization.rs`:
- Around line 914-918: The migration adds status unconditionally in
add_wallet_transaction_status_column which causes a duplicate-column error if
initialize_wallet_transactions_table already created status for fresh databases;
make migration 30 idempotent by checking for the column before ALTER: use
pragma_table_info('wallet_transactions') (or equivalent query) in
add_wallet_transaction_status_column to return early if a "status" column
exists, or separate the fresh-schema helper
(initialize_wallet_transactions_table) from the historical migration path so the
ALTER runs only when upgrading from older schemas.
In `@src/model/wallet/mod.rs`:
- Around line 533-541: The confirmation state is duplicated between the numeric
height and the new TransactionStatus enum (constructed via
TransactionStatus::from_u8), causing two sources of truth; make status
authoritative by changing readers/helpers to derive confirmation solely from
TransactionStatus (e.g., update WalletTransaction::is_confirmed to check status
== TransactionStatus::Confirmed or ChainLocked/InstantSendLocked as appropriate)
and remove or ignore height-based checks where they conflict; ensure
deserialization uses TransactionStatus::from_u8 to populate status and any
callers expecting confirmation use the updated is_confirmed implementation so
all code paths rely on status rather than height.
In `@src/ui/wallets/wallets_screen/mod.rs`:
- Around line 958-969: In format_transaction_status, change the
fragment/positional strings to complete, localizable sentences: replace
TransactionStatus::Unconfirmed "Pending" with a full sentence like "This
transaction is pending."; for TransactionStatus::InstantSendLocked return a
sentence such as "This transaction is locked by InstantSend."; for
TransactionStatus::Confirmed use height.map(|h| format!("This transaction was
confirmed at {}.", h)).unwrap_or_else(|| "This transaction was
confirmed.".to_string()); and for TransactionStatus::ChainLocked use
height.map(|h| format!("This transaction is chain locked at {}.",
h)).unwrap_or_else(|| "This transaction is chain locked.".to_string()); ensure
you update the match arms in format_transaction_status and keep the
WalletTransaction.height usage unchanged.
---
Outside diff comments:
In `@src/context/wallet_lifecycle.rs`:
- Around line 825-866: The SPV history must be deduplicated by txid with
deterministic status precedence before persisting or storing in memory: after
building wallet_transactions (from history and TransactionStatus::from_height),
collapse entries with the same txid by choosing the single representative
according to a fixed precedence (e.g., confirmed > unconfirmed/mempool) and
preserving a deterministic order; then pass the deduped vector to
self.db.replace_wallet_transactions(...) and wallet.set_transactions(...).
Update the wallet_transactions variable to the deduped list and ensure the same
collection is used for both replace_wallet_transactions and set_transactions so
DB and in-memory state match.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: bc6b16f1-8754-4b58-a17d-c8be20665211
📒 Files selected for processing (6)
- src/backend_task/core/refresh_wallet_info.rs
- src/context/wallet_lifecycle.rs
- src/database/initialization.rs
- src/database/wallet.rs
- src/model/wallet/mod.rs
- src/ui/wallets/wallets_screen/mod.rs
Both migrations were introduced in this PR — no need for users to run them separately. Migration 29 now does both: rename network "dash" to "mainnet" AND add the transaction status column. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…g sync

When using the "System" color theme, dark_light::detect() was called every frame. During SPV sync (high repaint frequency), transient wrong values caused brief theme flips visible as white flashes.

Changes:
- Cache the resolved theme in AppState and only re-poll the OS every 2s
- Guard font initialization with AtomicBool so set_fonts runs once
- Immediately resolve and apply theme on preference change

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…sh during sync" This reverts commit 7f822bb.
The table-creation DEFAULT was Confirmed (2), which is wrong — transactions should be unconfirmed until proven otherwise. The migration ALTER TABLE correctly keeps DEFAULT 2 for pre-existing rows (all from confirmed RPC). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… poison handling)

- Guard ALTER TABLE in migration 30 with pragma_table_info check to prevent duplicate column crash on DBs upgrading from version <14
- Make is_confirmed() authoritative via TransactionStatus instead of height
- Consistent recover-from-poison in ConnectionStatus setters
- Add TODO for write_last_error() ConnectionStatus push consolidation
- Add maintenance comments for progress watcher write_status bypass
- Add tracing::debug for unknown Network variant fallback

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
Enable rust-dashcore's `MempoolManager` (8th SPV sync manager) with the BIP37 bloom filter strategy. This is the infrastructure layer — the SPV client now detects incoming mempool transactions and processes them internally via `WalletManager`. UI-side improvements (instant balance refresh, unconfirmed tx indicators) are tracked separately in #780.

What this PR does
- `MempoolStrategy::BloomFilter` in `create_spv_client()` — MempoolManager starts automatically after sync completes
- `ConnectionStatus` as single source of truth
- `process_mempool_transaction()` signature: updated for new `is_instant_send: bool` param + `MempoolTransactionResult` return type
- `wallet_addresses` (no network column) — now uses `identity_token_balances`
- `Mainnet` instead of `Testnet`
- `SpvStatus::Running` before broadcasting (ensures bloom filter is active), 200ms sleep for filter rebuild after new wallet registration
- `key_wallet_manager` → `key_wallet` (upstream crate merge; feat: update sdk v3.0.1 hotfix.1 #503 / SPV sync indicator is unreliable #568)

What this PR does NOT do (see #780)
`WalletEvent::TransactionReceived`/`TransactionStatusChanged` (consumed by wildcard match)
Updated through `dashpay/platform` `feat/mempool-support` branch → rust-dashcore `450f72f`.

Test plan

- `cargo clippy --all-features --all-targets -- -D warnings` — clean
- `cargo +nightly fmt --all` — clean

🤖 Co-authored by Claudius the Magnificent AI Agent
Summary by CodeRabbit
New Features
Bug Fixes
Chores
Documentation