refactor: move event callback dispatch from FFI into DashSpvClient #572

xdustinface wants to merge 3 commits into v0.42-dev

Conversation
Introduce an `EventHandler` trait in `dash-spv` with default no-op methods (`on_sync_event`, `on_network_event`, `on_progress`, `on_wallet_event`, `on_error`). The `run()` method now accepts an `Arc<H: EventHandler>`, subscribes to internal channels, and dispatches events to the handler via monitoring tasks. Add `TestEventHandler` in `test_utils` that bridges events back to tokio channels for ergonomic `select!`-based integration tests.
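The handler contract described above can be sketched as follows. This is a minimal illustration, not the actual `dash-spv` code: the method signatures, the string event payloads, and the `ErrorCollector`/`run` names are assumptions for demonstration.

```rust
use std::sync::{Arc, Mutex};

// Hypothetical mirror of the EventHandler contract: every method has a
// default no-op body, so implementors override only what they care about.
trait EventHandler: Send + Sync {
    fn on_sync_event(&self, _event: &str) {}
    fn on_network_event(&self, _event: &str) {}
    fn on_progress(&self, _percent: f64) {}
    fn on_wallet_event(&self, _event: &str) {}
    fn on_error(&self, _error: &str) {}
}

// A handler that only cares about errors; all other events fall through
// to the no-op defaults.
struct ErrorCollector {
    errors: Mutex<Vec<String>>,
}

impl EventHandler for ErrorCollector {
    fn on_error(&self, error: &str) {
        self.errors.lock().unwrap().push(error.to_string());
    }
}

// A run() shaped like the new API: the handler is passed in as Arc<H>.
fn run<H: EventHandler>(handler: Arc<H>) {
    handler.on_sync_event("headers"); // hits the no-op default
    handler.on_error("peer disconnected");
}

fn main() {
    let handler = Arc::new(ErrorCollector { errors: Mutex::new(Vec::new()) });
    run(handler.clone());
    assert_eq!(handler.errors.lock().unwrap().as_slice(), ["peer disconnected"]);
}
```

Because the trait methods default to no-ops, adding a new event method later is not a breaking change for existing handler implementations.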
Implement the `EventHandler` trait with tracing-based logging for sync events, wallet events, progress updates, and errors in `main.rs`.
Pass all event callbacks as a single `FFIEventCallbacks` struct directly to `run()` instead of requiring separate set_*_callbacks() calls. This removes 5 Mutex fields from `FFIDashSpvClient` and 10 FFI functions.
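The aggregated-callbacks pattern can be illustrated with a small Rust sketch. `FfiProgressCallbacks`, `record_progress`, and `dispatch` are illustrative stand-ins, not the crate's real types; the real `FFIEventCallbacks` groups several such sub-structs.

```rust
use std::os::raw::c_void;

// Hypothetical shape of one callback group: an optional C function pointer
// plus an opaque user_data pointer, passed into run() in one piece instead
// of via a separate set_*_callbacks() call.
#[repr(C)]
#[derive(Clone, Copy)]
struct FfiProgressCallbacks {
    on_progress: Option<extern "C" fn(percent: f64, user_data: *mut c_void)>,
    user_data: *mut c_void,
}

impl FfiProgressCallbacks {
    // Dispatch is a no-op when the caller left the callback unset.
    fn dispatch(&self, percent: f64) {
        if let Some(cb) = self.on_progress {
            cb(percent, self.user_data);
        }
    }
}

extern "C" fn record_progress(percent: f64, user_data: *mut c_void) {
    // user_data points at an f64 owned by the caller for the client's lifetime.
    unsafe { *(user_data as *mut f64) = percent };
}

fn main() {
    let mut last = 0.0_f64;
    let callbacks = FfiProgressCallbacks {
        on_progress: Some(record_progress),
        user_data: &mut last as *mut f64 as *mut c_void,
    };
    callbacks.dispatch(42.5);
    assert_eq!(last, 42.5);
}
```

Passing the whole struct by value is what lets the FFI drop the per-callback `Mutex` fields: there is no mutable shared callback state left to protect.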
📝 Walkthrough

This pull request consolidates event callback handling across the FFI and dash-spv layers. Individual callback setter/clearer functions are removed from the FFI API, and callbacks are now passed as a single aggregated `FFIEventCallbacks` struct.

Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
Codecov Report

❌ Patch coverage is …

Additional details and impacted files:

```
@@           Coverage Diff            @@
##          v0.42-dev    #572   +/-  ##
=============================================
+ Coverage     66.32%  66.53%  +0.20%
=============================================
  Files           311     312      +1
  Lines         64976   65019     +43
=============================================
+ Hits          43097   43259    +162
+ Misses        21879   21760    -119
=============================================
```
Actionable comments posted: 5
🧹 Nitpick comments (3)
dash-spv-ffi/tests/unit/test_async_operations.rs (1)
295-305: Test now only validates struct construction, not callback dispatch.

The updated test verifies that `FFIEventCallbacks` can be constructed with sync callbacks and that the fields are correctly set. However, it no longer exercises the actual callback dispatch path through the client, since `dash_spv_ffi_client_run` is not called with these callbacks. Consider either:

- Calling `dash_spv_ffi_client_run(client, callbacks)` to test actual dispatch (may require `#[ignore]` due to network requirements), or
- Renaming the test to `test_sync_event_callbacks_construction` to clarify its scope.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `dash-spv-ffi/tests/unit/test_async_operations.rs` around lines 295-305: the test only asserts that the `FFIEventCallbacks { sync }` fields (`sync.on_sync_start`, `sync.on_block_headers_stored`, `sync.on_sync_complete`) are `Some`, but it no longer exercises the callback dispatch path. Either invoke `dash_spv_ffi_client_run(client, callbacks)` to exercise and assert the callbacks are invoked (marking the test `#[ignore]` if it requires network or heavy setup), or rename the test to `test_sync_event_callbacks_construction` to clearly indicate it only validates struct construction. Update the test function name and/or add the run call and necessary client setup to cover the dispatch path, depending on which approach you choose.

dash-spv-ffi/tests/test_client.rs (1)
51-52: Test coverage gap: initial progress dispatch behavior is no longer tested.

Per the relevant code snippets, `dash_spv_ffi_client_run` dispatches initial progress when `callbacks.progress.on_progress.is_some()`. The removed `test_set_progress_callback_emits_progress` test was the only explicit verification of this behavior. Consider adding a test that passes an `FFIEventCallbacks` with a progress callback set and verifies the initial dispatch occurs.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `dash-spv-ffi/tests/test_client.rs` around lines 51-52: add a test (e.g., `test_set_progress_callback_emits_progress`) that constructs an `FFIEventCallbacks` with `progress.on_progress` set, calls `dash_spv_ffi_client_run(client, callbacks)`, and verifies the progress callback is invoked immediately for the initial dispatch. Implement the verification by providing a thread-safe notifier (channel, `AtomicBool`/`AtomicUsize`, or `Mutex`) captured by the callback, assert the notifier reflects the initial progress after the call, then clean up the client as other tests do.

dash-spv/src/test_utils/event_handler.rs (1)
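The thread-safe-notifier pattern the prompt suggests can be sketched like this. The `on_progress` signature and the use of an `AtomicU64` to carry the `f64` bits through `user_data` are illustrative assumptions, not the crate's real FFI types.

```rust
use std::os::raw::c_void;
use std::sync::atomic::{AtomicU64, Ordering};

// The C callback writes the observed progress into an atomic reached through
// user_data; the test asserts on it after calling run(). f64 has no atomic
// type, so the value travels as its bit pattern.
extern "C" fn on_progress(percent: f64, user_data: *mut c_void) {
    let cell = unsafe { &*(user_data as *const AtomicU64) };
    cell.store(percent.to_bits(), Ordering::SeqCst);
}

fn main() {
    let seen = AtomicU64::new(f64::NAN.to_bits());
    // In the real test this pointer would go into the callbacks struct's
    // user_data field, and dash_spv_ffi_client_run would invoke the callback.
    on_progress(12.5, &seen as *const AtomicU64 as *mut c_void);
    assert_eq!(f64::from_bits(seen.load(Ordering::SeqCst)), 12.5);
}
```

An atomic (rather than a plain `Mutex<f64>`) keeps the callback signal-safe to read from the test thread without blocking the dispatcher.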
78-80: Expose `on_error` to tests instead of only logging it.

`DashSpvClient::run()` reports start/tick failures through `on_error()`. With only a log here, integration tests can only hang until timeout instead of failing fast on that path. A small error channel/subscription API would make the new handler contract fully testable. As per coding guidelines, "Write unit tests for new functionality in Rust code".

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `dash-spv/src/test_utils/event_handler.rs` around lines 78-80: `TestEventHandler::on_error` currently only logs errors. Modify `TestEventHandler` to accept an optional test-facing error channel or callback (e.g., add a `Sender<String>` field or an `Fn(String)` closure passed into its constructor) and change `on_error(&self, error: &str)` to forward the error into that channel/callback in addition to logging. Update the places that construct `TestEventHandler` so tests can pass a channel (tests create a `Receiver` to assert failures); `DashSpvClient::run` consumers will then receive errors and fail fast when `on_error` is invoked.
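The suggested change can be sketched as follows. The field name `error_tx` and the `eprintln!` stand-in for the tracing log are assumptions; the real handler would keep its existing logging and channel types.

```rust
use std::sync::mpsc::{channel, Sender};

// Sketch: the test handler forwards errors into an optional channel so
// integration tests can fail fast instead of waiting for a timeout.
struct TestEventHandler {
    error_tx: Option<Sender<String>>,
}

impl TestEventHandler {
    fn on_error(&self, error: &str) {
        eprintln!("spv error: {error}"); // keep the existing logging behavior
        if let Some(tx) = &self.error_tx {
            // Ignore send failures: the test may have already finished.
            let _ = tx.send(error.to_string());
        }
    }
}

fn main() {
    let (tx, rx) = channel();
    let handler = TestEventHandler { error_tx: Some(tx) };
    handler.on_error("start failed");
    assert_eq!(rx.recv().unwrap(), "start failed");
}
```

A test would `select!` or `recv_timeout` on the receiver alongside the event channels it already watches, turning a silent hang into an immediate assertion failure.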
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3cc3e739-3fc4-4604-bb5e-b4883f1bb8ad
📒 Files selected for processing (22)

- dash-spv-ffi/FFI_API.md
- dash-spv-ffi/src/bin/ffi_cli.rs
- dash-spv-ffi/src/callbacks.rs
- dash-spv-ffi/src/client.rs
- dash-spv-ffi/tests/dashd_sync/context.rs
- dash-spv-ffi/tests/test_client.rs
- dash-spv-ffi/tests/unit/test_async_operations.rs
- dash-spv-ffi/tests/unit/test_client_lifecycle.rs
- dash-spv/examples/filter_sync.rs
- dash-spv/examples/simple_sync.rs
- dash-spv/examples/spv_with_wallet.rs
- dash-spv/src/client/event_handler.rs
- dash-spv/src/client/events.rs
- dash-spv/src/client/mod.rs
- dash-spv/src/client/sync_coordinator.rs
- dash-spv/src/lib.rs
- dash-spv/src/main.rs
- dash-spv/src/test_utils/event_handler.rs
- dash-spv/src/test_utils/mod.rs
- dash-spv/tests/dashd_sync/setup.rs
- dash-spv/tests/peer_test.rs
- dash-spv/tests/wallet_integration_test.rs
```diff
 dash_spv_ffi_client_run(client: *mut FFIDashSpvClient, callbacks: FFIEventCallbacks) -> i32

 **Description:**
-Start the SPV client and begin syncing in the background. Subscribes to events, spawns monitoring threads, then spawns a background thread that calls `run()` (which handles start + sync loop + stop internally). Returns immediately after spawning. Use event callbacks (set via `set_sync_event_callbacks`, `set_network_event_callbacks`, `set_wallet_event_callbacks`) to receive notifications. Configure callbacks before calling `run()`. # Safety - `client` must be a valid, non-null pointer to a created client. # Returns 0 on success, error code on failure.
+Start the SPV client and begin syncing in the background. Accepts all event callbacks as a single struct and passes them to the library's `run()` method, which manages channel subscriptions and monitoring tasks internally. Returns immediately after spawning. # Safety - `client` must be a valid, non-null pointer to a created client. - The `callbacks` struct and its `user_data` pointers must remain valid until `dash_spv_ffi_client_stop` or `dash_spv_ffi_client_destroy` is called. # Returns 0 on success, error code on failure.

 **Safety:**
 - `client` must be a valid, non-null pointer to a created client.
+- The `callbacks` struct and its `user_data` pointers must remain valid until `dash_spv_ffi_client_stop` or `dash_spv_ffi_client_destroy` is called.
```
Update the example code in the generator to match the new callback API.

The signature at lines 600-607 shows `dash_spv_ffi_client_run(client, callbacks)` accepting callbacks directly, but the examples at lines 752-795 still demonstrate the old pattern: calling `dash_spv_ffi_client_run(client)` without callbacks, then separately calling `dash_spv_ffi_client_set_event_callbacks(client, callbacks)`. Since this file is auto-generated, update `dash-spv-ffi/scripts/generate_ffi_docs.py` to emit examples with the new API signature.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dash-spv-ffi/FFI_API.md` around lines 600 - 607, Update the documentation
generator so examples call the new combined callback API: modify the generator
script (generate_ffi_docs.py) to emit examples that pass an FFIEventCallbacks
struct into dash_spv_ffi_client_run(client, callbacks) instead of calling
dash_spv_ffi_client_run(client) and then
dash_spv_ffi_client_set_event_callbacks(client, callbacks); locate the example
template that builds the run call and replace the old two-step pattern with a
single call that constructs/passes the FFIEventCallbacks value (ensuring the
generated safety note about callbacks lifetime remains intact).
```rust
/// Cancel the run task and wait for it to finish.
fn cancel_run_task(&self) {
    let task = self.run_task.lock().unwrap().take();
    if let Some(task) = task {
        task.abort();
        // Wait for all tasks to finish
        self.runtime.block_on(async {
            let _ = task.await;
        });
    }
}
```
Aborting the outer run() task can interrupt monitor task cleanup, violating FFI callback lifetime contracts.
When task.abort() is called, it immediately cancels the task even if cleanup code (the monitor_shutdown.cancel() and tokio::join!() in DashSpvClient::run()) is in progress. This creates a race condition where the monitor tasks may not complete properly, allowing them to outlive the callback and user_data pointers they hold.
Although shutdown_token.cancel() is called before cancel_run_task(), this signals the run loop to exit cooperatively, but it does not guarantee the subsequent cleanup executes. The abort() call can interrupt the cleanup mid-sequence (e.g., during tokio::join!()), leaving monitors still running.
Replace `abort()` with a cooperative wait, falling back to `abort()` only after a timeout:

```diff
-    fn cancel_run_task(&self) {
+    fn wait_for_run_task(&self) {
         let task = self.run_task.lock().unwrap().take();
         if let Some(task) = task {
-            task.abort();
             self.runtime.block_on(async {
                 let _ = task.await;
             });
         }
     }
```

The shutdown token is already cancelled before this helper is called; the task should exit cooperatively. Add a timeout wrapper in the FFI caller to fall back to `abort()` if the wait exceeds an acceptable duration.

Also applies to: lines 118-124 (`destroy()` calls), 349-350 (comment).
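The wait-with-timeout-then-abort shape can be sketched with std threads (the real code would wrap a tokio `JoinHandle` in `tokio::time::timeout`; the `wait_or_give_up` name and the 5-second bound are illustrative assumptions):

```rust
use std::sync::mpsc::channel;
use std::thread;
use std::time::Duration;

// Wait for a task to finish cooperatively, giving up after `timeout`.
// Returns true if the task exited on its own within the bound.
fn wait_or_give_up(handle: thread::JoinHandle<()>, timeout: Duration) -> bool {
    let (tx, rx) = channel();
    // A helper thread blocks on join and signals completion.
    let waiter = thread::spawn(move || {
        let _ = handle.join();
        let _ = tx.send(());
    });
    let finished = rx.recv_timeout(timeout).is_ok();
    if finished {
        let _ = waiter.join();
    }
    // In the tokio version, the `false` branch is where task.abort() goes.
    finished
}

fn main() {
    let quick = thread::spawn(|| {});
    assert!(wait_or_give_up(quick, Duration::from_secs(5)));
}
```

The key property is that the fast path never aborts: a task that honors the shutdown token finishes its cleanup (cancelling monitor tasks, joining them) before the handle is dropped.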
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dash-spv-ffi/src/client.rs` around lines 105 - 113, The current
cancel_run_task() unconditionally calls task.abort() which can interrupt
DashSpvClient::run() cleanup (including monitor_shutdown.cancel() and
tokio::join!) and let monitor callbacks outlive their user_data; change
cancel_run_task() (and similar destroy() call sites) to: after taking the
run_task (self.run_task.lock().unwrap().take()), do not immediately abort —
instead block_on a cooperative wait for the task to finish (await the
JoinHandle) with a bounded timeout, and only call task.abort() if the timeout
elapses; ensure this uses the same shutdown_token/monitor_shutdown cancellation
already sent, and document the fallback timeout behavior so the FFI caller
enforces a safe max wait before aborting.
```rust
/// - The `callbacks` struct and its `user_data` pointers must remain valid
///   until `dash_spv_ffi_client_stop` or `dash_spv_ffi_client_destroy` is called.
```
Clarify in the safety documentation that only user_data pointer targets must remain valid, not the FFIEventCallbacks struct itself.
The function takes callbacks by value and immediately moves it into FFIEventHandler::from(callbacks), which simply copies the callback function pointers and user_data pointers. The struct itself need not persist beyond the function call; only the memory referenced by each user_data pointer must remain valid until dash_spv_ffi_client_stop or dash_spv_ffi_client_destroy is called. Update the doc comment to make this distinction clear.
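The distinction can be demonstrated with a small sketch: the struct is `Copy`, so the caller's value can go out of scope immediately, while the memory behind `user_data` must outlive the handler. The `Callbacks`/`client_run` names are illustrative, not the crate's API.

```rust
// A pared-down stand-in for the FFI callbacks struct: plain function
// pointers plus a raw context pointer, all Copy.
#[derive(Clone, Copy)]
struct Callbacks {
    on_event: Option<fn(u32, *const u32)>,
    user_data: *const u32,
}

struct Handler {
    cb: Callbacks,
}

fn client_run(callbacks: Callbacks) -> Handler {
    // Function pointers and the raw user_data pointer are copied here;
    // the caller's Callbacks value may be dropped after this returns.
    Handler { cb: callbacks }
}

fn read_event(data: *const u32) -> u32 {
    unsafe { *data }
}

fn main() {
    let ctx: u32 = 7; // the *target* of user_data must outlive the handler
    let handler = {
        let cbs = Callbacks { on_event: None, user_data: &ctx };
        client_run(cbs) // cbs is dropped at the end of this block: fine
    };
    assert_eq!(read_event(handler.cb.user_data), 7);
}
```

This is exactly the contract the doc comment should state: struct lifetime ends at the call; pointer-target lifetime extends to stop/destroy.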
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dash-spv-ffi/src/client.rs` around lines 183 - 184, Update the safety doc
comment to clarify that only the targets of each FFIEventCallbacks.user_data
pointer must remain valid until dash_spv_ffi_client_stop or
dash_spv_ffi_client_destroy is called, not the FFIEventCallbacks struct itself:
note that callbacks are taken by value and converted via FFIEventHandler::from
(which copies function pointers and user_data pointers), so the caller need only
ensure the memory referenced by each user_data pointer remains valid for the
lifetime, while the original FFIEventCallbacks value may be dropped after the
call.
```rust
// Dispatch initial progress before spawning the background task
if callbacks.progress.on_progress.is_some() {
    let progress = client.runtime.block_on(async { client.inner.progress().await });
    callbacks.progress.dispatch(&progress);
}
```
This pre-dispatch doubles the first progress notification.
DashSpvClient::run() already emits the current SyncProgress once when spawn_progress_monitor() starts. Keeping this block makes FFI callers see two initial progress callbacks on every start.
Suggested fix:

```diff
-    // Dispatch initial progress before spawning the background task
-    if callbacks.progress.on_progress.is_some() {
-        let progress = client.runtime.block_on(async { client.inner.progress().await });
-        callbacks.progress.dispatch(&progress);
-    }
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dash-spv-ffi/src/client.rs` around lines 199 - 203, The code currently
pre-dispatches an initial SyncProgress by calling
client.runtime.block_on(client.inner.progress()) and
callbacks.progress.dispatch, which duplicates the first notification already
emitted by DashSpvClient::run() when spawn_progress_monitor() starts; remove
this pre-dispatch block (the if block that calls client.runtime.block_on(async {
client.inner.progress().await }) and callbacks.progress.dispatch) so only
spawn_progress_monitor() emits the initial progress, or alternatively add a
clear guard tied to spawn_progress_monitor() to ensure only a single initial
dispatch occurs.
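If the alternative fix (a guard rather than removal) is chosen, a once-gate is enough to guarantee a single initial dispatch regardless of which path fires first. This is a generic sketch; the `ProgressGate` type is hypothetical.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// A once-guard: only the first caller is allowed to send the initial
// progress notification, whether it is the pre-dispatch path or the
// monitor task.
struct ProgressGate {
    initial_sent: AtomicBool,
}

impl ProgressGate {
    fn try_send_initial(&self) -> bool {
        // swap returns the previous value, so only the first caller sees false.
        !self.initial_sent.swap(true, Ordering::SeqCst)
    }
}

fn main() {
    let gate = ProgressGate { initial_sent: AtomicBool::new(false) };
    assert!(gate.try_send_initial());  // first dispatch goes through
    assert!(!gate.try_send_initial()); // the duplicate is suppressed
}
```

Simply removing the pre-dispatch block (the primary suggestion) is still the cleaner fix, since it leaves a single owner for the initial notification.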
```diff
     let handler = Arc::new(FFIEventHandler::from(callbacks));
     let shutdown_token = client.shutdown_token.clone();
     let spv_client = client.inner.clone();
-    tasks.push(client.runtime.spawn(async move {
+    let task = client.runtime.spawn(async move {
         tracing::debug!("Sync task: starting run");

-        if let Err(e) = spv_client.run(shutdown_token).await {
+        if let Err(e) = spv_client.run(shutdown_token, handler).await {
             tracing::error!("Sync task: error: {}", e);
-            let cb = error_callback.lock().unwrap().clone();
-            if let Some(ref cb) = cb {
-                cb.dispatch(&e.to_string());
-            }
         }

         tracing::debug!("Sync task: exiting");
-    }));
+    });

-    drop(tasks);
+    *client.run_task.lock().unwrap() = Some(task);
```
Reject a second `dash_spv_ffi_client_run()` while one is still active.

`run_task` is overwritten unconditionally. If the client is already running, the second call still returns success, can immediately fail inside the spawned task, and drops the only handle to the real active run, so later stop/destroy cannot reliably wait for it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@dash-spv-ffi/src/client.rs` around lines 205 - 219, The code unconditionally
overwrites client.run_task, losing the existing handle; modify
dash_spv_ffi_client_run to first check client.run_task (the Mutex-protected
Option) and if Some(_) return an error/indicator that a run is already active
instead of spawning a new task; only spawn and set run_task to Some(task) when
the slot is None. Use the same symbols shown (client.run_task.lock().unwrap(),
client.runtime.spawn, shutdown_token, spv_client, handler) to locate the logic
and ensure you preserve the existing shutdown_token/spv_client/handler capture
while avoiding replacing an active task handle.
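The check-and-set the prompt describes can be sketched against the same `Mutex<Option<...>>` shape. The `Client`/`start_run` names and the `u64` stand-in for a `JoinHandle` are illustrative assumptions.

```rust
use std::sync::Mutex;

// Refuse to start a second run while a handle is already stored,
// instead of silently overwriting it.
struct Client {
    run_task: Mutex<Option<u64>>, // stand-in for JoinHandle<()>
}

impl Client {
    fn start_run(&self, handle: u64) -> Result<(), &'static str> {
        let mut slot = self.run_task.lock().unwrap();
        if slot.is_some() {
            // Map this to a non-zero FFI error code at the boundary.
            return Err("client is already running");
        }
        *slot = Some(handle);
        Ok(())
    }
}

fn main() {
    let client = Client { run_task: Mutex::new(None) };
    assert!(client.start_run(1).is_ok());
    assert_eq!(client.start_run(2), Err("client is already running"));
}
```

Doing the check and the store under one lock acquisition is what makes this race-free: two concurrent `run` calls cannot both observe an empty slot.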
- Introduced an `EventHandler` trait in `dash-spv` with default no-op implementations, replacing per-callback FFI function pointers
- Added `spawn_broadcast_monitor` and `spawn_progress_monitor` helpers
- Changed `run()` to accept an `Arc<H: EventHandler>`, managing all channel subscriptions and monitoring tasks internally
- Reduced `dash-spv-ffi` to a thin wrapper: `FFIEventHandler` implements the trait, `dash_spv_ffi_client_run` passes it through
- Added a `LoggingEventHandler` to the CLI binary

Summary by CodeRabbit
Release Notes
Breaking Changes

- Event callbacks are now consolidated into a single `FFIEventCallbacks` structure, reducing API complexity.
- The `dash_spv_ffi_client_run` signature now accepts an `FFIEventCallbacks` parameter containing sync, network, wallet, progress, and error callbacks.

API Improvements