BIP68 fast path + haveinputs #181
Draft
willcl-ark wants to merge 46 commits into
Conversation
Adds build configuration, benchmarking CI workflows, Python dependencies, plotting tools, and documentation for benchcoin. Co-authored-by: David Gumberg <davidzgumberg@gmail.com> Co-authored-by: Lőrinc <pap.lorinc@gmail.com>
- Fix empty chart: use get_chart_data() instead of to_dict() so JS
filters can match config strings ("450", "32000") instead of objects
- Capture machine specs on self-hosted runner during build job and pass
via --machine-specs flag to nightly append, instead of detecting on
the ubuntu-latest publish runner
Run LogParser + PlotGenerator from bench/analyze.py during artifact
copying to produce static PNG charts from debug.log files. This
pre-generates the same 11 chart types that were previously rendered
client-side via JavaScript.
Changes to report.py:
- Import HAS_MATPLOTLIB, LogParser, PlotGenerator from bench.analyze
- _copy_network_artifacts: generate plots after each debug.log with
"{network}-{name}" prefix (e.g. "450-uninstrumented-pr")
- _copy_artifacts: generate plots for single-directory mode, including
when input_dir == output_dir
- _prepare_graphs_data: add "plots" key with relative paths to PNGs
- generate(): reorder to copy artifacts before HTML rendering so
_prepare_graphs_data can find the generated plot files
Plot generation is guarded by HAS_MATPLOTLIB for graceful fallback
when matplotlib is unavailable.
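The guard described above is the standard optional-dependency pattern. A minimal sketch (only the `HAS_MATPLOTLIB` name comes from the commit; `generate_plots` and its behavior are illustrative, not the actual bench/analyze.py code):

```python
# Optional-dependency guard: probe for matplotlib once at import time,
# then check the flag before doing any plotting work.
try:
    import matplotlib  # noqa: F401
    HAS_MATPLOTLIB = True
except ImportError:
    HAS_MATPLOTLIB = False

def generate_plots(log_path, out_dir):
    """Render PNG charts from a debug.log, or skip gracefully.

    Hypothetical helper: returns the list of written plot paths,
    or an empty list when matplotlib is unavailable.
    """
    if not HAS_MATPLOTLIB:
        print(f"matplotlib unavailable, skipping plots for {log_path}")
        return []
    # ... parse the log and write PNGs into out_dir ...
    return []
```

The one-time probe keeps the import cost and the failure mode in a single place, so every caller only needs the boolean check.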
The pr-report.html template previously included debug-log-charts.html, which fetched multi-hundred-MB debug.log.gz files in the browser, decompressed them with pako.js, parsed every line, and rendered 11 Plotly charts client-side. This made report pages unresponsive. Now that report.py pre-generates the charts as static PNGs:
- pr-report.html: replace the debug-log-charts.html include with an img loop over graph.plots, using loading="lazy"
- debug-log-charts.html: delete (344 lines of client-side JS)
- base.html: remove the pako.js and Plotly CDN scripts (both are independently included by pr-chart.html and nightly-chart.html via their own script tags)
The debug.log download link is preserved.
Rewrite to document the TOML config + matrix entry workflow, removing stale references to the old two-commit comparison CLI, --datadir requirement, profiles, and BENCH_DATADIR env var.
Debug logs were consuming 388MB on gh-pages. They are already uploaded as CI artifacts with 90-day retention during benchmark runs.
- Remove gzip compression and copying of debug logs in report generation
- Remove debug log extraction in the publish-results workflow
- Replace per-graph "Download debug.log" links with a single link to the CI run page where artifacts can be downloaded
- Keep matplotlib plot generation from debug logs (plots are still generated during the report phase; only the raw logs aren't published)
The PR comment with result links was posted before GitHub Pages finished deploying, leading to broken links. Add a wait-for-pages job that polls for the pages-build-deployment run matching our exact gh-pages commit, then blocks until it completes.
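The wait-for-pages job boils down to a poll loop. A sketch of that loop, with the fetcher injected so it can be tested without the network (the run-dict shape — `head_sha`, `status`, `conclusion` — mirrors the GitHub Actions runs API, but treat the exact fields as an assumption):

```python
import time

def wait_for_pages_deploy(fetch_runs, target_sha, timeout=600, interval=10):
    """Poll until the pages-build-deployment run for target_sha completes.

    fetch_runs() returns a list of dicts like
    {"head_sha": ..., "status": ..., "conclusion": ...}.
    Returns True on a successful deployment, raises on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for run in fetch_runs():
            # Match on the exact gh-pages commit so a stale earlier
            # deployment cannot satisfy the wait.
            if run.get("head_sha") == target_sha and run.get("status") == "completed":
                return run.get("conclusion") == "success"
        time.sleep(interval)
    raise TimeoutError(f"no completed pages deployment for {target_sha}")
```

Matching on the commit SHA rather than "latest run" is the important part: it is what prevents the comment from linking to a deployment that predates this push.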
Previously, prune=10000 was causing flushes of the UTXO set while block pruning was taking place, resulting in logs like:
❯ zcat 32000-instrumented-pr-debug.log.gz | rg UTXO
2026-02-12T07:22:57Z * Using 31990.0 MiB for in-memory UTXO set (plus up to 286.1 MiB of unused mempool space)
2026-02-12T07:28:51Z [warning] Flushing large (2 GiB) UTXO set to disk, it may take several minutes
2026-02-12T07:33:10Z [warning] Flushing large (3 GiB) UTXO set to disk, it may take several minutes
2026-02-12T07:37:23Z [warning] Flushing large (4 GiB) UTXO set to disk, it may take several minutes
2026-02-12T07:42:03Z [warning] Flushing large (4 GiB) UTXO set to disk, it may take several minutes
2026-02-12T07:46:34Z [warning] Flushing large (5 GiB) UTXO set to disk, it may take several minutes
2026-02-12T07:51:10Z [warning] Flushing large (6 GiB) UTXO set to disk, it may take several minutes
2026-02-12T07:55:57Z [warning] Flushing large (7 GiB) UTXO set to disk, it may take several minutes
2026-02-12T08:00:35Z [warning] Flushing large (8 GiB) UTXO set to disk, it may take several minutes
2026-02-12T08:05:16Z [warning] Flushing large (8 GiB) UTXO set to disk, it may take several minutes
2026-02-12T08:10:00Z [warning] Flushing large (8 GiB) UTXO set to disk, it may take several minutes
2026-02-12T08:14:36Z [warning] Flushing large (8 GiB) UTXO set to disk, it may take several minutes
2026-02-12T08:16:47Z [warning] Flushing large (8 GiB) UTXO set to disk, it may take several minutes
and generally interrupting benchmarking. Remove this effect by setting prune to a value so high it will never trigger. Prune is **required** to permit us to continue syncing from a pruned datadir.
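The resulting configuration looks something like this (the exact prune target is not stated in the commit; the value below is illustrative):

```ini
# Hypothetical bitcoin.conf fragment: prune must stay enabled so the
# node can keep syncing from a pruned datadir, but the target (in MiB,
# value illustrative) is set far above the chain size so automatic
# pruning, and the UTXO flushes it forces, never fire mid-benchmark.
prune=10000000
```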
ec93954 removed debug log copying to stop publishing raw logs to gh-pages (388MB). But it also removed the extraction step that makes debug logs available during report generation, so matplotlib had no input files and PNG charts silently stopped appearing in PR reports. Restore the copy of debug-logs-${network} artifacts into the results directory before report generation. The raw logs are still not committed to gh-pages — only the small pre-rendered PNGs in plots/ are.
Replace the matrix strategy with explicit sequential jobs so the 450 benchmark always runs first in a consistent cache state, and the 32000 benchmark always runs second.
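Sequencing two jobs instead of a matrix is done with a `needs` edge. An illustrative workflow shape (job and step names are assumptions, not the actual workflow file):

```yaml
# The 32000 job declares `needs` on the 450 job, so the two benchmarks
# run sequentially on the runner instead of as parallel matrix entries,
# giving the 450 run a consistent cache state every time.
jobs:
  benchmark-450:
    runs-on: [self-hosted]
    steps:
      - run: ./bench.sh 450
  benchmark-32000:
    needs: benchmark-450
    runs-on: [self-hosted]
    steps:
      - run: ./bench.sh 32000
```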
Weekly fstrim.timer on the runner caused a ~25% speedup every Monday (Sunday night run). Running fstrim in the prepare script before each benchmark ensures consistent write performance regardless of when the system timer last ran. Follows the same suid wrapper pattern as drop-caches.
This reverts commit fd8bdf6.
FITRIM ioctl requires the filesystem mount point. Resolve it from the tmp_datadir path by walking up to the mount boundary.
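Walking up to the mount boundary can be done with `os.path.ismount`. A minimal sketch of the resolution step (the helper name is illustrative; the actual prepare-script code may differ):

```python
import os

def find_mount_point(path):
    """Walk up from path until os.path.ismount() reports a mount
    boundary; the FITRIM ioctl must be issued against the mount point,
    not an arbitrary subdirectory like tmp_datadir."""
    path = os.path.realpath(path)
    while not os.path.ismount(path):
        path = os.path.dirname(path)
    return path
```

`realpath` first resolves symlinks so the walk follows the real directory tree; the loop terminates because `/` is always a mount point.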
Manual (workflow_dispatch) runs are now stored separately from scheduled nightly runs. Scheduled runs still dedup by (date, commit, dbcache) to handle retries. Manual runs always append, appearing as diamond markers on the chart alongside the nightly trend line. Also ruff format.
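The append policy above can be sketched as follows (field names like `trigger`, `date`, `commit`, `dbcache` are assumptions about the nightly data shape, not the actual schema):

```python
def append_run(history, run):
    """Scheduled runs replace any existing scheduled entry with the same
    (date, commit, dbcache) key, so retries dedup; manual
    workflow_dispatch runs always append."""
    if run["trigger"] == "schedule":
        key = (run["date"], run["commit"], run["dbcache"])
        history = [r for r in history
                   if r["trigger"] != "schedule"
                   or (r["date"], r["commit"], r["dbcache"]) != key]
    history.append(run)
    return history
```

Dedup is scoped to scheduled runs only, so a retried nightly overwrites its earlier attempt while repeated manual runs each keep their own data point.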
Manual (workflow_dispatch) runs no longer get a separate "(manual)" legend entry with diamond markers. They appear as regular points in the same series trace as scheduled runs.
Adds a separate benchmark job (benchmark-noav) that runs IBD with -assumevalid=0 to measure full script verification performance. Uses a dedicated TOML config with uninstrumented-only matrix, and prefixes artifacts with noav- so the publish workflow can handle them alongside existing runs.
…ks when CSV not active (early blocks / regtest) Port the optimization from pi-autoresearch commit 9090b37ca73a2bfe90bb0aaf5c90f05971553442 into this tree, excluding the benchmark metadata file.
…-input check into the existing AccessCoin loop Port the optimization from pi-autoresearch commit 6dcb7f2527f8aaf92330936321bd9d21e93cd6e6 into this tree, excluding the benchmark metadata file. This keeps the missing-input handling in the same loop that already reads each coin and always seeds PrecomputedTransactionData in ConnectBlock so CheckInputScripts can reuse the fetched spent outputs even when CSV is inactive.
Benchmark Results
Comparison to nightly master (median of last 7 runs):
b7b327a to 60f6797
11770ad to a8c8f9f