From 42b41155436462eabb7b2a0341c946058315edab Mon Sep 17 00:00:00 2001
From: "google-labs-jules[bot]" <161369871+google-labs-jules[bot]@users.noreply.github.com>
Date: Sat, 4 Apr 2026 05:29:13 +0000
Subject: [PATCH] refactor(memory): Use sliding window iterator for
 `withTaskGroup` in `ProcessMemoryScanner`

Migrate from static array chunking to a dynamic iterator approach for PID
scanning concurrency. This resolves tail latency issues where execution halts
waiting for the slowest task in a batch before processing the next chunk.
Overall throughput is improved by keeping the worker pool saturated.

Co-authored-by: acebytes <2820910+acebytes@users.noreply.github.com>
---
 .jules/bolt.md                                  |  3 ++
 .../Memory/ProcessMemoryScanner.swift           | 38 +++++++++++--------
 2 files changed, 25 insertions(+), 16 deletions(-)
 create mode 100644 .jules/bolt.md

diff --git a/.jules/bolt.md b/.jules/bolt.md
new file mode 100644
index 0000000..4b79790
--- /dev/null
+++ b/.jules/bolt.md
@@ -0,0 +1,3 @@
+## 2024-06-18 - Sliding Window withTaskGroup over Static Chunking
+**Learning:** In Swift structured concurrency, processing high-volume tasks using `withTaskGroup` with static chunking (e.g. creating chunks of array elements) limits throughput due to tail latency. Execution pauses while waiting for the slowest task in the current chunk before the next chunk can start.
+**Action:** Use a sliding window approach with an iterator instead of static chunking. Populate the `withTaskGroup` with the initial `maxConcurrency` tasks, then continuously loop over `group` results with `for await`, adding a new task from the iterator each time one completes to maintain maximum parallel execution.
diff --git a/Sources/Cacheout/Memory/ProcessMemoryScanner.swift b/Sources/Cacheout/Memory/ProcessMemoryScanner.swift
index 3f8e728..ea77da6 100644
--- a/Sources/Cacheout/Memory/ProcessMemoryScanner.swift
+++ b/Sources/Cacheout/Memory/ProcessMemoryScanner.swift
@@ -97,29 +97,35 @@ actor ProcessMemoryScanner {
     ///
     /// Returns the collected entries and the count of EPERM failures.
     private func scanPIDs(_ pids: [pid_t]) async -> (entries: [ProcessEntryDTO], epermCount: Int) {
-        // Chunk PIDs to cap concurrency at maxConcurrency.
-        let chunks = stride(from: 0, to: pids.count, by: maxConcurrency).map {
-            Array(pids[$0..
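
The sliding-window pattern described in the bolt.md note above can be sketched as follows. This is a minimal illustration, not the patch's actual implementation: `scanPID`, the `String` result type, and the argument names are placeholders standing in for the scanner's real per-PID work.

```swift
// Sketch: sliding-window withTaskGroup. The group is seeded with
// maxConcurrency tasks; each completed result immediately pulls the
// next PID from the iterator, so the worker pool stays saturated
// instead of stalling on the slowest task in a static chunk.

// Placeholder for the real per-PID scan work.
func scanPID(_ pid: Int32) async -> String {
    "pid:\(pid)"
}

func scanAll(_ pids: [Int32], maxConcurrency: Int) async -> [String] {
    await withTaskGroup(of: String.self) { group in
        var iterator = pids.makeIterator()
        var results: [String] = []

        // Seed the group with up to maxConcurrency initial tasks.
        for _ in 0..<maxConcurrency {
            guard let pid = iterator.next() else { break }
            group.addTask { await scanPID(pid) }
        }

        // As each task finishes, start the next one from the iterator.
        for await result in group {
            results.append(result)
            if let pid = iterator.next() {
                group.addTask { await scanPID(pid) }
            }
        }
        return results
    }
}
```

Unlike static chunking, at most `maxConcurrency` tasks are ever in flight, yet a slow task only delays its own slot rather than an entire batch.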