
refactor: extract shared TUI terminal module and slim workspace deps#35

Merged
WomB0ComB0 merged 5 commits into master from refactor/workspace-efficiency on Apr 5, 2026

Conversation

@WomB0ComB0 (Member) commented Apr 4, 2026

Summary

  • resq_tui::terminal module: New TuiApp trait + init()/restore()/run_loop() helpers eliminate duplicated terminal lifecycle boilerplate across all 6 TUI apps
  • Re-export ratatui/crossterm from resq-tui — consumers no longer need direct deps on both crates
  • Slim tokio features: workspace `["full"]` → `["rt", "macros", "rt-multi-thread", "sync"]`; only health-checker adds "time", only cli adds "process"
  • Standardize error handling: all TUI apps now use anyhow::Result (replaced Box<dyn Error> in deploy-cli, log-viewer, flame-graph)
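The extracted lifecycle can be sketched roughly as below. This is a simplified, std-only stand-in (the `Counter` app, `char` keys, and `Vec<String>` "frames" are illustrative): the real `TuiApp` works with `ratatui::Frame` and `crossterm::event::KeyEvent`, and the real `run_loop` polls crossterm events with a timeout rather than draining a queue.

```rust
use std::collections::VecDeque;

/// Stand-in for `resq_tui::terminal::TuiApp` (sketch only; the real trait
/// takes `&mut ratatui::Frame` and `crossterm::event::KeyEvent`).
trait TuiApp {
    /// Render one frame; here a "frame" is just a String pushed to `out`.
    fn draw(&mut self, out: &mut Vec<String>);
    /// Handle one key; return Ok(false) to exit the loop.
    fn handle_key(&mut self, key: char) -> Result<bool, String>;
}

/// Simplified `run_loop`: draw, take one event, dispatch, stop when the
/// app returns `false` or the input runs out.
fn run_loop<A: TuiApp>(
    app: &mut A,
    mut events: VecDeque<char>,
    out: &mut Vec<String>,
) -> Result<(), String> {
    loop {
        app.draw(out);
        let Some(key) = events.pop_front() else {
            return Ok(()); // no more input in this sketch
        };
        if !app.handle_key(key)? {
            return Ok(()); // app requested exit (e.g. 'q')
        }
    }
}

struct Counter {
    n: u32,
}

impl TuiApp for Counter {
    fn draw(&mut self, out: &mut Vec<String>) {
        out.push(format!("count = {}", self.n));
    }
    fn handle_key(&mut self, key: char) -> Result<bool, String> {
        match key {
            'q' => Ok(false),
            '+' => {
                self.n += 1;
                Ok(true)
            }
            _ => Ok(true),
        }
    }
}
```

Each TUI app then only supplies `draw` and `handle_key`, and the shared loop owns the terminal lifecycle around them.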

Test plan

  • cargo check --workspace passes
  • cargo test --workspace passes (all 14 doc-tests + unit tests)
  • Each migrated crate individually verified with cargo check -p
  • Manual smoke test of each TUI app

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Added a local settings file to allow specific plugin actions and a small set of command patterns.
    • Added a shared terminal UI framework to standardize init/run/restore and key-driven workflows.
  • Refactor

    • Consolidated terminal UI handling across CLI tools for a consistent user experience.
    • Simplified TUI dependency usage to reduce duplication and centralize UI behavior.

WomB0ComB0 and others added 2 commits April 3, 2026 03:05
…dencies

- Add `resq_tui::terminal` module with `init()`, `restore()`, `run_loop()`,
  and `TuiApp` trait — eliminates duplicated terminal lifecycle boilerplate
  across all 6 TUI apps
- Re-export `ratatui` and `crossterm` from `resq-tui` so consumers no longer
  need direct dependencies on both
- Slim tokio from `features = ["full"]` to `["rt", "macros", "rt-multi-thread",
  "sync"]` at workspace level; only health-checker adds "time", only cli adds
  "process"
- Standardize all TUI apps to `anyhow::Result` (replacing `Box<dyn Error>` in
  deploy-cli, log-viewer, flame-graph)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
coderabbitai bot commented Apr 4, 2026

📝 Walkthrough

Centralizes TUI lifecycle into a new shared resq_tui::terminal module (with TuiApp and run_loop) and refactors multiple crates to use it; adds resq-tui re-exports, updates tokio feature flags per-crate, standardizes error types to anyhow::Result, and adds a local .claude/settings.local.json allowlist.

Changes

  • Local config — .claude/settings.local.json: Added a local AI settings file with permissions.allow entries for two MCP plugin context actions and three Bash command patterns.
  • Workspace / tokio — Cargo.toml, cli/Cargo.toml: Revised workspace tokio features: the root Cargo.toml narrows tokio features; cli enables "process".
  • Core TUI infrastructure — resq-tui/Cargo.toml, resq-tui/src/lib.rs, resq-tui/src/terminal.rs: Added an anyhow dependency; re-exported crossterm and ratatui; introduced a terminal module with init(), restore(), run_loop(), a Term alias, and the TuiApp trait.
  • Crate TUI refactors — cleanup/src/main.rs, deploy-cli/src/main.rs, flame-graph/src/main.rs, health-checker/src/main.rs, log-viewer/src/main.rs, perf-monitor/src/main.rs: Replaced direct crossterm/ratatui lifecycle and event loops with resq_tui::terminal::{init, run_loop, restore}; moved rendering and key handling into impl TuiApp for App (draw / handle_key), with key handling returning a boolean for loop control.
  • Per-crate manifest tweaks — cleanup/Cargo.toml, deploy-cli/Cargo.toml, flame-graph/Cargo.toml, health-checker/Cargo.toml, log-viewer/Cargo.toml, perf-monitor/Cargo.toml: Removed direct ratatui/crossterm deps where applicable; added or moved anyhow; adjusted tokio feature usage per crate.
  • CLI & deploy changes — deploy-cli/Cargo.toml, deploy-cli/src/main.rs: Replaced direct TUI deps with resq-tui; added anyhow; refactored App to own log/output channels and dispatch actions via channels; updated main/helper signatures to anyhow::Result.
  • Log ingestion API — log-viewer/src/main.rs: App::new now accepts an mpsc::UnboundedReceiver<LogEntry>; log draining moved into draw() with a per-frame cap; test updated accordingly.

Sequence Diagram(s)

sequenceDiagram
  participant User
  participant Term as resq_tui::Term
  participant App
  participant CT as crossterm::event

  User->>Term: terminal::init()
  Term->>App: run_loop(poll_ms, app) calls draw()
  App->>Term: draw() renders frame
  Term->>CT: poll(poll_ms)
  CT-->>Term: KeyEvent
  Term->>App: handle_key(KeyEvent)
  App-->>Term: return bool (continue?)
  alt continue == false
    Term->>Term: exit run_loop
    Term->>User: terminal::restore()
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I hopped through crates both near and far,
I stitched a shared terminal, bright as a star.
draw() and handle_key() now dance in line,
Channels hum softly, logs flowing fine.
A rabbit’s nudge—TUI gardens align. 🎉

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 77.42%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed: the title directly and accurately summarizes the main change (extracting a shared TUI terminal module and reducing workspace dependencies), as evidenced by the resq-tui refactoring and dependency removals across multiple crates.


@gemini-code-assist bot left a comment

Code Review

This pull request centralizes TUI lifecycle management by introducing a terminal module in resq-tui and refactoring several CLI tools to use a shared run_loop and TuiApp trait. Feedback highlights a performance concern in perf-monitor where blocking network I/O is performed during rendering, a regression in event handling within the new run_loop that ignores non-key events, and unhandled errors when executing deployment actions in deploy-cli.

Comment thread perf-monitor/src/main.rs Outdated
Comment on lines +839 to +841
if self.last_fetch.elapsed() >= self.tick_rate() {
self.update();
}

Severity: high

Performing blocking network I/O (self.update()) inside the draw method is a serious performance issue. The draw closure is intended for rendering logic; blocking it will cause the TUI to freeze and become unresponsive during network requests. Consider moving the update logic to a separate background task or a dedicated update hook in the event loop.

Comment thread resq-tui/src/terminal.rs
loop {
terminal.draw(|f| app.draw(f))?;
if event::poll(timeout)? {
if let Event::Key(key) = event::read()? {

Severity: medium

The run_loop function currently only processes Event::Key and discards all other event types (e.g., Event::Mouse, Event::Resize, Event::FocusGained). This is a regression for apps that might need mouse support or specific handling for other events. Consider updating the TuiApp trait to handle a generic crossterm::event::Event instead of just KeyEvent.
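One hedged sketch of that suggestion, using std-only stand-ins (the `Event` enum below is a simplified mirror of `crossterm::event::Event`, not the real API): add a generic `handle_event` hook with a key-only default, so existing apps keep implementing only `handle_key` while mouse- or resize-aware apps can override the hook.

```rust
/// Simplified mirror of `crossterm::event::Event` for illustration.
#[derive(Debug, Clone, Copy)]
enum Event {
    Key(char), // the real variant carries a full KeyEvent
    Resize(u16, u16),
    Mouse,
}

trait TuiApp {
    /// Existing key handler; Ok(false) means "exit the loop".
    fn handle_key(&mut self, key: char) -> Result<bool, String>;

    /// New generic hook with a key-only default: current apps keep
    /// compiling unchanged, while apps needing mouse/resize override this.
    fn handle_event(&mut self, ev: Event) -> Result<bool, String> {
        match ev {
            Event::Key(k) => self.handle_key(k),
            _ => Ok(true), // ignore non-key events but keep running
        }
    }
}

struct Quitter;

impl TuiApp for Quitter {
    fn handle_key(&mut self, key: char) -> Result<bool, String> {
        Ok(key != 'q') // only 'q' exits
    }
}
```

`run_loop` would then dispatch every polled event through `handle_event` instead of discarding non-key events.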

Comment thread deploy-cli/src/main.rs Outdated
Comment on lines +224 to +231
let _ = k8s::run_action(&root, &env, &action, svc, tx_clone);
} else {
let svc = if action == "down" {
None
} else {
Some(service.as_str())
};
let _ = docker::run_action(&root, &env, &action, svc, tx_clone);

Severity: medium

The results of k8s::run_action and docker::run_action are being ignored. If these commands fail to start (e.g., binary not found or permission error), the UI will show an 'EXECUTING' message but provide no feedback about the failure. It is better to handle the error and push a descriptive message to output_lines.

@coderabbitai bot left a comment

Actionable comments posted: 9

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/settings.local.json:
- Around line 3-7: The committed settings include overly broad permissions:
remove mcp__plugin_context-mode_context-mode__ctx_execute_file and the "Bash(git
commit:*)" entry from the "allow" array in settings.local.json and keep only
read/test-oriented permissions (e.g.,
mcp__plugin_context-mode_context-mode__ctx_search and "Bash(cargo test:*)");
instead, document or add those write/commit permissions to an untracked local
override file (e.g., local settings not checked into VCS) so write/commit
capabilities are granted only in developer-specific, uncommitted configs.

In `@cleanup/src/main.rs`:
- Around line 174-177: The Up navigation branch currently selects Some(0) even
when self.entries is empty, which leads toggle_selected() to panic when Space
later indexes self.entries[0]; modify the KeyCode::Up | KeyCode::Char('k')
branch to first check if self.entries.is_empty() and if so do nothing (or return
Ok(false)), otherwise proceed to compute i =
self.list_state.selected().unwrap_or(0) and call
self.list_state.select(Some(i.saturating_sub(1))); ensure the guard prevents
selecting an index when self.entries is empty and reference the KeyCode::Up
handling, self.list_state.selected()/select, toggle_selected(), and self.entries
symbols.

In `@deploy-cli/src/main.rs`:
- Around line 202-231: In the KeyCode::Enter handler, don't discard the Result
returned by k8s::run_action and docker::run_action; instead capture the Result,
match on it, and call self.push_output with a clear failure message (including
action, service, env and the error) when Err occurs; use the existing variables
(action, service, env, root, tx_clone) and the run_action return value to
surface startup failures to the operator rather than silently ignoring them.

In `@flame-graph/src/main.rs`:
- Around line 121-146: The TUI lost the ability to start profiling via
Enter—update App::handle_key (the TuiApp impl) to handle KeyCode::Enter (or
KeyCode::Char('\n')) in the match alongside navigation and quit, read the
selected index from self.list_state and self.services, and invoke the existing
start-profiling path used elsewhere in the app (e.g. call the same routine or
state transition that begins profiling for the selected service) so pressing
Enter starts profiling the selected target again.

In `@health-checker/src/main.rs`:
- Around line 166-168: After calling terminal::init() and execute!(io::stdout(),
EnableMouseCapture) ensure cleanup is unconditional: don't let any later `?`
skip the disable/restore. Capture the loop/result into a variable (e.g., let
result = run_main_loop();), then always call the cleanup code
(execute!(io::stdout(), DisableMouseCapture) and the terminal restore/cleanup)
before returning the result, or better create an RAII guard struct (e.g.,
MouseCaptureGuard) that calls execute!(..., EnableMouseCapture) on creation and
DisableMouseCapture + terminal restore in Drop so cleanup happens even on early
returns; apply the same pattern around the other EnableMouseCapture/restore pair
noted at the other location.

In `@log-viewer/src/main.rs`:
- Around line 154-160: The draw loop in App::draw currently drains self.rx (the
unbounded channel) until empty which can stall the UI; change the channel to a
bounded one when creating self.rx (replace the unbounded channel creation at the
location noted around line 234 with a bounded channel of a reasonable capacity)
and add a per-frame drain limit in App::draw (e.g. define a constant
MAX_INGEST_PER_FRAME and replace the while-let try_recv loop with a for loop
that attempts up to MAX_INGEST_PER_FRAME iterations calling self.rx.try_recv()
and self.push_entry(entry), breaking when empty); this enforces backpressure and
keeps rendering/input responsive while leaving remaining entries in the channel
for later frames.

In `@perf-monitor/src/main.rs`:
- Around line 836-843: The draw path currently calls App::update (in impl TuiApp
for App::draw) which does blocking reqwest::blocking HTTP I/O with a 5s timeout
and freezes rendering; remove the call to self.update() from draw and move the
polling into a background worker (either a dedicated std::thread or a Tokio
task) that periodically checks App::tick_rate() and invokes App::update() off
the render thread, writing results into shared cached state (e.g., an
Arc<Mutex<AppState>> or send updates over a channel) and updating
App::last_fetch from that worker; ensure draw(frame, self) only reads the cached
state and never performs blocking I/O.

In `@resq-tui/src/terminal.rs`:
- Around line 38-50: Wrap terminal setup in an RAII guard so partial
initialization can't leave the terminal wedged: change init() to return a guard
type (e.g., Terminal or TerminalGuard) that owns the state and implements Drop
to call the existing restore() logic (or a private restore_internal()) to leave
the alternate screen and disable raw mode; ensure init() performs cleanup of any
steps already done when an error occurs during setup so callers always receive a
guard that will restore on Drop, and keep the restore() function delegating to
the same cleanup routine so Drop and manual restore share the same
implementation.
- Around line 58-59: Add a "# Errors" section to the doc comment for the trait
method handle_key to document its error contract: state that handle_key returns
Err(anyhow::Error) when key handling fails (e.g., IO, terminal state, or
downstream handler errors) and that callers should propagate or handle the
anyhow::Error; update the doc comment immediately above the fn handle_key(&mut
self, key: event::KeyEvent) -> anyhow::Result<bool> declaration so Clippy's
requirement for Result error docs is satisfied.
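The empty-list guard suggested above for cleanup/src/main.rs reduces to a small pure function. This is a sketch: `selected` and `len` stand in for `self.list_state.selected()` and `self.entries.len()`.

```rust
/// Up-navigation guarded against an empty list: returns None when there is
/// nothing to select, so a later Space/toggle can never index entries[0]
/// of an empty Vec.
fn select_up(selected: Option<usize>, len: usize) -> Option<usize> {
    if len == 0 {
        return None; // empty list: leave the selection untouched
    }
    let i = selected.unwrap_or(0);
    Some(i.saturating_sub(1)) // saturate at the top of the list
}
```

The `KeyCode::Up | KeyCode::Char('k')` arm would call this and pass the result to `self.list_state.select(...)`.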

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2200a6d4-4a72-403e-bc39-fc8dbf9bd972

📥 Commits

Reviewing files that changed from the base of the PR and between 37dbd6f and bbe6b85.

📒 Files selected for processing (18)
  • .claude/settings.local.json
  • Cargo.toml
  • cleanup/Cargo.toml
  • cleanup/src/main.rs
  • cli/Cargo.toml
  • deploy-cli/Cargo.toml
  • deploy-cli/src/main.rs
  • flame-graph/Cargo.toml
  • flame-graph/src/main.rs
  • health-checker/Cargo.toml
  • health-checker/src/main.rs
  • log-viewer/Cargo.toml
  • log-viewer/src/main.rs
  • perf-monitor/Cargo.toml
  • perf-monitor/src/main.rs
  • resq-tui/Cargo.toml
  • resq-tui/src/lib.rs
  • resq-tui/src/terminal.rs
💤 Files with no reviewable changes (1)
  • perf-monitor/Cargo.toml

Comment thread .claude/settings.local.json Outdated
Comment on lines +3 to +7
"allow": [
"mcp__plugin_context-mode_context-mode__ctx_search",
"mcp__plugin_context-mode_context-mode__ctx_execute_file",
"Bash(cargo test:*)",
"Bash(git commit:*)"

⚠️ Potential issue | 🟠 Major

Reduce committed Claude permissions to least privilege

ctx_execute_file and Bash(git commit:*) are overly permissive in a tracked repo config and can allow unintended file mutation/commit actions. Prefer keeping committed settings read-only/test-oriented, and put write/commit permissions in untracked local overrides instead.

Proposed hardening diff
 {
   "permissions": {
     "allow": [
       "mcp__plugin_context-mode_context-mode__ctx_search",
-      "mcp__plugin_context-mode_context-mode__ctx_execute_file",
-      "Bash(cargo test:*)",
-      "Bash(git commit:*)"
+      "Bash(cargo test:*)"
     ]
   }
 }

Comment thread cleanup/src/main.rs
Comment thread deploy-cli/src/main.rs Outdated
Comment on lines +202 to +231
KeyCode::Enter => {
if let (Some(action), Some(service)) =
(self.selected_action(), self.selected_service())
{
let action = action.to_string();
let service = service.to_string();
let env = self.env.clone();
let root = self.project_root.clone();
let tx_clone = self.tx.clone();
self.push_output(format!(
"=== EXECUTING {} ON {} [{}] ===",
action.to_uppercase(),
service.to_uppercase(),
env.to_uppercase()
));
if self.use_k8s {
let svc =
if ["deploy", "destroy", "status"].contains(&action.as_str()) {
None
} else {
Some(service.as_str())
};
let _ = k8s::run_action(&root, &env, &action, svc, tx_clone);
} else {
let svc = if action == "down" {
None
} else {
Some(service.as_str())
};
let _ = docker::run_action(&root, &env, &action, svc, tx_clone);

⚠️ Potential issue | 🟠 Major

Surface action startup failures instead of discarding them.

Both branches throw away the Result from run_action(). If dispatch fails, the operator only sees the === EXECUTING ... === banner and never learns that nothing actually started.

🩹 Minimal fix
                     if self.use_k8s {
                         let svc =
                             if ["deploy", "destroy", "status"].contains(&action.as_str()) {
                                 None
                             } else {
                                 Some(service.as_str())
                             };
-                        let _ = k8s::run_action(&root, &env, &action, svc, tx_clone);
+                        if let Err(err) = k8s::run_action(&root, &env, &action, svc, tx_clone) {
+                            self.push_output(format!("ERROR: {err}"));
+                        }
                     } else {
                         let svc = if action == "down" {
                             None
                         } else {
                             Some(service.as_str())
                         };
-                        let _ = docker::run_action(&root, &env, &action, svc, tx_clone);
+                        if let Err(err) =
+                            docker::run_action(&root, &env, &action, svc, tx_clone)
+                        {
+                            self.push_output(format!("ERROR: {err}"));
+                        }
                     }

Comment thread flame-graph/src/main.rs
Comment on lines +121 to +146
impl TuiApp for App {
fn draw(&mut self, frame: &mut Frame) {
draw_ui(frame, self);
}

fn handle_key(&mut self, key: event::KeyEvent) -> anyhow::Result<bool> {
if key.kind != KeyEventKind::Press {
return Ok(true);
}
match key.code {
KeyCode::Char('q') | KeyCode::Esc => Ok(false),
KeyCode::Down | KeyCode::Char('j') => {
let i = self.list_state.selected().unwrap_or(0);
self.list_state
.select(Some((i + 1).min(self.services.len() - 1)));
Ok(true)
}
KeyCode::Up | KeyCode::Char('k') => {
let i = self.list_state.selected().unwrap_or(0);
self.list_state.select(Some(i.saturating_sub(1)));
Ok(true)
}
_ => Ok(true),
}
}
}

⚠️ Potential issue | 🟠 Major

Wire Enter back up or stop advertising it.

The new handle_key() implementation only supports quit/navigation, so the TUI no longer has any path that starts profiling from the selected target. That's a functional regression on the primary screen.


Comment thread health-checker/src/main.rs
Comment on lines +166 to 168
let mut terminal = terminal::init()?;
execute!(io::stdout(), EnableMouseCapture)?;


⚠️ Potential issue | 🟠 Major

Make mouse-capture cleanup unconditional.

Once EnableMouseCapture succeeds, any later ? can skip both cleanup calls here and leave the shell stuck in mouse/raw-screen mode. Capture the loop result first, then release mouse capture and restore the terminal from a guard or a finally-style block.

Also applies to: 227-228

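The RAII shape described here can be sketched with std only. The thread-local log is a stand-in for the terminal so the Drop ordering is observable; real code would issue `execute!(io::stdout(), EnableMouseCapture)` / `DisableMouseCapture` instead.

```rust
use std::cell::RefCell;

thread_local! {
    // Records the capture calls so cleanup ordering can be observed;
    // real code would write crossterm commands to stdout instead.
    static TERMINAL_LOG: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct MouseCaptureGuard;

impl MouseCaptureGuard {
    fn new() -> Self {
        TERMINAL_LOG.with(|l| l.borrow_mut().push("EnableMouseCapture"));
        MouseCaptureGuard
    }
}

impl Drop for MouseCaptureGuard {
    fn drop(&mut self) {
        // Runs on every exit path: normal return, early `?`, even unwinds.
        TERMINAL_LOG.with(|l| l.borrow_mut().push("DisableMouseCapture"));
    }
}

fn run_ui(fail: bool) -> Result<(), String> {
    let _guard = MouseCaptureGuard::new();
    if fail {
        return Err("simulated mid-loop error".into()); // guard still drops
    }
    Ok(())
}
```

With the guard bound at the top of main, no later `?` can skip the disable/restore calls.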

Comment thread log-viewer/src/main.rs
Comment on lines +154 to +160
impl TuiApp for App {
fn draw(&mut self, frame: &mut resq_tui::ratatui::Frame) {
// Drain incoming log entries before each render.
while let Ok(entry) = self.rx.try_recv() {
self.push_entry(entry);
}
draw_ui(frame, self);

⚠️ Potential issue | 🟠 Major


Cap ingestion per frame and add backpressure.

draw() drains an unbounded channel until it's empty without any per-frame budget. Under sustained log bursts, the UI thread can spend an entire frame ingesting the backlog while unprocessed entries accumulate in the unbounded queue—delaying responsiveness and potentially degrading performance during high-throughput streaming. Use a bounded channel and/or implement a per-frame drain limit so that rendering and input handling remain responsive.

Per coding guidelines: log-viewer/src/{main,sources}.rs must "Ensure the 10,000-line ring buffer remains performant during high-throughput streaming."

Also applies to line 234, where the unbounded channel is created.

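A std-only sketch of the per-frame budget (`MAX_INGEST_PER_FRAME` is an illustrative name and value, and `std::sync::mpsc` stands in for the tokio channel used by log-viewer):

```rust
use std::sync::mpsc;

/// Illustrative per-frame budget; a real value would be tuned against the
/// 10,000-line ring buffer's throughput requirements.
const MAX_INGEST_PER_FRAME: usize = 4;

/// Drain at most MAX_INGEST_PER_FRAME entries per render pass; anything
/// left stays queued for later frames, keeping draw/input responsive.
fn drain_for_frame(rx: &mpsc::Receiver<String>, buf: &mut Vec<String>) {
    for _ in 0..MAX_INGEST_PER_FRAME {
        match rx.try_recv() {
            Ok(entry) => buf.push(entry),
            Err(_) => break, // channel empty (or closed): done this frame
        }
    }
}
```

`App::draw` would call this in place of the unbounded `while let Ok(entry) = self.rx.try_recv()` loop.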

Comment thread perf-monitor/src/main.rs Outdated
Comment on lines +836 to +843
impl TuiApp for App {
fn draw(&mut self, frame: &mut Frame) {
// Tick-based update: poll the service when the refresh interval has elapsed.
if self.last_fetch.elapsed() >= self.tick_rate() {
self.update();
}
draw(frame, self);
}

⚠️ Potential issue | 🟠 Major


Don't poll the service from draw().

self.update() does blocking HTTP I/O with a 5s timeout via reqwest::blocking::Client, so putting it on the render path freezes repaint and key handling whenever the endpoint is slow. Move polling to a worker thread or Tokio task and let draw() render cached state only.

As per coding guidelines, perf-monitor/src/main.rs: "Polling logic must stay non-blocking using Tokio or dedicated threads."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@perf-monitor/src/main.rs` around lines 836 - 843, The draw path currently
calls App::update (in impl TuiApp for App::draw) which does blocking
reqwest::blocking HTTP I/O with a 5s timeout and freezes rendering; remove the
call to self.update() from draw and move the polling into a background worker
(either a dedicated std::thread or a Tokio task) that periodically checks
App::tick_rate() and invokes App::update() off the render thread, writing
results into shared cached state (e.g., an Arc<Mutex<AppState>> or send updates
over a channel) and updating App::last_fetch from that worker; ensure
draw(frame, self) only reads the cached state and never performs blocking I/O.
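The worker-thread pattern described above can be sketched with the standard library alone. `fetch_metrics` is a stand-in for the real `reqwest::blocking::Client` call, and all names here are illustrative, not from the PR:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for the blocking HTTP fetch; the real app would
// call reqwest::blocking::Client with its 5s timeout here.
fn fetch_metrics() -> String {
    "cpu=42%".to_string()
}

fn main() {
    let (tx, rx) = mpsc::channel::<String>();
    let tick_rate = Duration::from_millis(10);

    // Worker thread owns the blocking I/O; the render loop never waits on it.
    let worker = thread::spawn(move || {
        for _ in 0..3 {
            let snapshot = fetch_metrics();
            if tx.send(snapshot).is_err() {
                break; // UI side hung up; stop polling
            }
            thread::sleep(tick_rate);
        }
    });

    // Render side: drain whatever is ready and keep the latest snapshot
    // cached. draw() would only ever read `cached`, never block on I/O.
    let mut cached = String::from("no data yet");
    while let Ok(snapshot) = rx.recv_timeout(Duration::from_millis(200)) {
        cached = snapshot;
    }
    worker.join().unwrap();
    println!("{cached}");
}
```

The channel hand-off means a slow endpoint delays only the worker; repaint and key handling continue at full speed on stale-but-recent data.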

Comment thread resq-tui/src/terminal.rs
Comment on lines +38 to +50
pub fn init() -> anyhow::Result<Term> {
enable_raw_mode()?;
execute!(io::stdout(), EnterAlternateScreen)?;
let terminal = Terminal::new(CrosstermBackend::new(io::stdout()))?;
Ok(terminal)
}

/// Leave the alternate screen and disable raw mode.
///
/// Safe to call even if the terminal is in a partially-initialised state.
pub fn restore() {
let _ = execute!(io::stdout(), LeaveAlternateScreen);
let _ = disable_raw_mode();

⚠️ Potential issue | 🟠 Major

Make terminal restoration exception-safe.

init() can fail after raw mode or alternate screen is already enabled, and every caller currently has to remember to hit restore() on every exit path. Any early ? after initialization leaves the user's terminal wedged. Returning a guard/wrapper that restores in Drop would fix this at the root instead of repeating fragile cleanup at each call site.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@resq-tui/src/terminal.rs` around lines 38 - 50, Wrap terminal setup in an
RAII guard so partial initialization can't leave the terminal wedged: change
init() to return a guard type (e.g., Terminal or TerminalGuard) that owns the
state and implements Drop to call the existing restore() logic (or a private
restore_internal()) to leave the alternate screen and disable raw mode; ensure
init() performs cleanup of any steps already done when an error occurs during
setup so callers always receive a guard that will restore on Drop, and keep the
restore() function delegating to the same cleanup routine so Drop and manual
restore share the same implementation.

Comment thread resq-tui/src/terminal.rs
Comment on lines +58 to +59
/// Handle a key event. Return `false` to exit the loop.
fn handle_key(&mut self, key: event::KeyEvent) -> anyhow::Result<bool>;

⚠️ Potential issue | 🟠 Major

Add the missing # Errors docs for handle_key.

Clippy is already failing on this trait method because the Result return doesn't document its error contract.

📝 Minimal fix
     /// Handle a key event. Return `false` to exit the loop.
+    ///
+    /// # Errors
+    /// Returns any application-specific error that should terminate the TUI loop.
     fn handle_key(&mut self, key: event::KeyEvent) -> anyhow::Result<bool>;
🧰 Tools
🪛 GitHub Actions: Rust Clippy

[error] 59-59: Clippy (missing-errors-doc) error: docs for function returning Result are missing a # Errors section for fn handle_key(&mut self, key: event::KeyEvent) -> anyhow::Result<bool>;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@resq-tui/src/terminal.rs` around lines 58 - 59, Add a "# Errors" section to
the doc comment for the trait method handle_key to document its error contract:
state that handle_key returns Err(anyhow::Error) when key handling fails (e.g.,
IO, terminal state, or downstream handler errors) and that callers should
propagate or handle the anyhow::Error; update the doc comment immediately above
the fn handle_key(&mut self, key: event::KeyEvent) -> anyhow::Result<bool>
declaration so Clippy's requirement for Result error docs is satisfied.

WomB0ComB0 and others added 3 commits April 4, 2026 00:02
- terminal.rs: add missing `# Errors` doc on TuiApp::handle_key (clippy)
- cleanup: guard empty-list panic on Up key navigation
- deploy-cli: surface run_action() errors instead of discarding Results
- flame-graph: restore Enter key handler to select profiling target
- perf-monitor: move blocking HTTP I/O out of draw() into manual tick loop
- log-viewer: cap channel drain to 256 entries per frame to prevent UI stall

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (4)
flame-graph/src/main.rs (2)

174-182: TODO indicates incomplete profiling dispatch.

The placeholder on line 181 shows that the profiling backend dispatch is not yet implemented. The TUI correctly captures user selection, but the actual profiling functionality is pending.

Would you like me to help scaffold the profiling dispatch logic based on target.cmd_type? I can generate a basic implementation that routes to different profiling backends (hce, api, python, perf) or open an issue to track this work.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@flame-graph/src/main.rs` around lines 174 - 182, Implement the missing
profiling dispatch by matching on target.cmd_type (from app.selected_target /
app.services[idx]) and calling the appropriate backend entrypoints—e.g.,
profile_with_hce(target, &cli.output), profile_with_api(target, &cli.output),
profile_with_python(target, &cli.output), profile_with_perf(target, &cli.output)
—and ensure each branch handles errors (log via println! or processLogger
equivalent) and returns/propagates failures; add a default arm that logs an
unsupported engine error. Use the existing target and cli.output values and
create thin wrapper functions named profile_with_hce, profile_with_api,
profile_with_python, profile_with_perf if they don’t yet exist to keep the
dispatch singular and testable.
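The single-dispatch-point idea can be sketched as follows; `CmdType` and the `profile_with_*` stubs are hypothetical names suggested by the prompt, not code that exists in the PR:

```rust
// Hypothetical engine enum standing in for target.cmd_type.
#[derive(Debug, Clone, Copy)]
enum CmdType { Hce, Api, Python, Perf }

// Thin wrapper stubs — the real versions would invoke the respective
// profiling backends and write to the output path.
fn profile_with_hce(output: &str) -> Result<String, String> { Ok(format!("hce -> {output}")) }
fn profile_with_api(output: &str) -> Result<String, String> { Ok(format!("api -> {output}")) }
fn profile_with_python(output: &str) -> Result<String, String> { Ok(format!("python -> {output}")) }
fn profile_with_perf(output: &str) -> Result<String, String> { Ok(format!("perf -> {output}")) }

// One match keeps the routing singular and each backend independently testable.
fn dispatch(cmd_type: CmdType, output: &str) -> Result<String, String> {
    match cmd_type {
        CmdType::Hce => profile_with_hce(output),
        CmdType::Api => profile_with_api(output),
        CmdType::Python => profile_with_python(output),
        CmdType::Perf => profile_with_perf(output),
    }
}

fn main() {
    println!("{}", dispatch(CmdType::Perf, "flame.svg").unwrap());
}
```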

135-145: Navigation could panic on empty services list.

While services is currently hardcoded with 4 items, the navigation logic assumes the list is non-empty:

  • Line 138: self.services.len() - 1 would underflow if services is empty
  • Line 143: saturating_sub(1) is safe

Consider defensive bounds checking for future-proofing.

🛡️ Optional: Defensive bounds check
             KeyCode::Down | KeyCode::Char('j') => {
-                let i = self.list_state.selected().unwrap_or(0);
-                self.list_state
-                    .select(Some((i + 1).min(self.services.len() - 1)));
+                if !self.services.is_empty() {
+                    let i = self.list_state.selected().unwrap_or(0);
+                    self.list_state
+                        .select(Some((i + 1).min(self.services.len() - 1)));
+                }
                 Ok(true)
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@flame-graph/src/main.rs` around lines 135 - 145, The KeyCode Up/Down handlers
assume self.services is non-empty and calling self.services.len() - 1 can
underflow; add a defensive check at the start of those handlers (or a small
helper) to return Ok(true) or set self.list_state.select(None) when
self.services.is_empty(); when non-empty compute the new index using safe
arithmetic (e.g., clamp to 0..=self.services.len()-1 or use
checked_sub/saturating_sub) before calling self.list_state.select(Some(...)) so
navigation never panics (references: KeyCode::Down / KeyCode::Up handlers,
self.list_state, self.services).
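The safe-arithmetic navigation can be factored into pure helpers and exercised without a terminal. `selected` mirrors what ratatui's `ListState::selected()` returns (`None` when nothing is highlighted); the helper names are illustrative:

```rust
// Returns the next selection index, clamped to the list bounds.
// An empty list yields None instead of panicking on `len - 1`.
fn next_index(selected: Option<usize>, len: usize) -> Option<usize> {
    if len == 0 {
        return None; // nothing to select
    }
    let i = selected.unwrap_or(0);
    Some((i + 1).min(len - 1)) // len >= 1 here, so len - 1 can't underflow
}

// Returns the previous selection index, clamped at zero.
fn prev_index(selected: Option<usize>, len: usize) -> Option<usize> {
    if len == 0 {
        return None;
    }
    Some(selected.unwrap_or(0).saturating_sub(1))
}

fn main() {
    assert_eq!(next_index(None, 0), None);       // empty list: no panic
    assert_eq!(next_index(Some(2), 4), Some(3)); // normal step down
    assert_eq!(next_index(Some(3), 4), Some(3)); // clamped at the end
    assert_eq!(prev_index(Some(0), 4), Some(0)); // clamped at the start
    println!("ok");
}
```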
deploy-cli/src/main.rs (1)

434-441: Error conversion uses anyhow! macro correctly.

The conversion from Result<(), String> to anyhow::Result<()> using map_err(|e| anyhow::anyhow!(e)) is idiomatic. Consider using .map_err(anyhow::Error::msg) for slightly cleaner code, but this is a minor style preference.

♻️ Optional: Slightly cleaner error conversion
-        k8s::run_action(project_root, env, action, service, tx).map_err(|e| anyhow::anyhow!(e))?;
+        k8s::run_action(project_root, env, action, service, tx).map_err(anyhow::Error::msg)?;
     } else {
-        docker::run_action(project_root, env, action, service, tx)
-            .map_err(|e| anyhow::anyhow!(e))?;
+        docker::run_action(project_root, env, action, service, tx).map_err(anyhow::Error::msg)?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deploy-cli/src/main.rs` around lines 434 - 441, The error conversions for the
branches calling k8s::run_action and docker::run_action currently use
.map_err(|e| anyhow::anyhow!(e)); replace those with
.map_err(anyhow::Error::msg) to simplify the conversion from Result<_, String>
to anyhow::Result<()>—update both calls (k8s::run_action(...) and
docker::run_action(...)) accordingly while keeping the existing channel setup
(let (tx, rx) = mpsc::unbounded_channel::<String>()) and behavior unchanged.
resq-tui/src/terminal.rs (1)

73-89: Consider using event::poll timeout of 0 for responsive Ctrl+C handling.

The current implementation only checks for Ctrl+C when an event is received within the poll window. If no key events arrive, Ctrl+C detection is delayed by up to poll_ms. This is minor but worth noting for responsiveness.

Additionally, the &mut dyn TuiApp parameter uses dynamic dispatch. If performance becomes a concern in high-frequency loops, consider making this generic (app: &mut T where T: TuiApp).

♻️ Optional: Generic version for potential performance improvement
-pub fn run_loop(terminal: &mut Term, poll_ms: u64, app: &mut dyn TuiApp) -> anyhow::Result<()> {
+pub fn run_loop<T: TuiApp>(terminal: &mut Term, poll_ms: u64, app: &mut T) -> anyhow::Result<()> {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@resq-tui/src/terminal.rs` around lines 73 - 89, run_loop currently uses a
single event::poll(timeout) which can delay Ctrl+C handling up to poll_ms; to
fix, add a quick non-blocking check using event::poll(Duration::from_millis(0))
and event::read() at the top of the loop to immediately detect Ctrl+C (check
KeyModifiers::CONTROL and KeyCode::Char('c')) before falling back to the
existing blocking event::poll(timeout) path and calling app.handle_key(key);
optionally consider changing the signature from app: &mut dyn TuiApp to a
generic app: &mut T where T: TuiApp if you later need to avoid dynamic dispatch.
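The generic-dispatch variant of the loop can be sketched without crossterm by faking the event source with an iterator; `Key`, `TuiApp`, and `Counter` here are simplified stand-ins for the real trait and event types:

```rust
#[derive(Clone, Copy)]
enum Key { Char(char), CtrlC }

trait TuiApp {
    fn draw(&mut self);
    // Return false to exit the loop.
    fn handle_key(&mut self, key: Key) -> Result<bool, String>;
}

// Generic over T: TuiApp — static dispatch, unlike `&mut dyn TuiApp`.
// Ctrl+C is checked before delegating, so it always exits promptly.
fn run_loop<T: TuiApp, I: Iterator<Item = Key>>(app: &mut T, events: I) -> Result<(), String> {
    for key in events {
        app.draw();
        if let Key::CtrlC = key {
            return Ok(()); // immediate exit on Ctrl+C
        }
        if !app.handle_key(key)? {
            return Ok(());
        }
    }
    Ok(())
}

struct Counter { draws: u32 }

impl TuiApp for Counter {
    fn draw(&mut self) { self.draws += 1; }
    fn handle_key(&mut self, key: Key) -> Result<bool, String> {
        Ok(!matches!(key, Key::Char('q'))) // 'q' quits
    }
}

fn main() {
    let mut app = Counter { draws: 0 };
    let events = [Key::Char('a'), Key::Char('b'), Key::CtrlC].into_iter();
    run_loop(&mut app, events).unwrap();
    println!("draws: {}", app.draws);
}
```

The generic bound is monomorphized per app type, avoiding vtable calls in the hot loop; `&mut dyn TuiApp` remains fine when the loop frequency is bounded by `poll_ms` anyway.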

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 611d5c14-e87c-4910-ad80-2260b189a5b0

📥 Commits

Reviewing files that changed from the base of the PR and between 459d0e3 and 19d863c.

📒 Files selected for processing (4)
  • deploy-cli/src/main.rs
  • flame-graph/src/main.rs
  • perf-monitor/src/main.rs
  • resq-tui/src/terminal.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • perf-monitor/src/main.rs

@WomB0ComB0 WomB0ComB0 merged commit 4c4a637 into master Apr 5, 2026
13 checks passed
@WomB0ComB0 WomB0ComB0 deleted the refactor/workspace-efficiency branch April 5, 2026 13:07
@github-actions github-actions bot mentioned this pull request Apr 5, 2026
