refactor: extract shared TUI terminal module and slim workspace deps #35
WomB0ComB0 merged 5 commits into `master`.
Conversation
…dencies

- Add `resq_tui::terminal` module with `init()`, `restore()`, `run_loop()`, and `TuiApp` trait — eliminates duplicated terminal lifecycle boilerplate across all 6 TUI apps
- Re-export `ratatui` and `crossterm` from `resq-tui` so consumers no longer need direct dependencies on both
- Slim tokio from `features = ["full"]` to `["rt", "macros", "rt-multi-thread", "sync"]` at workspace level; only health-checker adds "time", only cli adds "process"
- Standardize all TUI apps to `anyhow::Result` (replacing `Box<dyn Error>` in deploy-cli, log-viewer, flame-graph)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
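The lifecycle contract the first bullet describes reduces to a draw/handle_key pair driven by one loop. The following is a dependency-free sketch of that contract, not the crate's actual code: `String` stands in for ratatui's `Frame`, `char` for crossterm's `KeyEvent`, and the scripted `run_loop` replaces the real poll-based event loop.

```rust
use std::error::Error;

// Stand-in trait mirroring the shape of resq_tui's TuiApp:
// draw a frame, then handle one event; returning false exits.
trait TuiApp {
    fn draw(&mut self, out: &mut String);
    fn handle_key(&mut self, key: char) -> Result<bool, Box<dyn Error>>;
}

struct Counter {
    frames: usize,
}

impl TuiApp for Counter {
    fn draw(&mut self, out: &mut String) {
        self.frames += 1;
        out.push_str(&format!("frame {}\n", self.frames));
    }
    fn handle_key(&mut self, key: char) -> Result<bool, Box<dyn Error>> {
        Ok(key != 'q') // returning false exits the loop
    }
}

// Scripted stand-in for run_loop: draw a frame, then feed one event.
fn run_loop(app: &mut impl TuiApp, keys: &[char]) -> Result<String, Box<dyn Error>> {
    let mut screen = String::new();
    for &k in keys {
        app.draw(&mut screen);
        if !app.handle_key(k)? {
            break;
        }
    }
    Ok(screen)
}

fn main() -> Result<(), Box<dyn Error>> {
    let mut app = Counter { frames: 0 };
    run_loop(&mut app, &['j', 'k', 'q', 'j'])?;
    assert_eq!(app.frames, 3); // 'q' ends the loop after the third draw
    Ok(())
}
```

The payoff is that each of the 6 apps only implements the trait; setup, teardown, and event polling live once in the shared module.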
📝 Walkthrough
Centralizes TUI lifecycle into a new shared `resq_tui::terminal` module.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Term as resq_tui::Term
    participant App
    participant CT as crossterm::event
    User->>Term: terminal::init()
    Term->>App: run_loop(poll_ms, app) calls draw()
    App->>Term: draw() renders frame
    Term->>CT: poll(poll_ms)
    CT-->>Term: KeyEvent
    Term->>App: handle_key(KeyEvent)
    App-->>Term: return bool (continue?)
    alt continue == false
        Term->>Term: exit run_loop
        Term->>User: terminal::restore()
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Code Review
This pull request centralizes TUI lifecycle management by introducing a terminal module in resq-tui and refactoring several CLI tools to use a shared run_loop and TuiApp trait. Feedback highlights a performance concern in perf-monitor where blocking network I/O is performed during rendering, a regression in event handling within the new run_loop that ignores non-key events, and unhandled errors when executing deployment actions in deploy-cli.
```rust
if self.last_fetch.elapsed() >= self.tick_rate() {
    self.update();
}
```
Performing blocking network I/O (self.update()) inside the draw method is a serious performance issue. The draw closure is intended for rendering logic; blocking it will cause the TUI to freeze and become unresponsive during network requests. Consider moving the update logic to a separate background task or a dedicated update hook in the event loop.
```rust
loop {
    terminal.draw(|f| app.draw(f))?;
    if event::poll(timeout)? {
        if let Event::Key(key) = event::read()? {
```
The run_loop function currently only processes Event::Key and discards all other event types (e.g., Event::Mouse, Event::Resize, Event::FocusGained). This is a regression for apps that might need mouse support or specific handling for other events. Consider updating the TuiApp trait to handle a generic crossterm::event::Event instead of just KeyEvent.
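One way to widen the trait as this comment suggests, without breaking the existing key-only apps, is a `handle_event` method with a default implementation. This is a dependency-free sketch of that suggestion, not the crate's code: the `Event` enum here stands in for `crossterm::event::Event`, and `char` for `KeyEvent`.

```rust
// Stand-in for crossterm::event::Event.
enum Event {
    Key(char),
    Resize(u16, u16),
    Mouse,
}

trait TuiApp {
    fn handle_key(&mut self, key: char) -> bool;

    // Default preserves today's key-only behaviour, so existing apps
    // compile unchanged; apps wanting mouse/resize override this.
    fn handle_event(&mut self, ev: Event) -> bool {
        match ev {
            Event::Key(k) => self.handle_key(k),
            _ => true, // ignore non-key events, keep running
        }
    }
}

struct App {
    resizes: usize,
}

impl TuiApp for App {
    fn handle_key(&mut self, key: char) -> bool {
        key != 'q'
    }
    // Opt in to resize handling while delegating keys as before.
    fn handle_event(&mut self, ev: Event) -> bool {
        match ev {
            Event::Resize(_, _) => {
                self.resizes += 1;
                true
            }
            Event::Key(k) => self.handle_key(k),
            Event::Mouse => true,
        }
    }
}

fn main() {
    let mut app = App { resizes: 0 };
    assert!(app.handle_event(Event::Resize(80, 24)));
    assert!(app.handle_event(Event::Mouse));
    assert!(!app.handle_event(Event::Key('q')));
    assert_eq!(app.resizes, 1);
}
```

With this shape, `run_loop` would pass every `event::read()` result to `handle_event` instead of discarding non-key events.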
```rust
let _ = k8s::run_action(&root, &env, &action, svc, tx_clone);
} else {
    let svc = if action == "down" {
        None
    } else {
        Some(service.as_str())
    };
    let _ = docker::run_action(&root, &env, &action, svc, tx_clone);
```
The results of k8s::run_action and docker::run_action are being ignored. If these commands fail to start (e.g., binary not found or permission error), the UI will show an 'EXECUTING' message but provide no feedback about the failure. It is better to handle the error and push a descriptive message to output_lines.
Actionable comments posted: 9
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/settings.local.json:
- Around line 3-7: The committed settings include overly broad permissions:
remove mcp__plugin_context-mode_context-mode__ctx_execute_file and the "Bash(git
commit:*)" entry from the "allow" array in settings.local.json and keep only
read/test-oriented permissions (e.g.,
mcp__plugin_context-mode_context-mode__ctx_search and "Bash(cargo test:*)");
instead, document or add those write/commit permissions to an untracked local
override file (e.g., local settings not checked into VCS) so write/commit
capabilities are granted only in developer-specific, uncommitted configs.
In `@cleanup/src/main.rs`:
- Around line 174-177: The Up navigation branch currently selects Some(0) even
when self.entries is empty, which leads toggle_selected() to panic when Space
later indexes self.entries[0]; modify the KeyCode::Up | KeyCode::Char('k')
branch to first check if self.entries.is_empty() and if so do nothing (or return
Ok(false)), otherwise proceed to compute i =
self.list_state.selected().unwrap_or(0) and call
self.list_state.select(Some(i.saturating_sub(1))); ensure the guard prevents
selecting an index when self.entries is empty and reference the KeyCode::Up
handling, self.list_state.selected()/select, toggle_selected(), and self.entries
symbols.
In `@deploy-cli/src/main.rs`:
- Around line 202-231: In the KeyCode::Enter handler, don't discard the Result
returned by k8s::run_action and docker::run_action; instead capture the Result,
match on it, and call self.push_output with a clear failure message (including
action, service, env and the error) when Err occurs; use the existing variables
(action, service, env, root, tx_clone) and the run_action return value to
surface startup failures to the operator rather than silently ignoring them.
In `@flame-graph/src/main.rs`:
- Around line 121-146: The TUI lost the ability to start profiling via
Enter—update App::handle_key (the TuiApp impl) to handle KeyCode::Enter (or
KeyCode::Char('\n')) in the match alongside navigation and quit, read the
selected index from self.list_state and self.services, and invoke the existing
start-profiling path used elsewhere in the app (e.g. call the same routine or
state transition that begins profiling for the selected service) so pressing
Enter starts profiling the selected target again.
In `@health-checker/src/main.rs`:
- Around line 166-168: After calling terminal::init() and execute!(io::stdout(),
EnableMouseCapture) ensure cleanup is unconditional: don't let any later `?`
skip the disable/restore. Capture the loop/result into a variable (e.g., let
result = run_main_loop();), then always call the cleanup code
(execute!(io::stdout(), DisableMouseCapture) and the terminal restore/cleanup)
before returning the result, or better create an RAII guard struct (e.g.,
MouseCaptureGuard) that calls execute!(..., EnableMouseCapture) on creation and
DisableMouseCapture + terminal restore in Drop so cleanup happens even on early
returns; apply the same pattern around the other EnableMouseCapture/restore pair
noted at the other location.
In `@log-viewer/src/main.rs`:
- Around line 154-160: The draw loop in App::draw currently drains self.rx (the
unbounded channel) until empty which can stall the UI; change the channel to a
bounded one when creating self.rx (replace the unbounded channel creation at the
location noted around line 234 with a bounded channel of a reasonable capacity)
and add a per-frame drain limit in App::draw (e.g. define a constant
MAX_INGEST_PER_FRAME and replace the while-let try_recv loop with a for loop
that attempts up to MAX_INGEST_PER_FRAME iterations calling self.rx.try_recv()
and self.push_entry(entry), breaking when empty); this enforces backpressure and
keeps rendering/input responsive while leaving remaining entries in the channel
for later frames.
In `@perf-monitor/src/main.rs`:
- Around line 836-843: The draw path currently calls App::update (in impl TuiApp
for App::draw) which does blocking reqwest::blocking HTTP I/O with a 5s timeout
and freezes rendering; remove the call to self.update() from draw and move the
polling into a background worker (either a dedicated std::thread or a Tokio
task) that periodically checks App::tick_rate() and invokes App::update() off
the render thread, writing results into shared cached state (e.g., an
Arc<Mutex<AppState>> or send updates over a channel) and updating
App::last_fetch from that worker; ensure draw(frame, self) only reads the cached
state and never performs blocking I/O.
In `@resq-tui/src/terminal.rs`:
- Around line 38-50: Wrap terminal setup in an RAII guard so partial
initialization can't leave the terminal wedged: change init() to return a guard
type (e.g., Terminal or TerminalGuard) that owns the state and implements Drop
to call the existing restore() logic (or a private restore_internal()) to leave
the alternate screen and disable raw mode; ensure init() performs cleanup of any
steps already done when an error occurs during setup so callers always receive a
guard that will restore on Drop, and keep the restore() function delegating to
the same cleanup routine so Drop and manual restore share the same
implementation.
- Around line 58-59: Add a "# Errors" section to the doc comment for the trait
method handle_key to document its error contract: state that handle_key returns
Err(anyhow::Error) when key handling fails (e.g., IO, terminal state, or
downstream handler errors) and that callers should propagate or handle the
anyhow::Error; update the doc comment immediately above the fn handle_key(&mut
self, key: event::KeyEvent) -> anyhow::Result<bool> declaration so Clippy's
requirement for Result error docs is satisfied.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2200a6d4-4a72-403e-bc39-fc8dbf9bd972
📒 Files selected for processing (18)

- .claude/settings.local.json
- Cargo.toml
- cleanup/Cargo.toml
- cleanup/src/main.rs
- cli/Cargo.toml
- deploy-cli/Cargo.toml
- deploy-cli/src/main.rs
- flame-graph/Cargo.toml
- flame-graph/src/main.rs
- health-checker/Cargo.toml
- health-checker/src/main.rs
- log-viewer/Cargo.toml
- log-viewer/src/main.rs
- perf-monitor/Cargo.toml
- perf-monitor/src/main.rs
- resq-tui/Cargo.toml
- resq-tui/src/lib.rs
- resq-tui/src/terminal.rs
💤 Files with no reviewable changes (1)
- perf-monitor/Cargo.toml
```json
"allow": [
  "mcp__plugin_context-mode_context-mode__ctx_search",
  "mcp__plugin_context-mode_context-mode__ctx_execute_file",
  "Bash(cargo test:*)",
  "Bash(git commit:*)"
```
Reduce committed Claude permissions to least privilege
ctx_execute_file and Bash(git commit:*) are overly permissive in a tracked repo config and can allow unintended file mutation/commit actions. Prefer keeping committed settings read-only/test-oriented, and put write/commit permissions in untracked local overrides instead.
Proposed hardening diff

```diff
 {
   "permissions": {
     "allow": [
       "mcp__plugin_context-mode_context-mode__ctx_search",
-      "mcp__plugin_context-mode_context-mode__ctx_execute_file",
-      "Bash(cargo test:*)",
-      "Bash(git commit:*)"
+      "Bash(cargo test:*)"
     ]
   }
 }
```
```rust
KeyCode::Enter => {
    if let (Some(action), Some(service)) =
        (self.selected_action(), self.selected_service())
    {
        let action = action.to_string();
        let service = service.to_string();
        let env = self.env.clone();
        let root = self.project_root.clone();
        let tx_clone = self.tx.clone();
        self.push_output(format!(
            "=== EXECUTING {} ON {} [{}] ===",
            action.to_uppercase(),
            service.to_uppercase(),
            env.to_uppercase()
        ));
        if self.use_k8s {
            let svc =
                if ["deploy", "destroy", "status"].contains(&action.as_str()) {
                    None
                } else {
                    Some(service.as_str())
                };
            let _ = k8s::run_action(&root, &env, &action, svc, tx_clone);
        } else {
            let svc = if action == "down" {
                None
            } else {
                Some(service.as_str())
            };
            let _ = docker::run_action(&root, &env, &action, svc, tx_clone);
```
Surface action startup failures instead of discarding them.
Both branches throw away the Result from run_action(). If dispatch fails, the operator only sees the === EXECUTING ... === banner and never learns that nothing actually started.
🩹 Minimal fix

```diff
 if self.use_k8s {
     let svc =
         if ["deploy", "destroy", "status"].contains(&action.as_str()) {
             None
         } else {
             Some(service.as_str())
         };
-    let _ = k8s::run_action(&root, &env, &action, svc, tx_clone);
+    if let Err(err) = k8s::run_action(&root, &env, &action, svc, tx_clone) {
+        self.push_output(format!("ERROR: {err}"));
+    }
 } else {
     let svc = if action == "down" {
         None
     } else {
         Some(service.as_str())
     };
-    let _ = docker::run_action(&root, &env, &action, svc, tx_clone);
+    if let Err(err) =
+        docker::run_action(&root, &env, &action, svc, tx_clone)
+    {
+        self.push_output(format!("ERROR: {err}"));
+    }
 }
```
```rust
impl TuiApp for App {
    fn draw(&mut self, frame: &mut Frame) {
        draw_ui(frame, self);
    }

    fn handle_key(&mut self, key: event::KeyEvent) -> anyhow::Result<bool> {
        if key.kind != KeyEventKind::Press {
            return Ok(true);
        }
        match key.code {
            KeyCode::Char('q') | KeyCode::Esc => Ok(false),
            KeyCode::Down | KeyCode::Char('j') => {
                let i = self.list_state.selected().unwrap_or(0);
                self.list_state
                    .select(Some((i + 1).min(self.services.len() - 1)));
                Ok(true)
            }
            KeyCode::Up | KeyCode::Char('k') => {
                let i = self.list_state.selected().unwrap_or(0);
                self.list_state.select(Some(i.saturating_sub(1)));
                Ok(true)
            }
            _ => Ok(true),
        }
    }
}
```
Wire Enter back up or stop advertising it.
The new handle_key() implementation only supports quit/navigation, so the TUI no longer has any path that starts profiling from the selected target. That's a functional regression on the primary screen.
```rust
let mut terminal = terminal::init()?;
execute!(io::stdout(), EnableMouseCapture)?;
```
Make mouse-capture cleanup unconditional.
Once EnableMouseCapture succeeds, any later ? can skip both cleanup calls here and leave the shell stuck in mouse/raw-screen mode. Capture the loop result first, then release mouse capture and restore the terminal from a guard or a finally-style block.
Also applies to: 227-228
```rust
impl TuiApp for App {
    fn draw(&mut self, frame: &mut resq_tui::ratatui::Frame) {
        // Drain incoming log entries before each render.
        while let Ok(entry) = self.rx.try_recv() {
            self.push_entry(entry);
        }
        draw_ui(frame, self);
```
Cap ingestion per frame and add backpressure.
draw() drains an unbounded channel until it's empty without any per-frame budget. Under sustained log bursts, the UI thread can spend an entire frame ingesting the backlog while unprocessed entries accumulate in the unbounded queue—delaying responsiveness and potentially degrading performance during high-throughput streaming. Use a bounded channel and/or implement a per-frame drain limit so that rendering and input handling remain responsive.
Per coding guidelines: log-viewer/src/{main,sources}.rs must "Ensure the 10,000-line ring buffer remains performant during high-throughput streaming."
Also applies to line 234, where the unbounded channel is created.
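The per-frame cap this comment asks for (and which the follow-up commit sets at 256 entries) can be sketched with std's bounded `sync_channel`. The names `MAX_INGEST_PER_FRAME` and `drain_one_frame` are illustrative stand-ins, not the crate's actual API, and `Vec<String>` stands in for the ring buffer.

```rust
use std::sync::mpsc::{Receiver, TryRecvError};

const MAX_INGEST_PER_FRAME: usize = 256; // cap chosen in the fix commit

// Drain at most MAX_INGEST_PER_FRAME entries per frame; leftover
// entries stay queued for the next frame, and the bounded channel
// applies backpressure to producers when the UI falls behind.
fn drain_one_frame(rx: &Receiver<String>, buf: &mut Vec<String>) -> usize {
    let mut ingested = 0;
    for _ in 0..MAX_INGEST_PER_FRAME {
        match rx.try_recv() {
            Ok(entry) => {
                buf.push(entry); // stand-in for push_entry()
                ingested += 1;
            }
            Err(TryRecvError::Empty) | Err(TryRecvError::Disconnected) => break,
        }
    }
    ingested
}

fn main() {
    let (tx, rx) = std::sync::mpsc::sync_channel::<String>(1024);
    for i in 0..300 {
        tx.send(format!("line {i}")).unwrap();
    }
    let mut buf = Vec::new();
    assert_eq!(drain_one_frame(&rx, &mut buf), 256); // first frame hits the cap
    assert_eq!(drain_one_frame(&rx, &mut buf), 44);  // second frame finishes
}
```

A burst of 300 lines is thus spread across two frames instead of stalling one, keeping key handling responsive.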
```rust
impl TuiApp for App {
    fn draw(&mut self, frame: &mut Frame) {
        // Tick-based update: poll the service when the refresh interval has elapsed.
        if self.last_fetch.elapsed() >= self.tick_rate() {
            self.update();
        }
        draw(frame, self);
    }
```
Don't poll the service from draw().
self.update() does blocking HTTP I/O with a 5s timeout via reqwest::blocking::Client, so putting it on the render path freezes repaint and key handling whenever the endpoint is slow. Move polling to a worker thread or Tokio task and let draw() render cached state only.
As per coding guidelines, perf-monitor/src/main.rs: "Polling logic must stay non-blocking using Tokio or dedicated threads."
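A minimal std-threads sketch of the separation this comment describes: `fetch_metrics` is a hypothetical placeholder for perf-monitor's blocking reqwest call, and in the real app the channel would feed the cached state that `draw()` reads. Names and the channel-based design are illustrative, not the crate's code.

```rust
use std::sync::mpsc::{self, Receiver};
use std::thread;
use std::time::{Duration, Instant};

// Placeholder for the blocking HTTP fetch (simulated with a sleep).
fn fetch_metrics() -> String {
    thread::sleep(Duration::from_millis(20));
    "cpu=42%".to_string()
}

// The worker owns both the tick timing and the blocking I/O, pushing
// fresh samples over a channel; the render thread never blocks on it.
fn spawn_poller(tick: Duration) -> Receiver<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut last_fetch = Instant::now() - tick; // poll immediately
        loop {
            if last_fetch.elapsed() >= tick {
                if tx.send(fetch_metrics()).is_err() {
                    break; // receiver dropped: UI exited, stop polling
                }
                last_fetch = Instant::now();
            }
            thread::sleep(Duration::from_millis(5));
        }
    });
    rx
}

fn main() {
    let rx = spawn_poller(Duration::from_millis(30));
    let mut cached = String::from("loading");
    let deadline = Instant::now() + Duration::from_millis(250);
    while Instant::now() < deadline {
        // Non-blocking drain: draw() would only ever read `cached`.
        while let Ok(sample) = rx.try_recv() {
            cached = sample;
        }
        thread::sleep(Duration::from_millis(16)); // ~60fps frame budget
    }
    assert_eq!(cached, "cpu=42%");
}
```

Even a 5-second fetch timeout then only delays the next sample, never the next frame or keypress.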
```rust
pub fn init() -> anyhow::Result<Term> {
    enable_raw_mode()?;
    execute!(io::stdout(), EnterAlternateScreen)?;
    let terminal = Terminal::new(CrosstermBackend::new(io::stdout()))?;
    Ok(terminal)
}

/// Leave the alternate screen and disable raw mode.
///
/// Safe to call even if the terminal is in a partially-initialised state.
pub fn restore() {
    let _ = execute!(io::stdout(), LeaveAlternateScreen);
    let _ = disable_raw_mode();
```
Make terminal restoration exception-safe.
init() can fail after raw mode or alternate screen is already enabled, and every caller currently has to remember to hit restore() on every exit path. Any early ? after initialization leaves the user's terminal wedged. Returning a guard/wrapper that restores in Drop would fix this at the root instead of repeating fragile cleanup at each call site.
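The Drop-based contract this comment asks for can be sketched without crossterm. Here a `Vec<&str>` log stands in for the real terminal state changes (raw mode, alternate screen), and `CleanupGuard` for the proposed guard type; everything below is an illustration of the RAII pattern, not the crate's code.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Runs its cleanup closure in Drop, so restore happens on early `?`
// returns and on unwinds alike; manual restore() and Drop would share
// this one code path in the real implementation.
struct CleanupGuard<F: FnMut()> {
    cleanup: F,
}

impl<F: FnMut()> Drop for CleanupGuard<F> {
    fn drop(&mut self) {
        (self.cleanup)();
    }
}

type Log = Rc<RefCell<Vec<&'static str>>>;

fn run(fail_early: bool, log: Log) -> Result<(), String> {
    log.borrow_mut().push("enable_raw_mode + enter_alternate_screen");
    let cleanup_log = Rc::clone(&log);
    let _guard = CleanupGuard {
        cleanup: move || {
            cleanup_log
                .borrow_mut()
                .push("leave_alternate_screen + disable_raw_mode");
        },
    };
    if fail_early {
        // Simulates a `?` bailing out after setup: the guard still restores.
        return Err("init failed after raw mode was enabled".into());
    }
    log.borrow_mut().push("event loop ran");
    Ok(())
}

fn main() {
    let log: Log = Rc::new(RefCell::new(Vec::new()));
    assert!(run(true, Rc::clone(&log)).is_err());
    assert_eq!(
        *log.borrow(),
        vec![
            "enable_raw_mode + enter_alternate_screen",
            "leave_alternate_screen + disable_raw_mode"
        ]
    );
}
```

With a guard returned from `init()`, every caller's early-exit path restores the terminal for free, and the health-checker mouse-capture pairing could use the same pattern.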
```rust
/// Handle a key event. Return `false` to exit the loop.
fn handle_key(&mut self, key: event::KeyEvent) -> anyhow::Result<bool>;
```
Add the missing # Errors docs for handle_key.
Clippy is already failing on this trait method because the Result return doesn't document its error contract.
📝 Minimal fix

```diff
 /// Handle a key event. Return `false` to exit the loop.
+///
+/// # Errors
+/// Returns any application-specific error that should terminate the TUI loop.
 fn handle_key(&mut self, key: event::KeyEvent) -> anyhow::Result<bool>;
```
🧰 Tools
🪛 GitHub Actions: Rust Clippy
[error] 59-59: Clippy (missing-errors-doc) error: docs for function returning Result are missing a # Errors section for fn handle_key(&mut self, key: event::KeyEvent) -> anyhow::Result<bool>;
- terminal.rs: add missing `# Errors` doc on TuiApp::handle_key (clippy)
- cleanup: guard empty-list panic on Up key navigation
- deploy-cli: surface run_action() errors instead of discarding Results
- flame-graph: restore Enter key handler to select profiling target
- perf-monitor: move blocking HTTP I/O out of draw() into manual tick loop
- log-viewer: cap channel drain to 256 entries per frame to prevent UI stall

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧹 Nitpick comments (4)
flame-graph/src/main.rs (2)
174-182: TODO indicates incomplete profiling dispatch. The placeholder on line 181 shows that the profiling backend dispatch is not yet implemented. The TUI correctly captures the user's selection, but the actual profiling functionality is pending.

Would you like me to help scaffold the profiling dispatch logic based on `target.cmd_type`? I can generate a basic implementation that routes to different profiling backends (hce, api, python, perf) or open an issue to track this work.
135-145: Navigation could panic on empty services list. While `services` is currently hardcoded with 4 items, the navigation logic assumes the list is non-empty:

- Line 138: `self.services.len() - 1` would underflow if `services` is empty
- Line 143: `saturating_sub(1)` is safe

Consider defensive bounds checking for future-proofing.

🛡️ Optional: Defensive bounds check

```diff
 KeyCode::Down | KeyCode::Char('j') => {
-    let i = self.list_state.selected().unwrap_or(0);
-    self.list_state
-        .select(Some((i + 1).min(self.services.len() - 1)));
+    if !self.services.is_empty() {
+        let i = self.list_state.selected().unwrap_or(0);
+        self.list_state
+            .select(Some((i + 1).min(self.services.len() - 1)));
+    }
     Ok(true)
 }
```

deploy-cli/src/main.rs (1)
Verify each finding against the current code and only fix it if needed. In `@flame-graph/src/main.rs` around lines 135 - 145, The KeyCode Up/Down handlers assume self.services is non-empty and calling self.services.len() - 1 can underflow; add a defensive check at the start of those handlers (or a small helper) to return Ok(true) or set self.list_state.select(None) when self.services.is_empty(); when non-empty compute the new index using safe arithmetic (e.g., clamp to 0..=self.services.len()-1 or use checked_sub/saturating_sub) before calling self.list_state.select(Some(...)) so navigation never panics (references: KeyCode::Down / KeyCode::Up handlers, self.list_state, self.services).

deploy-cli/src/main.rs (1)
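The bounds logic the review asks for can be captured in pure helper functions, which keeps it unit-testable; these signatures are illustrative, not the crate's API:

```rust
// Clamped list navigation that tolerates an empty list: returns None when
// there is nothing to select instead of underflowing on `len - 1`.
fn next_index(current: Option<usize>, len: usize) -> Option<usize> {
    if len == 0 {
        return None; // empty list: nothing to select
    }
    let i = current.unwrap_or(0);
    Some((i + 1).min(len - 1)) // safe: len >= 1 here
}

fn prev_index(current: Option<usize>, len: usize) -> Option<usize> {
    if len == 0 {
        return None;
    }
    Some(current.unwrap_or(0).saturating_sub(1))
}
```

The key handlers would then call `self.list_state.select(next_index(self.list_state.selected(), self.services.len()))`, so an empty list simply clears the selection.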
434-441: Error conversion uses `anyhow!` macro correctly. The conversion from `Result<(), String>` to `anyhow::Result<()>` using `map_err(|e| anyhow::anyhow!(e))` is idiomatic. Consider using `.map_err(anyhow::Error::msg)` for slightly cleaner code, but this is a minor style preference.

♻️ Optional: Slightly cleaner error conversion
```diff
-        k8s::run_action(project_root, env, action, service, tx).map_err(|e| anyhow::anyhow!(e))?;
+        k8s::run_action(project_root, env, action, service, tx).map_err(anyhow::Error::msg)?;
     } else {
-        docker::run_action(project_root, env, action, service, tx)
-            .map_err(|e| anyhow::anyhow!(e))?;
+        docker::run_action(project_root, env, action, service, tx).map_err(anyhow::Error::msg)?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deploy-cli/src/main.rs` around lines 434 - 441, The error conversions for the branches calling k8s::run_action and docker::run_action currently use .map_err(|e| anyhow::anyhow!(e)); replace those with .map_err(anyhow::Error::msg) to simplify the conversion from Result<_, String> to anyhow::Result<()>—update both calls (k8s::run_action(...) and docker::run_action(...)) accordingly while keeping the existing channel setup (let (tx, rx) = mpsc::unbounded_channel::<String>()) and behavior unchanged.

resq-tui/src/terminal.rs (1)
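For illustration, here is the same point-free pattern using only std, with `io::Error::other` (Rust 1.74+) standing in for `anyhow::Error::msg`; `run_action` is a hypothetical stand-in for the real deploy calls:

```rust
use std::io;

// Stand-in for k8s::run_action / docker::run_action: a fallible call that
// reports errors as plain Strings.
fn run_action(ok: bool) -> Result<(), String> {
    if ok {
        Ok(())
    } else {
        Err("deploy failed".to_string())
    }
}

// Point-free conversion: pass the error constructor directly instead of a
// closure. With anyhow this would read `.map_err(anyhow::Error::msg)?`.
fn deploy(ok: bool) -> Result<(), io::Error> {
    run_action(ok).map_err(io::Error::other)?;
    Ok(())
}
```

Both forms behave identically; the point-free version just avoids naming a throwaway closure argument.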
73-89: Consider using an `event::poll` timeout of 0 for responsive Ctrl+C handling. The current implementation only checks for Ctrl+C when an event is received within the poll window. If no key events arrive, Ctrl+C detection is delayed by up to `poll_ms`. This is minor but worth noting for responsiveness.

Additionally, the `&mut dyn TuiApp` parameter uses dynamic dispatch. If performance becomes a concern in high-frequency loops, consider making this generic (`app: &mut T` where `T: TuiApp`).

♻️ Optional: Generic version for potential performance improvement
```diff
-pub fn run_loop(terminal: &mut Term, poll_ms: u64, app: &mut dyn TuiApp) -> anyhow::Result<()> {
+pub fn run_loop<T: TuiApp>(terminal: &mut Term, poll_ms: u64, app: &mut T) -> anyhow::Result<()> {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@resq-tui/src/terminal.rs` around lines 73 - 89, run_loop currently uses a single event::poll(timeout) which can delay Ctrl+C handling up to poll_ms; to fix, add a quick non-blocking check using event::poll(Duration::from_millis(0)) and event::read() at the top of the loop to immediately detect Ctrl+C (check KeyModifiers::CONTROL and KeyCode::Char('c')) before falling back to the existing blocking event::poll(timeout) path and calling app.handle_key(key); optionally consider changing the signature from app: &mut dyn TuiApp to a generic app: &mut T where T: TuiApp if you later need to avoid dynamic dispatch.
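A std-only model of the two-phase poll idea, with crossterm's `poll`/`read` replaced by a hypothetical in-memory `EventSource` so the control flow is testable:

```rust
// Events are simplified stand-ins for crossterm's Event/KeyEvent types.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Event {
    CtrlC,
    Key(char),
    Tick, // models "no input within the poll window"
}

// Stand-in for event::poll + event::read: a queue of pending events.
struct EventSource {
    pending: std::collections::VecDeque<Event>,
}

impl EventSource {
    // poll_ms = 0 would model the non-blocking check; here both phases
    // simply pop the next queued event.
    fn poll(&mut self, _poll_ms: u64) -> Option<Event> {
        self.pending.pop_front()
    }
}

// Returns the events the app actually handled before exiting.
fn run_loop(src: &mut EventSource, poll_ms: u64, max_iters: usize) -> Vec<Event> {
    let mut handled = Vec::new();
    for _ in 0..max_iters {
        // Phase 1: zero-timeout peek so a pending Ctrl+C is seen immediately.
        if src.pending.front() == Some(&Event::CtrlC) {
            break;
        }
        // Phase 2: the normal (blocking) poll for app input.
        match src.poll(poll_ms) {
            Some(Event::CtrlC) => break,
            Some(Event::Tick) | None => continue,
            Some(ev) => handled.push(ev),
        }
    }
    handled
}
```

In the real module, phase 1 would be `event::poll(Duration::from_millis(0))` followed by `event::read()`, and phase 2 the existing `event::poll(timeout)` path.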
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 611d5c14-e87c-4910-ad80-2260b189a5b0
📒 Files selected for processing (4)
- deploy-cli/src/main.rs
- flame-graph/src/main.rs
- perf-monitor/src/main.rs
- resq-tui/src/terminal.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- perf-monitor/src/main.rs
Summary
- `resq_tui::terminal` module: new `TuiApp` trait + `init()`/`restore()`/`run_loop()` helpers eliminate duplicated terminal lifecycle boilerplate across all 6 TUI apps
- Re-export `ratatui`/`crossterm` from `resq-tui` — consumers no longer need direct deps on both crates
- Slim `tokio` features: workspace `["full"]` → `["rt", "macros", "rt-multi-thread", "sync"]`; only `health-checker` adds `"time"`, only `cli` adds `"process"`
- Standardize all TUI apps on `anyhow::Result` (replaced `Box<dyn Error>` in deploy-cli, log-viewer, flame-graph)

Test plan
- `cargo check --workspace` passes
- `cargo test --workspace` passes (all 14 doc-tests + unit tests)
- `cargo check -p`

🤖 Generated with Claude Code
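A minimal std-only model of the `TuiApp` lifecycle described in the summary above; the real trait draws into a ratatui `Frame` and receives crossterm `KeyEvent`s, so the `String` frame and `char` keys here are simplifications:

```rust
// Model of the shared-loop shape from the PR summary: draw() renders a
// frame, handle_key() returns whether the loop should continue.
trait TuiApp {
    fn draw(&mut self) -> String;
    fn handle_key(&mut self, key: char) -> bool;
}

// Toy app: counts '+' presses, quits on 'q'.
struct Counter {
    count: u32,
}

impl TuiApp for Counter {
    fn draw(&mut self) -> String {
        format!("count: {}", self.count)
    }
    fn handle_key(&mut self, key: char) -> bool {
        match key {
            'q' => false, // quit
            '+' => {
                self.count += 1;
                true
            }
            _ => true,
        }
    }
}

// Shared loop over a trait object, as in the PR's run_loop: feed keys
// until the app asks to stop, then return the final rendered frame.
fn run_loop(app: &mut dyn TuiApp, keys: &[char]) -> String {
    for &k in keys {
        let _frame = app.draw(); // in the real loop, this hits the terminal
        if !app.handle_key(k) {
            break;
        }
    }
    app.draw()
}
```

Each of the six TUI apps then only implements `draw` and `handle_key`; the init/restore/poll plumbing lives once in `resq-tui`.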
Summary by CodeRabbit
New Features
Refactor