NETOBSERV-2660 Add server performance benchmark improvements #1444

Amoghrd wants to merge 1 commit into main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED.

This pull-request has been approved by: (none). The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files; approvers can indicate their approval by writing `/approve` in a comment.
🚥 Pre-merge checks: ✅ 5 passed
Force-pushed from 8152d50 to 324e90a.
Actionable comments posted: 3
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
pkg/server/server_perf_test.go (1)
21-27: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

**Handle `json.Marshal` failures in benchmark setup.** Line 21 and Line 46 ignore marshal errors. If fixture types change, setup can silently break and skew benchmark behavior.

Suggested fix:

```diff
-matrixResponse, _ := json.Marshal(model.QueryResponse{
+matrixResponse, err := json.Marshal(model.QueryResponse{
 	Status: "",
 	Data: model.QueryResponseData{
 		ResultType: model.ResultTypeMatrix,
 		Result:     model.Matrix{},
 	},
 })
+if err != nil {
+	panic("failed to marshal matrix benchmark response: " + err.Error())
+}
 ...
-streamResponse, _ := json.Marshal(model.QueryResponse{
+streamResponse, err := json.Marshal(model.QueryResponse{
 	Status: "",
 	Data: model.QueryResponseData{
 		ResultType: model.ResultTypeStream,
 		Result:     streams,
 	},
 })
+if err != nil {
+	panic("failed to marshal stream benchmark response: " + err.Error())
+}
```

As per coding guidelines, `**/*.{ts,tsx,go}`: Handle errors with proper error messages and consistent error handling patterns.

Also applies to: 46-52
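Beyond the inline diff, one way to apply this guideline consistently across every fixture site is a small fail-fast helper. The following is a minimal sketch, not code from the PR; `mustMarshal` is a hypothetical name, and panicking mirrors the suggested fix above since these fixtures are built during setup rather than inside a test body:

```go
package server

import (
	"encoding/json"
	"fmt"
)

// mustMarshal is a hypothetical helper: it marshals a benchmark fixture and
// panics with the concrete type and underlying error if marshaling fails, so
// a changed fixture type can never silently skew benchmark behavior.
func mustMarshal(v any) []byte {
	data, err := json.Marshal(v)
	if err != nil {
		panic(fmt.Sprintf("failed to marshal benchmark fixture %T: %v", v, err))
	}
	return data
}
```

With such a helper, each setup site collapses to a single line, e.g. `matrixResponse := mustMarshal(model.QueryResponse{...})`, and the error handling lives in one place.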
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@pkg/server/server_perf_overview_test.go`:
- Around line 121-122: The test currently ignores errors returned by
http.NewRequest (e.g., the line creating req from
backendSvc.URL+"/api/flow/metrics?"+params). Update both occurrences to capture
and handle the error (req, err := http.NewRequest(...)), and fail the test
deterministically (e.g., t.Fatalf or require.NoError) with a clear message if
err != nil before calling client.Do(req). Use the same pattern both where req
is created and where the second request is built, so failures are explicit and
triageable.
In `@pkg/server/server_perf_test.go`:
- Around line 143-145: runFlowRecordsQuery (and the other benchmark helpers in
the same file) currently discards the error returned by http.NewRequest. Update
each call site to check the error and fail the test/benchmark immediately
(t.Fatalf, b.Fatalf, or equivalent), including the function name and the
URL/query string being requested so the diagnostics are actionable.
In `@pkg/server/server_perf_topology_test.go`:
- Around line 64-65: The test currently ignores errors returned by
http.NewRequest. In both places where req, _ := http.NewRequest(...) is used,
replace the blank identifier with err, validate it, and fail immediately with
context, e.g. if err != nil { b.Fatalf("creating request to %s failed: %v",
backendSvc.URL+"/api/flow/metrics?"+params, err) } (or t.Fatalf inside a unit
test). Include the request URL/params and the error in the Fatalf message, and
apply the same change to the second occurrence around the client.Do call.
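All three inline comments converge on the same fail-fast pattern. The sketch below shows one way it could look in a shared helper; `runMetricsQuery` and `backendSvc` echo names used in the PR and the comments above, but the body here is illustrative, not the PR's actual code:

```go
package server

import (
	"io"
	"net/http"
	"testing"
)

// Illustrative version of a shared benchmark helper with the fail-fast error
// handling the review asks for: every discarded error becomes a b.Fatalf that
// names the helper and the endpoint, so failures are immediately triageable.
func runMetricsQuery(b *testing.B, client *http.Client, baseURL, params string) {
	url := baseURL + "/api/flow/metrics?" + params
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		b.Fatalf("runMetricsQuery: creating request to %s failed: %v", url, err)
	}
	resp, err := client.Do(req)
	if err != nil {
		b.Fatalf("runMetricsQuery: request to %s failed: %v", url, err)
	}
	defer resp.Body.Close()
	// drain the body so the connection can be reused by the pool
	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		b.Fatalf("runMetricsQuery: reading response from %s failed: %v", url, err)
	}
}
```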
---
Outside diff comments:
In `@pkg/server/server_perf_test.go`:
- Around line 21-27: The json.Marshal calls that create matrixResponse (and the
similar streamResponse block) currently ignore errors; update both marshals to
capture the error and fail fast with a clear message (e.g., if err != nil {
log.Fatalf("marshal %s response: %v", "matrix"/"stream", err) } or panic/
t.Fatalf in test context) so setup doesn't silently proceed when fixture types
change; modify the code around the matrixResponse and the streamResponse marshal
sites to check err and include the actual error in the fatal message.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2e51b39e-0e5d-48b2-bc83-8c6677959a6a
📒 Files selected for processing (6)
- Makefile
- pkg/server/BENCHMARKS.md
- pkg/server/server_perf_overview_test.go
- pkg/server/server_perf_table_test.go
- pkg/server/server_perf_test.go
- pkg/server/server_perf_topology_test.go
Adds comprehensive Go benchmarks for server performance testing across all major views (Overview, Topology, Table) and scenarios.

## New Benchmark Files

- server_perf_overview_test.go - Overview page scenarios (Basic, DNS, RTT, Dropped, Full)
- server_perf_topology_test.go - Topology view with all scopes (cluster, zone, host, namespace, owner, resource)
- server_perf_table_test.go - Table view with varying result sizes (100, 1K, 10K records)
- server_perf_test.go - Shared helpers and export benchmarks

## Test Coverage

**Overview Tab:**
- Basic metrics (namespace/app aggregation for Bytes/Packets)
- DNS latency scenarios (avg, p90, by name/code/app)
- RTT scenarios (min, avg, p90)
- Packet drop scenarios (by state/cause)
- Full scenario (all features enabled - worst case)
- Filter complexity (1, 2, 4 filters)
- Concurrent user load

**Topology Tab:**
- All metric types (Bytes, Packets, DNS, RTT, Dropped)
- BytesWithDrops combined scenario
- All 6 scopes (cluster, zone, host, namespace, owner, resource)
- Filter complexity (1, 2, 4 filters)
- Concurrent user load

**Table Tab:**
- Basic table view
- With histogram (2 queries)
- Large result sets (100, 1K, 10K records)
- Filter complexity (1, 2, 4 filters)
- Concurrent user load

**Export:**
- Basic CSV export
- With filters
- With limit

## Code Quality Improvements

- Constants for HTTP client connection pool values
- Named filterTest type for clarity
- Comprehensive error handling for json.Marshal and http.NewRequest
- Defensive nil checks for response body cleanup
- Shared helper functions (cleanupBenchmarkServers, runMetricsQuery, etc.)
- DRY filter test data via getCommonFilterTests()
- Optimized timestamp generation (reduced 10K calls to 1 per benchmark)

## Baseline File

Updated benchmark-baseline.txt with CI results for regression detection. All benchmarks run with AMD EPYC 9V74 80-Core Processor baseline.

## Documentation

Added pkg/server/BENCHMARKS.md with:
- Quick start guide
- Detailed benchmark descriptions
- Performance baseline interpretation
- CI integration instructions

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
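For readers skimming the commit message, here is a rough sketch of the overall shape these benchmarks take, assuming (as backendSvc and the canned matrix/stream responses in the review comments suggest) an httptest server serving pre-marshaled Loki-style responses. All names, the fixture JSON, and the query string are illustrative; in the PR, the server under test sits in front of a mocked Loki, while here one httptest.Server stands in for the whole chain to keep the sketch self-contained:

```go
package server

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

// canned Loki-style matrix response, built once at setup (illustrative)
var matrixResponse = []byte(`{"status":"success","data":{"resultType":"matrix","result":[]}}`)

func BenchmarkTopologyBytes(b *testing.B) {
	// fake backend so the benchmark measures the query path, not a live Loki
	backendSvc := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		_, _ = w.Write(matrixResponse)
	}))
	defer backendSvc.Close()

	client := &http.Client{}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		resp, err := client.Get(backendSvc.URL + "/api/flow/metrics?type=Bytes&aggregateBy=namespace")
		if err != nil {
			b.Fatalf("request failed: %v", err)
		}
		_, _ = io.Copy(io.Discard, resp.Body) // drain so the connection is reused
		resp.Body.Close()
	}
}
```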
Force-pushed from c0728a7 to af74df8.
Codecov Report

✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #1444      +/-   ##
==========================================
- Coverage   52.53%   52.42%   -0.12%
==========================================
  Files         232      233       +1
  Lines       12355    12452      +97
  Branches     1551     1564      +13
==========================================
+ Hits         6491     6528      +37
- Misses       5269     5310      +41
- Partials      595      614      +19
```

Flags with carried forward coverage won't be shown.
Description
Implements the future improvements listed in BENCHMARKS.md.
All benchmarks run successfully individually or in small groups. The concurrent benchmarks are separated to avoid macOS port exhaustion when running the full suite.
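A minimal sketch of the concurrent-user pattern and of why connection reuse matters for the port-exhaustion issue mentioned above; the pool sizes, endpoint, and serverURL are illustrative assumptions, not the PR's actual constants:

```go
package server

import (
	"io"
	"net/http"
	"testing"
)

// Shared pooled client: reusing keep-alive connections across parallel
// goroutines avoids opening a fresh ephemeral port per request, which is
// what exhausts ports on macOS when the whole suite runs at once.
var benchClient = &http.Client{
	Transport: &http.Transport{
		MaxIdleConns:        100, // illustrative pool sizes
		MaxIdleConnsPerHost: 100,
	},
}

func BenchmarkTableConcurrent(b *testing.B) {
	serverURL := "http://127.0.0.1:9001" // stands in for the benchmark server's URL
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			resp, err := benchClient.Get(serverURL + "/api/flow/metrics?limit=100")
			if err != nil {
				// Errorf, not Fatalf: FailNow must not be called from
				// goroutines spawned by RunParallel
				b.Errorf("request failed: %v", err)
				return
			}
			_, _ = io.Copy(io.Discard, resp.Body) // drain so the connection is reused
			resp.Body.Close()
		}
	})
}
```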
Dependencies
n/a
Checklist
Summary by CodeRabbit

- Tests
- Documentation
- Chores