
refactor: use pure gRPC instead of grpc-web wrapper on the provider side#2194

Open
nimrod-teich wants to merge 5 commits into main from refactor/provider-listener-pure-grpc

Conversation

@nimrod-teich (Contributor) commented Jan 20, 2026

User description

Summary

Replace the grpc-web/h2c/HTTP hybrid server with a cleaner architecture using pure gRPC and native compression.

Background

The previous implementation used grpc-web and h2c HTTP wrappers to support the Lava SDK (lava-sdk/lavajs), which allowed browser-based JavaScript clients to communicate with providers. Since SDK support has been removed (see #2186), this infrastructure is no longer needed.

Changes

1. Pure gRPC Provider Listener

Replace the hybrid server with a cleaner architecture:

  • Use cmux to multiplex HTTP health checks and native gRPC on the same port
  • Remove grpcweb.WrapServer and h2c.NewHandler complexity
  • Add gRPC health checking protocol support (grpc_health_v1)
  • Keep HTTP health endpoint for Kubernetes probes compatibility
  • Optimize cmux matching: HTTP1Fast() first, then Any() for gRPC (avoids expensive HTTP/2 header parsing at high scale)

Benefits:

  • Native gRPC connection management (keepalives, stream limits)
  • Proper graceful shutdown via grpcServer.GracefulStop()
  • Simpler, more maintainable code
  • Better performance at scale (no protocol translation overhead)
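The cmux matching order above is the key detail: HTTP/1 health probes are identified by a cheap method-prefix check, and everything else falls through to gRPC. Since cmux is an external dependency, here is a stdlib-only sketch (not the actual cmux internals) of what that HTTP1Fast-then-Any routing decision looks like on a connection's first bytes:

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative sketch: cmux routes a new connection by peeking at its first
// bytes. HTTP1Fast()-style matching checks for an HTTP/1 method prefix, which
// is why putting it first avoids parsing HTTP/2 headers for the common gRPC
// case (gRPC connections start with the HTTP/2 client preface instead).
var http1Methods = []string{
	"GET ", "PUT ", "HEAD ", "POST ", "DELETE ", "OPTIONS ", "PATCH ", "CONNECT ", "TRACE ",
}

func matchHTTP1Fast(firstBytes []byte) bool {
	for _, m := range http1Methods {
		if strings.HasPrefix(string(firstBytes), m) {
			return true
		}
	}
	return false
}

func route(firstBytes []byte) string {
	if matchHTTP1Fast(firstBytes) {
		return "http-health" // e.g. a Kubernetes probe: "GET /lava/health"
	}
	return "grpc" // the Any() matcher: everything else, including the HTTP/2 preface
}

func main() {
	fmt.Println(route([]byte("GET /lava/health HTTP/1.1\r\n")))
	fmt.Println(route([]byte("PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n")))
}
```

In the real listener, each matched listener is then handed to its own server (`httpServer.Serve(httpListener)` and `grpcServer.Serve(grpcListener)`), so both protocols share one port.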

2. Native gRPC Compression

Replace manual application-level compression with native gRPC gzip:

  • Remove common.CompressData() / common.DecompressData() from relay path
  • Remove custom headers (LavaCompressionHeader, LavaCompressionSupportHeader)
  • Register gzip compressor on provider via import _ "google.golang.org/grpc/encoding/gzip"
  • Consumer already uses grpc.UseCompressor(gzip.Name) when flag is enabled

3. Flag Rename

  • --enable-application-level-compression → --enable-grpc-compression
  • Updated description: "enable gzip compression for gRPC messages between consumer and provider (reduces bandwidth, adds CPU overhead)"

BREAKING CHANGES

gRPC-web Protocol Removed

gRPC-web protocol is no longer supported. Browser-based clients that relied on grpc-web to communicate directly with providers will no longer work. Only native gRPC clients are supported.

Who is affected: Only the deprecated lava-sdk (browser clients) which has already been removed.

Compression Flag Renamed

The --enable-application-level-compression flag has been renamed to --enable-grpc-compression.

Migration: Update any scripts, configurations, or Kubernetes manifests using the old flag name.
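A migration sweep over such files could look like the following sketch. The `demo-manifests/` directory and file are hypothetical stand-ins created for the demo; in practice you would point the loop at your own scripts or Kubernetes manifest directory:

```shell
old='--enable-application-level-compression'
new='--enable-grpc-compression'

# Demo setup: a stand-in manifest using the old flag (hypothetical path).
mkdir -p demo-manifests
printf 'args: ["%s"]\n' "$old" > demo-manifests/provider.yaml

# Rewrite every file that still references the old flag (GNU sed assumed).
grep -rl -- "$old" demo-manifests/ | while read -r f; do
  sed -i "s/$old/$new/g" "$f"
done

grep -- "$new" demo-manifests/provider.yaml
```

Run a final `grep -r -- '--enable-application-level-compression'` over the repo afterwards to confirm nothing was missed.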

Description

Closes: #XXXX


Author Checklist

All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow up issues.

I have...

  • read the contribution guide
  • included the correct type prefix in the PR title, you can find examples of the prefixes below:
  • confirmed ! in the type prefix if API or client breaking change
  • targeted the main branch
  • provided a link to the relevant issue or specification
  • reviewed "Files changed" and left comments if necessary
  • included the necessary unit and integration tests
  • updated the relevant documentation or specification, including comments for documenting Go code
  • confirmed all CI checks have passed

Reviewers Checklist

All items are required. Please add a note if the item is not applicable and please add
your handle next to the items reviewed if you only reviewed selected items.

I have...

  • confirmed the correct type prefix in the PR title
  • confirmed all author checklist items have been addressed
  • reviewed state machine logic, API design and naming, documentation is accurate, tests and test coverage

Generated description

Below is a concise technical summary of the changes proposed in this PR:
Replace the provider listener with a cmux-managed grpc.Server that keeps TLS, HTTP health probes, and gRPC health checks running without grpc-web while ensuring graceful shutdown and registered health states. Align the relay path with native gzip compression across the consumer, smart router, and provider flows so the existing AllowGRPCCompression flag leverages gRPC’s built-in compressors instead of ad hoc headers.

gRPC Compression
Switch the relay path to rely on native gRPC gzip compression: rename the compression flag, update CLI messaging, remove manual compression headers and decompression in the consumer, smart router, and provider servers, and add gRPC-focused compression tests to prove the behavior.
Modified files (7)
  • protocol/lavasession/consumer_types.go
  • protocol/rpcconsumer/rpcconsumer.go
  • protocol/rpcconsumer/rpcconsumer_server.go
  • protocol/rpcprovider/grpc_compression_test.go
  • protocol/rpcprovider/rpcprovider_server.go
  • protocol/rpcsmartrouter/rpcsmartrouter.go
  • protocol/rpcsmartrouter/rpcsmartrouter_server.go
Latest contributors (2):
  • NadavLevi (feat-smart-router-impl..., March 11, 2026)
  • nimrod.teich@gmail.com (fix-make-relay-retry-l..., March 01, 2026)
Other files
Modified files (1)
  • protocol/rpcsmartrouter/rpcsmartrouter_compression_test.go
Latest contributors (1):
  • NadavLevi (feat-smart-router-impl..., March 11, 2026)
Provider Listener
Refactor the provider listener to run a pure grpc.Server multiplexed by cmux, register the gRPC health service, keep the HTTP health probe for Kubernetes, and handle TLS and graceful shutdown without grpc-web, including the new dependencies (cmux upgrade, gzip registration) and expanded provider listener tests.
Modified files (4)
  • go.mod
  • go.sum
  • protocol/rpcprovider/provider_listener.go
  • protocol/rpcprovider/provider_listener_test.go
Latest contributors (2):
  • anna@magmadevs.com (feat-provideroptimizer..., February 22, 2026)
  • nimrod.teich@gmail.com (chore-remove-lava-sdk-..., January 19, 2026)


codecov bot commented Jan 20, 2026

Codecov Report

❌ Patch coverage is 83.05085% with 10 lines in your changes missing coverage. Please review.

Files with missing lines | Patch % | Lines
protocol/rpcprovider/provider_listener.go | 89.09% | 4 Missing and 2 partials ⚠️
protocol/rpcconsumer/rpcconsumer.go | 0.00% | 2 Missing ⚠️
protocol/rpcsmartrouter/rpcsmartrouter.go | 0.00% | 2 Missing ⚠️
Flag | Coverage Δ
consensus | 8.71% <ø> (ø)
protocol | 34.87% <83.05%> (+0.25%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Files with missing lines | Coverage Δ
protocol/lavasession/consumer_types.go | 76.08% <ø> (ø)
protocol/rpcconsumer/rpcconsumer_server.go | 33.10% <ø> (+0.22%) ⬆️
protocol/rpcprovider/rpcprovider_server.go | 9.89% <ø> (+0.18%) ⬆️
protocol/rpcsmartrouter/rpcsmartrouter_server.go | 33.33% <ø> (-0.39%) ⬇️
protocol/rpcconsumer/rpcconsumer.go | 0.00% <0.00%> (ø)
protocol/rpcsmartrouter/rpcsmartrouter.go | 4.30% <0.00%> (ø)
protocol/rpcprovider/provider_listener.go | 88.28% <89.09%> (+53.76%) ⬆️

... and 2 files with indirect coverage changes


@nimrod-teich nimrod-teich force-pushed the refactor/provider-listener-pure-grpc branch from 603a3fc to cadf97a Compare January 20, 2026 11:22

github-actions bot commented Jan 20, 2026

Test Results

0 tests  ±0   0 ✅ ±0   0s ⏱️ ±0s
0 suites ±0   0 💤 ±0 
7 files   ±0   0 ❌ ±0 

Results for commit 975a583. ± Comparison against base commit 9861746.


@NadavLevi NadavLevi force-pushed the refactor/provider-listener-pure-grpc branch from df807cf to 88bb60e Compare January 29, 2026 07:42
BREAKING CHANGE: gRPC-web protocol is no longer supported. Browser-based
clients that relied on grpc-web to communicate directly with providers
will no longer work. Only native gRPC clients are supported.

The previous implementation used grpc-web and h2c HTTP wrappers to support
the Lava SDK (lava-sdk/lavajs) which allowed browser-based JavaScript clients
to communicate with providers. Since SDK support has been removed (see #2186),
this infrastructure is no longer needed.

Replace the grpc-web/h2c/HTTP hybrid server with a cleaner architecture:

- Use cmux to multiplex HTTP health checks and native gRPC on same port
- Remove grpcweb.WrapServer and h2c.NewHandler complexity
- Add gRPC health checking protocol support (grpc_health_v1)
- Keep HTTP health endpoint for Kubernetes probes compatibility
- Optimize cmux matching: HTTP1Fast() first, then Any() for gRPC
  (avoids expensive HTTP/2 header parsing at high scale)

Benefits:
- Native gRPC connection management (keepalives, stream limits)
- Proper graceful shutdown via grpcServer.GracefulStop()
- Simpler, more maintainable code
- Better performance at scale (no protocol translation overhead)

Also adds comprehensive test suite for provider_listener.go with
14 tests and 2 benchmarks covering HTTP health, gRPC health,
relay/probe RPCs, concurrent requests, and error handling.
gRPC handles compression automatically when the gzip compressor is
registered (via import) and the client uses grpc.UseCompressor(gzip.Name).

Removed:
- Manual compression in provider (common.CompressData)
- Manual decompression in consumer/smartrouter (common.DecompressData)
- Custom LavaCompressionSupportHeader and LavaCompressionHeader headers
- Obsolete rpcsmartrouter_compression_test.go

The --enable-application-level-compression flag still works, but now
uses native gRPC compression instead of manual application-level
compression. This is simpler, more efficient, and follows gRPC best
practices.

Note: common.CompressData/DecompressData utilities are kept as they're
still used for cache storage compression.
- Renamed from --enable-application-level-compression to
  --enable-grpc-compression (more accurate now that we use native gRPC
  compression)
- Updated flag description to explain the trade-off (bandwidth vs CPU)
- Improved log message when compression is enabled
Add comprehensive tests verifying native gRPC compression works
correctly between consumer and provider:

- TestGRPCCompressionEnabled: verifies data is compressed (99%+ reduction)
- TestGRPCCompressionDisabled: verifies no compression without flag
- TestGRPCCompressionBidirectional: verifies both request and response compression
- TestGRPCCompressionWithConnectGRPCClient: tests via actual ConnectGRPCClient function
- TestGRPCCompressionSmallPayload: verifies small payload handling
- BenchmarkGRPCWithCompression/WithoutCompression: performance comparison

Uses gRPC stats handler to measure actual wire vs logical payload sizes.
@nimrod-teich nimrod-teich force-pushed the refactor/provider-listener-pure-grpc branch from 88bb60e to 975a583 Compare March 3, 2026 11:42
@nimrod-teich nimrod-teich reopened this Mar 3, 2026

Review Summary by Qodo

Replace grpc-web with pure gRPC and native compression

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Replace grpc-web/h2c hybrid server with pure gRPC using cmux multiplexing
  - Use native gRPC compression instead of manual application-level compression
  - Add gRPC health checking protocol support (grpc_health_v1)
  - Keep HTTP health endpoint for Kubernetes compatibility
• Remove custom compression headers and manual compress/decompress logic
  - Compression now handled automatically by gRPC layer via gzip compressor
  - Simplifies relay path in consumer and provider implementations
• Rename flag from enable-application-level-compression to enable-grpc-compression
  - Update flag descriptions and log messages for clarity
• Add comprehensive test suite with 14 tests and 2 benchmarks
  - Cover HTTP health checks, gRPC health service, relay/probe RPCs, concurrent requests
Diagram
flowchart LR
  A["grpc-web/h2c<br/>Hybrid Server"] -->|refactor| B["Pure gRPC<br/>with cmux"]
  C["Manual<br/>Compression"] -->|replace| D["Native gRPC<br/>gzip"]
  B --> E["HTTP Health<br/>Listener"]
  B --> F["gRPC<br/>Listener"]
  E -->|multiplexed| G["Single Port"]
  F -->|multiplexed| G
  D --> H["Automatic<br/>Compression"]


File Changes

1. protocol/lavasession/consumer_types.go ⚙️ Configuration changes +1/-1

Rename compression flag constant



2. protocol/rpcconsumer/rpcconsumer.go ✨ Enhancement +2/-2

Update flag name and remove compression headers



3. protocol/rpcconsumer/rpcconsumer_server.go ✨ Enhancement +4/-22

Remove manual compression/decompression logic



4. protocol/rpcprovider/provider_listener.go ✨ Enhancement +103/-44

Implement pure gRPC with cmux multiplexing



5. protocol/rpcprovider/provider_listener_test.go 🧪 Tests +750/-0

Add comprehensive provider listener test suite



6. protocol/rpcprovider/grpc_compression_test.go 🧪 Tests +545/-0

Add gRPC compression functionality tests



7. protocol/rpcprovider/rpcprovider_server.go ✨ Enhancement +2/-28

Remove manual compression header logic



8. protocol/rpcsmartrouter/rpcsmartrouter.go ✨ Enhancement +2/-2

Update flag name and remove compression headers



9. protocol/rpcsmartrouter/rpcsmartrouter_server.go ✨ Enhancement +4/-15

Remove manual decompression logic



10. protocol/rpcsmartrouter/rpcsmartrouter_compression_test.go 🧪 Tests +0/-200

Remove obsolete compression test file



11. go.mod Dependencies +1/-0

Add cmux dependency for connection multiplexing



12. go.sum Dependencies +3/-0

Update checksums for cmux dependency





qodo-code-review bot commented Mar 3, 2026

Code Review by Qodo

🐞 Bugs (3) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. TLS breaks HTTP healthcheck 🐞 Bug ✓ Correctness
Description
When TLS is enabled (DisableTLS=false), the listener is wrapped with TLS before cmux matching, so
cmux.HTTP1Fast() cannot detect HTTP health probes and routes them to the gRPC listener. This breaks
Kubernetes HTTP probes (and any HTTPS GET /lava/health) on TLS-enabled providers.
Code

protocol/rpcprovider/provider_listener.go[R90-104]

+	// Wrap with TLS if enabled
+	if !networkAddress.DisableTLS {
+		tlsConfig := lavasession.GetTlsConfig(networkAddress)
+		lis = tls.NewListener(lis, tlsConfig)
+	} else {
+		utils.LavaFormatInfo("Running with disabled TLS configuration")
	}

-	pl.httpServer = http.Server{
-		Handler: h2c.NewHandler(http.HandlerFunc(handler), &http2.Server{}),
-	}
+	// Create connection multiplexer to handle both HTTP and gRPC on same port
+	mux := cmux.New(lis)
+	// Match HTTP/1.1 first for health checks (fast prefix match)
+	httpListener := mux.Match(cmux.HTTP1Fast())
+	// Everything else goes to gRPC (avoids expensive header parsing)
+	grpcListener := mux.Match(cmux.Any())
+	pl.cmux = mux
Evidence
cmux performs protocol sniffing by reading the first bytes on the connection; after wrapping with
tls.NewListener, those bytes are TLS handshake records, not an HTTP method prefix, so HTTP1Fast()
won’t match and health traffic won’t reach the HTTP server. Since DisableTLS is a config boolean
defaulting to false when omitted, this affects typical TLS-enabled deployments.

protocol/rpcprovider/provider_listener.go[85-104]
protocol/lavasession/provider_types.go[22-27]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
TLS wrapping occurs before cmux matching, which prevents cmux from distinguishing HTTP health probes from gRPC traffic. This makes the HTTP health endpoint unreachable when TLS is enabled.

### Issue Context
- `NetworkAddressData.DisableTLS` defaults to `false` when omitted, so TLS is commonly enabled.
- cmux matchers like `HTTP1Fast()` require plaintext HTTP method bytes.

### Fix Focus Areas
- protocol/rpcprovider/provider_listener.go[85-104]




Remediation recommended

2. Shutdown lacks explicit cmux close 🐞 Bug ⛯ Reliability
Description
ProviderListener.Shutdown stops gRPC/HTTP servers but never explicitly closes the cmux (or the
underlying base listener) and ignores Shutdown() errors. This makes shutdown behavior fragile and
can lead to lingering accepts/unclear port release semantics during restarts.
Code

protocol/rpcprovider/provider_listener.go[R78-82]

func (pl *ProviderListener) Shutdown(shutdownCtx context.Context) error {
-	if err := pl.httpServer.Shutdown(shutdownCtx); err != nil {
-		utils.LavaFormatFatal("Provider failed to shutdown", err)
-	}
+	pl.healthServer.Shutdown()
+	pl.httpServer.Shutdown(shutdownCtx)
+	pl.grpcServer.GracefulStop()
	return nil
Evidence
The listener/mux are created and mux.Serve() is started in a goroutine, but Shutdown() doesn’t close
the cmux instance (stored on the struct) nor does it propagate httpServer.Shutdown errors. Even if
current behavior works implicitly, it’s brittle and harder to reason about during graceful
shutdown/restart.

protocol/rpcprovider/provider_listener.go[48-55]
protocol/rpcprovider/provider_listener.go[78-82]
protocol/rpcprovider/provider_listener.go[163-170]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Shutdown does not explicitly close cmux/base listener and ignores shutdown errors, making graceful shutdown semantics fragile.

### Issue Context
- cmux is stored on the struct and mux.Serve() runs in a goroutine.
- Shutdown currently ignores `httpServer.Shutdown` error and does not close cmux/base listener.

### Fix Focus Areas
- protocol/rpcprovider/provider_listener.go[48-55]
- protocol/rpcprovider/provider_listener.go[78-82]
- protocol/rpcprovider/provider_listener.go[163-170]



3. Tests use sleeps/no timeouts 🐞 Bug ⛯ Reliability
Description
New grpc compression tests use fixed time.Sleep for readiness and use grpc.DialContext(...,
WithBlock()) with a context that has no deadline. If the listener fails to come up, failures can be
very slow (waiting until the global test timeout) and flaky across environments.
Code

protocol/rpcprovider/grpc_compression_test.go[R85-116]

+	ctx, cancel := context.WithCancel(context.Background())
+	defer cancel()
+
+	// Create provider listener
+	pl := NewProviderListener(ctx, networkAddr, "/lava/health")
+	require.NotNil(t, pl)
+
+	// Large, highly compressible response data (JSON-like repeated pattern)
+	largeResponseData := []byte(strings.Repeat(`{"blockNumber":"0x123456","result":"success","data":"`, 1000) +
+		strings.Repeat("a]", 5000) + `"}`)
+
+	// Register mock receiver that returns large compressible data
+	receiver := &mockRelayReceiver{
+		relayFunc: func(ctx context.Context, request *pairingtypes.RelayRequest) (*pairingtypes.RelayReply, error) {
+			return &pairingtypes.RelayReply{Data: largeResponseData}, nil
+		},
+	}
+	err := pl.RegisterReceiver(receiver, createTestEndpoint(addr))
+	require.NoError(t, err)
+
+	time.Sleep(100 * time.Millisecond)
+
+	// Create stats handler to track compression
+	statsHandler := &compressionStatsHandler{}
+
+	// Connect WITH compression enabled
+	conn, err := grpc.DialContext(ctx, addr,
+		grpc.WithTransportCredentials(insecure.NewCredentials()),
+		grpc.WithBlock(),
+		grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name)),
+		grpc.WithStatsHandler(statsHandler),
+	)
Evidence
The tests create a cancel-only context and then perform blocking dials; because cancel() is deferred
until after the dial returns, a startup failure can block until the overall go test timeout.
Readiness is also assumed via sleep rather than a deterministic probe/retry loop.

protocol/rpcprovider/grpc_compression_test.go[85-116]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Tests rely on fixed sleeps and blocking dials without per-test deadlines, which can cause slow/flaky CI failures.

### Issue Context
`grpc.WithBlock()` will wait until connection succeeds or the provided context is done; currently the context has no deadline.

### Fix Focus Areas
- protocol/rpcprovider/grpc_compression_test.go[85-116]





Comment on lines +98 to +109
// Create connection multiplexer to handle both HTTP and gRPC on same port
mux := cmux.New(lis)
// Match HTTP/1.1 first for health checks (fast prefix match)
httpListener := mux.Match(cmux.HTTP1Fast())
// Everything else goes to gRPC (avoids expensive header parsing)
grpcListener := mux.Match(cmux.Any())
pl.cmux = mux

var serveExecutor func() error
if networkAddress.DisableTLS {
utils.LavaFormatInfo("Running with disabled TLS configuration")
serveExecutor = func() error {
return pl.httpServer.Serve(lis)
}
} else {
pl.httpServer.TLSConfig = lavasession.GetTlsConfig(networkAddress)
serveExecutor = func() error {
return pl.httpServer.ServeTLS(lis, "", "")
}
// Build gRPC server
opts := []grpc.ServerOption{
grpc.MaxRecvMsgSize(1024 * 1024 * 512), // 512MB for large debug responses
grpc.MaxSendMsgSize(1024 * 1024 * 512), // 512MB for large debug responses

grpc-web/h2c was removed from the provider listener so browser gRPC-Web clients now get plain gRPC with no CORS — should we restore grpcweb.WrapServer or add a dedicated grpc-web handler?

Finding type: Breaking Changes | Severity: 🔴 High



Prompt for AI Agents:

In protocol/rpcprovider/provider_listener.go around lines 98 to 161, the
NewProviderListener logic removed grpc-web/h2c support and now calls grpcServer.Serve
directly on the cmux gRPC listener, which breaks browser grpc-web clients (no CORS and
no grpc-web protocol handling). Reintroduce grpc-web handling by wrapping grpcServer
with grpcweb.WrapServer and creating an HTTP handler that: (1) sets CORS headers
(Access-Control-Allow-Origin: * and allowed headers including x-grpc-web and the LAVA
consumer header), (2) responds to the configured healthCheckPath GET with 200/"Healthy",
and (3) delegates other requests to the wrapped grpc-web server. Attach that handler to
an http.Server served on the cmux HTTP listener (use h2c.NewHandler if you need
cleartext HTTP/2 support), and keep grpcServer.Serve running on the grpcListener as-is.
Keep TLS wrapping and cmux matching unchanged.

Comment on lines 486 to +488
lavasession.AllowGRPCCompressionForConsumerProviderCommunication = viper.GetBool(lavasession.AllowGRPCCompressionFlag)
if lavasession.AllowGRPCCompressionForConsumerProviderCommunication {
utils.LavaFormatInfo("AllowGRPCCompressionForConsumerProviderCommunication is set to true, messages will be compressed", utils.Attribute{Key: lavasession.AllowGRPCCompressionFlag, Value: lavasession.AllowGRPCCompressionForConsumerProviderCommunication})
utils.LavaFormatInfo("gRPC compression enabled, relay messages will use gzip compression", utils.Attribute{Key: lavasession.AllowGRPCCompressionFlag, Value: lavasession.AllowGRPCCompressionForConsumerProviderCommunication})

The new block that reads lavasession.AllowGRPCCompressionForConsumerProviderCommunication from Viper and logs that compression is enabled is duplicated verbatim in protocol/rpcsmartrouter/rpcsmartrouter.go (lines 690‑693), and the Flags().Bool(lavasession.AllowGRPCCompressionFlag, …) registration right after it is also identical to the one in the smart router cli (lines 994‑996). Any future change to the flag name/description, default, or logging would have to be applied in both files, which is easy to miss. Can we extract a shared helper to register this flag and apply its configuration (e.g., common.RegisterGrpcCompressionFlag(cmd) + common.ApplyGrpcCompressionFlag(viper)) so the behaviour lives in one place instead of being copy-pasted?

Finding type: Code Dedup and Conventions | Severity: 🟢 Low


