add dual-apiserver awareness/control to cluster-olm-operator #204

grokspawn wants to merge 1 commit into openshift:main from
Conversation
Walkthrough

This PR adds HyperShift mode support, enabling the cluster-olm-operator to run in management clusters and manage OLMv1 components across hosted cluster API servers. Changes include documentation, example manifests, and dual-API client routing based on environment variables.

Changes: HyperShift Mode Support
Sequence Diagram

```mermaid
sequenceDiagram
    actor Admin
    participant MgmtCluster as Management Cluster<br/>(cluster-olm-operator)
    participant Clients as Client Factory<br/>(pkg/clients)
    participant Builder as Controller Builder<br/>(pkg/controller)
    participant Deployments as Hosted Deployments<br/>(catalogd,<br/>operator-controller)
    participant HostedAPI as Hosted Cluster API
    participant Secret as Kubeconfig Secret<br/>(Hosted Cluster)
    Admin->>MgmtCluster: Deploy with HYPERSHIFT_MODE=true<br/>HOSTED_KUBECONFIG_SECRET=admin-kubeconfig
    MgmtCluster->>Clients: Initialize client factory
    Clients->>Clients: Detect HYPERSHIFT_MODE=true
    Clients->>Secret: Load hosted kubeconfig
    Secret-->>Clients: kubeconfig bytes
    Clients->>Clients: Create dual-client set<br/>(management + hosted)
    MgmtCluster->>Builder: Build controllers
    Builder->>Builder: Detect HyperShift mode
    Builder->>Builder: Register InjectHostedClusterKubeconfigHook
    Builder->>Deployments: Mutate deployment specs
    Deployments->>Deployments: Add hosted-kubeconfig volume mount
    Deployments->>Deployments: Append --kubeconfig & --system-namespace args
    MgmtCluster->>HostedAPI: Use hosted kubeconfig for API calls
    HostedAPI-->>MgmtCluster: Hosted cluster resources
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 9 | ❌ 3
❌ Failed checks (3 warnings)
✅ Passed checks (9 passed)
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.12.1): Error: can't load config: unsupported version of the configuration: "". See https://golangci-lint.run/docs/product/migration-guide for migration instructions.
Actionable comments posted: 7
🧹 Nitpick comments (1)
pkg/controller/builder.go (1)
399-421: 💤 Low value: Init containers are skipped; documentation says "all containers".

`InjectHostedClusterKubeconfigHook` only iterates `deployment.Spec.Template.Spec.Containers`, but `docs/hypershift.md` states "The kubeconfig is mounted into all containers." The current init containers (manifest copy) don't need the hosted kubeconfig, but the comment and doc are misleading. If a future init container does need it, this will silently not work. Either tighten the doc to say "all regular containers" or add the analogous loop over `InitContainers`.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@pkg/controller/builder.go` around lines 399 - 421, The InjectHostedClusterKubeconfigHook currently only modifies deployment.Spec.Template.Spec.Containers, which skips InitContainers contrary to the docs; update InjectHostedClusterKubeconfigHook to also iterate deployment.Spec.Template.Spec.InitContainers and apply the same VolumeMount, Args additions and log call (mirror the block that handles Containers) so any init container that needs the hosted kubeconfig will get it; alternatively, if you prefer the documentation change, update docs/hypershift.md to say "all regular containers" and keep the code unchanged—pick one approach and make the corresponding change to InjectHostedClusterKubeconfigHook or the docs.
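The "loop over both lists" variant suggested above can be sketched with simplified stand-in types (the real hook mutates an `appsv1.Deployment` pod template; `container`, `podSpec`, and `injectKubeconfigArgs` below are hypothetical):

```go
package main

import "fmt"

// container and podSpec are simplified stand-ins for the corev1 types.
type container struct {
	Name string
	Args []string
}

type podSpec struct {
	InitContainers []container
	Containers     []container
}

// injectKubeconfigArgs appends the kubeconfig flag to every container,
// init containers included, matching the docs' "all containers" wording.
func injectKubeconfigArgs(spec *podSpec, kubeconfigPath string) {
	// Slice headers share the backing arrays, so mutating list[i]
	// updates the elements of spec itself.
	for _, list := range [][]container{spec.InitContainers, spec.Containers} {
		for i := range list {
			list[i].Args = append(list[i].Args, "--kubeconfig="+kubeconfigPath)
		}
	}
}

func main() {
	spec := &podSpec{
		InitContainers: []container{{Name: "copy-catalogd-manifests"}},
		Containers:     []container{{Name: "manager"}},
	}
	injectKubeconfigArgs(spec, "/var/run/secrets/kubeconfig/kubeconfig")
	for _, c := range append(spec.InitContainers, spec.Containers...) {
		fmt.Println(c.Name, c.Args)
	}
}
```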
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@docs/examples/hypershift-deployment.yaml`:
- Around line 86-91: clients.go currently expects HYPERSHIFT_MODE=true and
HOSTED_KUBECONFIG (file path) while builder.go uses HOSTED_KUBECONFIG_SECRET and
HOSTED_NAMESPACE; update clients.go to accept the same env vars builder.go uses
(HOSTED_KUBECONFIG_SECRET and HOSTED_NAMESPACE) or normalize both modules to a
single convention (e.g., HYPERSHIFT_MODE + HOSTED_KUBECONFIG path) so dual-API
routing uses the hosted kubeconfig; then update the example
hypershift-deployment.yaml to set HYPERSHIFT_MODE=true (if you choose that
convention) or to include HOSTED_KUBECONFIG_SECRET and HOSTED_NAMESPACE and add
a volume/volumeMount that mounts the admin-kubeconfig secret to the path
referenced by HOSTED_KUBECONFIG so DynamicClient/RESTMapper target the hosted
cluster. Ensure references to clients.go, builder.go, HOSTED_KUBECONFIG,
HOSTED_KUBECONFIG_SECRET, HOSTED_NAMESPACE, DynamicClient and RESTMapper are
aligned.
- Around line 36-63: The two initContainers copy-catalogd-manifests and
copy-operator-controller-manifests are missing hardened security settings;
update each init container's securityContext to include
allowPrivilegeEscalation: false and add capabilities.drop: [ "ALL" ] (matching
the main container's securityContext), ensuring these fields sit alongside
readOnlyRootFilesystem: true to prevent privilege escalation for those init
containers.
In `@docs/hypershift.md`:
- Around line 114-117: The two fenced code blocks that show log output (the
blocks starting with "I0312 10:15:23.123456 ..." and "I0312 10:15:23.234567
...") are missing a language specifier, which triggers markdownlint MD040;
update each triple-backtick fence to include a language identifier such as text
(e.g., ```text) so both log-output blocks in the hypershift.md section render
properly and silence the lint warning.
- Around line 106-107: Replace the placeholder PR links for catalogd and
operator-controller in the hypershift.md lines that list `**catalogd**:
--kubeconfig` and `**operator-controller**: --kubeconfig` (the entries
referencing PR `#xyz`) with the actual upstream PR numbers or working links
before merging; locate the two lines containing "catalogd" and
"operator-controller" in the docs and update the bracketed URLs to the real PR
URLs (or remove the PR link if none exists) so the documentation points to
verifiable upstream changes.
In `@pkg/clients/clients.go`:
- Around line 77-91: clients.go currently activates HyperShift using
HYPERSHIFT_MODE and HOSTED_KUBECONFIG (a file path), which is incompatible with
builder.go's HOSTED_KUBECONFIG_SECRET/HOSTED_NAMESPACE strategy; change the
HyperShift detection and loading in the dual-API block (the
mgmtConfig/hostedConfig logic) to mirror builder.go: detect presence of
HOSTED_KUBECONFIG_SECRET (and HOSTED_NAMESPACE if needed), read the mounted
secret file at the well-known kubeconfigMountPath (instead of expecting
HOSTED_KUBECONFIG on disk) to call clientcmd.BuildConfigFromFlags("",
kubeconfigMountPath), and return clear errors if the secret-mounted file is
missing or cannot be parsed; update log/error messages in that branch to
reference HOSTED_KUBECONFIG_SECRET and kubeconfigMountPath and keep
mgmtConfig/hostedConfig semantics intact so DynamicClient, RESTMapper,
ClusterCatalogClient, and ClusterExtensionClient are pointed at the hosted
cluster when configured.
In `@pkg/controller/builder_test.go`:
- Around line 160-173: The tests currently only call t.Setenv when tt.envValue
!= "" which lets pre-existing environment values leak into the empty-value
subtests; change each test (TestIsHyperShiftMode, TestGetHostedKubeconfigSecret,
TestGetHostedNamespace) to call t.Setenv unconditionally (e.g.
t.Setenv(HostedKubeconfigSecretEnv, tt.envValue) and
t.Setenv(HostedNamespaceEnv, tt.envValue) as appropriate) so the runtime
registers cleanup and each subtest is fully isolated regardless of external
environment values; update the Builder.IsHyperShiftMode table-driven test and
the other two table-driven tests to follow this pattern.
In `@pkg/controller/builder.go`:
- Around line 154-165: IsHyperShiftMode() can be true while GetHostedNamespace()
returns an empty string, causing InjectHostedClusterKubeconfigHook to add a
naked "--system-namespace=" flag; fix by validating hostedNamespace before
appending the hook: after calling b.GetHostedNamespace() check that
hostedNamespace is non-empty (and log a warning or error via log.Warn/Info if
empty) and only call deploymentHooks = append(...
InjectHostedClusterKubeconfigHook(kubeconfigSecret, hostedNamespace) ...) when
hostedNamespace != "", otherwise skip injection (or use a safe default) so
InjectHostedClusterKubeconfigHook is never invoked with an empty namespace.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 4d423562-4dc2-4487-a3fc-d36952d7f553
📒 Files selected for processing (6)
- README.md
- docs/examples/hypershift-deployment.yaml
- docs/hypershift.md
- pkg/clients/clients.go
- pkg/controller/builder.go
- pkg/controller/builder_test.go
```yaml
- name: copy-catalogd-manifests
  image: quay.io/openshift/origin-olm-catalogd:latest
  imagePullPolicy: IfNotPresent
  command:
    - /bin/sh
  args:
    - -c
    - /cp-manifests /operand-assets
  volumeMounts:
    - mountPath: /operand-assets
      name: operand-assets
  securityContext:
    readOnlyRootFilesystem: true
  terminationMessagePolicy: FallbackToLogsOnError
- name: copy-operator-controller-manifests
  image: quay.io/openshift/origin-olm-operator-controller:latest
  imagePullPolicy: IfNotPresent
  command:
    - /bin/sh
  args:
    - -c
    - /cp-manifests /operand-assets
  volumeMounts:
    - mountPath: /operand-assets
      name: operand-assets
  securityContext:
    readOnlyRootFilesystem: true
  terminationMessagePolicy: FallbackToLogsOnError
```
Init containers are missing allowPrivilegeEscalation: false (Checkov CKV_K8S_20).
Both copy-catalogd-manifests and copy-operator-controller-manifests set readOnlyRootFilesystem: true but omit allowPrivilegeEscalation: false and capabilities.drop: [ALL], which the main container already has. Users copy-pasting this example will deploy with insecure init container defaults.
🔒 Proposed fix for both init containers

```diff
 securityContext:
   readOnlyRootFilesystem: true
+  allowPrivilegeEscalation: false
+  capabilities:
+    drop:
+    - ALL
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
- name: copy-catalogd-manifests
  image: quay.io/openshift/origin-olm-catalogd:latest
  imagePullPolicy: IfNotPresent
  command:
    - /bin/sh
  args:
    - -c
    - /cp-manifests /operand-assets
  volumeMounts:
    - mountPath: /operand-assets
      name: operand-assets
  securityContext:
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false
    capabilities:
      drop:
      - ALL
  terminationMessagePolicy: FallbackToLogsOnError
- name: copy-operator-controller-manifests
  image: quay.io/openshift/origin-olm-operator-controller:latest
  imagePullPolicy: IfNotPresent
  command:
    - /bin/sh
  args:
    - -c
    - /cp-manifests /operand-assets
  volumeMounts:
    - mountPath: /operand-assets
      name: operand-assets
  securityContext:
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false
    capabilities:
      drop:
      - ALL
  terminationMessagePolicy: FallbackToLogsOnError
```
```yaml
# HyperShift mode configuration
# Setting these enables HyperShift mode
- name: HOSTED_KUBECONFIG_SECRET
  value: admin-kubeconfig
- name: HOSTED_NAMESPACE
  value: clusters-customer1
```
Example deployment is missing the env vars required for clients.go dual-API routing.
HOSTED_KUBECONFIG_SECRET and HOSTED_NAMESPACE activate builder.go's hook but clients.go checks HYPERSHIFT_MODE=true and HOSTED_KUBECONFIG (a file path). Without those, the operator's DynamicClient/RESTMapper will watch the management cluster's API server, not the hosted cluster's. The operator pod also needs a volume mounting the admin-kubeconfig secret at the path specified by HOSTED_KUBECONFIG.
This is a downstream consequence of the root-cause mismatch documented in pkg/clients/clients.go lines 81–91. Fix the env var inconsistency there first; this example will need updating accordingly.
- **catalogd**: `--kubeconfig` flag support ([catalogd PR #xyz](https://github.com/operator-framework/catalogd/pull/xyz))
- **operator-controller**: `--kubeconfig` flag support ([operator-controller PR #xyz](https://github.com/operator-framework/operator-controller/pull/xyz))
Replace placeholder PR links before merging.
#xyz are non-functional placeholders; readers won't be able to verify upstream support status.
| ``` | ||
| I0312 10:15:23.123456 1 builder.go:150] HyperShift mode detected, injecting kubeconfig configuration deployment="catalogd" kubeconfigSecret="admin-kubeconfig" hostedNamespace="clusters-customer1" | ||
| I0312 10:15:23.234567 1 builder.go:150] HyperShift mode detected, injecting kubeconfig configuration deployment="operator-controller" kubeconfigSecret="admin-kubeconfig" hostedNamespace="clusters-customer1" | ||
| ``` |
Add language specifiers to fenced code blocks (markdownlint MD040).
Both log-output blocks at Lines 114 and 121 are missing a language identifier. Use text or log to suppress the warning and improve rendering.
📝 Proposed fix

````diff
-```
+```text
 I0312 10:15:23.123456 ...
-```
+```text
 I0312 10:15:23.345678 ...
````

Also applies to: 121-124
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)
[warning] 114-114: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
```go
// Dual-API support for HyperShift: detect HYPERSHIFT_MODE and load hosted cluster config
mgmtConfig := cc.KubeConfig
hostedConfig := mgmtConfig

if hypershiftMode := os.Getenv("HYPERSHIFT_MODE"); hypershiftMode == "true" {
	if hostedKubeconfigPath := os.Getenv("HOSTED_KUBECONFIG"); hostedKubeconfigPath != "" {
		var err error
		hostedConfig, err = clientcmd.BuildConfigFromFlags("", hostedKubeconfigPath)
		if err != nil {
			return nil, fmt.Errorf("failed to load hosted kubeconfig from %s: %w", hostedKubeconfigPath, err)
		}
	} else {
		return nil, fmt.Errorf("HYPERSHIFT_MODE is true but HOSTED_KUBECONFIG is not set")
	}
}
```
Critical: clients.go uses entirely different env vars than builder.go, breaking HyperShift end-to-end.
clients.go activates dual-API routing via HYPERSHIFT_MODE=true + HOSTED_KUBECONFIG (a file path on disk). builder.go detects HyperShift via HOSTED_KUBECONFIG_SECRET (a Kubernetes secret name) + HOSTED_NAMESPACE. These are completely disjoint; there is no configuration that activates both simultaneously.
With the example deployment (docs/examples/hypershift-deployment.yaml), only HOSTED_KUBECONFIG_SECRET and HOSTED_NAMESPACE are set:
- The builder.go hook will inject the kubeconfig into catalogd/operator-controller pods.
- The clients.go HyperShift block will not activate: DynamicClient, RESTMapper, ClusterCatalogClient, and ClusterExtensionClient will all remain pointed at the management cluster's API server.
The net effect is that the operator's informers for ClusterCatalog and ClusterExtension will watch the wrong cluster, so no reconciliation against the hosted cluster will occur despite the components being configured correctly.
Additionally, HOSTED_KUBECONFIG expects a file path, but the operator's own pod spec has no volume mounting the admin-kubeconfig secret, so even if HYPERSHIFT_MODE were set, the file wouldn't exist.
The fix should pick a single coherent detection strategy. The simplest path is to align clients.go to use HOSTED_KUBECONFIG_SECRET, mount the secret into the operator's pod (analogous to how the hook mounts it into catalogd/operator-controller), and derive the file path from the well-known kubeconfigMountPath:
🐛 Proposed alignment

```diff
-if hypershiftMode := os.Getenv("HYPERSHIFT_MODE"); hypershiftMode == "true" {
-	if hostedKubeconfigPath := os.Getenv("HOSTED_KUBECONFIG"); hostedKubeconfigPath != "" {
-		var err error
-		hostedConfig, err = clientcmd.BuildConfigFromFlags("", hostedKubeconfigPath)
-		if err != nil {
-			return nil, fmt.Errorf("failed to load hosted kubeconfig from %s: %w", hostedKubeconfigPath, err)
-		}
-	} else {
-		return nil, fmt.Errorf("HYPERSHIFT_MODE is true but HOSTED_KUBECONFIG is not set")
-	}
-}
+if kubeconfigSecret := os.Getenv("HOSTED_KUBECONFIG_SECRET"); kubeconfigSecret != "" {
+	hostedKubeconfigPath := "/var/run/secrets/kubeconfig/kubeconfig" // matches kubeconfigFilePath in builder.go
+	var err error
+	hostedConfig, err = clientcmd.BuildConfigFromFlags("", hostedKubeconfigPath)
+	if err != nil {
+		return nil, fmt.Errorf("failed to load hosted kubeconfig from %s: %w", hostedKubeconfigPath, err)
+	}
+}
```

The operator's own Deployment also needs the same volume/mount injected (ideally by the HyperShift management tooling or via a self-bootstrap step).
```go
for _, tt := range tests {
	t.Run(tt.name, func(t *testing.T) {
		// Set environment variable
		if tt.envValue != "" {
			t.Setenv(HostedKubeconfigSecretEnv, tt.envValue)
		}

		builder := &Builder{}
		got := builder.IsHyperShiftMode()
		if got != tt.expected {
			t.Errorf("IsHyperShiftMode() = %v, want %v", got, tt.expected)
		}
	})
}
```
Conditional t.Setenv breaks test isolation when env var is pre-set in the process.
All three table-driven env-var tests (TestIsHyperShiftMode, TestGetHostedKubeconfigSecret, TestGetHostedNamespace) only call t.Setenv when tt.envValue != "". If HOSTED_KUBECONFIG_SECRET or HOSTED_NAMESPACE is already set in the test-runner environment (CI, local shell), the empty-value cases will silently inherit those values and report the wrong result.
Call t.Setenv unconditionally so Go registers the cleanup and the subtest is fully isolated:
🛡️ Proposed fix (same pattern applies to TestGetHostedKubeconfigSecret and TestGetHostedNamespace)
```diff
 for _, tt := range tests {
 	t.Run(tt.name, func(t *testing.T) {
-		if tt.envValue != "" {
-			t.Setenv(HostedKubeconfigSecretEnv, tt.envValue)
-		}
+		t.Setenv(HostedKubeconfigSecretEnv, tt.envValue)
 		builder := &Builder{}
 		got := builder.IsHyperShiftMode()
```
```go
// Add HyperShift kubeconfig injection if in HyperShift mode
if b.IsHyperShiftMode() {
	kubeconfigSecret := b.GetHostedKubeconfigSecret()
	hostedNamespace := b.GetHostedNamespace()
	log.Info("HyperShift mode detected, injecting kubeconfig configuration",
		"deployment", manifest.GetName(),
		"kubeconfigSecret", kubeconfigSecret,
		"hostedNamespace", hostedNamespace)
	deploymentHooks = append(deploymentHooks,
		InjectHostedClusterKubeconfigHook(kubeconfigSecret, hostedNamespace),
	)
}
```
hostedNamespace is never validated: --system-namespace= will be injected when HOSTED_NAMESPACE is unset.
IsHyperShiftMode() only checks HOSTED_KUBECONFIG_SECRET. If HOSTED_KUBECONFIG_SECRET is set but HOSTED_NAMESPACE is not, GetHostedNamespace() returns "" and InjectHostedClusterKubeconfigHook("...", "") silently appends --system-namespace= to every container.
🐛 Proposed fix

```diff
 if b.IsHyperShiftMode() {
 	kubeconfigSecret := b.GetHostedKubeconfigSecret()
 	hostedNamespace := b.GetHostedNamespace()
+	if hostedNamespace == "" {
+		return nil, nil, nil, nil, fmt.Errorf("%s is set but %s is not set", HostedKubeconfigSecretEnv, HostedNamespaceEnv)
+	}
 	log.Info("HyperShift mode detected, injecting kubeconfig configuration",
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// Add HyperShift kubeconfig injection if in HyperShift mode
if b.IsHyperShiftMode() {
	kubeconfigSecret := b.GetHostedKubeconfigSecret()
	hostedNamespace := b.GetHostedNamespace()
	if hostedNamespace == "" {
		return nil, nil, nil, nil, fmt.Errorf("%s is set but %s is not set", HostedKubeconfigSecretEnv, HostedNamespaceEnv)
	}
	log.Info("HyperShift mode detected, injecting kubeconfig configuration",
		"deployment", manifest.GetName(),
		"kubeconfigSecret", kubeconfigSecret,
		"hostedNamespace", hostedNamespace)
	deploymentHooks = append(deploymentHooks,
		InjectHostedClusterKubeconfigHook(kubeconfigSecret, hostedNamespace),
	)
}
```
PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Summary by CodeRabbit
New Features
Documentation