OCPBUGS-62517: Scale to replicas=2 and enable PDB on HighlyAvailable topology#202

Merged
openshift-merge-bot[bot] merged 1 commit into openshift:main from tmshort:ocpbugs-62517-ha-replicas
May 7, 2026
Conversation

@tmshort
Contributor

@tmshort tmshort commented May 4, 2026

Rolling updates in HighlyAvailable clusters leave catalogd and operator-controller unavailable when the only running pod is evicted before its replacement is ready.

Fetch the cluster Infrastructure resource at startup and check ControlPlaneTopology. When a HighlyAvailable topology is detected (HighlyAvailable, HighlyAvailableArbiter, or DualReplica), override the Helm values to replicas=2 and podDisruptionBudget.enabled=true before rendering manifests. SingleReplica (SNO) and External topologies keep the static defaults of replicas=1 and PDB disabled.

When a topology change is observed at runtime via the infrastructure informer (exceedingly rare), the operator exits so its deployment controller restarts it, triggering a fresh Helm render with the correct values for the new topology.

Changes:

  • helmvalues: add SetIntValue and SetBoolValue helpers
  • clients: add InfrastructureClient backed by the config informer
  • controller/builder: add Infrastructure field to Builder
  • controller/helm: apply HA replica/PDB overrides in renderHelmTemplate; add isHighlyAvailableTopology helper
  • main: fetch infra at startup, pass to Builder, watch for topology changes and exit to trigger re-render
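The topology check described above can be sketched in a few lines. This is an illustrative, dependency-free version: the real helper presumably operates on configv1.TopologyMode from openshift/api, whereas this sketch uses a local string type carrying the published constant values.

```go
package main

import "fmt"

// TopologyMode mirrors configv1.TopologyMode from openshift/api; the string
// values below are the published constants for Infrastructure
// Status.ControlPlaneTopology.
type TopologyMode string

const (
	HighlyAvailableTopologyMode TopologyMode = "HighlyAvailable"
	HighlyAvailableArbiterMode  TopologyMode = "HighlyAvailableArbiter"
	DualReplicaTopologyMode     TopologyMode = "DualReplica"
	SingleReplicaTopologyMode   TopologyMode = "SingleReplica"
	ExternalTopologyMode        TopologyMode = "External"
)

// isHighlyAvailableTopology reports whether the control plane can tolerate
// losing one operand pod, i.e. whether replicas=2 and a PDB are appropriate.
func isHighlyAvailableTopology(t TopologyMode) bool {
	switch t {
	case HighlyAvailableTopologyMode, HighlyAvailableArbiterMode, DualReplicaTopologyMode:
		return true
	default:
		return false
	}
}

func main() {
	for _, t := range []TopologyMode{HighlyAvailableTopologyMode, SingleReplicaTopologyMode, ExternalTopologyMode} {
		fmt.Printf("%s -> HA=%v\n", t, isHighlyAvailableTopology(t))
	}
}
```

SingleReplica and External fall through to the default branch, which is what keeps SNO and external control planes on the static replicas=1 defaults.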

Summary by CodeRabbit

  • New Features
    • Infrastructure topology monitoring: The operator now watches control-plane topology and automatically restarts when topology changes are detected to ensure configuration is reapplied consistently.
    • High-availability adjustments: The system detects highly-available control planes and automatically adjusts deployments (replica counts and PodDisruptionBudgets) for key components to maintain redundancy.

@openshift-ci-robot openshift-ci-robot added jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. labels May 4, 2026
@openshift-ci-robot

@tmshort: This pull request references Jira Issue OCPBUGS-62517, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (5.0.0) matches configured target version for branch (5.0.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

No GitHub users were found matching the public email listed for the QA contact in Jira (jiazha@redhat.com), skipping review request.

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this: the PR description above.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@coderabbitai

coderabbitai Bot commented May 4, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 2b78fda3-8b4f-4bba-b980-b643af2ce0ad

📥 Commits

Reviewing files that changed from the base of the PR and between 0196c65 and ebc3002.

📒 Files selected for processing (5)
  • cmd/cluster-olm-operator/main.go
  • pkg/clients/clients.go
  • pkg/controller/builder.go
  • pkg/controller/helm.go
  • pkg/helmvalues/helmvalues.go
✅ Files skipped from review due to trivial changes (3)
  • pkg/controller/builder.go
  • pkg/helmvalues/helmvalues.go
  • pkg/controller/helm.go

Walkthrough

Operator now fetches the cluster Infrastructure at startup, stores it in the controller builder, watches for Infrastructure Add/Update events, and exits the process if Status.ControlPlaneTopology changes. Helm rendering checks the stored Infrastructure and, for highly-available topologies, overrides replicas and enables PDBs.

Changes

Infrastructure Topology Detection and Response

Layer / File(s): Summary

  • Data / Client Shape (pkg/clients/clients.go): adds the InfrastructureClient type, InfrastructureClientInterface, NewInfrastructureClient, and an InfrastructureClient *InfrastructureClient field on Clients.
  • Builder Surface (pkg/controller/builder.go): adds an exported Infrastructure *configv1.Infrastructure field to Builder.
  • Startup Wiring (cmd/cluster-olm-operator/main.go): at startup, fetches the configv1.Infrastructure named cluster, assigns it to controller.Builder.Infrastructure, and registers an informer event handler that compares Status.ControlPlaneTopology on Add/Update and calls os.Exit(0) if it changes.
  • Rendering Logic (pkg/controller/helm.go, pkg/helmvalues/helmvalues.go): renderHelmTemplate uses Builder.Infrastructure and isHighlyAvailableTopology to detect HA topologies and override Helm values (set replicas=2 and enable PDBs). Adds HelmValues.SetIntValue and SetBoolValue helpers for nested value assignment.
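The nested-value helpers and the HA override can be sketched as follows. The HelmValues shape and the variadic key-path signatures are assumptions for illustration; the repo's actual helmvalues API may differ.

```go
package main

import "fmt"

// HelmValues is a hypothetical stand-in for the repo's helmvalues type,
// storing Helm values as nested maps.
type HelmValues map[string]interface{}

// set walks (and creates, where missing) nested maps along path and assigns
// v to the final key.
func (h HelmValues) set(v interface{}, path ...string) {
	m := map[string]interface{}(h)
	for _, k := range path[:len(path)-1] {
		next, ok := m[k].(map[string]interface{})
		if !ok {
			next = map[string]interface{}{}
			m[k] = next
		}
		m = next
	}
	m[path[len(path)-1]] = v
}

// SetIntValue and SetBoolValue mirror the helpers the PR describes, assuming
// a variadic key-path signature.
func (h HelmValues) SetIntValue(v int64, path ...string) { h.set(v, path...) }
func (h HelmValues) SetBoolValue(v bool, path ...string) { h.set(v, path...) }

func main() {
	vals := HelmValues{}
	// The HA override described in the PR: replicas=2 and PDB enabled.
	vals.SetIntValue(2, "replicas")
	vals.SetBoolValue(true, "podDisruptionBudget", "enabled")
	fmt.Println(vals)
}
```

A render step would then merge these overrides into the chart's default values before templating; the exact merge point (renderHelmTemplate) is as the summary above describes.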

Sequence Diagram

sequenceDiagram
    participant Operator as Operator (process)
    participant API as Kubernetes API (Infrastructure)
    participant Informer as Informer Cache
    participant Manager as Process Manager

    Operator->>API: GET Infrastructure "cluster"
    API-->>Operator: Infrastructure (with ControlPlaneTopology)
    Operator->>Operator: store initial topology in Builder.Infrastructure
    Operator->>Informer: register Add/Update handler
    Operator->>Operator: continue running / render Helm with infra

    Note over API,Manager: Cluster control plane topology changes
    API->>Informer: Infrastructure object updated
    Informer->>Operator: handler invoked with new Infrastructure
    Operator->>Operator: compare new vs initial topology
    alt topology changed
        Operator->>Operator: log change
        Operator->>Manager: os.Exit(0)
        Manager->>Operator: restart process (fresh render)
    else unchanged
        Operator->>Operator: no action
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Important

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 warning, 2 inconclusive)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Test Structure And Quality (❓ Inconclusive): no Ginkgo test files were found in the repository despite an exhaustive search, and the PR provides no evidence of corresponding test files being added or modified. Resolution: provide the paths to any test files included in this PR, or confirm whether test coverage is expected for these production code changes.
  • Ote Binary Stdout Contract (❓ Inconclusive): cannot execute shell scripts directly; unable to locate and inspect the specified file. Resolution: provide the file contents or run the inspection in your environment and share the results.
✅ Passed checks (9 passed)

  • Description Check (✅): skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅): the title directly describes the main objective of the PR: enabling HighlyAvailable topology detection to scale replicas to 2 and enable PDB, which aligns with all file changes.
  • Linked Issues Check (✅): skipped; no linked issues were found for this pull request.
  • Out of Scope Changes Check (✅): skipped; no linked issues were found for this pull request.
  • Stable And Deterministic Test Names (✅): the pull request contains no Ginkgo tests; all tests use the standard Go testing framework with static, deterministic names.
  • Microshift Test Compatibility (✅): no new Ginkgo e2e tests are added; all changes are controller code modifications.
  • Single Node Openshift (SNO) Test Compatibility (✅): the repository does not use the Ginkgo e2e framework; it uses standard Go unit tests.
  • Topology-Aware Scheduling Compatibility (✅): the PR fetches Infrastructure topology at startup, detects runtime topology changes, and applies topology-appropriate replica and PDB settings (replicas=2 with PDB for HA topologies, replicas=1 with PDB disabled for single-replica/external).
  • Ipv6 And Disconnected Network Test Compatibility (✅): no new Ginkgo e2e tests; only operator code and infrastructure client logic are modified.

@openshift-ci openshift-ci Bot requested review from grokspawn and oceanc80 May 4, 2026 19:50
@openshift-ci
Contributor

openshift-ci Bot commented May 4, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: tmshort

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci Bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 4, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2


Inline comments:
In `@cmd/cluster-olm-operator/main.go`:
- Line 382: The UpdateFunc handler is using the parameter name `new`, which
shadows Go's built-in new function; rename that parameter (e.g., to `updated` or
`newObj`) in the UpdateFunc signature and update all references inside the
closure (for example the cast `new.(*configv1.Infrastructure)`) to use the new
parameter name so the shadowing is removed and the cast still targets the same
object type.
- Around line 238-241: The informer only registers an UpdateFunc, so topology
changes that occur as Add events between capturing initialTopology (from infra
:= cl.ConfigClient.ConfigV1().Infrastructures().Get(...)) and the informer’s
initial LIST are missed; register an AddFunc alongside the existing UpdateFunc
on the same informer (the handler that currently processes Update events) to
detect and process added Infrastructure objects the same way as updates,
ensuring the operator re-renders manifests for that topology change; also rename
the UpdateFunc parameter currently named "new" to something like "newObj" or
"newInfra" to avoid shadowing the built-in new() and satisfy the linter.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 1443d7f9-1505-4594-b512-d3799aa05c3c

📥 Commits

Reviewing files that changed from the base of the PR and between d131450 and 0196c65.

📒 Files selected for processing (5)
  • cmd/cluster-olm-operator/main.go
  • pkg/clients/clients.go
  • pkg/controller/builder.go
  • pkg/controller/helm.go
  • pkg/helmvalues/helmvalues.go

…topology

Rolling updates in HighlyAvailable clusters leave catalogd and
operator-controller unavailable when the only running pod is evicted
before its replacement is ready.

Fetch the cluster Infrastructure resource at startup and check
ControlPlaneTopology. When a HighlyAvailable topology is detected
(HighlyAvailable, HighlyAvailableArbiter, or DualReplica), override the
Helm values to replicas=2 and podDisruptionBudget.enabled=true before
rendering manifests. SingleReplica (SNO) and External topologies keep the
static defaults of replicas=1 and PDB disabled.

When a topology change is observed at runtime via the infrastructure
informer (exceedingly rare), the operator exits so its deployment
controller restarts it, triggering a fresh Helm render with the correct
values for the new topology.

Changes:
- helmvalues: add SetIntValue and SetBoolValue helpers
- clients: add InfrastructureClient backed by the config informer
- controller/builder: add Infrastructure field to Builder
- controller/helm: apply HA replica/PDB overrides in renderHelmTemplate;
  add isHighlyAvailableTopology helper
- main: fetch infra at startup, pass to Builder, watch for topology
  changes and exit to trigger re-render

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Todd Short <tshort@redhat.com>
@tmshort tmshort force-pushed the ocpbugs-62517-ha-replicas branch from 0196c65 to ebc3002 on May 4, 2026 20:07
@tmshort
Contributor Author

tmshort commented May 5, 2026

/retest

@tmshort
Contributor Author

tmshort commented May 5, 2026

/test openshift-e2e-aws-customnoupgrade

@tmshort
Contributor Author

tmshort commented May 5, 2026

/payload-aggregate periodic-ci-openshift-release-main-ci-5.0-e2e-aws-upgrade-ovn-single-node 10

@openshift-ci
Contributor

openshift-ci Bot commented May 5, 2026

@tmshort: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-ci-5.0-e2e-aws-upgrade-ovn-single-node

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/0c1d0f30-4884-11f1-99af-496e1ffa1ecc-0

@tmshort
Contributor Author

tmshort commented May 5, 2026

/payload-aggregate periodic-ci-openshift-release-main-ci-5.0-e2e-aws-ovn-upgrade 10

@openshift-ci
Contributor

openshift-ci Bot commented May 5, 2026

@tmshort: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-ci-5.0-e2e-aws-ovn-upgrade

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/140b8be0-4884-11f1-8a23-d6ca6fc82744-0

@tmshort
Contributor Author

tmshort commented May 5, 2026

/test openshift-e2e-aws-customnoupgrade

@tmshort
Contributor Author

tmshort commented May 5, 2026

customnoupgrade has failed a number of times, and seems to be unrelated to this change; other components have failing tests.

@tmshort
Contributor Author

tmshort commented May 5, 2026

All aggregate jobs succeeded.

@tmshort
Contributor Author

tmshort commented May 6, 2026

/test openshift-e2e-aws-customnoupgrade
If this doesn't pass, I might consider an override; given everything else is passing.

@tmshort
Contributor Author

tmshort commented May 6, 2026

/payload-aggregate periodic-ci-openshift-release-main-ci-5.0-e2e-aws-ovn-upgrade 10
One more time to be sure.

@openshift-ci
Contributor

openshift-ci Bot commented May 6, 2026

@tmshort: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-ci-5.0-e2e-aws-ovn-upgrade

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/86ce31c0-48fa-11f1-82c1-76bc25872c9e-0

@tmshort
Contributor Author

tmshort commented May 6, 2026

customnoupgrade is also failing in #203

@tmshort
Contributor Author

tmshort commented May 6, 2026

The aggregate payload tests passed

@tmshort
Contributor Author

tmshort commented May 7, 2026

/test openshift-e2e-aws-customnoupgrade

1 similar comment
@dtfranz
Contributor

dtfranz commented May 7, 2026

/test openshift-e2e-aws-customnoupgrade

@tmshort
Contributor Author

tmshort commented May 7, 2026

/override openshift-e2e-aws-customnoupgrade
It last succeeded a month ago.

@openshift-ci
Contributor

openshift-ci Bot commented May 7, 2026

@tmshort: /override requires failed status contexts, check run or a prowjob name to operate on.
The following unknown contexts/checkruns were given:

  • openshift-e2e-aws-customnoupgrade

Only the following failed contexts/checkruns were expected:

  • CodeRabbit
  • ci/prow/deps
  • ci/prow/images
  • ci/prow/lint
  • ci/prow/okd-scos-images
  • ci/prow/openshift-e2e-aws
  • ci/prow/openshift-e2e-aws-customnoupgrade
  • ci/prow/openshift-e2e-aws-devpreview
  • ci/prow/openshift-e2e-aws-techpreview
  • ci/prow/unit
  • ci/prow/verify-deps
  • pull-ci-openshift-cluster-olm-operator-main-deps
  • pull-ci-openshift-cluster-olm-operator-main-images
  • pull-ci-openshift-cluster-olm-operator-main-lint
  • pull-ci-openshift-cluster-olm-operator-main-okd-scos-images
  • pull-ci-openshift-cluster-olm-operator-main-openshift-e2e-aws
  • pull-ci-openshift-cluster-olm-operator-main-openshift-e2e-aws-customnoupgrade
  • pull-ci-openshift-cluster-olm-operator-main-openshift-e2e-aws-devpreview
  • pull-ci-openshift-cluster-olm-operator-main-openshift-e2e-aws-techpreview
  • pull-ci-openshift-cluster-olm-operator-main-unit
  • pull-ci-openshift-cluster-olm-operator-main-verify-deps
  • tide

If you are trying to override a checkrun that has a space in it, you must put a double quote on the context.

Details

In response to this: the /override comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@tmshort
Contributor Author

tmshort commented May 7, 2026

/override ci/prow/openshift-e2e-aws-customnoupgrade
It last succeeded a month ago.

@openshift-ci
Contributor

openshift-ci Bot commented May 7, 2026

@tmshort: Overrode contexts on behalf of tmshort: ci/prow/openshift-e2e-aws-customnoupgrade


@openshift-ci
Contributor

openshift-ci Bot commented May 7, 2026

@tmshort: all tests passed!

Full PR test history. Your PR dashboard.


@grokspawn
Contributor

Finally got a cluster-bot up with this PR in it. I consider the existing tests for HA kit and consistent component reporting to be sufficient basis to avoid writing new OTE scenarios to cover this post-merge.

I have performed pre-merge verification for HA scenarios as follows:

  • Cluster topology is HighlyAvailable
  • catalogd-controller-manager has exactly 2 replicas
  • operator-controller-manager has exactly 2 replicas
  • PodDisruptionBudgets exist for both controllers
  • Pod anti-affinity configured for both controllers
  • Pods for each controller run on different nodes
  • All pods are in Running state

/verified by @grokspawn

@openshift-ci-robot openshift-ci-robot added the verified Signifies that the PR passed pre-merge verification criteria label May 7, 2026
@openshift-ci-robot

@grokspawn: This PR has been marked as verified by @grokspawn.


@grokspawn
Contributor

/lgtm

@openshift-ci openshift-ci Bot added the lgtm Indicates that a PR is ready to be merged. label May 7, 2026
@openshift-merge-bot openshift-merge-bot Bot merged commit 5d9f062 into openshift:main May 7, 2026
12 checks passed
@openshift-ci-robot

@tmshort: Jira Issue OCPBUGS-62517: Some pull requests linked via external trackers have merged:

The following pull request, linked via external tracker, has not merged:

All associated pull requests must be merged or unlinked from the Jira bug in order for it to move to the next state. Once unlinked, request a bug refresh with /jira refresh.

Jira Issue OCPBUGS-62517 has not been moved to the MODIFIED state.

This PR is marked as verified. If the remaining PRs listed above are marked as verified before merging, the issue will automatically be moved to VERIFIED after all of the changes from the PRs are available in an accepted nightly payload.


@tmshort tmshort deleted the ocpbugs-62517-ha-replicas branch May 8, 2026 13:12
@tmshort
Contributor Author

tmshort commented May 8, 2026

/jira refresh
