
[SLSRE-534] Migrate ACM PlacementRule resources to Placement resources #2742

Open
Ankit152 wants to merge 1 commit into openshift:master from Ankit152:slsre-534

Conversation

Contributor

@Ankit152 Ankit152 commented May 6, 2026

What type of PR is this?

(bug/feature/cleanup/documentation)
Cleanup

What this PR does / why we need it?

Migrate ACM PlacementRule resources to Placement resources

Which Jira/Github issue(s) this PR fixes?

Fixes SLSRE-534

Special notes for your reviewer:

Pre-checks (if applicable):

  • Tested latest changes against a cluster

  • Included documentation changes with PR

  • If this is a new object that is not intended for the FedRAMP environment (if unsure, please reach out to team FedRAMP), please exclude it with:

    matchExpressions:
    - key: api.openshift.com/fedramp
      operator: NotIn
      values: ["true"]
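
For context, under the new Placement API this exclusion lands inside a predicate's label selector. A minimal hedged sketch (the metadata name and namespace are illustrative, not taken from this PR):

    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: example-non-fedramp-placement   # illustrative name
      namespace: openshift-acm-policies     # illustrative namespace
    spec:
      predicates:
      - requiredClusterSelector:
          labelSelector:
            matchExpressions:
            - key: api.openshift.com/fedramp
              operator: NotIn
              values: ["true"]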

Summary by CodeRabbit

  • Chores
    • Updated cluster placement resource configurations across multiple deployment environments (integration, production, and staging).
    • Modernized cluster selection mechanisms and placement binding architecture to align with current platform standards.
    • Enhanced placement logic with improved predicates-based cluster selection for more efficient policy application.
    • Updated autoscaler monitoring and OAuth template policy placements for consistent configuration management.

@openshift-ci openshift-ci Bot requested review from devppratik and joshbranham May 6, 2026 10:10

coderabbitai Bot commented May 6, 2026

Warning

Rate limit exceeded

@Ankit152 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 53 minutes and 48 seconds before requesting another review.

To continue reviewing without waiting, purchase usage credits in the billing tab.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: bc29831b-622f-4f72-abcb-d43637f98a30

📥 Commits

Reviewing files that changed from the base of the PR and between 477e8b3 and 4f30d5a.

📒 Files selected for processing (59)
  • Dockerfile
  • Dockerfile.prow
  • Makefile
  • deploy/acm-policies/50-GENERATED-backplane-acs.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-ai-agent-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-ai-agent.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-cee-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-cee.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-cse-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-cse.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-csm-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-csm.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-elevated-sre.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-lpsre-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-lpsre.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-mcs-tier-two-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-mcs-tier-two.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-mobb-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-mobb.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-srep-ro-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-srep-ro.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-srep-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-srep.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-tam-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane-tam.Policy.yaml
  • deploy/acm-policies/50-GENERATED-backplane.Policy.yaml
  • deploy/acm-policies/50-GENERATED-ccs-dedicated-admins-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-ccs-dedicated-admins.Policy.yaml
  • deploy/acm-policies/50-GENERATED-customer-registry-cas.Policy.yaml
  • deploy/acm-policies/50-GENERATED-hcp-ze-ecr-creds.Policy.yaml
  • deploy/acm-policies/50-GENERATED-hosted-uwm.Policy.yaml
  • deploy/acm-policies/50-GENERATED-hypershift-ovn-logging.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-backplane-managed-scripts.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-cluster-admin.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-customer-monitoring.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-delete-backplane-script-resources.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-delete-backplane-serviceaccounts-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-delete-backplane-serviceaccounts.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-logging-unsupported.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-must-gather-operator.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-openshift-operators-redhat.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-pcap-collector.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-project-request-template.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-user-workload-monitoring-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-osd-user-workload-monitoring.Policy.yaml
  • deploy/acm-policies/50-GENERATED-rbac-permissions-operator-config-sp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-rbac-permissions-operator-config.Policy.yaml
  • deploy/acm-policies/50-GENERATED-rosa-console-branding-hcp.Policy.yaml
  • deploy/acm-policies/50-GENERATED-rosa-console-legacy-branding-configmap.Policy.yaml
  • deploy/acm-policies/50-GENERATED-rosa-ingress-certificate-check.Policy.yaml
  • deploy/acm-policies/50-GENERATED-rosa-ingress-certificate-policies.Policy.yaml
  • deploy/acm-policies/50-GENERATED-srep-vap-autonode-karpenter.Policy.yaml
  • deploy/acm-policies/50-GENERATED-srep-vap-hcp-node-label.Policy.yaml
  • deploy/acm-policies/50-GENERATED-srep-vap-vcpu-overcommit.Policy.yaml
  • hack/00-osd-managed-cluster-config-integration.yaml.tmpl
  • hack/00-osd-managed-cluster-config-production.yaml.tmpl
  • hack/00-osd-managed-cluster-config-stage.yaml.tmpl
  • scripts/generate-policy-config.py
  • scripts/policy-generator-config.yaml

Walkthrough

This pull request migrates cluster placement policies from the deprecated PlacementRule API (apps.open-cluster-management.io/v1) to the newer Placement CRD (cluster.open-cluster-management.io/v1beta1) across seven configuration files. PlacementBinding references are updated accordingly to target the new Placement resources. The migration is applied consistently across individual policy definitions and large environment template files.
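
As a hedged sketch of what this migration looks like in practice (resource names and the label key are illustrative, not copied from the diff):

    # Before: deprecated PlacementRule
    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    metadata:
      name: example-placement
    spec:
      clusterSelector:
        matchExpressions:
        - key: example.com/some-label
          operator: In
          values: ["true"]
    ---
    # After: Placement with an equivalent predicate-based selector
    apiVersion: cluster.open-cluster-management.io/v1beta1
    kind: Placement
    metadata:
      name: example-placement
    spec:
      predicates:
      - requiredClusterSelector:
          labelSelector:
            matchExpressions:
            - key: example.com/some-label
              operator: In
              values: ["true"]
    ---
    # The PlacementBinding's placementRef moves to the new apiGroup/kind
    apiVersion: policy.open-cluster-management.io/v1
    kind: PlacementBinding
    metadata:
      name: example-binding
    placementRef:
      apiGroup: cluster.open-cluster-management.io
      kind: Placement
      name: example-placement
    subjects:
    - apiGroup: policy.open-cluster-management.io
      kind: Policy
      name: example-policy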

Changes

Placement API Migration

  • Placement Resource Introduction
    Files: deploy/hypershift-autoscaler-podmonitor/01-autoscaler-podmonitor.yaml, deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-*.Policy.yaml, hack/00-osd-managed-cluster-config-*.yaml.tmpl
    Replaces the deprecated PlacementRule (apps.open-cluster-management.io/v1) with the new Placement CRD (cluster.open-cluster-management.io/v1beta1), using predicates and requiredClusterSelector for cluster selection logic.

  • PlacementBinding Refactoring
    Files: deploy/hypershift-autoscaler-podmonitor/01-autoscaler-podmonitor.yaml, deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-*.Policy.yaml, hack/00-osd-managed-cluster-config-*.yaml.tmpl
    Updates placementRef in PlacementBinding resources to reference Placement (apiGroup: cluster.open-cluster-management.io) instead of PlacementRule (apiGroup: apps.open-cluster-management.io).

  • Configuration Rollout
    Files: deploy/hypershift-autoscaler-podmonitor/01-autoscaler-podmonitor.yaml, deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-*.Policy.yaml, hack/00-osd-managed-cluster-config-integration|production|stage.yaml.tmpl
    Migration applied across the autoscaler, ROSA OAuth policy sets, and backplane service configurations; small individual policy files are followed by large-scale template updates that keep placement semantics consistent across the integration, production, and stage environments.

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 12
✅ Passed checks (12 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The PR title accurately describes the main change: migrating ACM PlacementRule resources to Placement resources, which is the primary refactor across all modified files.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate; skipping the docstring coverage check.
  • Linked Issues Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Stable And Deterministic Test Names: ✅ Passed. The custom check requires stable test names in Ginkgo tests; this PR contains only YAML manifest files and shell scripts, so it is not applicable.
  • Test Structure And Quality: ✅ Passed. The custom check for Ginkgo test code quality is not applicable; the repository contains only Kubernetes/ACM manifests and no Go/Ginkgo test files.
  • Microshift Test Compatibility: ✅ Passed. No Ginkgo e2e tests are added; the PR contains only YAML manifest changes for the PlacementRule-to-Placement migration, so the check is not applicable.
  • Single Node Openshift (SNO) Test Compatibility: ✅ Passed. The PR contains no Ginkgo e2e tests, only Kubernetes resource manifests and ACM policy configuration files (.yaml and .tmpl); the custom check is not applicable.
  • Topology-Aware Scheduling Compatibility: ✅ Passed. Not applicable; the PR only modifies ACM cluster-placement resources (PlacementRule → Placement migration), not deployment manifests, operator code, or pod scheduling constraints.
  • OTE Binary Stdout Contract: ✅ Passed. Not applicable; the PR contains only YAML/manifest configuration changes, and this check applies exclusively to Go code in process-level test contexts.
  • IPv6 And Disconnected Network Test Compatibility: ✅ Passed. The PR contains only YAML configuration manifests; the custom check applies to Ginkgo e2e tests (It(), Describe(), Context(), When()), which are not present.



Warning

Review ran into problems

🔥 Problems

Git: Failed to clone repository. Please run the @coderabbitai full review command to re-trigger a full review. If the issue persists, set path_filters to include or exclude specific files.





@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-errors.Policy.yaml (1)

42-53: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Remove invalid clusterConditions field from Placement v1beta1 spec.

The spec.clusterConditions field is not part of the Placement v1beta1 schema (cluster.open-cluster-management.io/v1beta1). It belongs to the legacy PlacementRule API. Placement selects clusters via spec.predicates, and automatically filters out unavailable managed clusters by default—this stanza can be safely removed.

This invalid field appears in at least three files in this PR:

  • deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-errors.Policy.yaml
  • deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-providers.Policy.yaml
  • deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-login.Policy.yaml

Apply the same removal consistently across all affected files.

Proposed fix
 spec:
-    clusterConditions:
-    - status: "True"
-      type: ManagedClusterConditionAvailable
     predicates:
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-errors.Policy.yaml`
around lines 42 - 53, Remove the invalid spec.clusterConditions stanza from the
Placement v1beta1 resources (the Placement spec)—specifically delete the entire
clusterConditions key and its array content while keeping spec.predicates
intact; this should be applied to the three 50-rosa-oauth-tpl-*.Policy.yaml
templates (errors, providers, login) so the Placement conforms to
cluster.open-cluster-management.io/v1beta1 schema and the YAML remains
syntactically valid after removal.
🧹 Nitpick comments (3)
hack/00-osd-managed-cluster-config-production.yaml.tmpl (1)

29963-29986: ⚖️ Poor tradeoff

Stale placement-rule / pr- naming on the new Placement resources.

These resources are now kind: Placement but retain PlacementRule-era names (autoscaler-podmonitor-placement-rule, pr-rosa-oauth-tpl-errors, pr-rosa-oauth-tpl-login, pr-rosa-oauth-tpl-providers). Functionally fine, but it makes the manifests harder to grep and reason about post-migration. Consider renaming in a follow-up (along with their PlacementBinding placementRef.name and metadata.name); doing it in this PR risks orphaning the previously created PlacementRule objects on existing hubs unless the old names are also reaped.

Also applies to: 45631-45656, 45695-45720, 45759-45784

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/00-osd-managed-cluster-config-production.yaml.tmpl` around lines 29963 -
29986, Manifest resource names still use the old PlacementRule-era prefixes
(e.g., autoscaler-podmonitor-placement-rule, pr-rosa-oauth-tpl-errors,
pr-rosa-oauth-tpl-login, pr-rosa-oauth-tpl-providers) even though the kind is
now Placement; update those Placement resource metadata.name values to a clearer
Placement-oriented naming scheme and also update any corresponding
PlacementBinding objects’ placementRef.name and metadata.name to match; do this
as a follow-up change (not in this PR) and ensure you also add a
migration/reaper plan for the old PlacementRule objects so they are not orphaned
on existing hubs.
hack/00-osd-managed-cluster-config-integration.yaml.tmpl (1)

29963-29985: ⚖️ Poor tradeoff

Resource name still says -placement-rule — consider renaming for the Placement migration.

autoscaler-podmonitor-placement-rule is now a Placement, not a PlacementRule, but the name carries the legacy term. Likewise the matching PlacementBinding is named autoscaler-podmonitor-placement-binding. Renaming is non-trivial (the binding's placementRef.name and any downstream references would need to change in lockstep, and the live cluster would see a delete+create), so this is just an optional cleanup. If you keep the current name now to avoid churn, consider renaming later, when the resource is next re-created.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl` around lines 29963
- 29985, The resource names still contain the legacy suffix "-placement-rule"
(autoscaler-podmonitor-placement-rule) and its binding
autoscaler-podmonitor-placement-binding; rename them to remove the legacy term
(e.g., autoscaler-podmonitor-placement and autoscaler-podmonitor-binding) or
choose a consistent new name, and update the PlacementBinding.placementRef.name
and any downstream references to match exactly; if you opt to avoid live-cluster
churn keep the current names but add a TODO/note to rename when the resources
are recreated.
hack/00-osd-managed-cluster-config-stage.yaml.tmpl (1)

29963-29966: 💤 Low value

Stale "placement-rule" / pr- naming on resources that are now Placements.

Several migrated resources keep names tied to the old API:

  • Line 29966: autoscaler-podmonitor-placement-rule (now kind: Placement)
  • Line 45634: pr-rosa-oauth-tpl-errors
  • Line 45698: pr-rosa-oauth-tpl-login
  • Line 45762: pr-rosa-oauth-tpl-providers

Renaming would be a breaking change for any external reference and would force PlacementBinding.placementRef.name updates, so it's reasonable to defer. Just note this so future readers don't mistake them for PlacementRule resources, and consider a follow-up cleanup once the migration has fully soaked.

Also applies to: 45631-45634, 45695-45698, 45759-45762

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/00-osd-managed-cluster-config-stage.yaml.tmpl` around lines 29963 -
29966, The manifest contains legacy resource names still using "placement-rule"
/ "pr-" prefixes (e.g., autoscaler-podmonitor-placement-rule,
pr-rosa-oauth-tpl-errors, pr-rosa-oauth-tpl-login, pr-rosa-oauth-tpl-providers)
while the resources are now kind: Placement; add a concise inline comment
adjacent to each affected resource declaration (near the occurrences of
autoscaler-podmonitor-placement-rule and the three pr-rosa-oauth-tpl-*
resources) stating these are legacy names from the old PlacementRule API and
that they map to current Placement objects, and add a short TODO noting any
future rename must update PlacementBinding.placementRef.name to avoid accidental
breakage and that a follow-up cleanup can be performed after migration soak.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In
`@deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-providers.Policy.yaml`:
- Around line 43-49: The Placement manifest includes an invalid
spec.clusterConditions field (the clusterConditions block under the Placement
spec) which is not part of cluster.open-cluster-management.io/v1beta1
PlacementSpec; remove the entire clusterConditions section (the status/type
entries under clusterConditions) so the PlacementSpec contains only valid fields
(e.g., predicates -> requiredClusterSelector -> labelSelector ->
matchExpressions), or if you actually need clusterConditions semantics migrate
this logic to a deprecated PlacementRule resource instead; update the YAML to
delete the clusterConditions subtree and leave the
predicates/requiredClusterSelector intact.

In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl`:
- Around line 23278-23282: Remove the duplicate apiGroups entry
"cluster.open-cluster-management.io" from the relevant Role/ClusterRole
definition in the template so apiGroups contains each group only once; locate
the resource listing apiGroups (the block that also includes permissions for
ManagedCluster/ManagedClusterSet) and delete the redundant
"cluster.open-cluster-management.io" line so the YAML/RBAC linter no longer
flags a duplicate entry.
- Around line 45631-45647: The Placement resources in the template include the
obsolete top-level field "clusterConditions" which is invalid for kind:
Placement (cluster.open-cluster-management.io/v1beta1); remove all
"clusterConditions" blocks from any Placement documents (e.g., the
rosa-oauth-tpl login/errors/providers Placement entries) and any other
occurrences in the template, leaving only supported spec fields (clusterSets,
numberOfClusters, predicates, prioritizerPolicy, tolerations, decisionStrategy,
spreadPolicy); search for the symbol "clusterConditions" and delete the entire
block and any trailing commas/empty maps so the resulting Placement spec
validates against the v1beta1 schema.
- Around line 2371-2395: The Placement resources in this template rely on
predicate filtering but lack any ManagedClusterSetBinding, so they will see zero
candidate clusters; add ManagedClusterSetBinding custom resources that bind the
openshift-acm-policies and openshift-rosa-oauth-tpl-policies namespaces to the
appropriate ManagedClusterSet(s) that contain your ROSA/HCP management and
hosted clusters. For each target ManagedClusterSet, create a
ManagedClusterSetBinding with metadata.namespace set to the target namespace
(openshift-acm-policies or openshift-rosa-oauth-tpl-policies) and
spec.managedClusterSet set to the ManagedClusterSet name so the Placement
objects (Placement) in those namespaces can select clusters from that set.
Ensure you add one binding per namespace per ManagedClusterSet that should be
considered by the Placement predicates.

In `@hack/00-osd-managed-cluster-config-production.yaml.tmpl`:
- Around line 23276-23282: The apiGroups list in the template contains a
duplicated value: cluster.open-cluster-management.io was added twice (one
occurrence replaced apps.open-cluster-management.io), so edit the apiGroups
array in hack/00-osd-managed-cluster-config-production.yaml.tmpl to remove the
duplicate cluster.open-cluster-management.io and restore/keep
apps.open-cluster-management.io if intended; ensure each api group value (e.g.,
cluster.open-cluster-management.io and apps.open-cluster-management.io) appears
only once in the apiGroups list.
- Around line 45636-45647: Remove the invalid clusterConditions field from all
Placement v1beta1 resource specs in this template: locate each Placement
resource that contains the clusterConditions key (appearing on each Placement
manifest carried over from PlacementRule v1) and delete the entire
clusterConditions stanza so the spec only contains valid fields like
numberOfClusters, clusterSets, predicates, prioritizerPolicy, tolerations,
decisionStrategy, and spreadPolicy; ensure you remove the clusterConditions
entries from all three Placement manifests so the API server will not reject or
prune the resource and, if you still need to filter by availability, replace it
with a predicates[].requiredClusterSelector.claimSelector-based selector or
policy-based reporting as noted.

In `@hack/00-osd-managed-cluster-config-stage.yaml.tmpl`:
- Around line 23276-23282: The apiGroups list in the RBAC rule contains a
duplicate "cluster.open-cluster-management.io" entry (likely from a migration) —
remove the duplicated "cluster.open-cluster-management.io" so the list contains
each group only once; if the original "apps.open-cluster-management.io" entry is
intentionally required (to tolerate legacy PlacementRule objects), restore
"apps.open-cluster-management.io" instead of dropping it. Locate the RBAC rule's
apiGroups array (the rule that currently lists
cluster.open-cluster-management.io twice) and edit that array to either
deduplicate entries or replace the extra duplicate with
"apps.open-cluster-management.io" as appropriate.
- Around line 45631-45647: The Placement resources for pr-rosa-oauth-tpl-errors,
pr-rosa-oauth-tpl-login, and pr-rosa-oauth-tpl-providers contain an invalid
spec.clusterConditions block; remove that block from each Placement (or replace
it with Placement-native fields) and instead add equivalent tolerations for the
cluster.open-cluster-management.io/unavailable and
cluster.open-cluster-management.io/unreachable taints (or omit if the
controller’s terminating-cluster filter suffices); also update the upstream
generator source
(deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-{errors,login,providers}.Policy.yaml)
so future renders of hack/00-osd-managed-cluster-config-stage.yaml.tmpl do not
reintroduce spec.clusterConditions.

---

Outside diff comments:
In `@deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-errors.Policy.yaml`:
- Around line 42-53: Remove the invalid spec.clusterConditions stanza from the
Placement v1beta1 resources (the Placement spec)—specifically delete the entire
clusterConditions key and its array content while keeping spec.predicates
intact; this should be applied to the three 50-rosa-oauth-tpl-*.Policy.yaml
templates (errors, providers, login) so the Placement conforms to
cluster.open-cluster-management.io/v1beta1 schema and the YAML remains
syntactically valid after removal.

---

Nitpick comments:
In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl`:
- Around line 29963-29985: The resource names still contain the legacy suffix
"-placement-rule" (autoscaler-podmonitor-placement-rule) and its binding
autoscaler-podmonitor-placement-binding; rename them to remove the legacy term
(e.g., autoscaler-podmonitor-placement and autoscaler-podmonitor-binding) or
choose a consistent new name, and update the PlacementBinding.placementRef.name
and any downstream references to match exactly; if you opt to avoid live-cluster
churn keep the current names but add a TODO/note to rename when the resources
are recreated.

In `@hack/00-osd-managed-cluster-config-production.yaml.tmpl`:
- Around line 29963-29986: Manifest resource names still use the old
PlacementRule-era prefixes (e.g., autoscaler-podmonitor-placement-rule,
pr-rosa-oauth-tpl-errors, pr-rosa-oauth-tpl-login, pr-rosa-oauth-tpl-providers)
even though the kind is now Placement; update those Placement resource
metadata.name values to a clearer Placement-oriented naming scheme and also
update any corresponding PlacementBinding objects’ placementRef.name and
metadata.name to match; do this as a follow-up change (not in this PR) and
ensure you also add a migration/reaper plan for the old PlacementRule objects so
they are not orphaned on existing hubs.

In `@hack/00-osd-managed-cluster-config-stage.yaml.tmpl`:
- Around line 29963-29966: The manifest contains legacy resource names still
using "placement-rule" / "pr-" prefixes (e.g.,
autoscaler-podmonitor-placement-rule, pr-rosa-oauth-tpl-errors,
pr-rosa-oauth-tpl-login, pr-rosa-oauth-tpl-providers) while the resources are
now kind: Placement; add a concise inline comment adjacent to each affected
resource declaration (near the occurrences of
autoscaler-podmonitor-placement-rule and the three pr-rosa-oauth-tpl-*
resources) stating these are legacy names from the old PlacementRule API and
that they map to current Placement objects, and add a short TODO noting any
future rename must update PlacementBinding.placementRef.name to avoid accidental
breakage and that a follow-up cleanup can be performed after migration soak.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 8b618b0a-7073-4c3f-a098-ba77725de237

📥 Commits

Reviewing files that changed from the base of the PR and between 52aac2f and 477e8b3.

📒 Files selected for processing (7)
  • deploy/hypershift-autoscaler-podmonitor/01-autoscaler-podmonitor.yaml
  • deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-errors.Policy.yaml
  • deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-login.Policy.yaml
  • deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-providers.Policy.yaml
  • hack/00-osd-managed-cluster-config-integration.yaml.tmpl
  • hack/00-osd-managed-cluster-config-production.yaml.tmpl
  • hack/00-osd-managed-cluster-config-stage.yaml.tmpl

Comment on lines +43 to +49
     clusterConditions:
     - status: "True"
       type: ManagedClusterConditionAvailable
-    clusterSelector:
-      matchExpressions:
+    predicates:
+    - requiredClusterSelector:
+        labelSelector:
+          matchExpressions:


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

🧩 Analysis chain

🌐 Web query:

In Open Cluster Management, does cluster.open-cluster-management.io/v1beta1 PlacementSpec support clusterConditions? Please provide the official schema/docs fields list.

💡 Result:

No. In cluster.open-cluster-management.io/v1beta1, PlacementSpec does not support a clusterConditions field. The official schema/docs list the PlacementSpec (v1beta1) fields as: numberOfClusters, clusterSets, predicates, prioritizerPolicy, spreadPolicy, tolerations, and decisionStrategy [1]. Additionally, the user-facing Placement docs describe "Placement conditions" as status conditions of the Placement resource (e.g., PlacementMisconfigured, PlacementSatisfied), not as a clusterConditions field in the PlacementSpec [2].



Remove invalid spec.clusterConditions field from Placement v1beta1.

clusterConditions is not a valid field in cluster.open-cluster-management.io/v1beta1 PlacementSpec. It belongs to the deprecated PlacementRule resource. This field will fail schema validation or be silently dropped by the API server, causing unexpected placement behavior.

Suggested fix
 spec:
-    clusterConditions:
-    - status: "True"
-      type: ManagedClusterConditionAvailable
     predicates:
     - requiredClusterSelector:
         labelSelector:
           matchExpressions:
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-clusterConditions:
-- status: "True"
-  type: ManagedClusterConditionAvailable
-clusterSelector:
-  matchExpressions:
-predicates:
-- requiredClusterSelector:
-    labelSelector:
-      matchExpressions:
+spec:
+  predicates:
+  - requiredClusterSelector:
+      labelSelector:
+        matchExpressions:
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-providers.Policy.yaml`
around lines 43 - 49, The Placement manifest includes an invalid
spec.clusterConditions field (the clusterConditions block under the Placement
spec) which is not part of cluster.open-cluster-management.io/v1beta1
PlacementSpec; remove the entire clusterConditions section (the status/type
entries under clusterConditions) so the PlacementSpec contains only valid fields
(e.g., predicates -> requiredClusterSelector -> labelSelector ->
matchExpressions), or if you actually need clusterConditions semantics migrate
this logic to a deprecated PlacementRule resource instead; update the YAML to
delete the clusterConditions subtree and leave the
predicates/requiredClusterSelector intact.

Comment on lines +2371 to 2395
 - apiVersion: cluster.open-cluster-management.io/v1beta1
   kind: Placement
   metadata:
     name: placement-backplane-acs
     namespace: openshift-acm-policies
   spec:
-    clusterSelector:
-      matchExpressions:
-      - key: api.openshift.com/addon-acs-fleetshard
-        operator: In
-        values:
-        - 'true'
+    predicates:
+    - requiredClusterSelector:
+        labelSelector:
+          matchExpressions:
+          - key: api.openshift.com/addon-acs-fleetshard
+            operator: In
+            values:
+            - 'true'
 - apiVersion: policy.open-cluster-management.io/v1
   kind: PlacementBinding
   metadata:
     name: binding-backplane-acs
     namespace: openshift-acm-policies
   placementRef:
-    apiGroup: apps.open-cluster-management.io
-    kind: PlacementRule
+    apiGroup: cluster.open-cluster-management.io
+    kind: Placement
     name: placement-backplane-acs
   subjects:
   - apiGroup: policy.open-cluster-management.io


⚠️ Potential issue | 🔴 Critical | 🏗️ Heavy lift

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find any ManagedClusterSetBinding shipped alongside these policy templates
rg -nP --type=yaml -C3 '\bkind:\s*ManagedClusterSetBinding\b'
# And check whether the integration/production/stage templates create them in these two namespaces
rg -nP -C5 'openshift-acm-policies|openshift-rosa-oauth-tpl-policies' \
  hack/00-osd-managed-cluster-config-integration.yaml.tmpl \
  hack/00-osd-managed-cluster-config-production.yaml.tmpl \
  hack/00-osd-managed-cluster-config-stage.yaml.tmpl 2>/dev/null \
  | rg -n 'ManagedClusterSetBinding|ManagedClusterSet\b'

Repository: openshift/managed-cluster-config

Length of output: 58


🏁 Script executed:

#!/bin/bash
# First, let's check the file size and read the specific lines mentioned
wc -l hack/00-osd-managed-cluster-config-integration.yaml.tmpl
echo "---"
# Read lines around the mentioned range
sed -n '2365,2400p' hack/00-osd-managed-cluster-config-integration.yaml.tmpl

Repository: openshift/managed-cluster-config

Length of output: 1384


🏁 Script executed:

#!/bin/bash
# Search for all Placement and ManagedClusterSetBinding resources in the entire repo
echo "=== Searching for Placement resources ==="
rg -n 'kind:\s*Placement' --type yaml
echo ""
echo "=== Searching for ManagedClusterSetBinding resources ==="
rg -n 'kind:\s*ManagedClusterSetBinding' --type yaml

Repository: openshift/managed-cluster-config

Length of output: 15703


🏁 Script executed:

#!/bin/bash
# Search for references to the specific namespaces
echo "=== Searching for openshift-acm-policies namespace ==="
rg -n 'openshift-acm-policies' --type yaml | head -20
echo ""
echo "=== Searching for openshift-rosa-oauth-tpl-policies namespace ==="
rg -n 'openshift-rosa-oauth-tpl-policies' --type yaml | head -20

Repository: openshift/managed-cluster-config

Length of output: 4118


🏁 Script executed:

#!/bin/bash
# Search specifically in hack/ directory for ManagedClusterSetBinding
echo "=== Searching for ManagedClusterSetBinding in hack/ directory ==="
rg -n 'ManagedClusterSetBinding' hack/
echo ""
# Also search for clusterSets in hack templates
echo "=== Searching for clusterSets in hack/ directory ==="
rg -n 'clusterSets' hack/

Repository: openshift/managed-cluster-config

Length of output: 196


🏁 Script executed:

#!/bin/bash
# List the hack files to understand structure
echo "=== Hack files in the repo ==="
find hack/ -type f -name "*.tmpl" -o -name "*.yaml" | head -20
echo ""
# Check if hack files reference the generated deploy files
echo "=== Checking relationship between hack and deploy ==="
head -20 hack/00-osd-managed-cluster-config-integration.yaml.tmpl

Repository: openshift/managed-cluster-config

Length of output: 735


🏁 Script executed:

#!/bin/bash
# Search the entire hack template file for any Placement configurations
# to understand what's at lines 2371-2395 in context
echo "=== Context around lines 2371-2395 in the integration template ==="
sed -n '2360,2410p' hack/00-osd-managed-cluster-config-integration.yaml.tmpl | cat -n

Repository: openshift/managed-cluster-config

Length of output: 2394


🏁 Script executed:

#!/bin/bash
# Search for any scripts that might create ManagedClusterSetBinding
echo "=== Searching for scripts that might create bindings ==="
find . -name "*.sh" -o -name "*.py" -o -name "Makefile" | head -20
echo ""
# Check for README or documentation
echo "=== Looking for documentation files ==="
find . -maxdepth 2 -name "README*" -o -name "*.md" | head -10

Repository: openshift/managed-cluster-config

Length of output: 1046


🏁 Script executed:

#!/bin/bash
# Search for any references to ManagedClusterSet (not binding) in hack files
echo "=== Searching for ManagedClusterSet in hack/ ==="
rg -n 'ManagedClusterSet' hack/
echo ""
# Also check if there are any namespace creation resources that might include bindings
echo "=== Searching for Namespace definitions in hack/ ==="
rg -n 'kind: Namespace' hack/ -A 10 | head -50

Repository: openshift/managed-cluster-config

Length of output: 4828


🏁 Script executed:

#!/bin/bash
# Check README for information about how bindings are handled
echo "=== Checking README ==="
cat README.md | head -100

Repository: openshift/managed-cluster-config

Length of output: 7482


🏁 Script executed:

#!/bin/bash
# Search for any ACM or Placement documentation/references
echo "=== Searching for ACM_BASE_URL or similar config parameters ==="
rg -n 'ManagedClusterSet' . --type yaml
echo ""
echo "=== Checking if there are any references to binding in comments ==="
rg -n -i 'binding' hack/00-osd-managed-cluster-config-integration.yaml.tmpl | head -20

Repository: openshift/managed-cluster-config

Length of output: 1039


🏁 Script executed:

#!/bin/bash
# Check if there's any indication that ManagedClusterSetBinding needs to be created
# Look at the generated deploy files to see if they have any bindings
echo "=== Checking deploy files for ManagedClusterSetBinding ==="
find deploy/ -name "*.yaml" -exec grep -l "ManagedClusterSetBinding" {} \;
echo ""
echo "=== Checking if deployment files include namespace creation ==="
ls -la deploy/acm-policies/ | head -20

Repository: openshift/managed-cluster-config

Length of output: 1932


Add ManagedClusterSetBinding resources to enable Placement selectors.

The Placement resources in these templates use predicates but lack an explicit clusterSets field. Under the Placement API, this means the predicate filters will only apply to clusters from ManagedClusterSets explicitly bound to the placement's namespace via ManagedClusterSetBinding. No such bindings exist in the repository, so every Placement will have zero candidate clusters and silently fail to enforce policies.

For the openshift-acm-policies and openshift-rosa-oauth-tpl-policies namespaces, add ManagedClusterSetBinding resources that bind these namespaces to the ManagedClusterSet(s) containing your target ROSA/HCP management and hosted clusters. Without these bindings, policy enforcement will not work after the migration from PlacementRule to Placement.

Reference: https://open-cluster-management.io/docs/concepts/content-placement/placement/
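
A minimal hedged sketch of such a binding (the ManagedClusterSet name "global" and the apiVersion are assumptions; use the set name and API version your hub actually serves):

    apiVersion: cluster.open-cluster-management.io/v1beta2
    kind: ManagedClusterSetBinding
    metadata:
      name: global                       # must match spec.clusterSet
      namespace: openshift-acm-policies  # the namespace holding the Placements
    spec:
      clusterSet: global                 # assumed ManagedClusterSet name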

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl` around lines 2371 -
2395, The Placement resources in this template rely on predicate filtering but
lack any ManagedClusterSetBinding, so they will see zero candidate clusters; add
ManagedClusterSetBinding custom resources that bind the openshift-acm-policies
and openshift-rosa-oauth-tpl-policies namespaces to the appropriate
ManagedClusterSet(s) that contain your ROSA/HCP management and hosted clusters.
For each target ManagedClusterSet, create a ManagedClusterSetBinding with
metadata.namespace set to the target namespace (openshift-acm-policies or
openshift-rosa-oauth-tpl-policies) and spec.clusterSet set to the
ManagedClusterSet name, so that the Placement objects in those namespaces
can select clusters from that set. Ensure you add one binding per namespace per
ManagedClusterSet that should be considered by the Placement predicates.

Comment on lines 23278 to 23282
   - addon.open-cluster-management.io
-  - apps.open-cluster-management.io
+  - cluster.open-cluster-management.io
   - cluster.open-cluster-management.io
   - clusterview.open-cluster-management.io
   - discovery.open-cluster-management.io


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Duplicate cluster.open-cluster-management.io entry in apiGroups.

Lines 23279 and 23280 both list cluster.open-cluster-management.io. The newly added entry is redundant — the same group was already present (likely covering the existing ManagedCluster/ManagedClusterSet permissions), so the addition introduces a no-op duplicate that will pass through admission but is misleading and will fail strict YAML/RBAC linters that flag duplicates.

🛠️ Proposed fix
       rules:
       - apiGroups:
         - addon.open-cluster-management.io
-        - cluster.open-cluster-management.io
         - cluster.open-cluster-management.io
         - clusterview.open-cluster-management.io
         - discovery.open-cluster-management.io
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl` around lines 23278
- 23282, Remove the duplicate apiGroups entry
"cluster.open-cluster-management.io" from the relevant Role/ClusterRole
definition in the template so apiGroups contains each group only once; locate
the resource listing apiGroups (the block that also includes permissions for
ManagedCluster/ManagedClusterSet) and delete the redundant
"cluster.open-cluster-management.io" line so the YAML/RBAC linter no longer
flags a duplicate entry.

Comment on lines +45631 to +45647
 - apiVersion: cluster.open-cluster-management.io/v1beta1
   kind: Placement
   metadata:
     name: pr-rosa-oauth-tpl-errors
     namespace: openshift-rosa-oauth-tpl-policies
   spec:
     clusterConditions:
     - status: 'True'
       type: ManagedClusterConditionAvailable
-    clusterSelector:
-      matchExpressions:
-      - key: hypershift.open-cluster-management.io/management-cluster
-        operator: In
-        values:
-        - 'true'
+    predicates:
+    - requiredClusterSelector:
+        labelSelector:
+          matchExpressions:
+          - key: hypershift.open-cluster-management.io/management-cluster
+            operator: In
+            values:
+            - 'true'


⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Confirm clusterConditions is not part of Placement v1beta1 spec
fd -e yaml -e tmpl . hack deploy 2>/dev/null | xargs rg -nP '(?s)kind:\s*Placement\b.*?clusterConditions' -U
# Also check the Placement CRD if vendored anywhere
rg -nP --type=yaml -C2 '\bclusterConditions\b'

Repository: openshift/managed-cluster-config

Length of output: 50391


clusterConditions is not a valid field on Placement v1beta1 — schema validation will fail.

Confirmed in deployed files: deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-login.Policy.yaml, 50-rosa-oauth-tpl-errors.Policy.yaml, and 50-rosa-oauth-tpl-providers.Policy.yaml all contain Placement resources with clusterConditions at line 43. The same issue appears in the template at the cited line ranges.

The clusterConditions field belonged to the deprecated PlacementRule (apps.open-cluster-management.io/v1). The new Placement (cluster.open-cluster-management.io/v1beta1) spec does not define a clusterConditions field — its top-level spec fields are clusterSets, numberOfClusters, predicates, prioritizerPolicy, tolerations, decisionStrategy, and spreadPolicy. The API server will reject these resources on creation or update.

Proposed fix — drop the legacy `clusterConditions` block
       spec:
-        clusterConditions:
-        - status: 'True'
-          type: ManagedClusterConditionAvailable
         predicates:
         - requiredClusterSelector:
             labelSelector:
               matchExpressions:
               - key: hypershift.open-cluster-management.io/management-cluster
                 operator: In
                 values:
                 - 'true'

Apply this fix across all three rosa-oauth-tpl placements and anywhere else in the template where this pattern appears.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl` around lines 45631
- 45647, The Placement resources in the template include the obsolete top-level
field "clusterConditions" which is invalid for kind: Placement
(cluster.open-cluster-management.io/v1beta1); remove all "clusterConditions"
blocks from any Placement documents (e.g., the rosa-oauth-tpl
login/errors/providers Placement entries) and any other occurrences in the
template, leaving only supported spec fields (clusterSets, numberOfClusters,
predicates, prioritizerPolicy, tolerations, decisionStrategy, spreadPolicy);
search for the symbol "clusterConditions" and delete the entire block and any
trailing commas/empty maps so the resulting Placement spec validates against the
v1beta1 schema.

Comment on lines 23276 to 23282
 rules:
 - apiGroups:
   - addon.open-cluster-management.io
-  - apps.open-cluster-management.io
+  - cluster.open-cluster-management.io
   - cluster.open-cluster-management.io
   - clusterview.open-cluster-management.io
   - discovery.open-cluster-management.io


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Duplicate cluster.open-cluster-management.io entry in apiGroups.

It appears the previous apps.open-cluster-management.io was rewritten to cluster.open-cluster-management.io (line 23279) without noticing that an entry with the same value already existed on the next line (23280). The list should be deduplicated.

♻️ Proposed fix
       rules:
       - apiGroups:
         - addon.open-cluster-management.io
         - cluster.open-cluster-management.io
-        - cluster.open-cluster-management.io
         - clusterview.open-cluster-management.io
         - discovery.open-cluster-management.io
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/00-osd-managed-cluster-config-production.yaml.tmpl` around lines 23276 -
23282, The apiGroups list in the template contains a duplicated value:
cluster.open-cluster-management.io was added twice (one occurrence replaced
apps.open-cluster-management.io), so edit the apiGroups array in
hack/00-osd-managed-cluster-config-production.yaml.tmpl to remove the duplicate
cluster.open-cluster-management.io and restore/keep
apps.open-cluster-management.io if intended; ensure each api group value (e.g.,
cluster.open-cluster-management.io and apps.open-cluster-management.io) appears
only once in the apiGroups list.

Comment on lines +45636 to +45647
   spec:
     clusterConditions:
     - status: 'True'
       type: ManagedClusterConditionAvailable
-    clusterSelector:
-      matchExpressions:
-      - key: hypershift.open-cluster-management.io/management-cluster
-        operator: In
-        values:
-        - 'true'
+    predicates:
+    - requiredClusterSelector:
+        labelSelector:
+          matchExpressions:
+          - key: hypershift.open-cluster-management.io/management-cluster
+            operator: In
+            values:
+            - 'true'


⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

🧩 Analysis chain

🌐 Web query:

Does cluster.open-cluster-management.io/v1beta1 Placement spec accept a clusterConditions field, or is it a PlacementRule-only field?

💡 Result:

In Open Cluster Management, the Placement CRD/spec at apiVersion cluster.open-cluster-management.io/v1beta1 does not define a spec.clusterConditions field; instead, it uses a PlacementSpec that includes fields like predicates, prioritizerPolicy, numberOfClusters, spreadPolicy, tolerations, and decisionStrategy, and any “conditions” you see are represented under Placement status (Placement conditions such as PlacementMisconfigured/PlacementSatisfied), not as a spec.clusterConditions input [1][2]. There is also not a PlacementRule-only “clusterConditions” field in the Placement API—OCM’s Placement API is a PlacementSpec, and the only conditions described for Placement are status conditions like PlacementMisconfigured/PlacementSatisfied [2][3]. So: cluster.open-cluster-management.io/v1beta1 Placement spec does not accept a clusterConditions field; there is no PlacementRule-only clusterConditions in this API—“conditions” are about Placement status, not user-provided clusterConditions [2][1].



🏁 Script executed:

#!/bin/bash
# Check if the file exists
if [ -f "hack/00-osd-managed-cluster-config-production.yaml.tmpl" ]; then
  echo "=== File exists ==="
  # Check line count
  wc -l hack/00-osd-managed-cluster-config-production.yaml.tmpl
  
  # Check lines 45635-45650 (around 45637-45639)
  echo -e "\n=== Lines 45635-45650 ==="
  sed -n '45635,45650p' hack/00-osd-managed-cluster-config-production.yaml.tmpl
  
  # Check lines 45698-45715 (around 45700-45703)
  echo -e "\n=== Lines 45698-45715 ==="
  sed -n '45698,45715p' hack/00-osd-managed-cluster-config-production.yaml.tmpl
  
  # Check lines 45762-45779 (around 45764-45767)
  echo -e "\n=== Lines 45762-45779 ==="
  sed -n '45762,45779p' hack/00-osd-managed-cluster-config-production.yaml.tmpl
else
  echo "File not found at hack/00-osd-managed-cluster-config-production.yaml.tmpl"
  # Try to find it
  find . -name "*00-osd-managed-cluster-config-production*" 2>/dev/null
fi

Repository: openshift/managed-cluster-config

Length of output: 2007


🏁 Script executed:

#!/bin/bash
# Check the apiVersion for each of the three Placement resources
echo "=== Lines 45620-45635 (before first clusterConditions) ==="
sed -n '45620,45635p' hack/00-osd-managed-cluster-config-production.yaml.tmpl

echo -e "\n=== Lines 45680-45700 (before second clusterConditions) ==="
sed -n '45680,45700p' hack/00-osd-managed-cluster-config-production.yaml.tmpl

echo -e "\n=== Lines 45744-45765 (before third clusterConditions) ==="
sed -n '45744,45765p' hack/00-osd-managed-cluster-config-production.yaml.tmpl

Repository: openshift/managed-cluster-config

Length of output: 50389


🏁 Script executed:

#!/bin/bash
# Search for kind: Placement and apiVersion near the clusterConditions lines
echo "=== Searching for Placement kind and apiVersion around lines 45600-45650 ==="
rg "kind: Placement|apiVersion:" hack/00-osd-managed-cluster-config-production.yaml.tmpl -n | grep -E "456[0-4][0-9]:|456[5-9][0-9]:" | head -10

echo -e "\n=== Searching for Placement kind and apiVersion around lines 45650-45750 ==="
rg "kind: Placement|apiVersion:" hack/00-osd-managed-cluster-config-production.yaml.tmpl -n | grep -E "457[0-4][0-9]:" | head -10

echo -e "\n=== Searching for Placement kind and apiVersion around lines 45750-45800 ==="
rg "kind: Placement|apiVersion:" hack/00-osd-managed-cluster-config-production.yaml.tmpl -n | grep -E "457[5-9][0-9]:|458[0-0][0-9]:" | head -10

Repository: openshift/managed-cluster-config

Length of output: 1385


Remove clusterConditions from Placement v1beta1 resources — this field is invalid and will be rejected or silently pruned.

The cluster.open-cluster-management.io/v1beta1 Placement spec does not support clusterConditions. Valid spec fields are: numberOfClusters, clusterSets, predicates, prioritizerPolicy, tolerations, decisionStrategy, and spreadPolicy.

This field exists on all three Placement resources (lines 45637–45639, 45701–45703, 45765–45767) and was carried over from PlacementRule v1. On the new Placement CRD, it will either be rejected by strict schema validation or silently pruned by the API server, dropping the ManagedClusterConditionAvailable=True availability filter either way.

Note: The Placement controller already filters out terminating clusters by default (OCM v0.14.0+) but does not enforce the ManagedClusterConditionAvailable=True condition that PlacementRule provided. If you need to gate on cluster availability, use predicates[].requiredClusterSelector.claimSelector against an appropriate ClusterClaim, or rely on the policy framework's compliance reporting.
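
A hedged sketch of the claimSelector alternative mentioned above (the ClusterClaim key is hypothetical; there is no guarantee an availability-equivalent claim exists on these clusters):

    spec:
      predicates:
      - requiredClusterSelector:
          claimSelector:
            matchExpressions:
            - key: example.open-cluster-management.io/available  # hypothetical ClusterClaim
              operator: In
              values: ["true"]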

Remove clusterConditions from all three resources
       spec:
-        clusterConditions:
-        - status: 'True'
-          type: ManagedClusterConditionAvailable
         predicates:

Apply this removal at lines 45637–45639, 45701–45703, and 45765–45767.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/00-osd-managed-cluster-config-production.yaml.tmpl` around lines 45636 -
45647, Remove the invalid clusterConditions field from all Placement v1beta1
resource specs in this template: locate each Placement resource that contains
the clusterConditions key (appearing on each Placement manifest carried over
from PlacementRule v1) and delete the entire clusterConditions stanza so the
spec only contains valid fields like numberOfClusters, clusterSets, predicates,
prioritizerPolicy, tolerations, decisionStrategy, and spreadPolicy; ensure you
remove the clusterConditions entries from all three Placement manifests so the
API server will not reject or prune the resource and, if you still need to
filter by availability, replace it with a
predicates[].requiredClusterSelector.claimSelector-based selector or
policy-based reporting as noted.

Comment on lines 23276 to 23282
 rules:
 - apiGroups:
   - addon.open-cluster-management.io
-  - apps.open-cluster-management.io
+  - cluster.open-cluster-management.io
   - cluster.open-cluster-management.io
   - clusterview.open-cluster-management.io
   - discovery.open-cluster-management.io


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Duplicate cluster.open-cluster-management.io apiGroup in rules.

Lines 23279 and 23280 both list cluster.open-cluster-management.io. This is almost certainly a migration leftover where apps.open-cluster-management.io (the old PlacementRule group) was rewritten to cluster.open-cluster-management.io even though that group was already present in the same apiGroups list. Functionally harmless (Kubernetes deduplicates), but it's clutter that obscures the real diff and is best cleaned up at the source.

       rules:
       - apiGroups:
         - addon.open-cluster-management.io
-        - cluster.open-cluster-management.io
         - cluster.open-cluster-management.io
         - clusterview.open-cluster-management.io
         - discovery.open-cluster-management.io

If apps.open-cluster-management.io is still needed elsewhere (e.g. for tolerating any lingering PlacementRule objects during rollout), restore that entry instead of dropping it.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/00-osd-managed-cluster-config-stage.yaml.tmpl` around lines 23276 -
23282, The apiGroups list in the RBAC rule contains a duplicate
"cluster.open-cluster-management.io" entry (likely from a migration) — remove
the duplicated "cluster.open-cluster-management.io" so the list contains each
group only once; if the original "apps.open-cluster-management.io" entry is
intentionally required (to tolerate legacy PlacementRule objects), restore
"apps.open-cluster-management.io" instead of dropping it. Locate the RBAC rule's
apiGroups array (the rule that currently lists
cluster.open-cluster-management.io twice) and edit that array to either
deduplicate entries or replace the extra duplicate with
"apps.open-cluster-management.io" as appropriate.

Comment on lines +45631 to +45647
 - apiVersion: cluster.open-cluster-management.io/v1beta1
   kind: Placement
   metadata:
     name: pr-rosa-oauth-tpl-errors
     namespace: openshift-rosa-oauth-tpl-policies
   spec:
     clusterConditions:
     - status: 'True'
       type: ManagedClusterConditionAvailable
-    clusterSelector:
-      matchExpressions:
-      - key: hypershift.open-cluster-management.io/management-cluster
-        operator: In
-        values:
-        - 'true'
+    predicates:
+    - requiredClusterSelector:
+        labelSelector:
+          matchExpressions:
+          - key: hypershift.open-cluster-management.io/management-cluster
+            operator: In
+            values:
+            - 'true'


⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

🧩 Analysis chain

🌐 Web query:

Does cluster.open-cluster-management.io/v1beta1 Placement spec support a clusterConditions field, or is it PlacementRule-only?

💡 Result:

In cluster.open-cluster-management.io/v1beta1, the Placement spec does not have a spec.clusterConditions field; Placement is configured via predicates/prioritizers (and related scheduling fields), and it reports selection status via status.conditions (PlacementSatisfied/PlacementMisconfigured) rather than a clusterConditions spec field [1][2]. So it is Placement-only (not PlacementRule-only): Placement supports status-based condition reporting, but there is no PlacementSpec.clusterConditions field in v1beta1 according to the public API docs and Go type documentation for PlacementSpec/conditions [1][2].



clusterConditions is not a valid field in Placement (cluster.open-cluster-management.io/v1beta1) — these resources will fail validation.

The three rosa-oauth-tpl placements (pr-rosa-oauth-tpl-errors, pr-rosa-oauth-tpl-login, pr-rosa-oauth-tpl-providers) still carry the spec.clusterConditions block from the old PlacementRule schema:

spec:
  clusterConditions:
  - status: 'True'
    type: ManagedClusterConditionAvailable
  predicates:
  - requiredClusterSelector:
      ...

Placement v1beta1 does not support clusterConditions in its spec. It only supports numberOfClusters, clusterSets, predicates, prioritizerPolicy, spreadPolicy, and tolerations. With CRD structural-schema pruning, clusterConditions will be silently dropped or rejected by the API server, and the intended availability gate will be lost either way.

Replace clusterConditions with Placement-native equivalents — typically tolerations against the cluster.open-cluster-management.io/unavailable and cluster.open-cluster-management.io/unreachable taints — or remove the block if the controller's built-in terminating-cluster filter is sufficient. This generated template likely inherits the issue from deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-{errors,login,providers}.Policy.yaml; fixing it there should regenerate this template correctly.

Suggested fix (apply to all three blocks)
     - apiVersion: cluster.open-cluster-management.io/v1beta1
       kind: Placement
       metadata:
         name: pr-rosa-oauth-tpl-errors
         namespace: openshift-rosa-oauth-tpl-policies
       spec:
-        clusterConditions:
-        - status: 'True'
-          type: ManagedClusterConditionAvailable
         predicates:
         - requiredClusterSelector:
             labelSelector:
               matchExpressions:
               - key: hypershift.open-cluster-management.io/management-cluster
                 operator: In
                 values:
                 - 'true'
+        tolerations:
+        - key: cluster.open-cluster-management.io/unavailable
+          operator: Exists
+        - key: cluster.open-cluster-management.io/unreachable
+          operator: Exists

Also applies to: 45695-45711, 45759-45775

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@hack/00-osd-managed-cluster-config-stage.yaml.tmpl` around lines 45631 -
45647, The Placement resources for pr-rosa-oauth-tpl-errors,
pr-rosa-oauth-tpl-login, and pr-rosa-oauth-tpl-providers contain an invalid
spec.clusterConditions block; remove that block from each Placement (or replace
it with Placement-native fields) and instead add equivalent tolerations for the
cluster.open-cluster-management.io/unavailable and
cluster.open-cluster-management.io/unreachable taints (or omit if the
controller’s terminating-cluster filter suffices); also update the upstream
generator source
(deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-{errors,login,providers}.Policy.yaml)
so future renders of hack/00-osd-managed-cluster-config-stage.yaml.tmpl do not
reintroduce spec.clusterConditions.

Contributor

@Ajpantuso Ajpantuso left a comment


The hack/* files should not be modified directly as they will be generated with the make generate-hive-templates target.

The primary change here will be to update the PolicyGen Version. The latest version will fully remove PlacementRule generation.

I suspect the Placements still need to be updated like you are doing with this PR, but let's see what the new PolicyGen output looks like
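
A minimal sketch of that regeneration workflow, assuming the target behaves as described above (the exact Makefile variable holding the PolicyGen version is not shown here):

    # Bump the Policy Generator version in the Makefile first, then
    # regenerate the hack/* templates instead of editing them by hand:
    make generate-hive-templates
    git diff --stat hack/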

@Ankit152
Contributor Author

Ankit152 commented May 6, 2026

Thanks @Ajpantuso

I will update the Policy Generator version in the Makefile. I believe we would also need similar changes in the Dockerfile and Dockerfile.prow too.

…sources

Signed-off-by: Ankit152 <ankitkurmi152@gmail.com>
@openshift-ci
Contributor

openshift-ci Bot commented May 7, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Ankit152
Once this PR has been reviewed and has the lgtm label, please assign bmeng for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci
Contributor

openshift-ci Bot commented May 7, 2026

@Ankit152: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

  • Test name: ci/prow/pr-check | Commit: 4f30d5a | Details: link | Required: true | Rerun command: /test pr-check

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

