[SLSRE-534] Migrate ACM PlacementRule resources to Placement resources #2742
Ankit152 wants to merge 1 commit into
Conversation
Warning: Rate limit exceeded

To continue reviewing without waiting, purchase usage credits in the billing tab.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered using the command. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID:
📒 Files selected for processing (59)
Walkthrough

This pull request migrates cluster placement policies from the deprecated PlacementRule API to the Placement API.

Changes

Placement API Migration
Estimated Code Review Effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ Passed checks (12 passed)
Warning: Review ran into problems

🔥 Problems
Git: Failed to clone repository.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-errors.Policy.yaml (1)
42-53: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Remove invalid `clusterConditions` field from Placement v1beta1 spec.

The `spec.clusterConditions` field is not part of the Placement v1beta1 schema (cluster.open-cluster-management.io/v1beta1). It belongs to the legacy PlacementRule API. Placement selects clusters via `spec.predicates`, and automatically filters out unavailable managed clusters by default, so this stanza can be safely removed.

This invalid field appears in at least three files in this PR:

- deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-errors.Policy.yaml
- deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-providers.Policy.yaml
- deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-login.Policy.yaml

Apply the same removal consistently across all affected files.
Proposed fix

```diff
 spec:
-  clusterConditions:
-  - status: "True"
-    type: ManagedClusterConditionAvailable
   predicates:
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-errors.Policy.yaml` around lines 42 - 53, Remove the invalid spec.clusterConditions stanza from the Placement v1beta1 resources (the Placement spec)—specifically delete the entire clusterConditions key and its array content while keeping spec.predicates intact; this should be applied to the three 50-rosa-oauth-tpl-*.Policy.yaml templates (errors, providers, login) so the Placement conforms to cluster.open-cluster-management.io/v1beta1 schema and the YAML remains syntactically valid after removal.
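For concreteness, a minimal sketch of what the corrected resource could look like once the stanza is dropped (metadata and selector taken from the template hunks later in this review; exact indentation in the Policy templates may differ):

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: pr-rosa-oauth-tpl-errors
  namespace: openshift-rosa-oauth-tpl-policies
spec:
  # No clusterConditions: Placement filters unavailable clusters by default.
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
        - key: hypershift.open-cluster-management.io/management-cluster
          operator: In
          values:
          - 'true'
```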
🧹 Nitpick comments (3)
hack/00-osd-managed-cluster-config-production.yaml.tmpl (1)
29963-29986: ⚖️ Poor tradeoff

Stale `placement-rule`/`pr-` naming on the new Placement resources.

These resources are now `kind: Placement` but retain PlacementRule-era names (`autoscaler-podmonitor-placement-rule`, `pr-rosa-oauth-tpl-errors`, `pr-rosa-oauth-tpl-login`, `pr-rosa-oauth-tpl-providers`). Functionally fine, but it makes the manifests harder to grep and reason about post-migration. Consider renaming in a follow-up (along with their PlacementBinding `placementRef.name` and `metadata.name`); doing it in this PR risks orphaning the previously created PlacementRule objects on existing hubs unless the old names are also reaped.

Also applies to: 45631-45656, 45695-45720, 45759-45784
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@hack/00-osd-managed-cluster-config-production.yaml.tmpl` around lines 29963 - 29986, Manifest resource names still use the old PlacementRule-era prefixes (e.g., autoscaler-podmonitor-placement-rule, pr-rosa-oauth-tpl-errors, pr-rosa-oauth-tpl-login, pr-rosa-oauth-tpl-providers) even though the kind is now Placement; update those Placement resource metadata.name values to a clearer Placement-oriented naming scheme and also update any corresponding PlacementBinding objects’ placementRef.name and metadata.name to match; do this as a follow-up change (not in this PR) and ensure you also add a migration/reaper plan for the old PlacementRule objects so they are not orphaned on existing hubs.

hack/00-osd-managed-cluster-config-integration.yaml.tmpl (1)
29963-29985: ⚖️ Poor tradeoff

Resource name still says `-placement-rule` — consider renaming for the Placement migration.

`autoscaler-podmonitor-placement-rule` is now a `Placement`, not a `PlacementRule`, but the name carries the legacy term. Likewise the matching `PlacementBinding` is named `autoscaler-podmonitor-placement-binding`. Renaming is non-trivial (the binding's `placementRef.name` and any downstream references would need to change in lockstep, and the live cluster would see a delete+create), so this is just an optional cleanup. If you keep the current name to avoid churn, consider renaming later when the resource is re-created.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl` around lines 29963 - 29985, The resource names still contain the legacy suffix "-placement-rule" (autoscaler-podmonitor-placement-rule) and its binding autoscaler-podmonitor-placement-binding; rename them to remove the legacy term (e.g., autoscaler-podmonitor-placement and autoscaler-podmonitor-binding) or choose a consistent new name, and update the PlacementBinding.placementRef.name and any downstream references to match exactly; if you opt to avoid live-cluster churn keep the current names but add a TODO/note to rename when the resources are recreated.

hack/00-osd-managed-cluster-config-stage.yaml.tmpl (1)
29963-29966: 💤 Low value

Stale "placement-rule" / `pr-` naming on resources that are now `Placement`s.

Several migrated resources keep names tied to the old API:
- Line 29966: `autoscaler-podmonitor-placement-rule` (now `kind: Placement`)
- Line 45634: `pr-rosa-oauth-tpl-errors`
- Line 45698: `pr-rosa-oauth-tpl-login`
- Line 45762: `pr-rosa-oauth-tpl-providers`

Renaming would be a breaking change for any external reference and would force `PlacementBinding.placementRef.name` updates, so it's reasonable to defer. Just note this so future readers don't mistake them for `PlacementRule` resources, and consider a follow-up cleanup once the migration has fully soaked.

Also applies to: 45631-45634, 45695-45698, 45759-45762
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@hack/00-osd-managed-cluster-config-stage.yaml.tmpl` around lines 29963 - 29966, The manifest contains legacy resource names still using "placement-rule" / "pr-" prefixes (e.g., autoscaler-podmonitor-placement-rule, pr-rosa-oauth-tpl-errors, pr-rosa-oauth-tpl-login, pr-rosa-oauth-tpl-providers) while the resources are now kind: Placement; add a concise inline comment adjacent to each affected resource declaration (near the occurrences of autoscaler-podmonitor-placement-rule and the three pr-rosa-oauth-tpl-* resources) stating these are legacy names from the old PlacementRule API and that they map to current Placement objects, and add a short TODO noting any future rename must update PlacementBinding.placementRef.name to avoid accidental breakage and that a follow-up cleanup can be performed after migration soak.
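If the rename is eventually done, the Placement and its PlacementBinding have to change in lockstep. A sketch with hypothetical new names (the namespace and the subject Policy name below are assumptions for illustration, not taken from the PR):

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: autoscaler-podmonitor-placement   # hypothetical new name
  namespace: openshift-acm-policies       # namespace assumed
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: autoscaler-podmonitor-binding     # hypothetical new name
  namespace: openshift-acm-policies
placementRef:
  apiGroup: cluster.open-cluster-management.io
  kind: Placement
  name: autoscaler-podmonitor-placement   # must match the Placement's metadata.name
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: policy-autoscaler-podmonitor      # existing Policy name assumed
```

On a live hub this is effectively a delete+create under the old names, so the previously created objects would still need to be reaped, as the review notes.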
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In
`@deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-providers.Policy.yaml`:
- Around line 43-49: The Placement manifest includes an invalid
spec.clusterConditions field (the clusterConditions block under the Placement
spec) which is not part of cluster.open-cluster-management.io/v1beta1
PlacementSpec; remove the entire clusterConditions section (the status/type
entries under clusterConditions) so the PlacementSpec contains only valid fields
(e.g., predicates -> requiredClusterSelector -> labelSelector ->
matchExpressions), or if you actually need clusterConditions semantics migrate
this logic to a deprecated PlacementRule resource instead; update the YAML to
delete the clusterConditions subtree and leave the
predicates/requiredClusterSelector intact.
In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl`:
- Around line 23278-23282: Remove the duplicate apiGroups entry
"cluster.open-cluster-management.io" from the relevant Role/ClusterRole
definition in the template so apiGroups contains each group only once; locate
the resource listing apiGroups (the block that also includes permissions for
ManagedCluster/ManagedClusterSet) and delete the redundant
"cluster.open-cluster-management.io" line so the YAML/RBAC linter no longer
flags a duplicate entry.
- Around line 45631-45647: The Placement resources in the template include the
obsolete top-level field "clusterConditions" which is invalid for kind:
Placement (cluster.open-cluster-management.io/v1beta1); remove all
"clusterConditions" blocks from any Placement documents (e.g., the
rosa-oauth-tpl login/errors/providers Placement entries) and any other
occurrences in the template, leaving only supported spec fields (clusterSets,
numberOfClusters, predicates, prioritizerPolicy, tolerations, decisionStrategy,
spreadPolicy); search for the symbol "clusterConditions" and delete the entire
block and any trailing commas/empty maps so the resulting Placement spec
validates against the v1beta1 schema.
- Around line 2371-2395: The Placement resources in this template rely on
predicate filtering but lack any ManagedClusterSetBinding, so they will see zero
candidate clusters; add ManagedClusterSetBinding custom resources that bind the
openshift-acm-policies and openshift-rosa-oauth-tpl-policies namespaces to the
appropriate ManagedClusterSet(s) that contain your ROSA/HCP management and
hosted clusters. For each target ManagedClusterSet, create a
ManagedClusterSetBinding with metadata.namespace set to the target namespace
(openshift-acm-policies or openshift-rosa-oauth-tpl-policies) and
spec.managedClusterSet set to the ManagedClusterSet name so the Placement
objects (Placement) in those namespaces can select clusters from that set.
Ensure you add one binding per namespace per ManagedClusterSet that should be
considered by the Placement predicates.
In `@hack/00-osd-managed-cluster-config-production.yaml.tmpl`:
- Around line 23276-23282: The apiGroups list in the template contains a
duplicated value: cluster.open-cluster-management.io was added twice (one
occurrence replaced apps.open-cluster-management.io), so edit the apiGroups
array in hack/00-osd-managed-cluster-config-production.yaml.tmpl to remove the
duplicate cluster.open-cluster-management.io and restore/keep
apps.open-cluster-management.io if intended; ensure each api group value (e.g.,
cluster.open-cluster-management.io and apps.open-cluster-management.io) appears
only once in the apiGroups list.
- Around line 45636-45647: Remove the invalid clusterConditions field from all
Placement v1beta1 resource specs in this template: locate each Placement
resource that contains the clusterConditions key (appearing on each Placement
manifest carried over from PlacementRule v1) and delete the entire
clusterConditions stanza so the spec only contains valid fields like
numberOfClusters, clusterSets, predicates, prioritizerPolicy, tolerations,
decisionStrategy, and spreadPolicy; ensure you remove the clusterConditions
entries from all three Placement manifests so the API server will not reject or
prune the resource and, if you still need to filter by availability, replace it
with a predicates[].requiredClusterSelector.claimSelector-based selector or
policy-based reporting as noted.
In `@hack/00-osd-managed-cluster-config-stage.yaml.tmpl`:
- Around line 23276-23282: The apiGroups list in the RBAC rule contains a
duplicate "cluster.open-cluster-management.io" entry (likely from a migration) —
remove the duplicated "cluster.open-cluster-management.io" so the list contains
each group only once; if the original "apps.open-cluster-management.io" entry is
intentionally required (to tolerate legacy PlacementRule objects), restore
"apps.open-cluster-management.io" instead of dropping it. Locate the RBAC rule's
apiGroups array (the rule that currently lists
cluster.open-cluster-management.io twice) and edit that array to either
deduplicate entries or replace the extra duplicate with
"apps.open-cluster-management.io" as appropriate.
- Around line 45631-45647: The Placement resources for pr-rosa-oauth-tpl-errors,
pr-rosa-oauth-tpl-login, and pr-rosa-oauth-tpl-providers contain an invalid
spec.clusterConditions block; remove that block from each Placement (or replace
it with Placement-native fields) and instead add equivalent tolerations for the
cluster.open-cluster-management.io/unavailable and
cluster.open-cluster-management.io/unreachable taints (or omit if the
controller’s terminating-cluster filter suffices); also update the upstream
generator source
(deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-{errors,login,providers}.Policy.yaml)
so future renders of hack/00-osd-managed-cluster-config-stage.yaml.tmpl do not
reintroduce spec.clusterConditions.
---
Outside diff comments:
In `@deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-errors.Policy.yaml`:
- Around line 42-53: Remove the invalid spec.clusterConditions stanza from the
Placement v1beta1 resources (the Placement spec)—specifically delete the entire
clusterConditions key and its array content while keeping spec.predicates
intact; this should be applied to the three 50-rosa-oauth-tpl-*.Policy.yaml
templates (errors, providers, login) so the Placement conforms to
cluster.open-cluster-management.io/v1beta1 schema and the YAML remains
syntactically valid after removal.
---
Nitpick comments:
In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl`:
- Around line 29963-29985: The resource names still contain the legacy suffix
"-placement-rule" (autoscaler-podmonitor-placement-rule) and its binding
autoscaler-podmonitor-placement-binding; rename them to remove the legacy term
(e.g., autoscaler-podmonitor-placement and autoscaler-podmonitor-binding) or
choose a consistent new name, and update the PlacementBinding.placementRef.name
and any downstream references to match exactly; if you opt to avoid live-cluster
churn keep the current names but add a TODO/note to rename when the resources
are recreated.
In `@hack/00-osd-managed-cluster-config-production.yaml.tmpl`:
- Around line 29963-29986: Manifest resource names still use the old
PlacementRule-era prefixes (e.g., autoscaler-podmonitor-placement-rule,
pr-rosa-oauth-tpl-errors, pr-rosa-oauth-tpl-login, pr-rosa-oauth-tpl-providers)
even though the kind is now Placement; update those Placement resource
metadata.name values to a clearer Placement-oriented naming scheme and also
update any corresponding PlacementBinding objects’ placementRef.name and
metadata.name to match; do this as a follow-up change (not in this PR) and
ensure you also add a migration/reaper plan for the old PlacementRule objects so
they are not orphaned on existing hubs.
In `@hack/00-osd-managed-cluster-config-stage.yaml.tmpl`:
- Around line 29963-29966: The manifest contains legacy resource names still
using "placement-rule" / "pr-" prefixes (e.g.,
autoscaler-podmonitor-placement-rule, pr-rosa-oauth-tpl-errors,
pr-rosa-oauth-tpl-login, pr-rosa-oauth-tpl-providers) while the resources are
now kind: Placement; add a concise inline comment adjacent to each affected
resource declaration (near the occurrences of
autoscaler-podmonitor-placement-rule and the three pr-rosa-oauth-tpl-*
resources) stating these are legacy names from the old PlacementRule API and
that they map to current Placement objects, and add a short TODO noting any
future rename must update PlacementBinding.placementRef.name to avoid accidental
breakage and that a follow-up cleanup can be performed after migration soak.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 8b618b0a-7073-4c3f-a098-ba77725de237
📒 Files selected for processing (7)
- deploy/hypershift-autoscaler-podmonitor/01-autoscaler-podmonitor.yaml
- deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-errors.Policy.yaml
- deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-login.Policy.yaml
- deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-providers.Policy.yaml
- hack/00-osd-managed-cluster-config-integration.yaml.tmpl
- hack/00-osd-managed-cluster-config-production.yaml.tmpl
- hack/00-osd-managed-cluster-config-stage.yaml.tmpl
```diff
       clusterConditions:
       - status: "True"
         type: ManagedClusterConditionAvailable
-      clusterSelector:
-        matchExpressions:
+      predicates:
+      - requiredClusterSelector:
+          labelSelector:
+            matchExpressions:
```
🧩 Analysis chain
🌐 Web query:
In Open Cluster Management, does cluster.open-cluster-management.io/v1beta1 `PlacementSpec` support `clusterConditions`? Please provide the official schema/docs fields list.
💡 Result:
No. In cluster.open-cluster-management.io/v1beta1, PlacementSpec does not support a clusterConditions field.

Official schema/docs fields list for PlacementSpec (v1beta1):
- numberOfClusters [1]
- clusterSets [1]
- predicates [1]
- prioritizerPolicy [1]
- spreadPolicy [1]
- tolerations [1]
- decisionStrategy [1]

Additionally, the user-facing Placement docs describe "Placement conditions" as status/conditions of the Placement resource (e.g., PlacementMisconfigured, PlacementSatisfied), not as a clusterConditions field in the spec [2].
Citations:
- 1: https://github.com/open-cluster-management-io/placement/blob/main/README.md
- 2: https://open-cluster-management.io/docs/concepts/content-placement/placement/
Remove invalid spec.clusterConditions field from Placement v1beta1.
clusterConditions is not a valid field in cluster.open-cluster-management.io/v1beta1 PlacementSpec. It belongs to the deprecated PlacementRule resource. This field will fail schema validation or be silently dropped by the API server, causing unexpected placement behavior.
Suggested fix
```diff
 spec:
-  clusterConditions:
-  - status: "True"
-    type: ManagedClusterConditionAvailable
   predicates:
   - requiredClusterSelector:
       labelSelector:
         matchExpressions:
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-providers.Policy.yaml`
around lines 43 - 49, The Placement manifest includes an invalid
spec.clusterConditions field (the clusterConditions block under the Placement
spec) which is not part of cluster.open-cluster-management.io/v1beta1
PlacementSpec; remove the entire clusterConditions section (the status/type
entries under clusterConditions) so the PlacementSpec contains only valid fields
(e.g., predicates -> requiredClusterSelector -> labelSelector ->
matchExpressions), or if you actually need clusterConditions semantics migrate
this logic to a deprecated PlacementRule resource instead; update the YAML to
delete the clusterConditions subtree and leave the
predicates/requiredClusterSelector intact.
```diff
 - apiVersion: cluster.open-cluster-management.io/v1beta1
   kind: Placement
   metadata:
     name: placement-backplane-acs
     namespace: openshift-acm-policies
   spec:
-    clusterSelector:
-      matchExpressions:
-      - key: api.openshift.com/addon-acs-fleetshard
-        operator: In
-        values:
-        - 'true'
+    predicates:
+    - requiredClusterSelector:
+        labelSelector:
+          matchExpressions:
+          - key: api.openshift.com/addon-acs-fleetshard
+            operator: In
+            values:
+            - 'true'
 - apiVersion: policy.open-cluster-management.io/v1
   kind: PlacementBinding
   metadata:
     name: binding-backplane-acs
     namespace: openshift-acm-policies
   placementRef:
-    apiGroup: apps.open-cluster-management.io
-    kind: PlacementRule
+    apiGroup: cluster.open-cluster-management.io
+    kind: Placement
     name: placement-backplane-acs
   subjects:
   - apiGroup: policy.open-cluster-management.io
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Find any ManagedClusterSetBinding shipped alongside these policy templates
rg -nP --type=yaml -C3 '\bkind:\s*ManagedClusterSetBinding\b'
# And check whether the integration/production/stage templates create them in these two namespaces
rg -nP -C5 'openshift-acm-policies|openshift-rosa-oauth-tpl-policies' \
  hack/00-osd-managed-cluster-config-integration.yaml.tmpl \
  hack/00-osd-managed-cluster-config-production.yaml.tmpl \
  hack/00-osd-managed-cluster-config-stage.yaml.tmpl 2>/dev/null \
  | rg -n 'ManagedClusterSetBinding|ManagedClusterSet\b'
```

Repository: openshift/managed-cluster-config
Length of output: 58
🏁 Script executed:
```bash
#!/bin/bash
# First, let's check the file size and read the specific lines mentioned
wc -l hack/00-osd-managed-cluster-config-integration.yaml.tmpl
echo "---"
# Read lines around the mentioned range
sed -n '2365,2400p' hack/00-osd-managed-cluster-config-integration.yaml.tmpl
```

Repository: openshift/managed-cluster-config
Length of output: 1384
🏁 Script executed:
```bash
#!/bin/bash
# Search for all Placement and ManagedClusterSetBinding resources in the entire repo
echo "=== Searching for Placement resources ==="
rg -n 'kind:\s*Placement' --type yaml
echo ""
echo "=== Searching for ManagedClusterSetBinding resources ==="
rg -n 'kind:\s*ManagedClusterSetBinding' --type yaml
```

Repository: openshift/managed-cluster-config
Length of output: 15703
🏁 Script executed:
```bash
#!/bin/bash
# Search for references to the specific namespaces
echo "=== Searching for openshift-acm-policies namespace ==="
rg -n 'openshift-acm-policies' --type yaml | head -20
echo ""
echo "=== Searching for openshift-rosa-oauth-tpl-policies namespace ==="
rg -n 'openshift-rosa-oauth-tpl-policies' --type yaml | head -20
```

Repository: openshift/managed-cluster-config
Length of output: 4118
🏁 Script executed:
```bash
#!/bin/bash
# Search specifically in hack/ directory for ManagedClusterSetBinding
echo "=== Searching for ManagedClusterSetBinding in hack/ directory ==="
rg -n 'ManagedClusterSetBinding' hack/
echo ""
# Also search for clusterSets in hack templates
echo "=== Searching for clusterSets in hack/ directory ==="
rg -n 'clusterSets' hack/
```

Repository: openshift/managed-cluster-config
Length of output: 196
🏁 Script executed:
```bash
#!/bin/bash
# List the hack files to understand structure
echo "=== Hack files in the repo ==="
find hack/ -type f -name "*.tmpl" -o -name "*.yaml" | head -20
echo ""
# Check if hack files reference the generated deploy files
echo "=== Checking relationship between hack and deploy ==="
head -20 hack/00-osd-managed-cluster-config-integration.yaml.tmpl
```

Repository: openshift/managed-cluster-config
Length of output: 735
🏁 Script executed:
```bash
#!/bin/bash
# Search the entire hack template file for any Placement configurations
# to understand what's at lines 2371-2395 in context
echo "=== Context around lines 2371-2395 in the integration template ==="
sed -n '2360,2410p' hack/00-osd-managed-cluster-config-integration.yaml.tmpl | cat -n
```

Repository: openshift/managed-cluster-config
Length of output: 2394
🏁 Script executed:
```bash
#!/bin/bash
# Search for any scripts that might create ManagedClusterSetBinding
echo "=== Searching for scripts that might create bindings ==="
find . -name "*.sh" -o -name "*.py" -o -name "Makefile" | head -20
echo ""
# Check for README or documentation
echo "=== Looking for documentation files ==="
find . -maxdepth 2 -name "README*" -o -name "*.md" | head -10
```

Repository: openshift/managed-cluster-config
Length of output: 1046
🏁 Script executed:
```bash
#!/bin/bash
# Search for any references to ManagedClusterSet (not binding) in hack files
echo "=== Searching for ManagedClusterSet in hack/ ==="
rg -n 'ManagedClusterSet' hack/
echo ""
# Also check if there are any namespace creation resources that might include bindings
echo "=== Searching for Namespace definitions in hack/ ==="
rg -n 'kind: Namespace' hack/ -A 10 | head -50
```

Repository: openshift/managed-cluster-config
Length of output: 4828
🏁 Script executed:
```bash
#!/bin/bash
# Check README for information about how bindings are handled
echo "=== Checking README ==="
cat README.md | head -100
```

Repository: openshift/managed-cluster-config
Length of output: 7482
🏁 Script executed:
```bash
#!/bin/bash
# Search for any ACM or Placement documentation/references
echo "=== Searching for ACM_BASE_URL or similar config parameters ==="
rg -n 'ManagedClusterSet' . --type yaml
echo ""
echo "=== Checking if there are any references to binding in comments ==="
rg -n -i 'binding' hack/00-osd-managed-cluster-config-integration.yaml.tmpl | head -20
```

Repository: openshift/managed-cluster-config
Length of output: 1039
🏁 Script executed:
```bash
#!/bin/bash
# Check if there's any indication that ManagedClusterSetBinding needs to be created
# Look at the generated deploy files to see if they have any bindings
echo "=== Checking deploy files for ManagedClusterSetBinding ==="
find deploy/ -name "*.yaml" -exec grep -l "ManagedClusterSetBinding" {} \;
echo ""
echo "=== Checking if deployment files include namespace creation ==="
ls -la deploy/acm-policies/ | head -20
```

Repository: openshift/managed-cluster-config
Add ManagedClusterSetBinding resources to enable Placement selectors.
The Placement resources in these templates use predicates but lack an explicit clusterSets field. Under the Placement API, this means the predicate filters will only apply to clusters from ManagedClusterSets explicitly bound to the placement's namespace via ManagedClusterSetBinding. No such bindings exist in the repository, so every Placement will have zero candidate clusters and silently fail to enforce policies.
For the openshift-acm-policies and openshift-rosa-oauth-tpl-policies namespaces, add ManagedClusterSetBinding resources that bind these namespaces to the ManagedClusterSet(s) containing your target ROSA/HCP management and hosted clusters. Without these bindings, policy enforcement will not work after the migration from PlacementRule to Placement.
Reference: https://open-cluster-management.io/docs/concepts/content-placement/placement/
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl` around lines 2371 -
2395, The Placement resources in this template rely on predicate filtering but
lack any ManagedClusterSetBinding, so they will see zero candidate clusters; add
ManagedClusterSetBinding custom resources that bind the openshift-acm-policies
and openshift-rosa-oauth-tpl-policies namespaces to the appropriate
ManagedClusterSet(s) that contain your ROSA/HCP management and hosted clusters.
For each target ManagedClusterSet, create a ManagedClusterSetBinding with
metadata.namespace set to the target namespace (openshift-acm-policies or
openshift-rosa-oauth-tpl-policies) and spec.managedClusterSet set to the
ManagedClusterSet name so the Placement objects (Placement) in those namespaces
can select clusters from that set. Ensure you add one binding per namespace per
ManagedClusterSet that should be considered by the Placement predicates.
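A minimal sketch of such a binding, assuming a ManagedClusterSet named `global` (the set name is an assumption; bind whichever set actually contains the target clusters, and repeat per namespace):

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  name: global                       # must equal spec.clusterSet
  namespace: openshift-acm-policies  # the namespace holding the Placements
spec:
  clusterSet: global                 # ManagedClusterSet made selectable here
```

Note that the binding's field is `spec.clusterSet` and its `metadata.name` is required to match that value; a second binding in openshift-rosa-oauth-tpl-policies would be needed for the oauth-template Placements.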
```yaml
- addon.open-cluster-management.io
- apps.open-cluster-management.io
- cluster.open-cluster-management.io
- cluster.open-cluster-management.io
- clusterview.open-cluster-management.io
- discovery.open-cluster-management.io
```
Duplicate cluster.open-cluster-management.io entry in apiGroups.
Lines 23279 and 23280 both list cluster.open-cluster-management.io. The newly added entry is redundant — the same group was already present (likely covering the existing ManagedCluster/ManagedClusterSet permissions), so the addition introduces a no-op duplicate that will pass through admission but is misleading and will fail strict YAML/RBAC linters that flag duplicates.
🛠️ Proposed fix

```diff
 rules:
 - apiGroups:
   - addon.open-cluster-management.io
-  - cluster.open-cluster-management.io
   - cluster.open-cluster-management.io
   - clusterview.open-cluster-management.io
   - discovery.open-cluster-management.io
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl` around lines 23278
- 23282, Remove the duplicate apiGroups entry
"cluster.open-cluster-management.io" from the relevant Role/ClusterRole
definition in the template so apiGroups contains each group only once; locate
the resource listing apiGroups (the block that also includes permissions for
ManagedCluster/ManagedClusterSet) and delete the redundant
"cluster.open-cluster-management.io" line so the YAML/RBAC linter no longer
flags a duplicate entry.
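As a guard against this class of regression, a hypothetical lint helper (not part of this repo's tooling) could flag duplicated entries in any apiGroups list before the templates are rendered:

```shell
#!/bin/bash
# Hypothetical lint sketch: print any value that appears more than once
# inside apiGroups lists read from stdin. Not the repo's actual tooling.
find_dup_api_groups() {
  awk '
    /apiGroups:/ { inlist = 1; next }               # entering an apiGroups list
    inlist && /^[[:space:]]*- / { print $2; next }  # collect each list entry
    { inlist = 0 }                                  # anything else ends the list
  ' | sort | uniq -d
}

# Example (path illustrative):
# find_dup_api_groups < hack/00-osd-managed-cluster-config-integration.yaml.tmpl
```

Any output lines are duplicates; an empty result means the lists are clean.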
```diff
 - apiVersion: cluster.open-cluster-management.io/v1beta1
   kind: Placement
   metadata:
     name: pr-rosa-oauth-tpl-errors
     namespace: openshift-rosa-oauth-tpl-policies
   spec:
     clusterConditions:
     - status: 'True'
       type: ManagedClusterConditionAvailable
-    clusterSelector:
-      matchExpressions:
-      - key: hypershift.open-cluster-management.io/management-cluster
-        operator: In
-        values:
-        - 'true'
+    predicates:
+    - requiredClusterSelector:
+        labelSelector:
+          matchExpressions:
+          - key: hypershift.open-cluster-management.io/management-cluster
+            operator: In
+            values:
+            - 'true'
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm clusterConditions is not part of Placement v1beta1 spec
fd -e yaml -e tmpl . hack deploy 2>/dev/null | xargs rg -nP '(?s)kind:\s*Placement\b.*?clusterConditions' -U
# Also check the Placement CRD if vendored anywhere
rg -nP --type=yaml -C2 '\bclusterConditions\b'

Repository: openshift/managed-cluster-config
Length of output: 50391
clusterConditions is not a valid field on Placement v1beta1 — schema validation will fail.
Confirmed in deployed files: deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-login.Policy.yaml, 50-rosa-oauth-tpl-errors.Policy.yaml, and 50-rosa-oauth-tpl-providers.Policy.yaml all contain Placement resources with clusterConditions at line 43. The same issue appears in the template at the cited line ranges.
The clusterConditions field belonged to the deprecated PlacementRule (apps.open-cluster-management.io/v1). The new Placement (cluster.open-cluster-management.io/v1beta1) spec does not define a clusterConditions field — its top-level spec fields are clusterSets, numberOfClusters, predicates, prioritizerPolicy, tolerations, decisionStrategy, and spreadPolicy. The API server will reject these resources on creation or update.
Proposed fix — drop the legacy `clusterConditions` block
 spec:
-  clusterConditions:
-  - status: 'True'
-    type: ManagedClusterConditionAvailable
   predicates:
   - requiredClusterSelector:
       labelSelector:
         matchExpressions:
         - key: hypershift.open-cluster-management.io/management-cluster
           operator: In
           values:
           - 'true'

Apply this fix across all three rosa-oauth-tpl placements and anywhere else in the template where this pattern appears.
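For a fuller picture, the complete spec surface that Placement v1beta1 does accept can be sketched as follows — every name and value below is illustrative, not taken from this repository:

```yaml
# Illustrative Placement exercising the full v1beta1 spec field set
# (clusterSets, numberOfClusters, predicates, prioritizerPolicy,
# spreadPolicy, tolerations, decisionStrategy). The metadata names,
# labels, and numbers are placeholders, not resources from this PR.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: example-placement
  namespace: example-policies
spec:
  clusterSets:
  - global
  numberOfClusters: 3
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchLabels:
          env: prod
  prioritizerPolicy:
    mode: Additive
  spreadPolicy:
    spreadConstraints:
    - spreadByLabel: topology.kubernetes.io/zone
  tolerations:
  - key: cluster.open-cluster-management.io/unreachable
    operator: Exists
  decisionStrategy:
    groupStrategy:
      clustersPerDecisionGroup: 10
```

Anything outside this field set — including the legacy clusterConditions and clusterSelector blocks — is subject to rejection or structural-schema pruning.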
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@hack/00-osd-managed-cluster-config-integration.yaml.tmpl` around lines 45631
- 45647, The Placement resources in the template include the obsolete top-level
field "clusterConditions" which is invalid for kind: Placement
(cluster.open-cluster-management.io/v1beta1); remove all "clusterConditions"
blocks from any Placement documents (e.g., the rosa-oauth-tpl
login/errors/providers Placement entries) and any other occurrences in the
template, leaving only supported spec fields (clusterSets, numberOfClusters,
predicates, prioritizerPolicy, tolerations, decisionStrategy, spreadPolicy);
search for the symbol "clusterConditions" and delete the entire block and any
trailing commas/empty maps so the resulting Placement spec validates against the
v1beta1 schema.
rules:
- apiGroups:
  - addon.open-cluster-management.io
  - apps.open-cluster-management.io
  - cluster.open-cluster-management.io
  - cluster.open-cluster-management.io
  - clusterview.open-cluster-management.io
  - discovery.open-cluster-management.io
Duplicate cluster.open-cluster-management.io entry in apiGroups.
It appears the previous apps.open-cluster-management.io was rewritten to cluster.open-cluster-management.io (line 23279) without noticing that an entry with the same value already existed on the next line (23280). The list should be deduplicated.
♻️ Proposed fix
 rules:
 - apiGroups:
   - addon.open-cluster-management.io
   - cluster.open-cluster-management.io
-  - cluster.open-cluster-management.io
   - clusterview.open-cluster-management.io
   - discovery.open-cluster-management.io

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@hack/00-osd-managed-cluster-config-production.yaml.tmpl` around lines 23276 -
23282, The apiGroups list in the template contains a duplicated value:
cluster.open-cluster-management.io was added twice (one occurrence replaced
apps.open-cluster-management.io), so edit the apiGroups array in
hack/00-osd-managed-cluster-config-production.yaml.tmpl to remove the duplicate
cluster.open-cluster-management.io and restore/keep
apps.open-cluster-management.io if intended; ensure each api group value (e.g.,
cluster.open-cluster-management.io and apps.open-cluster-management.io) appears
only once in the apiGroups list.
spec:
  clusterConditions:
  - status: 'True'
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
    - key: hypershift.open-cluster-management.io/management-cluster
      operator: In
      values:
      - 'true'
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
        - key: hypershift.open-cluster-management.io/management-cluster
          operator: In
          values:
          - 'true'
🧩 Analysis chain
🌐 Web query:
Does cluster.open-cluster-management.io/v1beta1 Placement spec accept a clusterConditions field, or is it a PlacementRule-only field?
💡 Result:
In Open Cluster Management, the Placement CRD/spec at apiVersion cluster.open-cluster-management.io/v1beta1 does not define a spec.clusterConditions field; instead, it uses a PlacementSpec that includes fields like predicates, prioritizerPolicy, numberOfClusters, spreadPolicy, tolerations, and decisionStrategy, and any “conditions” you see are represented under Placement status (Placement conditions such as PlacementMisconfigured/PlacementSatisfied), not as a spec.clusterConditions input [1][2]. There is also not a PlacementRule-only “clusterConditions” field in the Placement API—OCM’s Placement API is a PlacementSpec, and the only conditions described for Placement are status conditions like PlacementMisconfigured/PlacementSatisfied [2][3]. So: cluster.open-cluster-management.io/v1beta1 Placement spec does not accept a clusterConditions field; there is no PlacementRule-only clusterConditions in this API—“conditions” are about Placement status, not user-provided clusterConditions [2][1].
Citations:
- 1: https://pkg.go.dev/open-cluster-management.io/api/cluster/v1beta1
- 2: https://open-cluster-management.io/docs/concepts/content-placement/placement/
- 3: https://open-cluster-management.io/concepts/placement/
🏁 Script executed:
#!/bin/bash
# Check if the file exists
if [ -f "hack/00-osd-managed-cluster-config-production.yaml.tmpl" ]; then
echo "=== File exists ==="
# Check line count
wc -l hack/00-osd-managed-cluster-config-production.yaml.tmpl
# Check lines 45635-45650 (around 45637-45639)
echo -e "\n=== Lines 45635-45650 ==="
sed -n '45635,45650p' hack/00-osd-managed-cluster-config-production.yaml.tmpl
# Check lines 45698-45715 (around 45700-45703)
echo -e "\n=== Lines 45698-45715 ==="
sed -n '45698,45715p' hack/00-osd-managed-cluster-config-production.yaml.tmpl
# Check lines 45762-45779 (around 45764-45767)
echo -e "\n=== Lines 45762-45779 ==="
sed -n '45762,45779p' hack/00-osd-managed-cluster-config-production.yaml.tmpl
else
echo "File not found at hack/00-osd-managed-cluster-config-production.yaml.tmpl"
# Try to find it
find . -name "*00-osd-managed-cluster-config-production*" 2>/dev/null
fi

Repository: openshift/managed-cluster-config
Length of output: 2007
🏁 Script executed:
#!/bin/bash
# Check the apiVersion for each of the three Placement resources
echo "=== Lines 45620-45635 (before first clusterConditions) ==="
sed -n '45620,45635p' hack/00-osd-managed-cluster-config-production.yaml.tmpl
echo -e "\n=== Lines 45680-45700 (before second clusterConditions) ==="
sed -n '45680,45700p' hack/00-osd-managed-cluster-config-production.yaml.tmpl
echo -e "\n=== Lines 45744-45765 (before third clusterConditions) ==="
sed -n '45744,45765p' hack/00-osd-managed-cluster-config-production.yaml.tmpl

Repository: openshift/managed-cluster-config
Length of output: 50389
🏁 Script executed:
#!/bin/bash
# Search for kind: Placement and apiVersion near the clusterConditions lines
echo "=== Searching for Placement kind and apiVersion around lines 45600-45650 ==="
rg "kind: Placement|apiVersion:" hack/00-osd-managed-cluster-config-production.yaml.tmpl -n | grep -E "456[0-4][0-9]:|456[5-9][0-9]:" | head -10
echo -e "\n=== Searching for Placement kind and apiVersion around lines 45650-45750 ==="
rg "kind: Placement|apiVersion:" hack/00-osd-managed-cluster-config-production.yaml.tmpl -n | grep -E "457[0-4][0-9]:" | head -10
echo -e "\n=== Searching for Placement kind and apiVersion around lines 45750-45800 ==="
rg "kind: Placement|apiVersion:" hack/00-osd-managed-cluster-config-production.yaml.tmpl -n | grep -E "457[5-9][0-9]:|458[0-0][0-9]:" | head -10

Repository: openshift/managed-cluster-config
Length of output: 1385
Remove clusterConditions from Placement v1beta1 resources — this field is invalid and will be rejected or silently pruned.
The cluster.open-cluster-management.io/v1beta1 Placement spec does not support clusterConditions. Valid spec fields are: numberOfClusters, clusterSets, predicates, prioritizerPolicy, tolerations, decisionStrategy, and spreadPolicy.
This field exists on all three Placement resources (lines 45637–45639, 45701–45703, 45765–45767) and was carried over from PlacementRule v1. On the new Placement CRD, it will either be rejected by strict schema validation or silently pruned by the API server, silently dropping the ManagedClusterConditionAvailable=True availability filter.
Note: The Placement controller already filters out terminating clusters by default (OCM v0.14.0+) but does not enforce the ManagedClusterConditionAvailable=True condition that PlacementRule provided. If you need to gate on cluster availability, use predicates[].requiredClusterSelector.claimSelector against an appropriate ClusterClaim, or rely on the policy framework's compliance reporting.
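If an availability gate is still wanted, a claim-based predicate along these lines could stand in for the removed block — note the ClusterClaim key below is hypothetical; no such claim is defined in this PR:

```yaml
# Hypothetical replacement for the removed clusterConditions gate.
# The ClusterClaim key "available.example.io" is a placeholder; an
# actual claim would have to be populated on each managed cluster.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: pr-rosa-oauth-tpl-errors
  namespace: openshift-rosa-oauth-tpl-policies
spec:
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchExpressions:
        - key: hypershift.open-cluster-management.io/management-cluster
          operator: In
          values:
          - 'true'
      claimSelector:
        matchExpressions:
        - key: available.example.io   # hypothetical ClusterClaim name
          operator: In
          values:
          - 'true'
```

Both selectors inside a single requiredClusterSelector must match for a cluster to be selected, so the claim acts as an AND-ed availability condition on top of the existing label filter.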
Remove clusterConditions from all three resources
 spec:
-  clusterConditions:
-  - status: 'True'
-    type: ManagedClusterConditionAvailable
   predicates:

Apply this removal at lines 45637–45639, 45701–45703, and 45765–45767.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@hack/00-osd-managed-cluster-config-production.yaml.tmpl` around lines 45636 -
45647, Remove the invalid clusterConditions field from all Placement v1beta1
resource specs in this template: locate each Placement resource that contains
the clusterConditions key (appearing on each Placement manifest carried over
from PlacementRule v1) and delete the entire clusterConditions stanza so the
spec only contains valid fields like numberOfClusters, clusterSets, predicates,
prioritizerPolicy, tolerations, decisionStrategy, and spreadPolicy; ensure you
remove the clusterConditions entries from all three Placement manifests so the
API server will not reject or prune the resource and, if you still need to
filter by availability, replace it with a
predicates[].requiredClusterSelector.claimSelector-based selector or
policy-based reporting as noted.
rules:
- apiGroups:
  - addon.open-cluster-management.io
  - apps.open-cluster-management.io
  - cluster.open-cluster-management.io
  - cluster.open-cluster-management.io
  - clusterview.open-cluster-management.io
  - discovery.open-cluster-management.io
Duplicate cluster.open-cluster-management.io apiGroup in rules.
Lines 23279 and 23280 both list cluster.open-cluster-management.io. This is almost certainly a migration leftover where apps.open-cluster-management.io (the old PlacementRule group) was rewritten to cluster.open-cluster-management.io even though that group was already present in the same apiGroups list. Functionally harmless (Kubernetes deduplicates), but it's clutter that obscures the real diff and is best cleaned up at the source.
 rules:
 - apiGroups:
   - addon.open-cluster-management.io
-  - cluster.open-cluster-management.io
   - cluster.open-cluster-management.io
   - clusterview.open-cluster-management.io
   - discovery.open-cluster-management.io

If apps.open-cluster-management.io is still needed elsewhere (e.g. for tolerating any lingering PlacementRule objects during rollout), restore that entry instead of dropping it.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@hack/00-osd-managed-cluster-config-stage.yaml.tmpl` around lines 23276 -
23282, The apiGroups list in the RBAC rule contains a duplicate
"cluster.open-cluster-management.io" entry (likely from a migration) — remove
the duplicated "cluster.open-cluster-management.io" so the list contains each
group only once; if the original "apps.open-cluster-management.io" entry is
intentionally required (to tolerate legacy PlacementRule objects), restore
"apps.open-cluster-management.io" instead of dropping it. Locate the RBAC rule's
apiGroups array (the rule that currently lists
cluster.open-cluster-management.io twice) and edit that array to either
deduplicate entries or replace the extra duplicate with
"apps.open-cluster-management.io" as appropriate.
- apiVersion: cluster.open-cluster-management.io/v1beta1
  kind: Placement
  metadata:
    name: pr-rosa-oauth-tpl-errors
    namespace: openshift-rosa-oauth-tpl-policies
  spec:
    clusterConditions:
    - status: 'True'
      type: ManagedClusterConditionAvailable
    clusterSelector:
      matchExpressions:
      - key: hypershift.open-cluster-management.io/management-cluster
        operator: In
        values:
        - 'true'
    predicates:
    - requiredClusterSelector:
        labelSelector:
          matchExpressions:
          - key: hypershift.open-cluster-management.io/management-cluster
            operator: In
            values:
            - 'true'
🧩 Analysis chain
🌐 Web query:
Does cluster.open-cluster-management.io/v1beta1 Placement spec support a clusterConditions field, or is it PlacementRule-only?
💡 Result:
In cluster.open-cluster-management.io/v1beta1, the Placement spec does not have a spec.clusterConditions field; Placement is configured via predicates/prioritizers (and related scheduling fields), and it reports selection status via status.conditions (PlacementSatisfied/PlacementMisconfigured) rather than a clusterConditions spec field [1][2]. So it is Placement-only (not PlacementRule-only): Placement supports status-based condition reporting, but there is no PlacementSpec.clusterConditions field in v1beta1 according to the public API docs and Go type documentation for PlacementSpec/conditions [1][2].
Citations:
- 1: https://open-cluster-management.io/docs/concepts/content-placement/placement/
- 2: https://pkg.go.dev/open-cluster-management.io/api/cluster/v1beta1
clusterConditions is not a valid field in Placement (cluster.open-cluster-management.io/v1beta1) — these resources will fail validation.
The three rosa-oauth-tpl placements (pr-rosa-oauth-tpl-errors, pr-rosa-oauth-tpl-login, pr-rosa-oauth-tpl-providers) still carry the spec.clusterConditions block from the old PlacementRule schema:
spec:
  clusterConditions:
  - status: 'True'
    type: ManagedClusterConditionAvailable
  predicates:
  - requiredClusterSelector:
      ...

Placement v1beta1 does not support clusterConditions in its spec. It only supports numberOfClusters, clusterSets, predicates, prioritizerPolicy, spreadPolicy, and tolerations. With CRD structural-schema pruning, clusterConditions will be silently dropped or rejected by the API server, and the intended availability gate will be lost either way.
Replace clusterConditions with Placement-native equivalents — typically tolerations against the cluster.open-cluster-management.io/unavailable and cluster.open-cluster-management.io/unreachable taints — or remove the block if the controller's built-in terminating-cluster filter is sufficient. This generated template likely inherits the issue from deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-{errors,login,providers}.Policy.yaml; fixing it there should regenerate this template correctly.
Suggested fix (apply to all three blocks)
 - apiVersion: cluster.open-cluster-management.io/v1beta1
   kind: Placement
   metadata:
     name: pr-rosa-oauth-tpl-errors
     namespace: openshift-rosa-oauth-tpl-policies
   spec:
-    clusterConditions:
-    - status: 'True'
-      type: ManagedClusterConditionAvailable
     predicates:
     - requiredClusterSelector:
         labelSelector:
           matchExpressions:
           - key: hypershift.open-cluster-management.io/management-cluster
             operator: In
             values:
             - 'true'
+    tolerations:
+    - key: cluster.open-cluster-management.io/unavailable
+      operator: Exists
+    - key: cluster.open-cluster-management.io/unreachable
+      operator: Exists

Also applies to: 45695-45711, 45759-45775
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@hack/00-osd-managed-cluster-config-stage.yaml.tmpl` around lines 45631 -
45647, The Placement resources for pr-rosa-oauth-tpl-errors,
pr-rosa-oauth-tpl-login, and pr-rosa-oauth-tpl-providers contain an invalid
spec.clusterConditions block; remove that block from each Placement (or replace
it with Placement-native fields) and instead add equivalent tolerations for the
cluster.open-cluster-management.io/unavailable and
cluster.open-cluster-management.io/unreachable taints (or omit if the
controller’s terminating-cluster filter suffices); also update the upstream
generator source
(deploy/rosa-oauth-templates-policies/50-rosa-oauth-tpl-{errors,login,providers}.Policy.yaml)
so future renders of hack/00-osd-managed-cluster-config-stage.yaml.tmpl do not
reintroduce spec.clusterConditions.
Ajpantuso
left a comment
The hack/* files should not be modified directly as they will be generated with the make generate-hive-templates target.
The primary change here will be to update the PolicyGen Version. The latest version will fully remove PlacementRule generation.
I suspect the Placements still need to be updated like you are doing with this PR, but let's see what the new PolicyGen output looks like
Thanks @Ajpantuso, I will update the Policy Generator version in the Makefile. I believe we would also need similar changes in the Dockerfile and Dockerfile.prow too.
…sources Signed-off-by: Ankit152 <ankitkurmi152@gmail.com>
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Ankit152
The full list of commands accepted by this bot can be found here.

Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@Ankit152: The following test failed, say
Full PR test history. Your PR dashboard.

Details
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What type of PR is this?
(bug/feature/cleanup/documentation)
Cleanup
What this PR does / why we need it?
Migrate ACM PlacementRule resources to Placement resources
Which Jira/Github issue(s) this PR fixes?
Fixes SLSRE-534
Special notes for your reviewer:
Pre-checks (if applicable):
Tested latest changes against a cluster
Included documentation changes with PR
If this is a new object that is not intended for the FedRAMP environment (if unsure, please reach out to team FedRAMP), please exclude it with:
Summary by CodeRabbit