
HYPERFLEET-722 - fix: update adapter configs to follow new standard #18

Open
rh-amarin wants to merge 1 commit into openshift-hyperfleet:main from rh-amarin:update-adapter-configs

Conversation

@rh-amarin
Contributor

@rh-amarin rh-amarin commented Mar 6, 2026

https://issues.redhat.com/browse/HYPERFLEET-722

Update the adapters to follow the new standard.

I also updated adapter3 to make use of the newly available field event.owner_references.id in nodepool events.

Summary by CodeRabbit

  • Refactor

    • Consolidated and flattened adapter and task configs into a single adapter-root layout, removing legacy wrappers.
    • Standardized field naming to snake_case and unified logging/debug layout across adapters.
    • Simplified discovery and resource schemas for more consistent manifest-oriented handling.
  • New Features

    • Added top-level params (cluster, namespace, identifiers) and richer message payload content.
    • Introduced structured preconditions/validation checks and manifest-style resource orchestration.
    • Enhanced post-action/status payloads for clearer, more detailed reporting.

@coderabbitai

coderabbitai bot commented Mar 6, 2026

Walkthrough

This PR flattens and renames fields across multiple Helm adapter configs and adapter-task-configs for adapter1, adapter2, and adapter3. Top-level Kubernetes-style wrappers (apiVersion/kind/metadata/spec) are removed and replaced by a single root adapter object; many keys are converted from camelCase to snake_case (e.g., hyperfleetApi → hyperfleet_api, subscriptionId → subscription_id, kubeConfigPath → kube_config_path). Adapter versions bumped to 0.2.0. Task configs are restructured: params/preconditions promoted, validationCheck/CEL expressions formalized, resources/manifests and discovery reorganized (ManifestWork in adapter2), and post/post_actions payloads and status extraction consolidated. sentinel-nodepools adds messageData fields.
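As an illustration of the restructuring described above (the field values and extra keys shown here are hypothetical, not copied from the charts), the config shape changes roughly like this:

```yaml
# Before (hypothetical excerpt): Kubernetes-style wrapper, camelCase keys
apiVersion: hyperfleet.io/v1alpha1
kind: AdapterConfig
metadata:
  name: adapter1
spec:
  hyperfleetApi:
    baseUrl: http://hyperfleet-api:8000
  kubeConfigPath: /var/run/kubeconfig

# After: single root adapter object, snake_case keys, version bumped
adapter:
  version: "0.2.0"
  hyperfleet_api:
    base_url: http://hyperfleet-api:8000
  kube_config_path: /var/run/kubeconfig
```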

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant EventSource as External Event
    participant Adapter as Adapter (helm config)
    participant Hyperfleet as Hyperfleet API
    participant Maestro as Maestro/ManifestWork
    participant K8s as Kubernetes API
    Note over EventSource,Adapter: 1) Event triggers adapter task with params
    EventSource->>Adapter: provide params (cluster_id, nodepool_id, namespace, etc.)
    Adapter->>Hyperfleet: precondition api_call (GET /clusters/{{cluster_id}}/nodepools/{{nodepool_id}})
    Hyperfleet-->>Adapter: capture fields (readyConditionStatus, generation, name)
    Adapter->>Adapter: evaluate validationCheck (CEL)
    alt validation passes
        Adapter->>Maestro: submit ManifestWork / resources
        Maestro->>K8s: apply manifests (Namespace, ConfigMap, ...)
        K8s-->>Maestro: resource status
        Maestro-->>Adapter: discovery / observed status
        Adapter->>Hyperfleet: post_action api_call (POST status payload)
        Hyperfleet-->>Adapter: ack
    else validation fails
        Adapter->>Hyperfleet: post_action api_call (validation failure payload)
        Hyperfleet-->>Adapter: ack
    end
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Suggested reviewers

  • ciaranRoche
🚥 Pre-merge checks
✅ Passed checks (3 passed)
  • Description Check: Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: Passed. The title accurately summarizes the main change: updating adapter configs to follow a new standard across multiple adapter configuration files.
  • Docstring Coverage: Passed. No functions found in the changed files to evaluate; docstring coverage check skipped.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (1)
helm/adapter2/adapter-task-config.yaml (1)

39-41: Replace hardcoded placement target before multi-cluster rollout.

Line 40 hardcodes placementClusterName to "cluster1", so every run resolves to the same target_cluster (Line 60). This should come from placement output/event data or an explicit deploy-time input.

If helpful, I can draft the param/capture wiring once the placement field contract is finalized.
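A non-authoritative sketch of that wiring, assuming the placement result is carried on the event (the field path, fallback param, and names below are placeholders until the placement field contract is finalized):

```yaml
params:
  # Hypothetical replacement for the hardcoded literal: read the target cluster
  # from placement output on the event, with a deploy-time fallback
  - name: "placementClusterName"
    expression: "event.?placement.?cluster_name.orValue(params.defaultPlacementCluster)"
    description: "Unique identifier for the target maestro"
```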


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: cc85da2f-0bfc-4afa-b520-1286192b0baa

📥 Commits

Reviewing files that changed from the base of the PR and between 15ec000 and f796f8d.

📒 Files selected for processing (6)
  • helm/adapter1/adapter-config.yaml
  • helm/adapter1/adapter-task-config.yaml
  • helm/adapter2/adapter-config.yaml
  • helm/adapter2/adapter-task-config.yaml
  • helm/adapter3/adapter-config.yaml
  • helm/adapter3/adapter-task-config.yaml

helm/adapter1/adapter-config.yaml, line 7:

```yaml
log:
  level: debug
  # Log the full merged configuration after load (default: false)
  debug_config: true
```

⚠️ Potential issue | 🟠 Major

Disable merged-config debug logging by default.

Line 7 enables verbose merged-config logging. Keep this disabled by default to reduce accidental sensitive-data exposure in logs.

🔧 Suggested change:

```diff
-debug_config: true
+debug_config: false
```

helm/adapter2/adapter-config.yaml, line 7:

```yaml
log:
  level: debug
  # Log the full merged configuration after load (default: false)
  debug_config: true
```

⚠️ Potential issue | 🟠 Major

Disable merged-config debug logging by default.

Line 7 enables full merged-config logging. Keep this off by default to avoid leaking env-injected configuration details.

🔧 Suggested change:

```diff
-debug_config: true
+debug_config: false
```

Comment on helm/adapter2/adapter-config.yaml, lines 33 to 37:

```yaml
source_id: "hyperfleet-adapter"

# Client identifier (defaults to source_id if not specified)
# Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
client_id: "hyperfleet-adapter-client"
```

⚠️ Potential issue | 🟠 Major

Use adapter-specific Maestro identifiers.

Line 33 and Line 37 use generic IDs, while the inline comment says source_id must be unique across adapters. Generic IDs increase routing/attribution collision risk.

🔧 Suggested change:

```diff
-    source_id: "hyperfleet-adapter"
+    source_id: "adapter2"
@@
-    client_id: "hyperfleet-adapter-client"
+    client_id: "adapter2-client"
```

helm/adapter2/adapter-config.yaml, line 38:

```yaml
# Client identifier (defaults to source_id if not specified)
# Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
client_id: "hyperfleet-adapter-client"
insecure: true
```

⚠️ Potential issue | 🟠 Major

Avoid insecure Maestro transport by default.

Line 38 sets insecure: true, which disables TLS verification and weakens connection security.

🔧 Suggested change:

```diff
-    insecure: true
+    insecure: false
```
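For context, a hedged sketch of what enforcing TLS might look like in the transport section (the certificate key names are illustrative assumptions; the actual schema of the Maestro transport block should be checked against the chart):

```yaml
maestro:
  # TLS verification enforced; certificate paths below are assumptions
  insecure: false
  ca_file: /etc/maestro/tls/ca.crt
  client_cert_file: /etc/maestro/tls/tls.crt
  client_key_file: /etc/maestro/tls/tls.key
```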

helm/adapter3/adapter-config.yaml, line 7:

```yaml
log:
  level: debug
  # Log the full merged configuration after load (default: false)
  debug_config: true
```

⚠️ Potential issue | 🟠 Major

Disable merged-config debug logging by default.

Line 7 sets debug_config: true even though the comment documents default: false; this can expose merged runtime config (including sensitive env-derived values) in logs.

🔧 Suggested change:

```diff
-debug_config: true
+debug_config: false
```


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
helm/adapter1/adapter-config.yaml (1)

6-9: ⚠️ Potential issue | 🟠 Major

Disable merged-config logging by default.

Line 7 turns full merged-config dumping back on. Once deploy-time values inject credentials or tokens, this is an easy way to leak them into logs.

🔧 Suggested change:

```diff
-debug_config: true
+debug_config: false
```
helm/adapter1/adapter-task-config.yaml (1)

133-136: ⚠️ Potential issue | 🟠 Major

Fix the status data extractor path.

Lines 133 and 136 still use resources.?resources.resource0..., so these fields walk a non-existent intermediate key and the payload data comes back empty.

🔧 Suggested change:

```diff
-                resources.?resources.resource0.?metadata.?name.orValue("")
+                resources.?resource0.?metadata.?name.orValue("")
@@
-                resources.?resources.resource0.?metadata.?creationTimestamp.orValue("")
+                resources.?resource0.?metadata.?creationTimestamp.orValue("")
```

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 77ebfba6-c75b-4960-80af-01b427fcc61b

📥 Commits

Reviewing files that changed from the base of the PR and between f796f8d and d7e26fe.

📒 Files selected for processing (7)
  • helm/adapter1/adapter-config.yaml
  • helm/adapter1/adapter-task-config.yaml
  • helm/adapter2/adapter-config.yaml
  • helm/adapter2/adapter-task-config.yaml
  • helm/adapter3/adapter-config.yaml
  • helm/adapter3/adapter-task-config.yaml
  • helm/sentinel-nodepools/values.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
  • helm/adapter3/adapter-config.yaml
  • helm/adapter2/adapter-config.yaml

Comment on helm/adapter2/adapter-task-config.yaml, lines 39 to 41:

```yaml
- name: "placementClusterName"
  expression: "\"cluster1\"" # TBC coming from placement adapter
  description: "Unique identifier for the target maestro"
```

⚠️ Potential issue | 🟠 Major

Don't hardcode the Maestro target cluster.

Lines 39-41 pin placementClusterName to "cluster1", and Line 60 then uses it as target_cluster. That routes every ManifestWork to the same cluster regardless of the actual placement result.


Comment on lines +122 to +135
- type: "Health"
status:
expression: |
has(resources.resource0.data.nodepoolId)
? "True"
: "False"
reason:
expression: |
has(resources.resource0.data.nodepoolId)
? "ConfigMap data available"
: "ConfigMap data not yet available"
message:
expression: |
toJson(resources.resource0)

⚠️ Potential issue | 🟠 Major

Restore runtime-based health reporting.

Lines 124-135 only check whether the discovered ConfigMap contains data.nodepoolId, so a failed run can still emit Health=True. The message also becomes raw resource JSON instead of the execution error.

🔧 Suggested change:

```diff
           - type: "Health"
             status:
               expression: |
-                has(resources.resource0.data.nodepoolId)
-                  ? "True"
-                  : "False"
+                adapter.?executionStatus.orValue("") == "success"
+                  ? "True"
+                  : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
             reason:
               expression: |
-                has(resources.resource0.data.nodepoolId)
-                  ? "ConfigMap data available"
-                  : "ConfigMap data not yet available"
+                adapter.?errorReason.orValue("") != ""
+                  ? adapter.?errorReason.orValue("")
+                  : "Healthy"
             message:
               expression: |
-                toJson(resources.resource0)
+                adapter.?errorMessage.orValue("") != ""
+                  ? adapter.?errorMessage.orValue("")
+                  : "All adapter operations completed successfully"
```

@rh-amarin rh-amarin force-pushed the update-adapter-configs branch from d7e26fe to 79b19fe on March 11, 2026 at 11:59
helm/sentinel-nodepools/values.yaml:

```yaml
resourceSelector: []
hyperfleetApi:
  baseUrl: http://hyperfleet-api:8000
messageData:
```
Contributor


Just something to keep in mind — adapter3's task config now pulls clusterId from event.owner_references.id, which only gets populated if sentinel-nodepools is deployed with this new messageData block. So there's a cross-component coupling here: if someone updates the adapter charts but doesn't pick up this sentinel change, adapter3 will silently fail to extract clusterId from events.

Might be worth a comment here noting that adapter3 depends on this, or adding some kind of fallback/validation in the adapter3 param definition so it fails loudly instead of just getting an empty string.
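A hedged sketch of the fail-loud variant in adapter3's param and precondition definitions (the key names and CEL expressions here are assumptions based on the snippets in this review, not the actual schema):

```yaml
params:
  - name: "clusterId"
    # Populated only when sentinel-nodepools ships the new messageData block
    expression: "event.?owner_references.?id.orValue(\"\")"
    description: "Cluster owning the nodepool"
preconditions:
  # Hypothetical check so a missing owner_references.id fails loudly
  # instead of silently producing an empty clusterId
  - name: "clusterIdPresent"
    validationCheck: 'params.clusterId != ""'
```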


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
helm/adapter1/adapter-task-config.yaml (1)

130-136: ⚠️ Potential issue | 🟠 Major

Invalid resource accessor path causes empty data extraction.

Lines 133 and 136 use resources.?resources.resource0... which contains an extra .resources segment. The correct pattern (used in adapter2 and adapter3) is resources.?resource0.... This will cause the expressions to return empty strings instead of actual metadata.

🔧 Suggested fix:

```diff
         data:
           namespace:
             name:
               expression: |
-                resources.?resources.resource0.?metadata.?name.orValue("")
+                resources.?resource0.?metadata.?name.orValue("")
             creationTimestamp:
               expression: |
-                resources.?resources.resource0.?metadata.?creationTimestamp.orValue("")
+                resources.?resource0.?metadata.?creationTimestamp.orValue("")
```
🧹 Nitpick comments (2)
helm/adapter1/adapter-task-config.yaml (1)

43-46: Consider removing redundant validation check.

The validationCheck (Line 45-46) duplicates the condition already evaluated at Line 39-41 within clusterStatus.conditions. Both check readyConditionStatus == "False". If both must pass for the task to proceed, keeping only one is cleaner.

helm/adapter3/adapter-task-config.yaml (1)

48-51: Consider removing redundant validation check.

The validationCheck duplicates the condition already evaluated at Lines 44-46 within nodepoolStatus.conditions. Both check readyConditionStatus == "False".


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 17f5329b-2856-4800-956b-fd3a2e64bb1b

📥 Commits

Reviewing files that changed from the base of the PR and between d7e26fe and 79b19fe.

📒 Files selected for processing (7)
  • helm/adapter1/adapter-config.yaml
  • helm/adapter1/adapter-task-config.yaml
  • helm/adapter2/adapter-config.yaml
  • helm/adapter2/adapter-task-config.yaml
  • helm/adapter3/adapter-config.yaml
  • helm/adapter3/adapter-task-config.yaml
  • helm/sentinel-nodepools/values.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
  • helm/sentinel-nodepools/values.yaml

kind: "resource.kind"
href: "resource.href"
generation: "resource.generation"
owner_references:

`owner_references: "resource.owner_references"` would also work.
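As a minimal sketch of the two equivalent mappings being discussed (the surrounding keys are illustrative; only the field names come from the snippet above):

```yaml
# Option 1: flattened form, mapping each owner-reference field individually
owner_references:
  id: "resource.owner_references.id"

# Option 2: direct form, mapping the whole object at once (as suggested)
# owner_references: "resource.owner_references"
```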

@rh-amarin rh-amarin force-pushed the update-adapter-configs branch from 79b19fe to 3382942 Compare March 12, 2026 07:13

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@helm/adapter1/adapter-task-config.yaml`:
- Around line 32-36: The precondition capture "readyConditionStatus" currently
calls status.conditions.filter(...) which will throw if status.conditions is
missing; update the expression to defensively check that status.conditions
exists (e.g., has(status.conditions) && status.conditions.size() > 0) before
calling .filter(), and only
then evaluate the Ready condition with the existing ternary fallback to "False";
modify the expression for "readyConditionStatus" to guard access to
status.conditions and preserve the same output semantics.

In `@helm/adapter2/adapter-task-config.yaml`:
- Around line 34-38: The readyConditionStatus expression accesses
status.conditions directly which can cause runtime errors if status or
status.conditions is missing; update the expression tied to name
"readyConditionStatus" to first guard with has(status, "conditions") (and
optionally has(status, "conditions") and status.conditions.size() > 0) before
filtering — mirror the has() guard style used elsewhere in the file so the
ternary only evaluates the filter when conditions exist, otherwise return
"False".

In `@helm/adapter3/adapter-task-config.yaml`:
- Around line 37-41: The readyConditionStatus expression calls
status.conditions.filter(...) without checking that status.conditions exists;
update the expression for the "readyConditionStatus" field to first guard that
status.conditions is present (e.g., has(status.conditions) &&
status.conditions.size() > 0) before calling filter so the fallback
"False" is reachable when conditions are absent, mirroring the defensive pattern
used in adapter2's post-processing conditions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: c6cc3b29-c597-49ec-be6f-7892dc165da1

📥 Commits

Reviewing files that changed from the base of the PR and between 79b19fe and 3382942.

📒 Files selected for processing (7)
  • helm/adapter1/adapter-config.yaml
  • helm/adapter1/adapter-task-config.yaml
  • helm/adapter2/adapter-config.yaml
  • helm/adapter2/adapter-task-config.yaml
  • helm/adapter3/adapter-config.yaml
  • helm/adapter3/adapter-task-config.yaml
  • helm/sentinel-nodepools/values.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
  • helm/sentinel-nodepools/values.yaml

Comment on lines +32 to +36
- name: "readyConditionStatus"
expression: |
status.conditions.filter(c, c.type == "Ready").size() > 0
? status.conditions.filter(c, c.type == "Ready")[0].status
: "False"

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find and examine the adapter-task-config.yaml file
find . -name "adapter-task-config.yaml" -type f

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 203


🏁 Script executed:

# Once we find it, let's read the full context around the flagged lines
# First, let's see what files match the pattern
fd -t f "adapter-task-config.yaml"

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 197


🏁 Script executed:

# Look for the file in helm directory
find . -path "*/helm/adapter1/adapter-task-config.yaml" -type f 2>/dev/null || echo "File not found in expected path"

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 121


🏁 Script executed:

# Search more broadly for adapter-task-config files
find . -name "*adapter*task*config*" -type f 2>/dev/null | head -20

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 203


🏁 Script executed:

# Read the adapter-task-config.yaml file to see the full context
cat -n helm/adapter1/adapter-task-config.yaml | head -50

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 1655


🏁 Script executed:

# Also check if this same pattern exists in other adapter configs
cat -n helm/adapter2/adapter-task-config.yaml | head -50

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 1712


🏁 Script executed:

# Get line count to understand file size
wc -l helm/adapter1/adapter-task-config.yaml

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 123


🏁 Script executed:

# Check the full content of adapter2 to see if the pattern differs
cat -n helm/adapter2/adapter-task-config.yaml | head -50

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 1712


🏁 Script executed:

# Check adapter3 as well
cat -n helm/adapter3/adapter-task-config.yaml | head -50

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 1687


🏁 Script executed:

# Search for other CEL expressions in the codebase to understand patterns
rg "expression:" helm/ -A 2 -B 1 | head -100

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 7448


🏁 Script executed:

# Look for documentation or comments about CEL usage
rg "CEL|has\(" helm/ -B 2 -A 2

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 15626


🏁 Script executed:

# Check if there's any API documentation or comments about status.conditions
rg "status.conditions" helm/ -B 3 -A 3 | head -80

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 5089


🏁 Script executed:

# Look for API endpoint documentation or schema validation
find . -name "*.md" -o -name "*.yaml" -o -name "*.json" | xargs grep -l "status.conditions\|cluster.*schema\|api.*contract" 2>/dev/null | head -20

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 203


🏁 Script executed:

# Check the full context of how the readyConditionStatus is used after capture
cat -n helm/adapter1/adapter-task-config.yaml | sed -n '20,50p'

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 1149


Add defensive guard for missing status.conditions in precondition capture.

The expression calls .filter() on status.conditions directly. If the status.conditions field is absent from the API response, this will fail before reaching the ternary fallback.

Suggested fix
      - name: "readyConditionStatus"
        expression: |
-         status.conditions.filter(c, c.type == "Ready").size() > 0
+         has(status.conditions)
+           && status.conditions.filter(c, c.type == "Ready").size() > 0
           ? status.conditions.filter(c, c.type == "Ready")[0].status
           : "False"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
      - name: "readyConditionStatus"
        expression: |
          status.conditions.filter(c, c.type == "Ready").size() > 0
            ? status.conditions.filter(c, c.type == "Ready")[0].status
            : "False"

After:
      - name: "readyConditionStatus"
        expression: |
          has(status.conditions)
            && status.conditions.filter(c, c.type == "Ready").size() > 0
            ? status.conditions.filter(c, c.type == "Ready")[0].status
            : "False"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter1/adapter-task-config.yaml` around lines 32 - 36, The
precondition capture "readyConditionStatus" currently calls
status.conditions.filter(...) which will throw if status.conditions is missing;
update the expression to defensively check that status.conditions exists
(e.g., has(status.conditions) && status.conditions.size() > 0) before calling
.filter(), and only then evaluate the Ready condition
with the existing ternary fallback to "False"; modify the expression for
"readyConditionStatus" to guard access to status.conditions and preserve the
same output semantics.

Comment on lines +34 to +38
- name: "readyConditionStatus"
expression: |
status.conditions.filter(c, c.type == "Ready").size() > 0
? status.conditions.filter(c, c.type == "Ready")[0].status
: "False"

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's check if the file exists and read the relevant section
cat -n helm/adapter2/adapter-task-config.yaml | head -50

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 1712


🏁 Script executed:

# Also search for any other readyConditionStatus usages
rg "readyConditionStatus" --type yaml -B 2 -A 5

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 4618


🏁 Script executed:

# Check for similar patterns with status.conditions in the codebase
rg "status\.conditions" --type yaml -B 1 -A 1

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 2596


🏁 Script executed:

# Check for CEL expressions with has() guards in the codebase
rg "has\(" --type yaml -B 1 -A 1

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 10005


Guard the Ready condition lookup against missing status.

This expression directly accesses status.conditions without a defensive check. Other similar property accesses throughout the same file use has() guards—consistent patterns should be applied here to prevent runtime errors if the field is absent.

🛠️ Suggested fix
       - name: "readyConditionStatus"
         expression: |
-          status.conditions.filter(c, c.type == "Ready").size() > 0
+          has(status.conditions)
+            && status.conditions.filter(c, c.type == "Ready").size() > 0
             ? status.conditions.filter(c, c.type == "Ready")[0].status
             : "False"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter2/adapter-task-config.yaml` around lines 34 - 38, The
readyConditionStatus expression accesses status.conditions directly which can
cause runtime errors if status or status.conditions is missing; update the
expression tied to name "readyConditionStatus" to first guard with
has(status.conditions) (and optionally
status.conditions.size() > 0) before filtering; mirror the has() guard style
used elsewhere in the file so the ternary only evaluates the filter when
conditions exist, otherwise return "False".

Comment on lines +37 to +41
- name: "readyConditionStatus"
expression: |
status.conditions.filter(c, c.type == "Ready").size() > 0
? status.conditions.filter(c, c.type == "Ready")[0].status
: "False"

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "adapter-task-config.yaml" -type f

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 203


🏁 Script executed:

git ls-files | grep -i adapter | head -20

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 497


🏁 Script executed:

cat -n ./helm/adapter3/adapter-task-config.yaml

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 6330


🏁 Script executed:

cat -n ./helm/adapter1/adapter-task-config.yaml | head -50

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 1655


🏁 Script executed:

cat -n ./helm/adapter2/adapter-task-config.yaml | head -50

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 1712


🏁 Script executed:

find . -name "*.md" -o -name "*.txt" | xargs grep -l "CEL" 2>/dev/null | head -10

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 63


🏁 Script executed:

rg -i "status\.conditions" --type yaml -B 2 -A 2

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 3269


🏁 Script executed:

rg -A 5 "has\(resources\." helm/adapter2/adapter-task-config.yaml | head -20

Repository: openshift-hyperfleet/hyperfleet-infra

Length of output: 1972


Add guard for status.conditions before filtering.

The expression calls filter() on status.conditions without verifying the field exists. If the nodepool API response has not yet populated this field, the filter will fail before reaching the "False" fallback. The codebase already uses this defensive pattern elsewhere (see adapter2's post-processing conditions), so apply the same guard here.

Suggested fix
       - name: "readyConditionStatus"
         expression: |
-          status.conditions.filter(c, c.type == "Ready").size() > 0
+          has(status.conditions)
+            && status.conditions.filter(c, c.type == "Ready").size() > 0
             ? status.conditions.filter(c, c.type == "Ready")[0].status
             : "False"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
      - name: "readyConditionStatus"
        expression: |
          status.conditions.filter(c, c.type == "Ready").size() > 0
            ? status.conditions.filter(c, c.type == "Ready")[0].status
            : "False"

After:
      - name: "readyConditionStatus"
        expression: |
          has(status.conditions)
            && status.conditions.filter(c, c.type == "Ready").size() > 0
            ? status.conditions.filter(c, c.type == "Ready")[0].status
            : "False"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@helm/adapter3/adapter-task-config.yaml` around lines 37 - 41, The
readyConditionStatus expression calls status.conditions.filter(...) without
checking that status.conditions exists; update the expression for the
"readyConditionStatus" field to first guard that status and status.conditions
are present (e.g., status != null and status.conditions != null and size > 0)
before calling filter so the fallback "False" is reachable when conditions are
absent, mirroring the defensive pattern used in adapter2's post-processing
conditions.
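The three findings above share a single fix. As a consolidated sketch (the surrounding precondition/capture keys are illustrative, matching the structure implied by the review; the expression itself comes from the suggested fixes), note that CEL's one-argument `has()` macro is the correct presence test here, since selecting a missing map key raises an evaluation error rather than returning null:

```yaml
preconditions:
  capture:
    - name: "readyConditionStatus"
      expression: |
        has(status.conditions)
          && status.conditions.filter(c, c.type == "Ready").size() > 0
          ? status.conditions.filter(c, c.type == "Ready")[0].status
          : "False"
```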
