HYPERFLEET-551 - feat: use configuration standard#67
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your review settings. Use the following commands to manage reviews, or the checkboxes below for quick actions:
Walkthrough

Flattens and renames the public configuration surface from CRD-style (apiVersion/kind/metadata/spec) to a snake_case, adapter-centric schema (adapter, clients, params, preconditions, resources, post, post_actions). Updates YAML tags, viper key mappings, loader/validator/types, executor param extraction and runtime paths, helm charts/examples/values and templates, command wiring (serve, dry-run, new config-dump), health server (adds /config endpoint), many tests, and documentation (docs/configuration.md). Removes several maestro-kubernetes example files and legacy templates; adds config redaction and config-dump support.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Event as Event / Client
    participant CLI as Adapter CLI
    participant ConfigLoader as Config Loader
    participant Executor as Executor
    participant Hyperfleet as Hyperfleet API
    participant Maestro as Maestro Transport
    participant K8s as Kubernetes API
    participant Broker as Broker / PubSub
    participant Health as Health Server
    Event->>CLI: trigger execution (or CLI run)
    CLI->>ConfigLoader: load config (file/env/flags) -> merged Config (adapter, clients, params...)
    ConfigLoader-->>CLI: parsed & redacted Config (+resolved path)
    CLI->>Executor: start execution with Config
    Executor->>Hyperfleet: api_call (preconditions / post_actions)
    Executor->>Maestro: transport requests (manifest/manifestWork)
    Executor->>K8s: apply/inspect manifests
    Executor->>Broker: publish/consume messages
    Hyperfleet-->>Executor: API responses / captures
    Maestro-->>Executor: transport responses
    K8s-->>Executor: resource statuses
    Broker-->>Executor: broker events
    CLI->>Health: register config (SetConfig)
    Health-->>CLI: serve /config endpoint
```

Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Suggested labels: lgtm, approved

Suggested reviewers

✨ Finishing Touches
🧪 Generate unit tests (beta)
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
charts/examples/README.md (1)
62-69: ⚠️ Potential issue | 🟡 Minor: Update README broker examples to snake_case keys.
These examples still use camelCase (`projectId`, `subscriptionId`, `deadLetterTopic`). With the new config standard, users will likely copy invalid overrides from this section.
Suggested doc update

```diff
 broker:
   googlepubsub:
-    projectId: CHANGE_ME
-    subscriptionId: CHANGE_ME
+    project_id: CHANGE_ME
+    subscription_id: CHANGE_ME
     topic: CHANGE_ME
-    deadLetterTopic: CHANGE_ME
+    dead_letter_topic: CHANGE_ME
```

```diff
-  --set broker.googlepubsub.projectId=<gcp-project> \
-  --set broker.googlepubsub.subscriptionId=<gcp-subscription> \
+  --set broker.googlepubsub.project_id=<gcp-project> \
+  --set broker.googlepubsub.subscription_id=<gcp-subscription> \
   --set broker.googlepubsub.topic=<gcp-topic> \
-  --set broker.googlepubsub.deadLetterTopic=<gcp-dlq-topic>
+  --set broker.googlepubsub.dead_letter_topic=<gcp-dlq-topic>
```

Also applies to: 87-90
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/README.md` around lines 62 - 69, Update the README broker example YAML to use snake_case config keys instead of camelCase: replace projectId → project_id, subscriptionId → subscription_id, topic → topic (already fine), and deadLetterTopic → dead_letter_topic so users copying overrides match the new config standard; update the same changes in the other occurrence noted (lines around the second example).

internal/config_loader/README.md (1)
39-44: ⚠️ Potential issue | 🟡 Minor: Access example still references deprecated `Metadata`/`Spec` paths.
The snippet should reflect the flattened config API; otherwise readers will copy invalid access patterns.
Suggested doc fix

```diff
-// Metadata
-config.Metadata.Name
+// Adapter metadata
+config.Adapter.Name

 // API config
-timeout := config.Spec.Clients.HyperfleetAPI.Timeout
+timeout := config.Clients.HyperfleetAPI.Timeout
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/README.md` around lines 39 - 44, The README examples use deprecated nested paths (config.Metadata.Name and config.Spec.Clients.HyperfleetAPI.Timeout); update them to the flattened API: replace config.Metadata.Name with config.Adapter.Name and replace config.Spec.Clients.HyperfleetAPI.Timeout with config.Clients.HyperfleetAPI.Timeout so examples match the current config structure.
🧹 Nitpick comments (5)
charts/examples/maestro/adapter-task-config.yaml (1)
173-175: Avoid hardcoding `placementClusterName` in an executable example.
`target_cluster` currently depends on a fixed `"cluster1"` placeholder, which can route resources to the wrong cluster when reused. Prefer wiring this from real event/precondition data. I can draft a concrete dynamic capture pattern for `placementClusterName` if you want.
Also applies to: 215-217
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/maestro/adapter-task-config.yaml` around lines 173 - 175, The YAML currently hardcodes placementClusterName to the string "cluster1", which is unsafe for an executable example; update the placementClusterName expression to dynamically capture the target cluster from the event/precondition or placement adapter data (e.g., reference the incoming event field or a placement adapter capture used elsewhere) instead of the literal; change both occurrences of placementClusterName so they use the real dynamic expression (the symbol to update is placementClusterName and any related target_cluster mappings) so resources are routed based on runtime data rather than a fixed placeholder.

test/integration/maestro_client/client_tls_config_integration_test.go (2)
200-201: Add `require.NotNil` guards before using `cfg.Clients.Maestro` in all test flows.
This avoids panic-style failures and gives clearer diagnostics if mapping/loading drifts.
♻️ Suggested hardening patch

```diff
@@
-    clientCfg, err := buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro)
+    require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
+    clientCfg, err := buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro)
@@
-    maestroCfg := cfg.Clients.Maestro
+    require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
+    maestroCfg := cfg.Clients.Maestro
@@
-    maestroCfg := cfg.Clients.Maestro
+    require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
+    maestroCfg := cfg.Clients.Maestro
```

Also applies to: 237-241, 285-291
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` around lines 200 - 201, Add explicit nil checks using require.NotNil(t, cfg.Clients.Maestro) before any use of cfg.Clients.Maestro in this test file (e.g., before the call to buildMaestroClientConfigFromLoaded and the other flows around the referenced ranges). Specifically, assert cfg.Clients.Maestro is not nil in each test path that calls buildMaestroClientConfigFromLoaded or otherwise dereferences cfg.Clients.Maestro so failures become clear diagnostics instead of panics; update the tests around the occurrences of buildMaestroClientConfigFromLoaded, and the blocks at the other noted locations (the other flows where cfg.Clients.Maestro is used) to include the require.NotNil guard.
60-66: Prefer omitting empty TLS keys in CA-only fixture generation.
For CA-only scenarios, emitting `cert_file: ""` / `key_file: ""` / `http_ca_file: ""` can test a different codepath than true key absence. Consider only writing non-empty TLS fields.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` around lines 60 - 66, The TLS fixture currently always includes cert_file, key_file, and http_ca_file (via tlsBlock and fmt.Sprintf using opts["certFile"], opts["keyFile"], opts["httpCaFile"]), which emits empty strings for CA-only cases; change the builder to conditionally append only non-empty TLS keys: construct tlsBlock starting with the common auth/type/tls_config header then append cert_file, key_file, and http_ca_file lines only when opts[...] is non-empty (use the tlsBlock variable or a strings.Builder and check opts["certFile"], opts["keyFile"], opts["httpCaFile"] before adding each line) so CA-only fixtures omit those fields entirely.

test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml (1)
45-49: Prefer built-in `adapter.name` over a duplicated `adapterName` param.
Using the built-in adapter context here avoids drift and keeps templates aligned with the new adapter-centric standard.
♻️ Suggested simplification

```diff
-  - name: "adapterName"
-    source: "env.ADAPTER_NAME"
-    type: "string"
-    default: "dry-run-adapter"
@@
-        hyperfleet.io/managed-by: "{{ .adapterName }}"
+        hyperfleet.io/managed-by: "{{ .adapter.name }}"
```

Also applies to: 165-165
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml` around lines 45 - 49, Remove the duplicated parameter "adapterName" and use the built-in adapter context "adapter.name" instead: delete the param block defining name: "adapterName" / source: "env.ADAPTER_NAME" / type: "string" / default: "dry-run-adapter" and update any template references that consume adapterName to read from adapter.name; apply the same change to the other occurrence of "adapterName" in this file so all templates use adapter.name consistently.

internal/config_loader/validator.go (1)
617-623: Add a nil guard in `ValidateAdapterVersion` to avoid panic.
If `expectedVersion` is set and `config` is nil, `config.Adapter.Version` will dereference nil.
Proposed defensive fix

```diff
 func ValidateAdapterVersion(config *AdapterConfig, expectedVersion string) error {
 	if expectedVersion == "" {
 		return nil
 	}
+	if config == nil {
+		return fmt.Errorf("adapter config is nil")
+	}
 	configVersion := config.Adapter.Version
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 617 - 623, ValidateAdapterVersion currently dereferences config.Adapter.Version without checking for nil; add a guard at the start of ValidateAdapterVersion that, when expectedVersion != "" and (config == nil || config.Adapter == nil), returns a clear error (e.g., fmt.Errorf indicating a nil config or missing Adapter) instead of proceeding to access config.Adapter.Version; ensure the function still returns nil when expectedVersion == "" as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@charts/templates/configmap-broker.yaml`:
- Line 50: The template emits a camelCase key "routingKey" but should use
snake_case to match downstream expectations; update the configmap template line
that currently reads routingKey: {{ .Values.broker.rabbitmq.routing_key | quote
}} to use routing_key instead (i.e., routing_key: {{
.Values.broker.rabbitmq.routing_key | quote }}), and ensure any consumers or
other templates reference the same snake_case key.
In `@charts/values.yaml`:
- Line 23: The chart's example path comments (e.g., the comment referencing
"adapter-config.yaml: examples/adapter-config.yaml" in charts/values.yaml) are
outdated; update each such comment to point to the new example locations or
remove the stale path. Locate occurrences of the example path comment (the
"adapter-config.yaml" reference and the similar comment at the other occurrence)
and replace them with the correct new path(s) or a neutral note like "see
examples directory" so users aren’t misled.
In `@configs/adapter-deployment-config.yaml`:
- Around line 87-90: The sample config's keepalive block contains an unsupported
key `permitWithoutStream`; remove the `permitWithoutStream: true` line from the
`keepalive` section (the block with keys `keepalive`, `time`, `timeout`) so the
sample matches the current keepalive schema and won't mislead adapter authors.
In `@internal/config_loader/accessors.go`:
- Around line 27-28: The comment above builtinVariables is out of sync: it lists
only "adapter, now, date" but the implementation of builtinVariables also
includes "config"; update the comment near the builtinVariables declaration (in
accessors.go) to list all built-ins (adapter, now, date, config) and any other
variables implemented so the docs match the code.
In `@internal/config_loader/types.go`:
- Around line 43-46: Guard Config.Redacted against a nil receiver by adding an
initial nil check in the Redacted method: if c == nil { return nil }. Then
proceed to copy the struct and call redactedClients(c.Clients) as before (or
ensure redactedClients safely handles nil maps), returning the new copy; this
prevents dereferencing a nil *Config in Config.Redacted.
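The nil-receiver guard this prompt asks for can be sketched as follows. The struct shapes and field names here are simplified stand-ins for the real `config_loader` types, not the project's actual definitions:

```go
package main

import "fmt"

// Simplified stand-ins for the real config_loader types; field
// names are illustrative assumptions.
type Clients struct {
	Token string
}

type Config struct {
	Name    string
	Clients *Clients
}

// Redacted returns a sanitized copy. The leading nil check is the
// guard the review describes: a nil *Config yields nil instead of
// panicking when the method dereferences the receiver.
func (c *Config) Redacted() *Config {
	if c == nil {
		return nil
	}
	out := *c
	if c.Clients != nil {
		clientsCopy := *c.Clients
		if clientsCopy.Token != "" {
			clientsCopy.Token = "**REDACTED**"
		}
		out.Clients = &clientsCopy
	}
	return &out
}

func main() {
	var nilCfg *Config
	fmt.Println(nilCfg.Redacted() == nil) // true: no panic on nil receiver
	cfg := &Config{Name: "demo", Clients: &Clients{Token: "secret"}}
	fmt.Println(cfg.Redacted().Clients.Token)
}
```

Copying the struct before mutating also keeps the original config untouched, so only the returned value is sanitized.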
- Around line 55-64: The TLS sanitization in Redacted() omits the HTTP CA field;
update the Redacted implementation (the code building tlsCopy and assigning
maestroCopy.Auth.TLSConfig) to also redact the http_ca_file field by setting the
TLS copy's HTTP CA field (e.g., tlsCopy.HTTPCAFile or tlsCopy.HttpCAFile — match
the exact struct field name used in the TLS config) to "**REDACTED**" when it is
non-empty, alongside the existing CAFile, CertFile, and KeyFile redactions so
maestroCopy.Auth.TLSConfig contains the sanitized value.
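A minimal sketch of the TLS sanitization this prompt describes, assuming the field names it mentions (the real struct's HTTP CA field may be named differently):

```go
package main

import "fmt"

// Field names assumed from the review's description of the Maestro
// TLS config; they may differ in the real codebase.
type TLSConfig struct {
	CAFile     string
	CertFile   string
	KeyFile    string
	HTTPCAFile string
}

const redactedValue = "**REDACTED**"

// redactTLS masks every non-empty file path, including the
// HTTPCAFile field the review flags as missing from Redacted().
// Empty fields stay empty so absence is still distinguishable.
func redactTLS(in TLSConfig) TLSConfig {
	out := in
	for _, field := range []*string{&out.CAFile, &out.CertFile, &out.KeyFile, &out.HTTPCAFile} {
		if *field != "" {
			*field = redactedValue
		}
	}
	return out
}

func main() {
	cfg := TLSConfig{CAFile: "ca.pem", HTTPCAFile: "http-ca.pem"}
	fmt.Printf("%+v\n", redactTLS(cfg))
}
```

Iterating over pointers to the fields keeps the redaction rule in one place, so a newly added path field only needs one more entry in the slice.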
In `@internal/executor/executor_test.go`:
- Around line 367-368: The test currently ignores the error returned by
configToMap; update the setup so that you check that error and fail the test on
failure (e.g., replace the ignored error with an assertion like
require.NoError(t, err) or t.Fatalf) before calling extractConfigParams;
specifically, after calling configToMap(config) validate the returned error
(variable err) and stop the test if non-nil so configMap and subsequent
extractConfigParams(config, execCtx, configMap) are not run with invalid state.
In `@test/integration/config-loader/testdata/adapter_config_valid.yaml`:
- Line 43: The fixture currently uses field: "metadata.name" for the clusterName
capture but HyperFleet responses use a top-level name; update the capture so
clusterName reads from the root name instead of metadata.name (replace the field
value "metadata.name" with "name" in the adapter_config_valid.yaml fixture where
the clusterName capture is defined).
In `@test/integration/config-loader/testdata/adapter-config-template.yaml`:
- Around line 308-319: The CEL expressions access the first filtered Available
condition with .filter(...)[0] without checking that the filtered list is
non-empty, which can throw when no Available condition exists; update the reason
and message expressions to guard by checking the length or presence of
resources.clusterController.status.conditions.filter(c, c.type == 'Available')
before indexing (e.g., use has(...) or size(...) on the filtered list) and only
access [0].reason and [0].message after that guard, and provide the existing
fallback values ("ResourcesCreated" and "All Kubernetes resources created
successfully") when the condition is absent; apply the same guarded pattern to
both the reason and message expressions (also replicate the fix for the similar
block at the other occurrence).
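As a Go analogue of the guarded pattern this prompt describes (the CEL fix itself lives in YAML), the idea is to avoid indexing `filter(...)[0]` on a possibly empty list and fall back to the documented defaults instead. The `Condition` type here is an illustrative stand-in:

```go
package main

import "fmt"

// Condition mirrors the shape of a Kubernetes-style status
// condition; illustrative only.
type Condition struct {
	Type    string
	Reason  string
	Message string
}

// availableReason returns the reason/message of the first Available
// condition, or the fallback values named in the review when no
// Available condition exists, instead of indexing an empty list.
func availableReason(conds []Condition) (reason, message string) {
	for _, c := range conds {
		if c.Type == "Available" {
			return c.Reason, c.Message
		}
	}
	return "ResourcesCreated", "All Kubernetes resources created successfully"
}

func main() {
	r, m := availableReason(nil)
	fmt.Println(r, "/", m)
	r, m = availableReason([]Condition{{Type: "Available", Reason: "Ready", Message: "ok"}})
	fmt.Println(r, "/", m)
}
```

In CEL this corresponds to checking `size(...)` of the filtered list (or `has(...)`) before taking `[0].reason` and `[0].message`.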
- Line 140: The template currently captures clusterName from metadata.name;
update the adapter-config-template.yaml so the HyperFleet API mapping for
clusterName reads the root-level name field instead of metadata.name (i.e.,
replace any use of "metadata.name" with "name" in the api_call / capture mapping
that sets clusterName) so the adapter's api_call captures align with the
HyperFleet response model.
In `@test/integration/executor/executor_integration_test.go`:
- Line 86: The precondition config is capturing clusterName from metadata.name
but HyperFleet responses expose the cluster name at the root field "name";
update the FieldExpressionDef for the entry with Name: "clusterName" (the
map/struct using FieldExpressionDef{Field: "..."} ) to use Field: "name" instead
of "metadata.name", and make the same change for the other identical occurrence
referenced (the second precondition instance). Ensure both occurrences use
Field: "name" so the precondition reads the root-level name from HyperFleet
responses.
In `@test/integration/executor/executor_k8s_integration_test.go`:
- Around line 220-221: Replace the legacy template path "{{ .metadata.name }}"
with the new adapter context "{{ .adapter.name }}" in the label templates
(specifically the "hyperfleet.io/managed-by" label and any other label entries
that currently use "{{ .metadata.name }}"); search for other occurrences of the
same template (the review notes mention two more places) and update them to use
"{{ .adapter.name }}" so all label templates reference the adapter context
consistently.
In `@test/integration/executor/testdata/test-adapter-config.yaml`:
- Around line 31-33: Update the precondition that maps the HyperFleet response
into the clusterName field: replace the Kubernetes-style path `field:
metadata.name` used for the `name: clusterName` entry with the HyperFleet root
field `field: name` so the precondition captures the top-level `name` from the
API response (leave the `readyConditionStatus` entry unchanged).
In `@test/testdata/adapter_config_valid.yaml`:
- Around line 95-96: The fixture's precondition is capturing Kubernetes-style
metadata.name (field: metadata.name) but should read the HyperFleet response
root-level name; update the capture for the precondition that sets clusterName
so it uses field: name (root-level) instead of field: metadata.name to ensure
clusterName is extracted from HyperFleet's response schema.
In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml`:
- Around line 330-334: The cluster_metadata_region field's expression uses
dig(resources, "configmap0.data.cluster_metadata") which points to a key not
emitted by configmap0; update the expression in cluster_metadata_region to
reference the actual key produced by configmap0 (for example replace
"configmap0.data.cluster_metadata" with
"configmap0.data.cluster_metadata_region" or the correct produced key name), or
use a safe lookup with a fallback to avoid empty resolution; locate the
expression under cluster_metadata_region and adjust the dig path accordingly.
In `@test/testdata/dryrun/dryrun-maestro-adapter-config.yaml`:
- Around line 28-30: Update the inline comment that references the default
identifier to use snake_case: replace "sourceId" with "source_id" in the comment
above the client_id key so the wording matches the project's standardized
snake_case terminology (affects the comment associated with client_id).
In `@test/testdata/task-config.yaml`:
- Around line 29-35: The current applied-status checks reference
resources.clusterConfigMap.spec (which doesn't exist on a ConfigMap), so update
the condition in both the top-level status field and the status.expression to
check an actual ConfigMap property such as resources.clusterConfigMap.data (or
existence of the resource itself via has(resources.clusterConfigMap)); replace
has(resources.clusterConfigMap.spec) ? "JobApplied" : "JobPending" and
has(resources.clusterConfigMap.spec) ? "True" : "False" with
has(resources.clusterConfigMap.data) ? "JobApplied" : "JobPending" and
has(resources.clusterConfigMap.data) ? "True" : "False" (or use
has(resources.clusterConfigMap) if you prefer to detect presence of the
resource).
- Around line 83-84: The mapping for clusterName is using the wrong JSON path;
update the precondition that maps to clusterName so it reads field: "name"
instead of field: "metadata.name". Locate the entry that sets name: clusterName
(the mapping that currently uses metadata.name) and change the field value to
name (root-level) so the HyperFleet API response root is used to capture
clusterName.
---
Outside diff comments:
In `@charts/examples/README.md`:
- Around line 62-69: Update the README broker example YAML to use snake_case
config keys instead of camelCase: replace projectId → project_id, subscriptionId
→ subscription_id, topic → topic (already fine), and deadLetterTopic →
dead_letter_topic so users copying overrides match the new config standard;
update the same changes in the other occurrence noted (lines around the second
example).
In `@internal/config_loader/README.md`:
- Around line 39-44: The README examples use deprecated nested paths
(config.Metadata.Name and config.Spec.Clients.HyperfleetAPI.Timeout); update
them to the flattened API: replace config.Metadata.Name with
config.Adapter.Name and replace config.Spec.Clients.HyperfleetAPI.Timeout with
config.Clients.HyperfleetAPI.Timeout so examples match the current config
structure.
---
Nitpick comments:
In `@charts/examples/maestro/adapter-task-config.yaml`:
- Around line 173-175: The YAML currently hardcodes placementClusterName to the
string "cluster1", which is unsafe for an executable example; update the
placementClusterName expression to dynamically capture the target cluster from
the event/precondition or placement adapter data (e.g., reference the incoming
event field or a placement adapter capture used elsewhere) instead of the
literal; change both occurrences of placementClusterName so they use the real
dynamic expression (the symbol to update is placementClusterName and any related
target_cluster mappings) so resources are routed based on runtime data rather
than a fixed placeholder.
In `@internal/config_loader/validator.go`:
- Around line 617-623: ValidateAdapterVersion currently dereferences
config.Adapter.Version without checking for nil; add a guard at the start of
ValidateAdapterVersion that, when expectedVersion != "" and (config == nil ||
config.Adapter == nil), returns a clear error (e.g., fmt.Errorf indicating a nil
config or missing Adapter) instead of proceeding to access
config.Adapter.Version; ensure the function still returns nil when
expectedVersion == "" as before.
In `@test/integration/maestro_client/client_tls_config_integration_test.go`:
- Around line 200-201: Add explicit nil checks using require.NotNil(t,
cfg.Clients.Maestro) before any use of cfg.Clients.Maestro in this test file
(e.g., before the call to buildMaestroClientConfigFromLoaded and the other flows
around the referenced ranges). Specifically, assert cfg.Clients.Maestro is not
nil in each test path that calls buildMaestroClientConfigFromLoaded or otherwise
dereferences cfg.Clients.Maestro so failures become clear diagnostics instead of
panics; update the tests around the occurrences of
buildMaestroClientConfigFromLoaded, and the blocks at the other noted locations
(the other flows where cfg.Clients.Maestro is used) to include the
require.NotNil guard.
- Around line 60-66: The TLS fixture currently always includes cert_file,
key_file, and http_ca_file (via tlsBlock and fmt.Sprintf using opts["certFile"],
opts["keyFile"], opts["httpCaFile"]), which emits empty strings for CA-only
cases; change the builder to conditionally append only non-empty TLS keys:
construct tlsBlock starting with the common auth/type/tls_config header then
append cert_file, key_file, and http_ca_file lines only when opts[...] is
non-empty (use the tlsBlock variable or a strings.Builder and check
opts["certFile"], opts["keyFile"], opts["httpCaFile"] before adding each line)
so CA-only fixtures omit those fields entirely.
In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml`:
- Around line 45-49: Remove the duplicated parameter "adapterName" and use the
built-in adapter context "adapter.name" instead: delete the param block defining
name: "adapterName" / source: "env.ADAPTER_NAME" / type: "string" / default:
"dry-run-adapter" and update any template references that consume adapterName to
read from adapter.name; apply the same change to the other occurrence of
"adapterName" in this file so all templates use adapter.name consistently.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0877c7bf-ebfe-49f6-85b7-0d3887924aca
📒 Files selected for processing (61)
- README.md
- charts/examples/README.md
- charts/examples/kubernetes/adapter-config.yaml
- charts/examples/kubernetes/adapter-task-config.yaml
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- charts/examples/kubernetes/values.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro/adapter-config.yaml
- charts/examples/maestro/adapter-task-config.yaml
- charts/examples/maestro/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro/values.yaml
- charts/templates/configmap-broker.yaml
- charts/templates/deployment.yaml
- charts/values.yaml
- cmd/adapter/main.go
- configs/adapter-config-template.yaml
- configs/adapter-deployment-config.yaml
- configs/adapter-task-config-template.yaml
- configs/templates/cluster-status-payload.yaml
- configs/templates/job.yaml
- docs/adapter-authoring-guide.md
- internal/config_loader/README.md
- internal/config_loader/accessors.go
- internal/config_loader/constants.go
- internal/config_loader/loader.go
- internal/config_loader/loader_test.go
- internal/config_loader/types.go
- internal/config_loader/validator.go
- internal/config_loader/validator_test.go
- internal/config_loader/viper_loader.go
- internal/executor/README.md
- internal/executor/executor.go
- internal/executor/executor_test.go
- internal/executor/param_extractor.go
- internal/executor/types.go
- internal/executor/utils.go
- internal/executor/utils_test.go
- internal/hyperfleet_api/types.go
- test/integration/config-loader/config_criteria_integration_test.go
- test/integration/config-loader/loader_template_test.go
- test/integration/config-loader/testdata/adapter-config-template.yaml
- test/integration/config-loader/testdata/adapter_config_valid.yaml
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- test/integration/executor/executor_integration_test.go
- test/integration/executor/executor_k8s_integration_test.go
- test/integration/executor/testdata/test-adapter-config.yaml
- test/integration/maestro_client/client_tls_config_integration_test.go
- test/testdata/adapter-config.yaml
- test/testdata/adapter_config_valid.yaml
- test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml
- test/testdata/dryrun/dryrun-kubernetes-task-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml
- test/testdata/task-config.yaml
- test/testdata/templates/deployment.yaml
💤 Files with no reviewable changes (4)
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
f709706 to dbae2bd (Compare)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
charts/examples/README.md (2)
62-69: ⚠️ Potential issue | 🟠 Major: Update broker example keys to snake_case.
The YAML snippet still documents camelCase keys, which now conflicts with the values schema used in this PR.
Suggested patch
```diff
 broker:
   googlepubsub:
-    projectId: CHANGE_ME
-    subscriptionId: CHANGE_ME
+    project_id: CHANGE_ME
+    subscription_id: CHANGE_ME
     topic: CHANGE_ME
-    deadLetterTopic: CHANGE_ME
+    dead_letter_topic: CHANGE_ME
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/README.md` around lines 62 - 69, The README YAML example uses camelCase keys under broker.googlepubsub (projectId, subscriptionId, topic, deadLetterTopic) which conflicts with the new values schema; update those keys to snake_case (project_id, subscription_id, topic, dead_letter_topic) in the example so the documented sample matches the schema used by broker.googlepubsub and the chart values.
84-90: ⚠️ Potential issue | 🟠 Major: Fix Helm `--set` examples to match renamed keys.
The command still uses camelCase broker paths; with the new schema these overrides won’t target the right values.
Suggested patch

```diff
 helm install <name> ./charts -f charts/examples/<example>/values.yaml \
   --namespace <namespace> \
   --set image.registry=quay.io/<developer-registry> \
-  --set broker.googlepubsub.projectId=<gcp-project> \
-  --set broker.googlepubsub.subscriptionId=<gcp-subscription> \
+  --set broker.googlepubsub.project_id=<gcp-project> \
+  --set broker.googlepubsub.subscription_id=<gcp-subscription> \
   --set broker.googlepubsub.topic=<gcp-topic> \
-  --set broker.googlepubsub.deadLetterTopic=<gcp-dlq-topic>
+  --set broker.googlepubsub.dead_letter_topic=<gcp-dlq-topic>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/README.md` around lines 84 - 90, The helm example uses old camelCase broker keys (broker.googlepubsub.projectId, subscriptionId, topic, deadLetterTopic); update the --set overrides in the helm install example to use the renamed snake_case keys, i.e. broker.googlepubsub.project_id, broker.googlepubsub.subscription_id, broker.googlepubsub.topic, and broker.googlepubsub.dead_letter_topic, so the overrides target the correct values.

internal/config_loader/README.md (1)
38-50: ⚠️ Potential issue | 🟡 Minor: Code example references removed fields (`Metadata`, `Spec`).
The "Accessing Configuration" example still uses paths from the old structure that no longer exist after this refactor:
- `config.Metadata.Name` (`Metadata` was removed)
- `config.Spec.Clients.HyperfleetAPI.Timeout` (`Spec` wrapper was removed)
Suggested fix

````diff
 ### Accessing Configuration

 ```go
-// Metadata
-config.Metadata.Name
+// Adapter info
+config.Adapter.Name

-// API config
-timeout := config.Spec.Clients.HyperfleetAPI.Timeout
+// Client config
+timeout := config.Clients.HyperfleetAPI.Timeout

 // Query helpers
 config.GetRequiredParams()
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/README.md` around lines 38 - 50, The example uses removed fields (config.Metadata and config.Spec); update the "Accessing Configuration" snippet to use the new structure: replace references to config.Metadata.Name with config.Adapter.Name and replace config.Spec.Clients.HyperfleetAPI.Timeout with config.Clients.HyperfleetAPI.Timeout, and keep the query helper calls (config.GetRequiredParams(), config.GetResourceByName, config.GetPreconditionByName, config.GetPostActionByName) intact so the example matches the refactored config API.

internal/config_loader/validator.go (1)
370-378: ⚠️ Potential issue | 🟠 Major: Skip manifest template validation for Maestro resources as intended.
Lines 372-374 still validate manifest template variables for all resources, even though the comment below states Maestro manifests should be skipped. This can reject valid ManifestWork templates that rely on runtime-provided variables.
🔧 Proposed fix
```diff
-	if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
-		v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
-	}
-	// NOTE: For maestro transport, we skip template variable validation for manifest content.
+	// NOTE: For maestro transport, we skip template variable validation for manifest content.
 	// ManifestWork templates may use variables provided at runtime by the framework
 	// (e.g., adapterName, timestamp) that are not necessarily declared in params or captures.
+	if !resource.IsMaestroTransport() {
+		if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
+			v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
+		}
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 370 - 378, The code currently always calls v.validateTemplateMap on resource.Manifest but the comment says Maestro transport manifests should skip template validation; update the loop in validator.go to only call v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest) when the resource is not using the Maestro transport (e.g., check resource.Transport != TransportMaestro or resource.Transport != "maestro" depending on how transports are represented). Locate the loop over v.config.Resources and modify the conditional that wraps v.validateTemplateMap to include a guard against Maestro transport while keeping the existing manifest map type check and preserving resourcePath/FieldManifest usage.
♻️ Duplicate comments (8)
test/integration/executor/testdata/test-adapter-config.yaml (1)
31-33: ⚠️ Potential issue | 🟡 Minor
Capture `clusterName` from the HyperFleet response root.
`metadata.name` is Kubernetes-shaped; this precondition should read the HyperFleet API root `name` field.
🔧 Proposed fix

```diff
 capture:
   - name: clusterName
-    field: metadata.name
+    field: name
```

Based on learnings: precondition captures from HyperFleet API responses should use root-level fields (`field: "name"`), not `metadata.name`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/executor/testdata/test-adapter-config.yaml` around lines 31 - 33, Update the precondition for the `clusterName` entry to capture the HyperFleet root-level name field instead of Kubernetes-shaped metadata; change the `field` value on the `clusterName` precondition (the entry with name: clusterName) from `metadata.name` to `name` so the adapter reads the API root `name` field (leave the `readyConditionStatus` entry untouched).
test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml (1)
330-334: ⚠️ Potential issue | 🟡 Minor
`dig()` path points to a key not produced by `configmap0`.
`configmap0.data.cluster_metadata` is not emitted by the resource manifest, so this extraction can resolve empty.
🔧 Proposed fix

```diff
-  cluster_metadata_region:
+  cluster_name:
     expression: |
-      dig(resources, "configmap0.data.cluster_metadata")
+      dig(resources, "configmap0.data.cluster_name")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml` around lines 330 - 334, The dig() expression is pointing at a non-existent key "configmap0.data.cluster_metadata" so cluster_metadata_region will be empty; update the expression in the same expression block that uses dig(...) to reference the actual key emitted by configmap0 (e.g., replace "configmap0.data.cluster_metadata" with the real key name present under configmap0.data or adjust the resource id from configmap0 to the correct resource name), or modify configmap0 to emit cluster_metadata if that is intended; ensure the symbol dig(resources, "...") targets a valid path that exists in the manifest.
test/testdata/adapter_config_valid.yaml (1)
94-96: ⚠️ Potential issue | 🟡 Minor
Capture path should use HyperFleet response schema (`name`), not `metadata.name`.
This precondition reads from a HyperFleet API response; `metadata.name` is Kubernetes-style and will produce incorrect `clusterName` extraction.
Suggested fix

```diff
 capture:
-  - field: metadata.name
+  - field: name
     name: clusterName
```

Based on learnings: precondition captures from HyperFleet API responses should use root-level `name` rather than `metadata.name`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/adapter_config_valid.yaml` around lines 94 - 96, The capture path for the precondition is using the Kubernetes-style key `metadata.name` but the HyperFleet API response uses a root-level `name`; update the capture entry that sets `name: clusterName` to use `field: name` instead of `field: metadata.name` so the precondition extracts the correct `clusterName` from the HyperFleet response.
internal/config_loader/accessors.go (1)
27-28: ⚠️ Potential issue | 🟡 Minor
Comment still needs to include `config` in the built-in variables list.
The `builtinVariables` slice (line 13) includes `"config"`, but this comment still only lists adapter, now, date.
Suggested fix

```diff
-// - Built-in variables (adapter, now, date)
+// - Built-in variables (adapter, config, now, date)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/accessors.go` around lines 27 - 28, The comment above the builtinVariables slice is missing "config" — update the comment that lists built-in variables to include "config" alongside "adapter, now, date" so it matches the builtinVariables slice (symbol: builtinVariables) in accessors.go; ensure the textual list and punctuation mirror the actual slice contents.
internal/executor/executor_test.go (1)
367-370: ⚠️ Potential issue | 🟡 Minor
Remove the redundant `configToMap` assignment.
Line 367 assigns `configMap` while discarding the error, then Line 368 immediately overwrites it. This is an ineffectual assignment and keeps an error-ignore pattern in test setup.
🔧 Proposed fix

```diff
-	configMap, _ := configToMap(config)
 	configMap, err := configToMap(config)
 	require.NoError(t, err)
 	err = extractConfigParams(config, execCtx, configMap)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/executor/executor_test.go` around lines 367 - 370, Remove the first redundant assignment that ignores the error: the duplicate call to configToMap which sets configMap and discards its error should be deleted so only the subsequent validated call remains; update the test to call configToMap once (the configMap, err := configToMap(config) line), keep require.NoError(t, err), and then pass configMap into extractConfigParams with execCtx as before (ensure references to configMap, configToMap, require.NoError, and extractConfigParams remain intact).
test/integration/config-loader/testdata/adapter-config-template.yaml (2)
139-140: ⚠️ Potential issue | 🟠 Major
Capture `clusterName` from root `name` field.
Line 140 still uses `metadata.name`. For HyperFleet API cluster responses, this should map from root `name`, otherwise downstream template fields can resolve empty.
🔧 Proposed fix

```diff
 - name: "clusterName"
-  field: "metadata.name"
+  field: "name"
```

Based on learnings: precondition captures from HyperFleet API responses should use root-level `name`, not `metadata.name`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 139 - 140, The mapping for "clusterName" currently reads field: "metadata.name" but HyperFleet API responses provide the cluster name at the root "name"; update the adapter-config-template mapping for the entry with name "clusterName" (the object whose name is "clusterName") to use field: "name" instead of "metadata.name" so downstream template fields resolve correctly.
308-319: ⚠️ Potential issue | 🟠 Major
Guard filtered condition indexing before `[0]` access.
These CEL expressions dereference `[0]` without confirming the filtered list is non-empty, which can break payload generation when `Available` is absent.
🔧 Proposed fix

```diff
-    resources.clusterNamespace.status.phase == "Active" &&
-    resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
+    resources.clusterNamespace.status.phase == "Active" &&
+    resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+    resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"

-    has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
+    resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+    has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)

-    has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
+    resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+    has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
```

Also applies to: 325-336
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 308 - 319, The CEL expressions access resources.clusterController.status.conditions.filter(...)[0] without ensuring the filtered list is non-empty; update the reason and message expressions (and the other occurrence at lines ~325-336) to first bind or check the filtered list (e.g., let available = resources.clusterController.status.conditions.filter(c, c.type == 'Available')) and only reference available[0] when available.size() > 0 (or use a conditional has/size check), returning the fallback strings ("ResourcesCreated" / "All Kubernetes resources created successfully") when the filtered list is empty.test/integration/executor/executor_k8s_integration_test.go (1)
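The guard the review asks for — checking the filtered list's size before indexing `[0]` — has the same shape in ordinary Go. A minimal sketch (the `condition` type and field names are illustrative stand-ins, not the project's API):

```go
package main

import "fmt"

// condition mirrors the shape of a Kubernetes-style status condition.
type condition struct {
	Type, Status, Reason, Message string
}

// firstOfType returns the first condition whose Type matches condType and
// reports whether one was found, so callers never index into an empty
// result — the same intent as the CEL guard
// `filter(c, c.type == 'Available').size() > 0` before `[0]`.
func firstOfType(conds []condition, condType string) (condition, bool) {
	for _, c := range conds {
		if c.Type == condType {
			return c, true
		}
	}
	return condition{}, false
}

func main() {
	conds := []condition{{Type: "Progressing", Status: "True"}}

	reason := "ResourcesCreated" // fallback when no Available condition exists
	if c, ok := firstOfType(conds, "Available"); ok && c.Reason != "" {
		reason = c.Reason
	}
	fmt.Println(reason) // prints "ResourcesCreated"
}
```

The `ok` boolean plays the role of the `size() > 0` check: the fallback string is returned instead of panicking (CEL: erroring) on a missing condition.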
219-221: ⚠️ Potential issue | 🟠 Major
Replace legacy `{{ .metadata.name }}` with `{{ .adapter.name }}`.
These changed label templates still reference removed metadata context and can render incorrect `hyperfleet.io/managed-by` values.
🔧 Proposed fix

```diff
-	"hyperfleet.io/managed-by": "{{ .metadata.name }}",
+	"hyperfleet.io/managed-by": "{{ .adapter.name }}",
 ...
-	"hyperfleet.io/managed-by": "{{ .metadata.name }}",
+	"hyperfleet.io/managed-by": "{{ .adapter.name }}",
 ...
-	"hyperfleet.io/managed-by": "{{ .metadata.name }}",
+	"hyperfleet.io/managed-by": "{{ .adapter.name }}",
```

Also applies to: 246-248, 567-569
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/executor/executor_k8s_integration_test.go` around lines 219 - 221, Replace the legacy template context for the "hyperfleet.io/managed-by" label in executor_k8s_integration_test.go: update occurrences that use "{{ .metadata.name }}" to "{{ .adapter.name }}" (e.g., in the label map alongside "hyperfleet.io/cluster-id" and "test") so the template renders the adapter name correctly; ensure you update all instances mentioned (the blocks around the current snippet and the other occurrences at the same pattern near the second and third locations referenced).
🧹 Nitpick comments (4)
internal/config_loader/types.go (1)
58-69: Extract the redaction token to a constant.
`"**REDACTED**"` is duplicated in this block; a constant keeps the redaction policy centralized.
♻️ Proposed refactor

```diff
+const redactedToken = "**REDACTED**"
 ...
-	tlsCopy.CAFile = "**REDACTED**"
+	tlsCopy.CAFile = redactedToken
 ...
-	tlsCopy.CertFile = "**REDACTED**"
+	tlsCopy.CertFile = redactedToken
 ...
-	tlsCopy.KeyFile = "**REDACTED**"
+	tlsCopy.KeyFile = redactedToken
 ...
-	tlsCopy.HTTPCAFile = "**REDACTED**"
+	tlsCopy.HTTPCAFile = redactedToken
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/types.go` around lines 58 - 69, Replace the duplicated literal "**REDACTED**" with a single package-level constant (e.g., redactedToken) and use that constant when redacting fields on tlsCopy (CAFile, CertFile, KeyFile, HTTPCAFile); add the constant near the top of the file (or in the same package) and update the assignments in the tlsCopy redaction block to assign redactedToken instead of the literal.
test/integration/executor/executor_integration_test.go (2)
56-58: Drop the unused `apiBaseURL` argument from `createTestConfig`.
Line 57 discards the input, so the parameter is now dead API surface.
♻️ Proposed cleanup
```diff
-func createTestConfig(apiBaseURL string) *config_loader.Config {
-	_ = apiBaseURL // Kept for compatibility; base URL comes from env params.
+func createTestConfig() *config_loader.Config {
 	return &config_loader.Config{

-	config := createTestConfig(mockAPI.URL())
+	config := createTestConfig()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/executor/executor_integration_test.go` around lines 56 - 58, The function createTestConfig currently accepts an unused parameter apiBaseURL which is discarded; remove this dead argument from the function signature and all call sites. Update the declaration of createTestConfig to have no parameters and adjust any tests or helpers that call createTestConfig() to stop passing apiBaseURL. Ensure no other symbols (e.g., references to config_loader.Config construction) are changed besides removing the unused parameter.
922-1003: Extract a shared base config builder for test scenarios.
The same `Adapter`/`Clients`/`Params` boilerplate is repeated across multiple test cases, which makes schema migrations harder to keep consistent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/executor/executor_integration_test.go` around lines 922 - 1003, Extract the repeated Adapter/Clients/Params setup into a helper function (e.g., buildBaseTestConfig) that returns *config_loader.Config populated with Adapter: config_loader.AdapterInfo, Clients: config_loader.ClientsConfig (HyperfleetAPI timeout/retry), and Params slice; then replace the inline repeated block that assigns to config in executor_integration_test.go with a call to buildBaseTestConfig() and patch only the test-specific fields (Preconditions/Post) on the returned config; ensure the helper uses the same types (config_loader.Config, AdapterInfo, ClientsConfig, Parameter) so callers can append or override Pre/Post sections.internal/config_loader/loader_test.go (1)
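A builder along the lines the prompt suggests could look like the sketch below. The type and field names are local stand-ins for the `config_loader` package's actual definitions, which are not shown in this diff:

```go
package main

import (
	"fmt"
	"time"
)

// Local stand-ins for the config_loader types referenced in the review;
// the real package's field names may differ.
type AdapterInfo struct{ Name, Version string }
type HyperfleetAPIConfig struct {
	Timeout    time.Duration
	MaxRetries int
}
type ClientsConfig struct{ HyperfleetAPI HyperfleetAPIConfig }
type Parameter struct {
	Name     string
	Source   string
	Required bool
}
type Config struct {
	Adapter AdapterInfo
	Clients ClientsConfig
	Params  []Parameter
}

// buildBaseTestConfig centralizes the boilerplate shared by the integration
// tests; each case patches only its scenario-specific fields afterwards.
// Returning a fresh value each call keeps test cases independent.
func buildBaseTestConfig() *Config {
	return &Config{
		Adapter: AdapterInfo{Name: "test-adapter", Version: "0.1.0"},
		Clients: ClientsConfig{HyperfleetAPI: HyperfleetAPIConfig{
			Timeout:    30 * time.Second,
			MaxRetries: 3,
		}},
		Params: []Parameter{{Name: "cluster_id", Source: "event.cluster_id", Required: true}},
	}
}

func main() {
	cfg := buildBaseTestConfig()
	cfg.Adapter.Name = "scenario-specific-adapter" // per-test override
	fmt.Println(cfg.Adapter.Name, cfg.Clients.HyperfleetAPI.Timeout)
}
```

Because the builder allocates a new `Config` per call, one test's overrides cannot leak into another — the property that makes schema migrations a one-place change.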
210-251: Add one explicit test for required `env.*` params without defaults.
A focused case in this validation suite would lock in the expected fail-fast behavior during config load.
Based on learnings: environment variable validation for required parameters (`env.*`, `Required=true`, no default) is intentionally enforced at config load time.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/loader_test.go` around lines 210 - 251, Add a new table-driven test case inside TestTaskConfigValidation that asserts a config with a param sourced from env.* which is marked required and has no default fails validation; specifically add a test entry in the tests slice with yaml containing a params item like "name: SOME\nsource: env.SOME\nrequired: true" (or similar env.* syntax used by the loader) and set wantError=true and errorMsg to the expected validation message, so TestTaskConfigValidation covers the fail-fast behavior for required env.* params without defaults.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@charts/examples/kubernetes/adapter-config.yaml`:
- Line 4: The example sets adapter.version to "1.25" which mismatches the
adapter runtime's 0.1.x scheme and will fail startup validation; update the
example value for adapter.version (the YAML key "adapter.version") to use the
0.1.x format matching the runtime (e.g., "0.1.25" or the specific runtime
version you ship) so major/minor components align, or replace the hardcoded
value with a variable/reference that reads the actual adapter runtime version
used by the deployment.
In `@internal/config_loader/validator.go`:
- Around line 582-586: The validator currently only checks for metadata["name"]
when metadata can be asserted to map[string]interface{}, so if
manifest["metadata"] exists but is the wrong type it silently skips validation;
change the logic to first test presence (e.g., _, present :=
manifest["metadata"]) and if present then attempt the type assertion, and if the
assertion fails call v.errors.Add(path+".metadata", fmt.Sprintf("field %q must
be an object", "metadata")) (or similar) to report a type error, otherwise when
metadata is a map[string]interface{} continue to check for the required
FieldName and call v.errors.Add(path+".metadata", fmt.Sprintf("missing required
field %q", FieldName)) as before.
In `@test/integration/config-loader/testdata/adapter-config-template.yaml`:
- Around line 211-214: The discovery.namespace field is being set for a
cluster-scoped Namespace resource causing wrong discovery; update the template
so discovery.namespace is omitted for Namespace resources by adding a
conditional around the namespace line (e.g., only render discovery.namespace
when the resource is namespaced or when .resource.kind != "Namespace" /
.isNamespaced == true), leaving the by_selectors section unchanged.
In `@test/integration/executor/executor_k8s_integration_test.go`:
- Line 193: The test captures the clusterName using metadata.name which doesn't
match HyperFleet's response schema; update the config_loader.FieldExpressionDef
used for the clusterName capture (the entry named "clusterName") to use Field:
"name" instead of "metadata.name" so the integration test aligns with
HyperFleet's root-level name field.
In `@test/testdata/task-config.yaml`:
- Around line 27-31: The CEL ternary in reason.expression is invalid because of
an extra trailing colon after "ConfigMapPending"; update the expression under
reason.expression (the block referencing
resources.clusterConfigMap.metadata.resourceVersion) to use standard CEL ternary
syntax condition ? "ConfigMapApplied" : "ConfigMapPending" with no colon after
the final value so the expression evaluates correctly at runtime.
---
Outside diff comments:
In `@charts/examples/README.md`:
- Around line 62-69: The README YAML example uses camelCase keys under
broker.googlepubsub (projectId, subscriptionId, topic, deadLetterTopic) which
conflicts with the new values schema; update those keys to snake_case
(project_id, subscription_id, topic, dead_letter_topic) in the example so the
documented sample matches the schema used by broker.googlepubsub and the chart
values.
- Around line 84-90: The helm example uses old camelCase broker keys
(broker.googlepubsub.projectId, subscriptionId, topic, deadLetterTopic); update
the --set overrides in the helm install example to use the renamed schema keys
(replace camelCase with kebab-case) — e.g. use broker.google-pubsub.project-id,
broker.google-pubsub.subscription-id, broker.google-pubsub.topic, and
broker.google-pubsub.dead-letter-topic in the command so the overrides target
the correct values.
In `@internal/config_loader/README.md`:
- Around line 38-50: The example uses removed fields (config.Metadata and
config.Spec) — update the "Accessing Configuration" snippet to use the new
structure: replace references to config.Metadata.Name with config.Adapter.Name
and replace config.Spec.Clients.HyperfleetAPI.Timeout with
config.Clients.HyperfleetAPI.Timeout, and keep the query helper calls
(config.GetRequiredParams(), config.GetResourceByName,
config.GetPreconditionByName, config.GetPostActionByName) intact so the example
matches the refactored config API.
In `@internal/config_loader/validator.go`:
- Around line 370-378: The code currently always calls v.validateTemplateMap on
resource.Manifest but the comment says Maestro transport manifests should skip
template validation; update the loop in validator.go to only call
v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest) when the
resource is not using the Maestro transport (e.g., check resource.Transport !=
TransportMaestro or resource.Transport != "maestro" depending on how transports
are represented). Locate the loop over v.config.Resources and modify the
conditional that wraps v.validateTemplateMap to include a guard against Maestro
transport while keeping the existing manifest map type check and preserving
resourcePath/FieldManifest usage.
---
Duplicate comments:
In `@internal/config_loader/accessors.go`:
- Around line 27-28: The comment above the builtinVariables slice is missing
"config" — update the comment that lists built-in variables to include "config"
alongside "adapter, now, date" so it matches the builtinVariables slice (symbol:
builtinVariables) in accessors.go; ensure the textual list and punctuation
mirror the actual slice contents.
In `@internal/executor/executor_test.go`:
- Around line 367-370: Remove the first redundant assignment that ignores the
error: the duplicate call to configToMap which sets configMap and discards its
error should be deleted so only the subsequent validated call remains; update
the test to call configToMap once (the configMap, err := configToMap(config)
line), keep require.NoError(t, err), and then pass configMap into
extractConfigParams with execCtx as before (ensure references to configMap,
configToMap, require.NoError, and extractConfigParams remain intact).
In `@test/integration/config-loader/testdata/adapter-config-template.yaml`:
- Around line 139-140: The mapping for "clusterName" currently reads field:
"metadata.name" but HyperFleet API responses provide the cluster name at the
root "name"; update the adapter-config-template mapping for the entry with name
"clusterName" (the object whose name is "clusterName") to use field: "name"
instead of "metadata.name" so downstream template fields resolve correctly.
- Around line 308-319: The CEL expressions access
resources.clusterController.status.conditions.filter(...)[0] without ensuring
the filtered list is non-empty; update the reason and message expressions (and
the other occurrence at lines ~325-336) to first bind or check the filtered list
(e.g., let available = resources.clusterController.status.conditions.filter(c,
c.type == 'Available')) and only reference available[0] when available.size() >
0 (or use a conditional has/size check), returning the fallback strings
("ResourcesCreated" / "All Kubernetes resources created successfully") when the
filtered list is empty.
In `@test/integration/executor/executor_k8s_integration_test.go`:
- Around line 219-221: Replace the legacy template context for the
"hyperfleet.io/managed-by" label in executor_k8s_integration_test.go: update
occurrences that use "{{ .metadata.name }}" to "{{ .adapter.name }}" (e.g., in
the label map alongside "hyperfleet.io/cluster-id" and "test") so the template
renders the adapter name correctly; ensure you update all instances mentioned
(the blocks around the current snippet and the other occurrences at the same
pattern near the second and third locations referenced).
In `@test/integration/executor/testdata/test-adapter-config.yaml`:
- Around line 31-33: Update the precondition for the `clusterName` entry to
capture the HyperFleet root-level name field instead of Kubernetes-shaped
metadata; change the `field` value on the `clusterName` precondition (the entry
with name: clusterName) from `metadata.name` to `name` so the adapter reads the
API root `name` field (leave the `readyConditionStatus` entry untouched).
In `@test/testdata/adapter_config_valid.yaml`:
- Around line 94-96: The capture path for the precondition is using the
Kubernetes-style key `metadata.name` but the HyperFleet API response uses a
root-level `name`; update the capture entry that sets `name: clusterName` to use
`field: name` instead of `field: metadata.name` so the precondition extracts the
correct `clusterName` from the HyperFleet response.
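Why the wrong capture path fails silently is easy to see with a dotted-path lookup over the response map — the mechanism a capture `field` implies. A self-contained sketch (the response shape is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// lookupPath walks a dotted path (e.g. "metadata.name") through nested
// maps and reports whether every segment was present.
func lookupPath(m map[string]interface{}, path string) (interface{}, bool) {
	var cur interface{} = m
	for _, seg := range strings.Split(path, ".") {
		obj, ok := cur.(map[string]interface{})
		if !ok {
			return nil, false
		}
		cur, ok = obj[seg]
		if !ok {
			return nil, false
		}
	}
	return cur, true
}

func main() {
	// HyperFleet-style response: the cluster name lives at the root,
	// unlike a Kubernetes object where it sits under metadata.
	response := map[string]interface{}{
		"id":   "c-123",
		"name": "prod-cluster",
	}

	if v, ok := lookupPath(response, "name"); ok {
		fmt.Println("captured:", v) // prints "captured: prod-cluster"
	}
	if _, ok := lookupPath(response, "metadata.name"); !ok {
		fmt.Println("metadata.name absent; capture would resolve empty")
	}
}
```

With `field: metadata.name` the first segment already misses, so `clusterName` resolves empty without any error — which is why these fixture fixes matter.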
In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml`:
- Around line 330-334: The dig() expression is pointing at a non-existent key
"configmap0.data.cluster_metadata" so cluster_metadata_region will be empty;
update the expression in the same expression block that uses dig(...) to
reference the actual key emitted by configmap0 (e.g., replace
"configmap0.data.cluster_metadata" with the real key name present under
configmap0.data or adjust the resource id from configmap0 to the correct
resource name), or modify configmap0 to emit cluster_metadata if that is
intended; ensure the symbol dig(resources, "...") targets a valid path that
exists in the manifest.
---
Nitpick comments:
In `@internal/config_loader/loader_test.go`:
- Around line 210-251: Add a new table-driven test case inside
TestTaskConfigValidation that asserts a config with a param sourced from env.*
which is marked required and has no default fails validation; specifically add a
test entry in the tests slice with yaml containing a params item like "name:
SOME\nsource: env.SOME\nrequired: true" (or similar env.* syntax used by the
loader) and set wantError=true and errorMsg to the expected validation message,
so TestTaskConfigValidation covers the fail-fast behavior for required env.*
params without defaults.
In `@internal/config_loader/types.go`:
- Around line 58-69: Replace the duplicated literal "**REDACTED**" with a single
package-level constant (e.g., redactedToken) and use that constant when
redacting fields on tlsCopy (CAFile, CertFile, KeyFile, HTTPCAFile); add the
constant near the top of the file (or in the same package) and update the
assignments in the tlsCopy redaction block to assign redactedToken instead of
the literal.
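The constant-extraction refactor described above can be shown end to end on a simplified type. A sketch (the `TLSConfig` stand-in and `redacted` method name are assumptions, not the package's exact API):

```go
package main

import "fmt"

// redactedToken centralizes the placeholder so the redaction policy
// lives in one place.
const redactedToken = "**REDACTED**"

// TLSConfig is a simplified stand-in for the loader's TLS settings.
type TLSConfig struct {
	CAFile, CertFile, KeyFile, HTTPCAFile string
}

// redacted returns a copy with sensitive file paths masked; the value
// receiver guarantees the original config is left untouched.
func (t TLSConfig) redacted() TLSConfig {
	c := t
	if c.CAFile != "" {
		c.CAFile = redactedToken
	}
	if c.CertFile != "" {
		c.CertFile = redactedToken
	}
	if c.KeyFile != "" {
		c.KeyFile = redactedToken
	}
	if c.HTTPCAFile != "" {
		c.HTTPCAFile = redactedToken
	}
	return c
}

func main() {
	tls := TLSConfig{CAFile: "/etc/tls/ca.pem", KeyFile: "/etc/tls/key.pem"}
	fmt.Printf("%+v\n", tls.redacted())
	fmt.Println("original kept:", tls.KeyFile)
}
```

Changing the redaction policy (say, to a different token or to hashing) then touches one constant or one method rather than four scattered string literals.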
In `@test/integration/executor/executor_integration_test.go`:
- Around line 56-58: The function createTestConfig currently accepts an unused
parameter apiBaseURL which is discarded; remove this dead argument from the
function signature and all call sites. Update the declaration of
createTestConfig to have no parameters and adjust any tests or helpers that call
createTestConfig() to stop passing apiBaseURL. Ensure no other symbols (e.g.,
references to config_loader.Config construction) are changed besides removing
the unused parameter.
- Around line 922-1003: Extract the repeated Adapter/Clients/Params setup into a
helper function (e.g., buildBaseTestConfig) that returns *config_loader.Config
populated with Adapter: config_loader.AdapterInfo, Clients:
config_loader.ClientsConfig (HyperfleetAPI timeout/retry), and Params slice;
then replace the inline repeated block that assigns to config in
executor_integration_test.go with a call to buildBaseTestConfig() and patch only
the test-specific fields (Preconditions/Post) on the returned config; ensure the
helper uses the same types (config_loader.Config, AdapterInfo, ClientsConfig,
Parameter) so callers can append or override Pre/Post sections.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7a224d84-2e2c-4522-9f52-6bbef6dca7c5
📒 Files selected for processing (63)
- README.md
- charts/examples/README.md
- charts/examples/kubernetes/adapter-config.yaml
- charts/examples/kubernetes/adapter-task-config.yaml
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- charts/examples/kubernetes/values.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro/adapter-config.yaml
- charts/examples/maestro/adapter-task-config.yaml
- charts/examples/maestro/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro/values.yaml
- charts/templates/configmap-broker.yaml
- charts/templates/deployment.yaml
- charts/values.yaml
- cmd/adapter/main.go
- configs/adapter-config-template.yaml
- configs/adapter-deployment-config.yaml
- configs/adapter-task-config-template.yaml
- configs/templates/cluster-status-payload.yaml
- configs/templates/job.yaml
- configuration.md
- docs/adapter-authoring-guide.md
- docs/configuration.md
- internal/config_loader/README.md
- internal/config_loader/accessors.go
- internal/config_loader/constants.go
- internal/config_loader/loader.go
- internal/config_loader/loader_test.go
- internal/config_loader/types.go
- internal/config_loader/validator.go
- internal/config_loader/validator_test.go
- internal/config_loader/viper_loader.go
- internal/executor/README.md
- internal/executor/executor.go
- internal/executor/executor_test.go
- internal/executor/param_extractor.go
- internal/executor/types.go
- internal/executor/utils.go
- internal/executor/utils_test.go
- internal/hyperfleet_api/types.go
- test/integration/config-loader/config_criteria_integration_test.go
- test/integration/config-loader/loader_template_test.go
- test/integration/config-loader/testdata/adapter-config-template.yaml
- test/integration/config-loader/testdata/adapter_config_valid.yaml
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- test/integration/executor/executor_integration_test.go
- test/integration/executor/executor_k8s_integration_test.go
- test/integration/executor/testdata/test-adapter-config.yaml
- test/integration/maestro_client/client_tls_config_integration_test.go
- test/testdata/adapter-config.yaml
- test/testdata/adapter_config_valid.yaml
- test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml
- test/testdata/dryrun/dryrun-kubernetes-task-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml
- test/testdata/task-config.yaml
- test/testdata/templates/deployment.yaml
💤 Files with no reviewable changes (5)
- configuration.md
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
✅ Files skipped from review due to trivial changes (2)
- docs/configuration.md
- internal/executor/types.go
🚧 Files skipped from review as they are similar to previous changes (12)
- internal/executor/README.md
- README.md
- internal/executor/utils_test.go
- test/testdata/templates/deployment.yaml
- internal/executor/executor.go
- internal/executor/param_extractor.go
- charts/examples/maestro/adapter-task-resource-manifestwork.yaml
- test/testdata/adapter-config.yaml
- charts/templates/configmap-broker.yaml
- internal/config_loader/loader.go
- charts/templates/deployment.yaml
- configs/templates/job.yaml
Force-pushed dbae2bd to 9e688d8
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
internal/executor/README.md (1)
507-509: ⚠️ Potential issue | 🟡 Minor
Update stale config path in docs (`spec.clients.kubernetes` → `clients.kubernetes`).
Line 508 still points to the old nested path, which conflicts with the flattened schema introduced in this PR.
Suggested patch

```diff
 Kubernetes client settings are read from the adapter deployment config at
-`spec.clients.kubernetes` (kubeconfig path or in-cluster).
+`clients.kubernetes` (kubeconfig path or in-cluster).
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/executor/README.md` around lines 507 - 509, Update the README reference that currently points to the old nested config path `spec.clients.kubernetes` so it matches the flattened schema by replacing it with `clients.kubernetes`; search for the literal `spec.clients.kubernetes` in the README (the sentence describing Kubernetes client settings) and change it to `clients.kubernetes` and verify any surrounding explanatory text reflects the flattened config shape.
test/integration/config-loader/loader_template_test.go (1)
111-124: ⚠️ Potential issue | 🟡 Minor
Require `post` to exist in this merge-path test.
Line 111 makes post assertions conditional, so the test can still pass if `post` merging regresses to `nil`.
Suggested test tightening

```diff
-	if config.Post != nil {
-		assert.NotEmpty(t, config.Post.Payloads)
-		assert.NotEmpty(t, config.Post.PostActions)
-
-		// Check post action
-		if len(config.Post.PostActions) > 0 {
-			firstAction := config.Post.PostActions[0]
-			assert.NotEmpty(t, firstAction.Name)
-			if firstAction.APICall != nil {
-				assert.NotEmpty(t, firstAction.APICall.Method)
-				assert.NotEmpty(t, firstAction.APICall.URL)
-			}
-		}
-	}
+	require.NotNil(t, config.Post, "post should be merged from task config")
+	assert.NotEmpty(t, config.Post.Payloads)
+	require.NotEmpty(t, config.Post.PostActions, "post_actions should be merged from task config")
+
+	firstAction := config.Post.PostActions[0]
+	assert.NotEmpty(t, firstAction.Name)
+	if firstAction.APICall != nil {
+		assert.NotEmpty(t, firstAction.APICall.Method)
+		assert.NotEmpty(t, firstAction.APICall.URL)
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/loader_template_test.go` around lines 111 - 124, The test currently guards the post assertions with "if config.Post != nil", allowing a regression where Post becomes nil to still pass; change the test to assert that config.Post is non-nil (e.g., require.NotNil(t, config.Post) or assert.NotNil) before checking its fields, then replace subsequent optional checks with strict assertions (require.NotEmpty/NotNil) on config.Post.Payloads, config.Post.PostActions, the firstAction.Name, and firstAction.APICall.Method/URL so the merge-path test fails if Post is missing or not properly merged.
♻️ Duplicate comments (7)
test/testdata/task-config.yaml (1)
27-31: ⚠️ Potential issue | 🔴 Critical
Fix invalid CEL ternary syntax in `reason.expression`.
Line 31 has an extra trailing `:` after `"ConfigMapPending"`, which makes the CEL expression invalid.
🔧 Proposed fix

```diff
   reason:
     expression: |
       has(resources.clusterConfigMap.metadata.resourceVersion)
         ? "ConfigMapApplied"
-        : "ConfigMapPending":
+        : "ConfigMapPending"
```

```shell
#!/bin/bash
# Verify whether invalid trailing-colon false branches still exist in this fixture.
rg -nP ':\s*"ConfigMapPending":\s*$' test/testdata/task-config.yaml
```

Expected result: currently 1 match; after the fix, 0 matches.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/task-config.yaml` around lines 27 - 31, The CEL ternary in reason.expression is invalid due to an extra trailing colon after the false branch; locate the reason.expression block and remove the trailing ':' following "ConfigMapPending" so the expression becomes a valid ternary (has(...)? "ConfigMapApplied" : "ConfigMapPending") in the task-config.yaml fixture.
test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml (1)
330-333: ⚠️ Potential issue | 🟡 Minor
Fix invalid `dig()` target under `cluster_metadata_region`.
`configmap0.data.cluster_metadata` is not produced by `configmap0.data`, so this expression can resolve empty.
🔧 Proposed fix

```diff
-      cluster_metadata_region:
+      cluster_name:
         # Pattern 12b: dig() — traversing into a nested path
         expression: |
-          dig(resources, "configmap0.data.cluster_metadata")
+          dig(resources, "configmap0.data.cluster_name")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml` around lines 330 - 333, The dig() call for cluster_metadata_region targets a non-existent path "configmap0.data.cluster_metadata"; update the expression in cluster_metadata_region to reference the actual key produced by configmap0.data (e.g., dig(resources, "configmap0.data.<actual_key>") or just dig(resources, "configmap0.data") and then extract the correct subkey), replacing "configmap0.data.cluster_metadata" with the real field name emitted by configmap0.data so the expression does not resolve empty.
internal/config_loader/accessors.go (1)
27-28: ⚠️ Potential issue | 🟡 Minor
Update built-in variable docs to include `config`.
Line [27] still documents built-ins as `adapter, now, date`, but Line [13] includes `config` too.
Suggested fix

```diff
-// - Built-in variables (adapter, now, date)
+// - Built-in variables (adapter, config, now, date)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/accessors.go` around lines 27 - 28, Update the built-in variables list in the top-of-file comment to include `config` so it matches the other documentation (change the line that currently reads "Built-in variables (adapter, now, date)" to include `config` as "adapter, now, date, config"); locate the comment block in internal/config_loader/accessors.go (the comment surrounding the built-in variables) and make this single-word addition so the docs match the actual built-ins referenced elsewhere.
internal/config_loader/validator.go (1)
582-586: ⚠️ Potential issue | 🟡 Minor
Add an explicit type error when `metadata` is not an object.
If `metadata` exists but is not a map, validation currently skips erroring on type and can let malformed manifests pass.
Suggested fix

```diff
 	if metadata, ok := manifest["metadata"].(map[string]interface{}); ok {
 		if _, hasName := metadata[FieldName]; !hasName {
 			v.errors.Add(path+"."+"metadata", fmt.Sprintf("missing required field %q", FieldName))
 		}
+	} else if _, exists := manifest["metadata"]; exists {
+		v.errors.Add(path+".metadata", "metadata must be an object")
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 582 - 586, The validator currently only checks for a missing "name" when metadata is a map, but if manifest["metadata"] exists and is not a map[string]interface{} it silently passes; update the logic around manifest["metadata"] in validator.go to first detect presence and type: if metadata is present but not a map, call v.errors.Add with the same path (path+"."+ "metadata") and a clear type error message (e.g., "expected object for %q" using FieldName); keep the existing branch that, when metadata is a map, checks for the missing FieldName and adds the missing-field error as before.
test/integration/config-loader/testdata/adapter-config-template.yaml (3)
211-214: ⚠️ Potential issue | 🟠 Major
Do not set discovery namespace for a `Namespace` resource.
`Namespace` is cluster-scoped; setting `discovery.namespace` here can cause incorrect lookup behavior.
Suggested fix

```diff
         discovery:
-          namespace: "cluster-{{ .clusterId }}"
           by_selectors:
             label_selector:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 211 - 214, This template sets discovery.namespace for a cluster-scoped resource which is incorrect; remove the discovery.namespace field from the adapter-config-template.yaml when defining the Namespace resource (i.e., delete the discovery.namespace: "cluster-{{ .clusterId }}" entry under discovery) so discovery is limited to selectors (by_selectors / label_selector) and does not perform a namespaced lookup for the Namespace resource type.
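For contrast, a minimal discovery sketch under the flattened schema (field names and label keys below are assumptions based on this template, not a verified spec): cluster-scoped resources omit `discovery.namespace`, while namespaced resources may legitimately set it.

```yaml
resources:
  - name: clusterNamespace            # cluster-scoped: no discovery.namespace
    discovery:
      by_selectors:
        label_selector: "hyperfleet.io/cluster-id={{ .clusterId }}"
  - name: clusterConfigMap            # namespaced: scoping by namespace is valid
    discovery:
      namespace: "cluster-{{ .clusterId }}"
      by_selectors:
        label_selector: "app=cluster-controller"
```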
308-336: ⚠️ Potential issue | 🟠 Major
Guard filtered-condition indexing before `[0]` access in CEL.
These expressions index `[0]` without checking list size, so payload build can fail when no `Available` condition exists.

```shell
#!/bin/bash
set -euo pipefail
rg -n "filter\(c, c.type == 'Available'\)\[0\]" test/integration/config-loader/testdata/adapter-config-template.yaml
```

Suggested fix pattern

```diff
       status:
         expression: |
-          resources.clusterNamespace.status.phase == "Active" &&
-          resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
+          resources.clusterNamespace.status.phase == "Active" &&
+          resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+          resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
       reason:
         expression: |
-          has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
+          resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+          has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
             ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason
             : "ResourcesCreated"
       message:
         expression: |
-          has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
+          resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+          has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
             ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message
             : "All Kubernetes resources created successfully"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 308 - 336, The CEL expressions directly index resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0] (used in the reason and message expressions and the available checks) without guarding for an empty list; update each expression to first bind or evaluate the filtered list and check its length (e.g., assign a temp like availableConds = resources.clusterController.status.conditions.filter(c, c.type == 'Available') or use availableConds.size() > 0) before accessing [0], and fall back to the default reason/message ("ResourcesCreated"/"DeploymentReady" and the existing default messages) when the list is empty so no out-of-range access occurs.
139-141: ⚠️ Potential issue | 🔴 Critical
Fix malformed YAML in `clusterName` capture.
Line 140 has `field: name"` (unbalanced quote), which breaks YAML parsing and capture mapping.
Suggested fix

```diff
     - name: "clusterName"
-      field: name"
+      field: "name"
```

For HyperFleet API `api_call` captures, cluster name is captured from the root-level `name` field of the API response schema.
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 139 - 141, The YAML capture for "clusterName" is malformed: the capture mapping uses `field: name"` (unbalanced quote) which breaks parsing; update the capture for the `clusterName` entry to use a valid field value (e.g., change `field: name"` to `field: name`) so the `clusterName` capture correctly pulls from the root-level `name` field; ensure the surrounding mapping (the `- name: "clusterName"` entry) remains syntactically valid and that `readyConditionStatus` remains unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/config_loader/README.md`:
- Line 43: The README example uses the old nested field path
config.Spec.Clients.HyperfleetAPI.Timeout which no longer exists; update the
example to use the flattened config path (e.g.,
config.Clients.HyperfleetAPI.Timeout) so it matches the current config struct
and compile-time fields referenced by the code samples (look for the example
line containing timeout := config.Spec.Clients.HyperfleetAPI.Timeout and replace
it with the flattened config.Clients... access).
- Line 118: The README has a fenced code block opened without a language tag
which triggers markdownlint MD040; update the opening fence in
internal/config_loader/README.md to include a language tag (e.g., change "```"
to "```text") so the block containing the validation messages
(params[0].name..., preconditions..., resources...) is properly language-tagged.
In `@internal/executor/param_extractor.go`:
- Around line 116-137: Delete the four unused wrapper functions convertToString,
convertToInt64, convertToFloat64, and convertToBool from param_extractor.go
(including the //nolint comment) because they are not referenced anywhere; keep
convertParamType as the single adapter to utils.ConvertToType (or, if you
prefer, replace its single call site with a direct utils.ConvertToType call and
then remove convertParamType) and ensure there are no remaining references to
the removed functions.
In `@test/integration/maestro_client/client_tls_config_integration_test.go`:
- Line 200: Add nil checks for cfg.Clients and cfg.Clients.Maestro before
dereferencing them in the TLS integration tests; specifically, guard the calls
to buildMaestroClientConfigFromLoaded by verifying cfg != nil, cfg.Clients !=
nil, and cfg.Clients.Maestro != nil and call t.Fatalf or require.NotNil with a
clear message if any is nil. Update each call site that uses
buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro) (the occurrences around
the previous call sites) so the test fails with an assertion instead of
panicking when the Maestro client config is missing.
---
Outside diff comments:
In `@internal/executor/README.md`:
- Around line 507-509: Update the README reference that currently points to the
old nested config path `spec.clients.kubernetes` so it matches the flattened
schema by replacing it with `clients.kubernetes`; search for the literal
`spec.clients.kubernetes` in the README (the sentence describing Kubernetes
client settings) and change it to `clients.kubernetes` and verify any
surrounding explanatory text reflects the flattened config shape.
In `@test/integration/config-loader/loader_template_test.go`:
- Around line 111-124: The test currently guards the post assertions with "if
config.Post != nil", allowing a regression where Post becomes nil to still pass;
change the test to assert that config.Post is non-nil (e.g., require.NotNil(t,
config.Post) or assert.NotNil) before checking its fields, then replace
subsequent optional checks with strict assertions (require.NotEmpty/NotNil) on
config.Post.Payloads, config.Post.PostActions, the firstAction.Name, and
firstAction.APICall.Method/URL so the merge-path test fails if Post is missing
or not properly merged.
---
Duplicate comments:
In `@internal/config_loader/accessors.go`:
- Around line 27-28: Update the built-in variables list in the top-of-file
comment to include `config` so it matches the other documentation (change the
line that currently reads "Built-in variables (adapter, now, date)" to include
`config` as "adapter, now, date, config"); locate the comment block in
internal/config_loader/accessors.go (the comment surrounding the built-in
variables) and make this single-word addition so the docs match the actual
built-ins referenced elsewhere.
In `@internal/config_loader/validator.go`:
- Around line 582-586: The validator currently only checks for a missing "name"
when metadata is a map, but if manifest["metadata"] exists and is not a
map[string]interface{} it silently passes; update the logic around
manifest["metadata"] in validator.go to first detect presence and type: if
metadata is present but not a map, call v.errors.Add with the same path
(path+"."+ "metadata") and a clear type error message (e.g., "expected object
for %q" using FieldName); keep the existing branch that, when metadata is a map,
checks for the missing FieldName and adds the missing-field error as before.
In `@test/integration/config-loader/testdata/adapter-config-template.yaml`:
- Around line 211-214: This template sets discovery.namespace for a
cluster-scoped resource which is incorrect; remove the discovery.namespace field
from the adapter-config-template.yaml when defining the Namespace resource
(i.e., delete the discovery.namespace: "cluster-{{ .clusterId }}" entry under
discovery) so discovery is limited to selectors (by_selectors / label_selector)
and does not perform a namespaced lookup for the Namespace resource type.
- Around line 308-336: The CEL expressions directly index
resources.clusterController.status.conditions.filter(c, c.type ==
'Available')[0] (used in the reason and message expressions and the available
checks) without guarding for an empty list; update each expression to first bind
or evaluate the filtered list and check its length (e.g., assign a temp like
availableConds = resources.clusterController.status.conditions.filter(c, c.type
== 'Available') or use availableConds.size() > 0) before accessing [0], and fall
back to the default reason/message ("ResourcesCreated"/"DeploymentReady" and the
existing default messages) when the list is empty so no out-of-range access
occurs.
- Around line 139-141: The YAML capture for "clusterName" is malformed: the
capture mapping uses `field: name"` (unbalanced quote) which breaks parsing;
update the capture for the `clusterName` entry to use a valid field value (e.g.,
change `field: name"` to `field: name`) so the `clusterName` capture correctly
pulls from the root-level `name` field; ensure the surrounding mapping (the `-
name: "clusterName"` entry) remains syntactically valid and that
`readyConditionStatus` remains unchanged.
In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml`:
- Around line 330-333: The dig() call for cluster_metadata_region targets a
non-existent path "configmap0.data.cluster_metadata"; update the expression in
cluster_metadata_region to reference the actual key produced by configmap0.data
(e.g., dig(resources, "configmap0.data.<actual_key>") or just dig(resources,
"configmap0.data") and then extract the correct subkey), replacing
"configmap0.data.cluster_metadata" with the real field name emitted by
configmap0.data so the expression does not resolve empty.
In `@test/testdata/task-config.yaml`:
- Around line 27-31: The CEL ternary in reason.expression is invalid due to an
extra trailing colon after the false branch; locate the reason.expression block
and remove the trailing ':' following "ConfigMapPending" so the expression
becomes a valid ternary (has(...)? "ConfigMapApplied" : "ConfigMapPending") in
the task-config.yaml fixture.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9f2653dc-9d8e-44d3-8b3d-1b06e9f7d0c3
📒 Files selected for processing (68)
- README.md
- charts/examples/README.md
- charts/examples/kubernetes/adapter-config.yaml
- charts/examples/kubernetes/adapter-task-config.yaml
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- charts/examples/kubernetes/values.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro/adapter-config.yaml
- charts/examples/maestro/adapter-task-config.yaml
- charts/examples/maestro/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro/values.yaml
- charts/templates/configmap-broker.yaml
- charts/templates/deployment.yaml
- charts/values.yaml
- cmd/adapter/main.go
- configs/adapter-config-template.yaml
- configs/adapter-deployment-config.yaml
- configs/adapter-task-config-template.yaml
- configs/templates/cluster-status-payload.yaml
- configs/templates/job.yaml
- configuration.md
- docs/adapter-authoring-guide.md
- docs/configuration.md
- internal/config_loader/README.md
- internal/config_loader/accessors.go
- internal/config_loader/constants.go
- internal/config_loader/loader.go
- internal/config_loader/loader_test.go
- internal/config_loader/types.go
- internal/config_loader/validator.go
- internal/config_loader/validator_test.go
- internal/config_loader/viper_loader.go
- internal/criteria/README.md
- internal/criteria/evaluator_test.go
- internal/executor/README.md
- internal/executor/executor.go
- internal/executor/executor_test.go
- internal/executor/param_extractor.go
- internal/executor/types.go
- internal/executor/utils.go
- internal/executor/utils_test.go
- internal/hyperfleet_api/types.go
- internal/manifest/generation.go
- internal/manifest/manifest.go
- test/integration/config-loader/config_criteria_integration_test.go
- test/integration/config-loader/loader_template_test.go
- test/integration/config-loader/testdata/adapter-config-template.yaml
- test/integration/config-loader/testdata/adapter_config_valid.yaml
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- test/integration/executor/executor_integration_test.go
- test/integration/executor/executor_k8s_integration_test.go
- test/integration/executor/testdata/test-adapter-config.yaml
- test/integration/maestro_client/client_tls_config_integration_test.go
- test/integration/testutil/mock_api_server.go
- test/testdata/adapter-config.yaml
- test/testdata/adapter_config_valid.yaml
- test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml
- test/testdata/dryrun/dryrun-kubernetes-task-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml
- test/testdata/task-config.yaml
- test/testdata/templates/deployment.yaml
💤 Files with no reviewable changes (7)
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- internal/manifest/manifest.go
- configuration.md
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- internal/manifest/generation.go
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
✅ Files skipped from review due to trivial changes (1)
- internal/criteria/README.md
🚧 Files skipped from review as they are similar to previous changes (10)
- README.md
- internal/config_loader/viper_loader.go
- internal/executor/utils.go
- internal/executor/types.go
- charts/examples/README.md
- test/testdata/adapter-config.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- charts/templates/configmap-broker.yaml
- internal/config_loader/loader_test.go
9e688d8 to df4b543
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
charts/examples/README.md (1)
46-49: ⚠️ Potential issue | 🟠 Major
README examples still use deprecated camelCase keys.
The examples currently mix pre-migration names (`nestedDiscoveries`, `projectId`, `subscriptionId`, `deadLetterTopic`) and will mislead users under the new snake_case standard.
🔧 Proposed fix

```diff
-- Resource discovery by name and by label selectors (`nestedDiscoveries`)
+- Resource discovery by name and by label selectors (`nested_discoveries`)
@@
 broker:
   googlepubsub:
-    projectId: CHANGE_ME
-    subscriptionId: CHANGE_ME
+    project_id: CHANGE_ME
+    subscription_id: CHANGE_ME
     topic: CHANGE_ME
-    deadLetterTopic: CHANGE_ME
+    dead_letter_topic: CHANGE_ME
@@
-  --set broker.googlepubsub.projectId=<gcp-project> \
-  --set broker.googlepubsub.subscriptionId=<gcp-subscription> \
+  --set broker.googlepubsub.project_id=<gcp-project> \
+  --set broker.googlepubsub.subscription_id=<gcp-subscription> \
   --set broker.googlepubsub.topic=<gcp-topic> \
-  --set broker.googlepubsub.deadLetterTopic=<gcp-dlq-topic>
+  --set broker.googlepubsub.dead_letter_topic=<gcp-dlq-topic>
```

Also applies to: 62-69, 87-90
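For reference, a broker values fragment consistent with the snake_case standard (placeholder values only, keys taken from the proposed fix):

```yaml
broker:
  googlepubsub:
    project_id: CHANGE_ME
    subscription_id: CHANGE_ME
    topic: CHANGE_ME
    dead_letter_topic: CHANGE_ME
```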
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/README.md` around lines 46 - 49, Update the example YAML/JSON keys to the new snake_case standard: replace nestedDiscoveries -> nested_discoveries, projectId -> project_id, subscriptionId -> subscription_id, deadLetterTopic -> dead_letter_topic (and any other camelCase keys in the same examples) so all examples consistently use snake_case; search for occurrences of those identifiers (e.g., "nestedDiscoveries", "projectId", "subscriptionId", "deadLetterTopic") in the README examples and update them and any related references (documentation text or example descriptions) to the corresponding snake_case names.
cmd/adapter/main.go (1)
383-390: ⚠️ Potential issue | 🟠 Major
Avoid logging raw merged config in debug mode.
At Line 389, debug logging marshals the full config. This can leak sensitive values to logs. Use the redacted config view before serialization.
🔧 Proposed fix
```diff
-	configBytes, err := yaml.Marshal(config)
+	configBytes, err := yaml.Marshal(config.Redacted())
 	if err != nil {
 		errCtx := logger.WithErrorField(ctx, err)
 		log.Warnf(errCtx, "Failed to marshal adapter configuration for logging")
 	} else {
-		log.Infof(ctx, "Loaded adapter configuration:\n%s", string(configBytes))
+		log.Infof(ctx, "Loaded adapter configuration (redacted):\n%s", string(configBytes))
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/adapter/main.go` around lines 383 - 390, The debug path currently marshals and logs the full config (see config.DebugConfig and yaml.Marshal(config)), which can leak secrets; instead obtain or build a redacted view of the configuration (e.g., call the existing redaction helper or create a shallow copy with sensitive fields removed/obfuscated), then marshal that redacted object and log it (replace yaml.Marshal(config) with yaml.Marshal(redactedConfig)), and keep the existing error handling using logger.WithErrorField and log.Infof for the redacted output.
internal/config_loader/validator.go (1)
370-377: ⚠️ Potential issue | 🟠 Major
Guard manifest template validation by transport type.
At Line 372, manifest template validation runs unconditionally. That contradicts the inline note and can incorrectly reject valid Maestro ManifestWork templates that rely on framework-provided runtime variables.
🔧 Proposed fix
```diff
-	if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
-		v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
-	}
+	if !resource.IsMaestroTransport() {
+		if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
+			v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
+		}
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 370 - 377, The manifest template validation is currently unconditional inside the loop over v.config.Resources and should be skipped for Maestro transport resources; change the loop in validator.go so that before calling v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest) you check the resource's transport/type (e.g., resource.Transport or equivalent field on the resource struct) and only run validateTemplateMap when the transport is not "maestro" (case-insensitive) — this preserves the existing inline note about skipping ManifestWork templates while leaving validation for other transports intact.
♻️ Duplicate comments (9)
internal/config_loader/README.md (2)
118-123: ⚠️ Potential issue | 🟡 Minor
Add a language tag to the fenced validation-output block.
This block is still opened with a bare fence and continues to trigger MD040.
🔧 Proposed fix
🔧 Proposed fix

````diff
-```
+```text
 params[0].name is required
 preconditions[1].api_call.method "INVALID" is invalid (allowed: GET, POST, PUT, PATCH, DELETE)
 resources[0].name "my-resource": must start with lowercase letter and contain only letters, numbers, underscores (no hyphens)
 preconditions[0].capture[0]: must have either 'field' or 'expression' set
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/README.md` around lines 118 - 123, Add a language tag to the fenced validation-output block so the markdown linter MD040 is satisfied: locate the fenced block containing the validation output lines (the code fence that starts with ``` and lists params[0].name..., preconditions[1].api_call.method..., resources[0].name..., preconditions[0].capture[0]...) and change the opening fence to include a language identifier (e.g., use ```text) while leaving the block contents unchanged.

43-43: ⚠️ Potential issue | 🟡 Minor
Update the stale flattened-struct accessor in the code sample.
The example still uses `config.Spec...`, which no longer matches the flattened config shape.
🔧 Proposed fix

```diff
-timeout := config.Spec.Clients.HyperfleetAPI.Timeout
+timeout := config.Clients.HyperfleetAPI.Timeout
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/README.md` at line 43, The README example uses the outdated accessor config.Spec.Clients.HyperfleetAPI.Timeout; update the sample to use the flattened config shape (e.g., timeout := config.Clients.HyperfleetAPI.Timeout) — locate the line with timeout := config.Spec.Clients.HyperfleetAPI.Timeout and replace the accessor to the new top-level/flattened field (confirm the exact flattened field name, such as config.Clients or config.HyperfleetAPI, and update the example accordingly).

test/integration/maestro_client/client_tls_config_integration_test.go (1)
200-200: ⚠️ Potential issue | 🟡 Minor
Add nil assertions before dereferencing `cfg.Clients.Maestro` in the remaining tests.
These call sites can still panic instead of failing with a clear test assertion when Maestro config is absent.
🔧 Proposed fix

```diff
@@
 	require.NoError(t, err)
+	require.NotNil(t, cfg, "Loaded config should be present")
+	require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 	clientCfg, err := buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro)
@@
 	require.NoError(t, err)
+	require.NotNil(t, cfg, "Loaded config should be present")
+	require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 	maestroCfg := cfg.Clients.Maestro
 	assert.True(t, maestroCfg.Insecure)
@@
 	require.NoError(t, err)
+	require.NotNil(t, cfg, "Loaded config should be present")
+	require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 	maestroCfg := cfg.Clients.Maestro
 	assert.Equal(t, env.TLSMaestroGRPCAddr, maestroCfg.GRPCServerAddress, "Env should override YAML")
```

Also applies to: 237-237, 285-285

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` at line 200, The tests call buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro) without checking cfg or cfg.Clients.Maestro and can panic; before any dereference add explicit nil assertions (e.g., require.NotNil/assert.NotNil) for cfg and cfg.Clients.Maestro in the affected tests so they fail with a clear assertion instead of panicking; locate the call sites referencing buildMaestroClientConfigFromLoaded and cfg.Clients.Maestro (including the other occurrences noted) and insert the nil checks at the top of each test function before invoking buildMaestroClientConfigFromLoaded.

test/testdata/task-config.yaml (1)
29-31: ⚠️ Potential issue | 🔴 Critical
Fix invalid CEL ternary syntax in `reason.expression`.
The false branch ends with an extra `:`, which makes the CEL expression invalid.
🔧 Proposed fix

```diff
   reason:
     expression: |
       has(resources.clusterConfigMap.metadata.resourceVersion)
         ? "ConfigMapApplied"
-        : "ConfigMapPending":
+        : "ConfigMapPending"
```

```shell
#!/bin/bash
rg -n 'ConfigMapPending":' test/testdata/task-config.yaml
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/task-config.yaml` around lines 29 - 31, The CEL ternary in reason.expression currently has an extra trailing colon making it invalid; locate the expression using the snippet has(resources.clusterConfigMap.metadata.resourceVersion) ? "ConfigMapApplied" : "ConfigMapPending": (search for "ConfigMapPending\":") and remove the stray trailing ":" so the ternary becomes a proper two-branch expression, ensuring reason.expression contains only the valid CEL conditional without the extra colon.

test/integration/config-loader/testdata/adapter-config-template.yaml (2)
307-319: ⚠️ Potential issue | 🟠 Major
Guard `filter(...)[0]` before indexing in CEL expressions.
These expressions index `[0]` without proving the filtered list is non-empty, so payload building can fail when `Available` is absent.
🔧 Proposed fix

```diff
       expression: |
         resources.clusterNamespace.status.phase == "Active" &&
+        resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
         resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
     reason:
       expression: |
-        has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
+        resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+        has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
           ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason
           : "ResourcesCreated"
     message:
       expression: |
-        has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
+        resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+        has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
           ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message
           : "All Kubernetes resources created successfully"
@@
     reason:
       expression: |
-        has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
+        resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+        has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
           ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason
           : "DeploymentReady"
     message:
       expression: |
-        has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
+        resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+        has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
           ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message
           : "Deployment is available and serving traffic"
```

Also applies to: 324-336

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 307 - 319, The CEL expressions index filter(...)[0] without ensuring the filtered list is non-empty; update each expression that accesses resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0] (e.g., in the expression for reason and message and the one accessing .status) to first check the filtered list length or use a ternary that guards access (for example: let avail = resources.clusterController.status.conditions.filter(c, c.type == 'Available'); avail.size() > 0 ? avail[0].reason : "ResourcesCreated" and similarly for .message and .status) so you never index [0] on an empty list.

211-213: ⚠️ Potential issue | 🟠 Major
Drop `discovery.namespace` for cluster-scoped `Namespace`.
Line 212 scopes discovery to a namespace for a cluster-scoped object, which can break discovery behavior.
🔧 Proposed fix

```diff
         discovery:
-          namespace: "cluster-{{ .clusterId }}"
           by_selectors:
             label_selector:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 211 - 213, Remove the discovery.namespace entry so cluster-scoped discovery is not restricted to a namespace: delete the line with discovery.namespace ("cluster-{{ .clusterId }}") in the adapter config template and leave the discovery.by_selectors block intact (or make the namespace conditional only for namespace-scoped adapters) so cluster-scoped resources are discovered across the entire cluster.
``` </details> </blockquote></details> <details> <summary>internal/config_loader/accessors.go (1)</summary><blockquote> `27-28`: _⚠️ Potential issue_ | _🟡 Minor_ **Keep built-in variable docs in sync with implementation.** The comment at Line 27 omits `config`, but `builtinVariables` includes it at Line 13. Please update the comment so it matches runtime behavior. <details> <summary>📝 Suggested fix</summary> ```diff -// - Built-in variables (adapter, now, date) +// - Built-in variables (adapter, config, now, date) ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/accessors.go` around lines 27 - 28, The file's comment about built-in variables is out of sync: the comment lists "adapter, now, date" but the code defines builtinVariables including "config"; update the comment to include "config" (and any other built-ins present in the builtinVariables slice) so documentation matches runtime behavior; locate the builtinVariables declaration and the comment above it in accessors.go and add "config" to the inline list (ensuring wording/order reflects the actual slice). ``` </details> </blockquote></details> <details> <summary>test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml (1)</summary><blockquote> `330-333`: _⚠️ Potential issue_ | _🟡 Minor_ **Fix `dig()` to target a key that actually exists.** At Line 333, `dig(resources, "configmap0.data.cluster_metadata")` references a key not produced by `configmap0.data` (which currently emits `cluster_id` and `cluster_name`). This can silently resolve empty in the showcase payload. 
<details> <summary>💡 Suggested fix</summary> ```diff - cluster_metadata_region: + cluster_name: # Pattern 12b: dig() — traversing into a nested path expression: | - dig(resources, "configmap0.data.cluster_metadata") + dig(resources, "configmap0.data.cluster_name") ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml` around lines 330 - 333, The dig() call is targeting a non-existent key "configmap0.data.cluster_metadata" and thus resolves empty; update the expression in the cluster_metadata_region entry to reference a real key emitted by configmap0.data (for example use "configmap0.data.cluster_id" or "configmap0.data.cluster_name" depending on which value you want) so the dig(resources, ...) call returns actual data. ``` </details> </blockquote></details> <details> <summary>internal/config_loader/validator.go (1)</summary><blockquote> `582-586`: _⚠️ Potential issue_ | _🟡 Minor_ **Report invalid `metadata` type explicitly.** At Line 582, if `metadata` exists but is not an object, validation silently skips the `name` requirement. This lets structurally invalid manifests bypass semantic checks. 
<details> <summary>🔧 Proposed fix</summary> ```diff - if metadata, ok := manifest["metadata"].(map[string]interface{}); ok { - if _, hasName := metadata[FieldName]; !hasName { - v.errors.Add(path+"."+"metadata", fmt.Sprintf("missing required field %q", FieldName)) - } - } + if metadataValue, exists := manifest["metadata"]; exists { + metadata, ok := metadataValue.(map[string]interface{}) + if !ok { + v.errors.Add(path+".metadata", "metadata must be an object") + } else if _, hasName := metadata[FieldName]; !hasName { + v.errors.Add(path+".metadata", fmt.Sprintf("missing required field %q", FieldName)) + } + } ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 582 - 586, The validator currently skips checking for the required metadata.name when the "metadata" key exists but is not an object; modify the validation around manifest and metadata so that if manifest["metadata"] exists and is not a map[string]interface{} you add a validation error via v.errors.Add (same path+"."+"metadata") with a message like "invalid type for field \"metadata\": expected object"; retain the existing check for missing FieldName when metadata is a map and keep using FieldName for the missing-field error. ``` </details> </blockquote></details> </blockquote></details> <details> <summary>🧹 Nitpick comments (1)</summary><blockquote> <details> <summary>internal/criteria/README.md (1)</summary><blockquote> `131-131`: **Clarify `.field` vs `{.field}` in the syntax table.** This row may read as mixed notation (`.field` in Syntax, `{.name}` in Example). Consider explicitly stating both forms are accepted (shorthand and full JSONPath) in this row or a one-line note below the table. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In `@internal/criteria/README.md` at line 131, Clarify the notation difference in the table row for `.field` by stating both accepted forms: keep `.field` in the Syntax column and change the Example (currently `{.name}`) to show both shorthand and full JSONPath (e.g., `.name` and `{.name}`) or add a one-line note under the table explaining that `.field` is the shorthand and `{.field}` is the full JSONPath wrapper accepted by the parser; reference the `.field` and `{.name}` tokens so the README makes both formats explicit. ``` </details> </blockquote></details> </blockquote></details> <details> <summary>🤖 Prompt for all review comments with AI agents</summary>Verify each finding against the current code and only fix it if needed.
Inline comments:
In @charts/examples/maestro/adapter-task-resource-manifestwork.yaml:
- Around line 26-27: The manifest sets the same semantic key
hyperfleet.io/cluster-name inconsistently (labels use {{ .clusterName }} while
annotations use {{ .clusterId }}); update the annotation occurrences that
reference {{ .clusterId }} (including the instances around the second and third
occurrences you noted) to use {{ .clusterName }} so the
hyperfleet.io/cluster-name value is consistent across both labels and
annotations (ensure you update all occurrences at the annotation positions
corresponding to the label changes).
In @internal/config_loader/types.go:
- Around line 54-77: The redactedClients function currently only redacts Maestro
TLS paths; update it to also redact sensitive HyperFleet API default headers by
checking if clients.HyperfleetAPI (or HyperFleetAPI) is non-nil, copying its
config, iterating over DefaultHeaders (or default_headers) and replacing header
values whose names match authorization-like patterns (e.g., case-insensitive
matches for "authorization", "auth", "token", "api[-_ ]?key", "x-?api-?key")
with the redactedValue constant, then assign the modified copy back to
copy.HyperfleetAPI before returning; this ensures ClientsConfig default headers
are not leaked in logs.
In @test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml:
- Around line 282-283: The ConfigMap metadata.namespace is not being normalized
to lowercase while the Namespace resource uses name: '{{ .clusterId | lower }}',
causing mismatches for mixed-case cluster IDs; update the ConfigMap templates
(the metadata.namespace fields) to use the same lowercasing template expression
(e.g., '{{ .clusterId | lower }}') wherever metadata.namespace is set (including
the other occurrence around the same block) so the ConfigMap targets the actual
created Namespace.
Outside diff comments:
In @charts/examples/README.md:
- Around line 46-49: Update the example YAML/JSON keys to the new snake_case
standard: replace nestedDiscoveries -> nested_discoveries, projectId ->
project_id, subscriptionId -> subscription_id, deadLetterTopic ->
dead_letter_topic (and any other camelCase keys in the same examples) so all
examples consistently use snake_case; search for occurrences of those
identifiers (e.g., "nestedDiscoveries", "projectId", "subscriptionId",
"deadLetterTopic") in the README examples and update them and any related
references (documentation text or example descriptions) to the corresponding
snake_case names.
In @cmd/adapter/main.go:
- Around line 383-390: The debug path currently marshals and logs the full
config (see config.DebugConfig and yaml.Marshal(config)), which can leak
secrets; instead obtain or build a redacted view of the configuration (e.g.,
call the existing redaction helper or create a shallow copy with sensitive
fields removed/obfuscated), then marshal that redacted object and log it
(replace yaml.Marshal(config) with yaml.Marshal(redactedConfig)), and keep the
existing error handling using logger.WithErrorField and log.Infof for the
redacted output.
In @internal/config_loader/validator.go:
- Around line 370-377: The manifest template validation is currently
unconditional inside the loop over v.config.Resources and should be skipped for
Maestro transport resources; change the loop in validator.go so that before
calling v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest) you
check the resource's transport/type (e.g., resource.Transport or equivalent
field on the resource struct) and only run validateTemplateMap when the
transport is not "maestro" (case-insensitive) — this preserves the existing
inline note about skipping ManifestWork templates while leaving validation for
other transports intact.
Duplicate comments:
In @internal/config_loader/accessors.go:
- Around line 27-28: The file's comment about built-in variables is out of sync:
the comment lists "adapter, now, date" but the code defines builtinVariables
including "config"; update the comment to include "config" (and any other
built-ins present in the builtinVariables slice) so documentation matches
runtime behavior; locate the builtinVariables declaration and the comment above
it in accessors.go and add "config" to the inline list (ensuring wording/order
reflects the actual slice).
In @internal/config_loader/README.md:
- Around line 118-123: Add a language tag to the fenced validation-output block
so the markdown linter MD040 is satisfied: locate the fenced block containing
the validation output lines (the code fence that starts with a bare ``` and lists params[0].name..., preconditions[1].api_call.method..., resources[0].name..., preconditions[0].capture[0]...) and change the opening fence to include a language identifier (e.g., use ```text) while leaving the block contents
unchanged.
- Line 43: The README example uses the outdated accessor
config.Spec.Clients.HyperfleetAPI.Timeout; update the sample to use the
flattened config shape (e.g., timeout := config.Clients.HyperfleetAPI.Timeout) —
locate the line with timeout := config.Spec.Clients.HyperfleetAPI.Timeout and
replace the accessor to the new top-level/flattened field (confirm the exact
flattened field name, such as config.Clients or config.HyperfleetAPI, and update
the example accordingly).
In @internal/config_loader/validator.go:
- Around line 582-586: The validator currently skips checking for the required
metadata.name when the "metadata" key exists but is not an object; modify the
validation around manifest and metadata so that if manifest["metadata"] exists
and is not a map[string]interface{} you add a validation error via v.errors.Add
(same path+"."+"metadata") with a message like "invalid type for field
"metadata": expected object"; retain the existing check for missing FieldName
when metadata is a map and keep using FieldName for the missing-field error.
In @test/integration/config-loader/testdata/adapter-config-template.yaml:
- Around line 307-319: The CEL expressions index filter(...)[0] without ensuring
the filtered list is non-empty; update each expression that accesses
resources.clusterController.status.conditions.filter(c, c.type ==
'Available')[0] (e.g., in the expression for reason and message and the one
accessing .status) to first check the filtered list length or use a ternary that
guards access (for example: let avail =
resources.clusterController.status.conditions.filter(c, c.type == 'Available');
avail.size() > 0 ? avail[0].reason : "ResourcesCreated" and similarly for
.message and .status) so you never index [0] on an empty list.
- Around line 211-213: Remove the discovery.namespace entry so cluster-scoped
discovery is not restricted to a namespace: delete the line with
discovery.namespace ("cluster-{{ .clusterId }}") in the adapter config template
and leave the discovery.by_selectors block intact (or make the namespace
conditional only for namespace-scoped adapters) so cluster-scoped resources are
discovered across the entire cluster.
In @test/integration/maestro_client/client_tls_config_integration_test.go:
- Line 200: The tests call
buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro) without checking cfg or
cfg.Clients.Maestro and can panic; before any dereference add explicit nil
assertions (e.g., require.NotNil/ assert.NotNil) for cfg and cfg.Clients.Maestro
in the affected tests so they fail with a clear assertion instead of panicking;
locate the call sites referencing buildMaestroClientConfigFromLoaded and
cfg.Clients.Maestro (including the other occurrences noted) and insert the nil
checks at the top of each test function before invoking
buildMaestroClientConfigFromLoaded.
In @test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml:
- Around line 330-333: The dig() call is targeting a non-existent key
"configmap0.data.cluster_metadata" and thus resolves empty; update the
expression in the cluster_metadata_region entry to reference a real key emitted
by configmap0.data (for example use "configmap0.data.cluster_id" or
"configmap0.data.cluster_name" depending on which value you want) so the
dig(resources, ...) call returns actual data.
In @test/testdata/task-config.yaml:
- Around line 29-31: The CEL ternary in reason.expression currently has an extra
trailing colon making it invalid; locate the expression using the snippet
has(resources.clusterConfigMap.metadata.resourceVersion) ? "ConfigMapApplied" :
"ConfigMapPending": (search for "ConfigMapPending":") and remove the stray
trailing ":" so the ternary becomes a proper two-branch expression, ensuring
reason.expression contains only the valid CEL conditional without the extra
colon.
Nitpick comments:
In @internal/criteria/README.md:
- Line 131: Clarify the notation difference in the table row for `.field` by
stating both accepted forms: keep `.field` in the Syntax column and change the
Example (currently `{.name}`) to show both shorthand and full JSONPath (e.g.,
`.name` and `{.name}`) or add a one-line note under the table explaining that
`.field` is the shorthand and `{.field}` is the full JSONPath wrapper accepted
by the parser; reference the `.field` and `{.name}` tokens so the README makes
both formats explicit.</details> --- <details> <summary>ℹ️ Review info</summary> <details> <summary>⚙️ Run configuration</summary> **Configuration used**: Organization UI **Review profile**: CHILL **Plan**: Pro **Run ID**: `f5ad9536-ab75-4cd8-a03a-9b0d8d1aa2ea` </details> <details> <summary>📥 Commits</summary> Reviewing files that changed from the base of the PR and between 9e688d8a19a5e1bdf41f26d9d37543e3f344ffba and df4b543fe948258a89a407406deb9ce21420eff5. </details> <details> <summary>📒 Files selected for processing (68)</summary> * `README.md` * `charts/examples/README.md` * `charts/examples/kubernetes/adapter-config.yaml` * `charts/examples/kubernetes/adapter-task-config.yaml` * `charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml` * `charts/examples/kubernetes/values.yaml` * `charts/examples/maestro-kubernetes/adapter-config.yaml` * `charts/examples/maestro-kubernetes/adapter-task-config.yaml` * `charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml` * `charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml` * `charts/examples/maestro/adapter-config.yaml` * `charts/examples/maestro/adapter-task-config.yaml` * `charts/examples/maestro/adapter-task-resource-manifestwork.yaml` * `charts/examples/maestro/values.yaml` * `charts/templates/configmap-broker.yaml` * `charts/templates/deployment.yaml` * `charts/values.yaml` * `cmd/adapter/main.go` * `configs/adapter-config-template.yaml` * `configs/adapter-deployment-config.yaml` * `configs/adapter-task-config-template.yaml` * `configs/templates/cluster-status-payload.yaml` * `configs/templates/job.yaml` * `configuration.md` * `docs/adapter-authoring-guide.md` * `docs/configuration.md` * `internal/config_loader/README.md` * `internal/config_loader/accessors.go` * `internal/config_loader/constants.go` * `internal/config_loader/loader.go` * `internal/config_loader/loader_test.go` * `internal/config_loader/types.go` * `internal/config_loader/validator.go` * 
`internal/config_loader/validator_test.go` * `internal/config_loader/viper_loader.go` * `internal/criteria/README.md` * `internal/criteria/evaluator_test.go` * `internal/executor/README.md` * `internal/executor/executor.go` * `internal/executor/executor_test.go` * `internal/executor/param_extractor.go` * `internal/executor/types.go` * `internal/executor/utils.go` * `internal/executor/utils_test.go` * `internal/hyperfleet_api/types.go` * `internal/manifest/generation.go` * `internal/manifest/manifest.go` * `test/integration/config-loader/config_criteria_integration_test.go` * `test/integration/config-loader/loader_template_test.go` * `test/integration/config-loader/testdata/adapter-config-template.yaml` * `test/integration/config-loader/testdata/adapter_config_valid.yaml` * `test/integration/config-loader/testdata/templates/cluster-status-payload.yaml` * `test/integration/config-loader/testdata/templates/job.yaml` * `test/integration/executor/executor_integration_test.go` * `test/integration/executor/executor_k8s_integration_test.go` * `test/integration/executor/testdata/test-adapter-config.yaml` * `test/integration/maestro_client/client_tls_config_integration_test.go` * `test/integration/testutil/mock_api_server.go` * `test/testdata/adapter-config.yaml` * `test/testdata/adapter_config_valid.yaml` * `test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml` * `test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml` * `test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml` * `test/testdata/dryrun/dryrun-kubernetes-task-config.yaml` * `test/testdata/dryrun/dryrun-maestro-adapter-config.yaml` * `test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml` * `test/testdata/task-config.yaml` * `test/testdata/templates/deployment.yaml` </details> <details> <summary>💤 Files with no reviewable changes (7)</summary> * internal/manifest/generation.go * charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml * 
internal/manifest/manifest.go * configuration.md * charts/examples/maestro-kubernetes/adapter-config.yaml * charts/examples/maestro-kubernetes/adapter-task-config.yaml * charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml </details> <details> <summary>✅ Files skipped from review due to trivial changes (1)</summary> * test/testdata/dryrun/dryrun-kubernetes-task-config.yaml </details> <details> <summary>🚧 Files skipped from review as they are similar to previous changes (17)</summary> * configs/templates/job.yaml * internal/executor/utils_test.go * internal/executor/utils.go * internal/config_loader/loader.go * charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml * configs/templates/cluster-status-payload.yaml * internal/executor/README.md * docs/configuration.md * test/integration/testutil/mock_api_server.go * README.md * charts/templates/configmap-broker.yaml * test/integration/config-loader/loader_template_test.go * test/integration/config-loader/testdata/templates/cluster-status-payload.yaml * internal/executor/executor.go * charts/examples/kubernetes/values.yaml * internal/config_loader/viper_loader.go * test/testdata/adapter-config.yaml </details> </details> <!-- This is an auto-generated comment by CodeRabbit for review status -->
7d8e7a3 to b57edf5
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
charts/examples/README.md (1)
62-69: ⚠️ Potential issue | 🟠 Major
**README examples still use deprecated camelCase broker keys.**
The sample YAML and
`helm --set` paths still use `projectId`/`subscriptionId`/`deadLetterTopic`, but the chart now expects `project_id`/`subscription_id`/`dead_letter_topic`. Copy-pasting these commands will set the wrong values.
💡 Proposed fix
```diff
 broker:
   googlepubsub:
-    projectId: CHANGE_ME
-    subscriptionId: CHANGE_ME
+    project_id: CHANGE_ME
+    subscription_id: CHANGE_ME
     topic: CHANGE_ME
-    deadLetterTopic: CHANGE_ME
+    dead_letter_topic: CHANGE_ME
```

```diff
- --set broker.googlepubsub.projectId=<gcp-project> \
- --set broker.googlepubsub.subscriptionId=<gcp-subscription> \
+ --set broker.googlepubsub.project_id=<gcp-project> \
+ --set broker.googlepubsub.subscription_id=<gcp-subscription> \
  --set broker.googlepubsub.topic=<gcp-topic> \
- --set broker.googlepubsub.deadLetterTopic=<gcp-dlq-topic>
+ --set broker.googlepubsub.dead_letter_topic=<gcp-dlq-topic>
```

Also applies to: 84-90
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/README.md` around lines 62 - 69, Update the README example YAML and any helm --set examples to use the chart's current snake_case broker keys instead of deprecated camelCase: replace projectId → project_id, subscriptionId → subscription_id, topic → topic (if unchanged) and deadLetterTopic → dead_letter_topic; ensure both the block shown at lines 62-69 and the similar block at 84-90 are updated so copy-pasted values map to the chart's expected keys.
♻️ Duplicate comments (11)
test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml (1)
260-262: ⚠️ Potential issue | 🟡 Minor
**Normalize ConfigMap identifiers consistently to lowercase.**
Line 294 still uses
`{{ .clusterId }}` while the Namespace is lowercased (Line 282), and the ConfigMap `resourceIdentifier` already expects lowercase (Line 260). For mixed-case IDs, this can break apply/discovery consistency.
🔧 Suggested fix
- resourceIdentifier: group: "" name: '{{ .clusterId | lower }}-config-symbol2222' namespace: '{{ .clusterId | lower }}' resource: configmaps @@ metadata: @@ - name: '{{ .clusterId }}-config-symbol2222' - namespace: '{{ .clusterId }}' + name: '{{ .clusterId | lower }}-config-symbol2222' + namespace: '{{ .clusterId | lower }}' @@ - discovery: - by_name: '{{ .clusterId }}-config-symbol2222' + by_name: '{{ .clusterId | lower }}-config-symbol2222' name: configmap0Also applies to: 293-295, 306-307
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml` around lines 260 - 262, Several template fields mix cased and lowercased cluster IDs causing inconsistency; update all occurrences of "{{ .clusterId }}" used in resource identifiers, names, and namespaces (notably the resourceIdentifier entries and the lines referenced around 293-295 and 306-307) to use the lowercase filter "{{ .clusterId | lower }}" so the ConfigMap name, namespace and resourceIdentifier are consistently normalized to lowercase.
test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml (1)
330-334: ⚠️ Potential issue | 🟡 Minor
**`cluster_metadata_region` still points to a non-existent key in `configmap0.data`.**
`dig(resources, "configmap0.data.cluster_metadata")` doesn't match keys produced by `configmap0`, so this value can stay empty.
🔧 Proposed fix
```diff
 cluster_metadata_region:
   expression: |
-    dig(resources, "configmap0.data.cluster_metadata")
+    dig(resources, "configmap0.data.cluster_name")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml` around lines 330 - 334, cluster_metadata_region is using dig(resources, "configmap0.data.cluster_metadata") which doesn't match any key emitted by configmap0; update the cluster_metadata_region entry so it is empty (remove or replace the expression value) rather than pointing to the non-existent key—edit the cluster_metadata_region block that currently contains dig(resources, "configmap0.data.cluster_metadata") so it contains no expression or an explicit empty value.
internal/config_loader/accessors.go (1)
27-28: ⚠️ Potential issue | 🟡 Minor
**Update built-in variable docs to include `config`.**
Line 27 lists
`(adapter, now, date)`, but Line 13 includes `config` in `builtinVariables`; this comment is stale.
Suggested fix
```diff
-// - Built-in variables (adapter, now, date)
+// - Built-in variables (adapter, config, now, date)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/accessors.go` around lines 27 - 28, The documentation comment listing built-in variables is stale: update the comment that currently lists "(adapter, now, date)" to include "config" so it matches the actual builtinVariables value; locate the comment near the builtinVariables declaration in accessors.go and change the text to "(adapter, now, date, config)" (or otherwise include config) so the docs and the variable declaration are consistent.
charts/examples/maestro/adapter-task-resource-manifestwork.yaml (1)
50-53: ⚠️ Potential issue | 🟡 Minor
**Fix conflicting `hyperfleet.io/cluster-name` metadata value.**
Line 52 still uses
`{{ .clusterId }}` while labels use `{{ .clusterName }}`, creating inconsistent identity metadata in the same resource.
🛠️ Suggested fix
```diff
- hyperfleet.io/cluster-name: "{{ .clusterId }}"
+ hyperfleet.io/cluster-name: "{{ .clusterName }}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/maestro/adapter-task-resource-manifestwork.yaml` around lines 50 - 53, The metadata label hyperfleet.io/cluster-name is inconsistent with the rest of the resource (it uses the template variable .clusterId while other labels use .clusterName); update the value for hyperfleet.io/cluster-name to use {{ .clusterName }} so the label key "hyperfleet.io/cluster-name" and the template variable .clusterName are consistent across the resource manifest.
internal/config_loader/validator.go (1)
582-586: ⚠️ Potential issue | 🟡 Minor
**Validate non-object `metadata` explicitly.**
At Line 582, when
`metadata` is present but not an object, validation silently skips type/name checks.
🔧 Proposed fix
```diff
 if metadata, ok := manifest["metadata"].(map[string]interface{}); ok {
 	if _, hasName := metadata[FieldName]; !hasName {
 		v.errors.Add(path+"."+"metadata", fmt.Sprintf("missing required field %q", FieldName))
 	}
+} else if _, exists := manifest["metadata"]; exists {
+	v.errors.Add(path+".metadata", fmt.Sprintf("field %q must be an object", "metadata"))
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 582 - 586, The validator currently only checks metadata when manifest["metadata"] asserts to map[string]interface{}, silently ignoring cases where metadata exists but is not an object; update the check in validator.go so that when manifest contains a "metadata" key whose value is not a map[string]interface{} you call v.errors.Add(path+".metadata", fmt.Sprintf("field %q must be an object", FieldName)) and otherwise preserve the existing branch that verifies presence of FieldName in the map; refer to manifest, metadata, FieldName and v.errors.Add to locate and modify the logic.
test/integration/maestro_client/client_tls_config_integration_test.go (3)
285-285: ⚠️ Potential issue | 🟡 Minor
**Add nil guard for `cfg.Clients.Maestro` before dereferencing.**
Line 285 assigns
`cfg.Clients.Maestro` to `maestroCfg` and immediately accesses fields without a nil check.
🛡️ Proposed fix
```diff
 require.NoError(t, err)
 maestroCfg := cfg.Clients.Maestro
+require.NotNil(t, maestroCfg, "Maestro config should be present")
 assert.Equal(t, env.TLSMaestroGRPCAddr, maestroCfg.GRPCServerAddress, "Env should override YAML")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` at line 285, Add a nil guard before dereferencing cfg.Clients.Maestro: check that cfg != nil and cfg.Clients != nil and cfg.Clients.Maestro != nil before assigning to maestroCfg (or immediately after assignment) and return or fail the test if Maestro is nil; update usage sites in this test (the maestroCfg variable) so they only run after the nil check to avoid a panic when accessing Maestro fields.
237-237: ⚠️ Potential issue | 🟡 Minor
**Add nil guard for `cfg.Clients.Maestro` before dereferencing.**
Line 237 assigns
`cfg.Clients.Maestro` to `maestroCfg` and immediately accesses `.Insecure` on line 238 without a nil check.
🛡️ Proposed fix
require.NoError(t, err) maestroCfg := cfg.Clients.Maestro + require.NotNil(t, maestroCfg, "Maestro config should be present") assert.True(t, maestroCfg.Insecure)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` at line 237, The code dereferences cfg.Clients.Maestro without nil checking when assigning maestroCfg; add a nil guard before accessing maestroCfg.Insecure: check if cfg == nil or cfg.Clients == nil or cfg.Clients.Maestro == nil and handle the nil case (return error, set defaults, or skip TLS checks as appropriate) in the test setup where maestroCfg is used, updating any logic that assumes maestroCfg non-nil so subsequent access to maestroCfg.Insecure is safe.
200-200: ⚠️ Potential issue | 🟡 Minor
Add nil guard for `cfg.Clients.Maestro` before dereferencing.
Line 200 passes `cfg.Clients.Maestro` to `buildMaestroClientConfigFromLoaded` without checking for nil. If the config loading returns a partial object or the Maestro section is missing, this will cause a nil pointer dereference inside the function at line 21 when accessing `maestroConfig.HTTPServerAddress`.
🛡️ Proposed fix

```diff
 require.NoError(t, err)
+require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 clientCfg, err := buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` at line 200, Add a nil guard for cfg.Clients.Maestro before calling buildMaestroClientConfigFromLoaded: check if cfg.Clients == nil or cfg.Clients.Maestro == nil and handle the case (return/require/fail the test with a clear error) instead of passing a nil pointer into buildMaestroClientConfigFromLoaded; update the test to assert or fail early with a descriptive message referencing cfg.Clients.Maestro so the function buildMaestroClientConfigFromLoaded and its use of maestroConfig.HTTPServerAddress cannot dereference nil.

test/integration/config-loader/testdata/adapter-config-template.yaml (3)
307-318: ⚠️ Potential issue | 🟠 Major
Guard CEL filter expressions before indexing `[0]`.
Lines 308, 311-312, and 316-317 access `.filter(c, c.type == 'Available')[0]` without first checking that the filtered list is non-empty. This will fail at runtime if no `Available` condition exists.
🔧 Proposed fix for the applied condition block

```diff
 applied:
   status:
     expression: |
       resources.clusterNamespace.status.phase == "Active" &&
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
       resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
   reason:
     expression: |
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
       has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason) ?
         resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason : "ResourcesCreated"
   message:
     expression: |
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
       has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message) ?
         resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message : "All Kubernetes resources created successfully"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 307 - 318, The CEL expressions index into resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0] without ensuring the filtered list is non-empty; update the expressions used for reason and message to guard the index by first checking the filtered list size or existence (e.g., assign filtered = resources.clusterController.status.conditions.filter(...); use has(filtered) or filtered.size() > 0 before accessing filtered[0]) and return the fallback strings ("ResourcesCreated" / "All Kubernetes resources created successfully") when the filtered list is empty; adjust both the reason and message expressions to use this guarded pattern so you never index [0] on an empty list.
211-216: ⚠️ Potential issue | 🟠 Major
Remove `namespace` from discovery for cluster-scoped `Namespace` resource.
The `Namespace` kind is cluster-scoped and does not belong to any namespace. Setting `discovery.namespace` here is incorrect and may cause discovery issues.
🔧 Proposed fix

```diff
 discovery:
-  namespace: "cluster-{{ .clusterId }}"
   by_selectors:
     label_selector:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 211 - 216, The discovery block currently includes a namespace for the cluster-scoped Namespace kind; remove any discovery.namespace setting so discovery is not scoped to a namespace for the cluster-scoped resource — update the discovery.by_selectors block (label_selector keys like hyperfleet.io/cluster-id, hyperfleet.io/resource-type, hyperfleet.io/managed-by) to remain but ensure no discovery.namespace is present for the Namespace resource template.
325-336: ⚠️ Potential issue | 🟠 Major
Guard CEL filter expressions before indexing `[0]` in `available` condition.
Same issue as the `applied` block: lines 328-329 and 333-334 access filtered arrays without checking size first.
🔧 Proposed fix

```diff
 reason:
   expression: |
+    resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
     has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason) ?
       resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason : "DeploymentReady"
 message:
   expression: |
+    resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
     has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message) ?
       resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message : "Deployment is available and serving traffic"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 325 - 336, The CEL expressions in the reason and message blocks access resources.clusterController.status.conditions.filter(...)[0] without ensuring the filtered array has elements; update the reason and message expressions to first check the filtered array size (e.g. filter(...).size() > 0) before indexing or wrap the index access in a conditional that uses the size() test, so both the reason expression and the message expression safely fall back to "DeploymentReady" and "Deployment is available and serving traffic" when no Available condition exists.
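Both CEL guards above rely on the same mechanism: `&&` short-circuits, so the `[0]` index is never evaluated on an empty list. A consolidated sketch of the guarded pattern, assuming the fixture's `resources.clusterController` path (the `cel.bind` variant additionally assumes the evaluator registers cel-go's optional bindings extension, which this repository may or may not enable):

```yaml
# `&&` short-circuits, so `[0]` never runs when the filter is empty
status:
  expression: |
    resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
    resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
# cel.bind avoids repeating the filter, since plain CEL has no local
# variables (assumption: cel-go's bindings extension is registered)
reason:
  expression: |
    cel.bind(avail,
      resources.clusterController.status.conditions.filter(c, c.type == 'Available'),
      avail.size() > 0 && has(avail[0].reason) ? avail[0].reason : "ResourcesCreated")
```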
🧹 Nitpick comments (3)
test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml (1)
18-43: Consider isolating each invalid scenario into separate fixtures.
This block currently mixes multiple failure modes (schema invalid + template variable invalid). If validator error ordering changes, assertions can become fragile. Splitting by failure type will make tests more stable and easier to diagnose.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml` around lines 18 - 43, The test fixture combines two distinct failure cases ("missingFields" ConfigMap missing metadata.name and "undefinedVar" ConfigMap with template variable {{ .nonExistentVariable }}), which makes assertions brittle; split them into two separate fixtures (e.g., one YAML containing only the "missingFields" resource and another containing only the "undefinedVar" resource), update the tests that reference this fixture to load the appropriate new file for each scenario, and adjust each test's assertions to expect the specific validation/template error for the corresponding resource (use the resource names "missingFields" and "undefinedVar" and the manifest fields metadata.name/annotations to locate the problematic entries when updating the tests).

test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml (1)
166-167: Track or resolve the timestamp-expression TODO in test data.
Leaving this TODO in fixture logic can hide parser/templating limitations over time. Please either link an issue or replace it with a documented deterministic test fixture approach.
If you want, I can draft a small follow-up issue template with acceptance criteria for this TODO.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml` around lines 166 - 167, The TODO about using {{ now }} in the test fixture should be resolved: either replace the placeholder with a deterministic, documented value (e.g., a fixed ISO8601 timestamp) in the expression field and update the surrounding comment to state why a static value is used, or add a link to an existing issue/bug ticket that tracks enabling dynamic timestamps and include a brief acceptance criteria note. Locate the YAML entry with the expression key (the line containing "- expression: '\"2006-01-02T15:04:05Z07:00\"'") and either swap it for a fixed test timestamp and update the comment, or remove the TODO and add the issue reference and expected follow-up steps so the fixture is explicit and repeatable.

test/integration/config-loader/loader_template_test.go (1)
111-117: Strengthen post-merge assertions to avoid false positives.
At Line 111 and Line 116, conditional checks allow this test to pass even when `Post` content is missing after merge, which weakens coverage.
🔧 Proposed test hardening

```diff
-	if config.Post != nil {
-		assert.NotEmpty(t, config.Post.Payloads)
-		assert.NotEmpty(t, config.Post.PostActions)
-
-		// Check post action
-		if len(config.Post.PostActions) > 0 {
-			firstAction := config.Post.PostActions[0]
-			assert.NotEmpty(t, firstAction.Name)
-			if firstAction.APICall != nil {
-				assert.NotEmpty(t, firstAction.APICall.Method)
-				assert.NotEmpty(t, firstAction.APICall.URL)
-			}
-		}
-	}
+	require.NotNil(t, config.Post, "post config should be present from task config")
+	require.NotEmpty(t, config.Post.Payloads, "post payloads should be present")
+	require.NotEmpty(t, config.Post.PostActions, "post actions should be present")
+
+	firstAction := config.Post.PostActions[0]
+	assert.NotEmpty(t, firstAction.Name)
+	require.NotNil(t, firstAction.APICall, "first post action should define api_call")
+	assert.NotEmpty(t, firstAction.APICall.Method)
+	assert.NotEmpty(t, firstAction.APICall.URL)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/loader_template_test.go` around lines 111 - 117, The test currently guards assertions with conditionals which lets it pass when Post is missing; change the checks to be strict assertions: assert.NotNil(t, config.Post) (or assert.True/require) then assert.NotEmpty(t, config.Post.Payloads) and assert.NotEmpty(t, config.Post.PostActions) without wrapping them in if blocks, remove the conditional around len(config.Post.PostActions) and directly grab firstAction := config.Post.PostActions[0] and assert expected properties on firstAction so the test fails when Post content is absent after merge.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/executor/param_extractor.go`:
- Around line 85-96: The YAML round-trip in configToMap loses zero-valued fields
because of yaml:",omitempty"; stop using yaml.Marshal/yaml.Unmarshal for this
conversion and instead derive config.* values directly from the typed cfg
parameter (or convert cfg to a map using a marshaler that doesn't drop
zero-values or a reflection-based traversal). Modify configToMap to read fields
from the provided *config_loader.Config (cfg) rather than relying on the YAML
round-trip so keys like config.debug_config (false) are preserved; reference the
configToMap function and the cfg parameter when making the change.
In `@test/integration/config-loader/testdata/adapter-config-template.yaml`:
- Line 140: The YAML value for the key 'field' contains a mismatched quote
("field: name\""); fix it by correcting the scalar so quotes are balanced—either
wrap the value in matching quotes (e.g., field: "name") or remove the stray
trailing quote (field: name) to restore valid YAML parsing.
---
Outside diff comments:
In `@charts/examples/README.md`:
- Around line 62-69: Update the README example YAML and any helm --set examples
to use the chart's current snake_case broker keys instead of deprecated
camelCase: replace projectId → project_id, subscriptionId → subscription_id,
topic → topic (if unchanged) and deadLetterTopic → dead_letter_topic; ensure
both the block shown at lines 62-69 and the similar block at 84-90 are updated
so copy-pasted values map to the chart's expected keys.
---
Duplicate comments:
In `@charts/examples/maestro/adapter-task-resource-manifestwork.yaml`:
- Around line 50-53: The metadata label hyperfleet.io/cluster-name is
inconsistent with the rest of the resource (it uses the template variable
.clusterId while other labels use .clusterName); update the value for
hyperfleet.io/cluster-name to use {{ .clusterName }} so the label key
"hyperfleet.io/cluster-name" and the template variable .clusterName are
consistent across the resource manifest.
In `@internal/config_loader/accessors.go`:
- Around line 27-28: The documentation comment listing built-in variables is
stale: update the comment that currently lists "(adapter, now, date)" to include
"config" so it matches the actual builtinVariables value; locate the comment
near the builtinVariables declaration in accessors.go and change the text to
"(adapter, now, date, config)" (or otherwise include config) so the docs and the
variable declaration are consistent.
In `@internal/config_loader/validator.go`:
- Around line 582-586: The validator currently only checks metadata when
manifest["metadata"] asserts to map[string]interface{}, silently ignoring cases
where metadata exists but is not an object; update the check in validator.go so
that when manifest contains a "metadata" key whose value is not a
map[string]interface{} you call v.errors.Add(path+".metadata",
fmt.Sprintf("field %q must be an object", FieldName)) and otherwise preserve the
existing branch that verifies presence of FieldName in the map; refer to
manifest, metadata, FieldName and v.errors.Add to locate and modify the logic.
In `@test/integration/config-loader/testdata/adapter-config-template.yaml`:
- Around line 307-318: The CEL expressions index into
resources.clusterController.status.conditions.filter(c, c.type ==
'Available')[0] without ensuring the filtered list is non-empty; update the
expressions used for reason and message to guard the index by first checking the
filtered list size or existence (e.g., assign filtered =
resources.clusterController.status.conditions.filter(...); use has(filtered) or
filtered.size() > 0 before accessing filtered[0]) and return the fallback
strings ("ResourcesCreated" / "All Kubernetes resources created successfully")
when the filtered list is empty; adjust both the reason and message expressions
to use this guarded pattern so you never index [0] on an empty list.
- Around line 211-216: The discovery block currently includes a namespace for
the cluster-scoped Namespace kind; remove any discovery.namespace setting so
discovery is not scoped to a namespace for the cluster-scoped resource — update
the discovery.by_selectors block (label_selector keys like
hyperfleet.io/cluster-id, hyperfleet.io/resource-type, hyperfleet.io/managed-by)
to remain but ensure no discovery.namespace is present for the Namespace
resource template.
- Around line 325-336: The CEL expressions in the reason and message blocks
access resources.clusterController.status.conditions.filter(...)[0] without
ensuring the filtered array has elements; update the reason and message
expressions to first check the filtered array size (e.g. filter(...).size() > 0)
before indexing or wrap the index access in a conditional that uses the size()
test, so both the reason expression and the message expression safely fall back
to "DeploymentReady" and "Deployment is available and serving traffic" when no
Available condition exists.
In `@test/integration/maestro_client/client_tls_config_integration_test.go`:
- Line 285: Add a nil guard before dereferencing cfg.Clients.Maestro: check that
cfg != nil and cfg.Clients != nil and cfg.Clients.Maestro != nil before
assigning to maestroCfg (or immediately after assignment) and return or fail the
test if Maestro is nil; update usage sites in this test (the maestroCfg
variable) so they only run after the nil check to avoid a panic when accessing
Maestro fields.
- Line 237: The code dereferences cfg.Clients.Maestro without nil checking when
assigning maestroCfg; add a nil guard before accessing maestroCfg.Insecure:
check if cfg == nil or cfg.Clients == nil or cfg.Clients.Maestro == nil and
handle the nil case (return error, set defaults, or skip TLS checks as
appropriate) in the test setup where maestroCfg is used, updating any logic that
assumes maestroCfg non-nil so subsequent access to maestroCfg.Insecure is safe.
- Line 200: Add a nil guard for cfg.Clients.Maestro before calling
buildMaestroClientConfigFromLoaded: check if cfg.Clients == nil or
cfg.Clients.Maestro == nil and handle the case (return/require/fail the test
with a clear error) instead of passing a nil pointer into
buildMaestroClientConfigFromLoaded; update the test to assert or fail early with
a descriptive message referencing cfg.Clients.Maestro so the function
buildMaestroClientConfigFromLoaded and its use of
maestroConfig.HTTPServerAddress cannot dereference nil.
In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml`:
- Around line 330-334: cluster_metadata_region is using dig(resources,
"configmap0.data.cluster_metadata") which doesn't match any key emitted by
configmap0; update the cluster_metadata_region entry so it is empty (remove or
replace the expression value) rather than pointing to the non-existent key—edit
the cluster_metadata_region block that currently contains dig(resources,
"configmap0.data.cluster_metadata") so it contains no expression or an explicit
empty value.
In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml`:
- Around line 260-262: Several template fields mix cased and lowercased cluster
IDs causing inconsistency; update all occurrences of "{{ .clusterId }}" used in
resource identifiers, names, and namespaces (notably the resourceIdentifier
entries and the lines referenced around 293-295 and 306-307) to use the
lowercase filter "{{ .clusterId | lower }}" so the ConfigMap name, namespace and
resourceIdentifier are consistently normalized to lowercase.
---
Nitpick comments:
In `@test/integration/config-loader/loader_template_test.go`:
- Around line 111-117: The test currently guards assertions with conditionals
which lets it pass when Post is missing; change the checks to be strict
assertions: assert.NotNil(t, config.Post) (or assert.True/require) then
assert.NotEmpty(t, config.Post.Payloads) and assert.NotEmpty(t,
config.Post.PostActions) without wrapping them in if blocks, remove the
conditional around len(config.Post.PostActions) and directly grab firstAction :=
config.Post.PostActions[0] and assert expected properties on firstAction so the
test fails when Post content is absent after merge.
In `@test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml`:
- Around line 18-43: The test fixture combines two distinct failure cases
("missingFields" ConfigMap missing metadata.name and "undefinedVar" ConfigMap
with template variable {{ .nonExistentVariable }}), which makes assertions
brittle; split them into two separate fixtures (e.g., one YAML containing only
the "missingFields" resource and another containing only the "undefinedVar"
resource), update the tests that reference this fixture to load the appropriate
new file for each scenario, and adjust each test's assertions to expect the
specific validation/template error for the corresponding resource (use the
resource names "missingFields" and "undefinedVar" and the manifest fields
metadata.name/annotations to locate the problematic entries when updating the
tests).
In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml`:
- Around line 166-167: The TODO about using {{ now }} in the test fixture should
be resolved: either replace the placeholder with a deterministic, documented
value (e.g., a fixed ISO8601 timestamp) in the expression field and update the
surrounding comment to state why a static value is used, or add a link to an
existing issue/bug ticket that tracks enabling dynamic timestamps and include a
brief acceptance criteria note. Locate the YAML entry with the expression key
(the line containing "- expression: '\"2006-01-02T15:04:05Z07:00\"'") and either
swap it for a fixed test timestamp and update the comment, or remove the TODO
and add the issue reference and expected follow-up steps so the fixture is
explicit and repeatable.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1c6ecbbe-bdd2-4193-990c-151c4a902a17
📒 Files selected for processing (68)
- README.md
- charts/examples/README.md
- charts/examples/kubernetes/adapter-config.yaml
- charts/examples/kubernetes/adapter-task-config.yaml
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- charts/examples/kubernetes/values.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro/adapter-config.yaml
- charts/examples/maestro/adapter-task-config.yaml
- charts/examples/maestro/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro/values.yaml
- charts/templates/configmap-broker.yaml
- charts/templates/deployment.yaml
- charts/values.yaml
- cmd/adapter/main.go
- configs/adapter-config-template.yaml
- configs/adapter-deployment-config.yaml
- configs/adapter-task-config-template.yaml
- configs/templates/cluster-status-payload.yaml
- configs/templates/job.yaml
- configuration.md
- docs/adapter-authoring-guide.md
- docs/configuration.md
- internal/config_loader/README.md
- internal/config_loader/accessors.go
- internal/config_loader/constants.go
- internal/config_loader/loader.go
- internal/config_loader/loader_test.go
- internal/config_loader/types.go
- internal/config_loader/validator.go
- internal/config_loader/validator_test.go
- internal/config_loader/viper_loader.go
- internal/criteria/README.md
- internal/criteria/evaluator_test.go
- internal/executor/README.md
- internal/executor/executor.go
- internal/executor/executor_test.go
- internal/executor/param_extractor.go
- internal/executor/types.go
- internal/executor/utils.go
- internal/executor/utils_test.go
- internal/hyperfleet_api/types.go
- internal/manifest/generation.go
- internal/manifest/manifest.go
- test/integration/config-loader/config_criteria_integration_test.go
- test/integration/config-loader/loader_template_test.go
- test/integration/config-loader/testdata/adapter-config-template.yaml
- test/integration/config-loader/testdata/adapter_config_valid.yaml
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- test/integration/executor/executor_integration_test.go
- test/integration/executor/executor_k8s_integration_test.go
- test/integration/executor/testdata/test-adapter-config.yaml
- test/integration/maestro_client/client_tls_config_integration_test.go
- test/integration/testutil/mock_api_server.go
- test/testdata/adapter-config.yaml
- test/testdata/adapter_config_valid.yaml
- test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml
- test/testdata/dryrun/dryrun-kubernetes-task-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml
- test/testdata/task-config.yaml
- test/testdata/templates/deployment.yaml
💤 Files with no reviewable changes (7)
- configuration.md
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- internal/manifest/manifest.go
- internal/manifest/generation.go
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
🚧 Files skipped from review as they are similar to previous changes (12)
- test/integration/testutil/mock_api_server.go
- internal/criteria/evaluator_test.go
- test/testdata/adapter-config.yaml
- internal/executor/utils.go
- docs/configuration.md
- internal/executor/types.go
- charts/examples/kubernetes/values.yaml
- configs/templates/job.yaml
- internal/executor/README.md
- charts/examples/kubernetes/adapter-config.yaml
- internal/config_loader/constants.go
- README.md
test/integration/config-loader/testdata/adapter-config-template.yaml
Outdated
a82ceaa to 6732cd8
```diff
 exchange: {{ .Values.broker.rabbitmq.exchange | quote }}
-routing_key: {{ .Values.broker.rabbitmq.routingKey | quote }}
-exchange_type: {{ .Values.broker.rabbitmq.exchangeType | default "topic" | quote }}
+routing_key: {{ .Values.broker.rabbitmq.routing_key | quote }}
```
Priority: Bug
The RabbitMQ section of the generated broker ConfigMap dropped exchange_type
entirely (it was previously rendered with a default of "topic"). The
routing_key rename was fixed, but exchange_type seems to have been
accidentally removed rather than renamed. If this was intentional, values.yaml
should also drop the commented-out exchange_type reference. If not, it should
be added back:
```diff
 routing_key: {{ .Values.broker.rabbitmq.routing_key | quote }}
+exchange_type: {{ .Values.broker.rabbitmq.exchange_type | default "topic" | quote }}
```
6732cd8 to 985f515
Actionable comments posted: 5
♻️ Duplicate comments (7)
charts/examples/README.md (1)
48-48: ⚠️ Potential issue | 🟡 Minor
Use snake_case key name in the docs example (`nested_discoveries`).
Line 48 uses `nested_Discoveries`, which is inconsistent with the renamed field and can confuse readers.
Suggested fix

```diff
-- Resource discovery by name and by label selectors (`nested_Discoveries`)
+- Resource discovery by name and by label selectors (`nested_discoveries`)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/README.md` at line 48, Update the docs example to use the snake_case field name: replace the incorrect key `nested_Discoveries` with `nested_discoveries` in the README example so it matches the renamed field and avoids confusion for readers; ensure any other occurrences in the same example block are updated as well.

test/integration/maestro_client/client_tls_config_integration_test.go (1)
139-139: ⚠️ Potential issue | 🟡 Minor
Add `cfg.Clients`/`cfg.Clients.Maestro` guards at every dereference site.
A partial loaded config can still panic these tests. Guard both levels before use so failures remain assertion-driven.
🔧 Suggested patch

```diff
 require.NoError(t, err, "Config loading should succeed")
-require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
+require.NotNil(t, cfg.Clients, "Clients config should be present")
+require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
@@
 require.NoError(t, err)
+require.NotNil(t, cfg.Clients, "Clients config should be present")
+require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 clientCfg, err := buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro)
@@
 require.NoError(t, err)
+require.NotNil(t, cfg.Clients, "Clients config should be present")
+require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 maestroCfg := cfg.Clients.Maestro
@@
 require.NoError(t, err)
+require.NotNil(t, cfg.Clients, "Clients config should be present")
+require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 maestroCfg := cfg.Clients.Maestro
```

Also applies to: 200-200, 237-237, 285-285
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` at line 139, Add explicit nil guards for both cfg.Clients and cfg.Clients.Maestro before any dereference to avoid panics: replace direct accesses with require.NotNil(t, cfg.Clients, "...") followed by require.NotNil(t, cfg.Clients.Maestro, "...") in the test and apply the same pattern at the other dereference sites noted (around the current checks at lines near 200, 237, 285). Ensure every place that reads cfg.Clients.* or cfg.Clients.Maestro.* is preceded by these two require.NotNil assertions so failures remain assertion-driven rather than causing a panic.

test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml (1)
330-333: ⚠️ Potential issue | 🟡 Minor
`cluster_metadata_region` reads a key that `configmap0` never produces.
`configmap0.data` defines `cluster_id` and `cluster_name`; `cluster_metadata` resolves empty and weakens this showcase field.
Proposed fix

```diff
-cluster_metadata_region:
+cluster_name:
   # Pattern 12b: dig() — traversing into a nested path
   expression: |
-    dig(resources, "configmap0.data.cluster_metadata")
+    dig(resources, "configmap0.data.cluster_name")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml` around lines 330 - 333, The expression for cluster_metadata_region is reading a non-existent key "configmap0.data.cluster_metadata"; update the expression in cluster_metadata_region to reference an existing key from configmap0.data (e.g., "configmap0.data.cluster_id" or "configmap0.data.cluster_name") or otherwise handle the optional/empty value (e.g., use a dig into "configmap0.data.cluster_id" with a fallback) so the field resolves to real data instead of empty; locate the expression that currently calls dig(resources, "configmap0.data.cluster_metadata") and replace it with the correct key or add an explicit fallback.internal/config_loader/accessors.go (1)
27-28: ⚠️ Potential issue | 🟡 Minor
Keep built-in variable docs in sync with implementation.
The comment still omits `config`, but `builtinVariables` includes it.
🔧 Suggested patch

```diff
-// - Built-in variables (adapter, now, date)
+// - Built-in variables (adapter, config, now, date)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/accessors.go` around lines 27 - 28, The file's doc comment listing built-in variables is out of sync with the implementation: update the comment that lists "Built-in variables (adapter, now, date)" to include "config" so it matches the builtinVariables declaration; locate the doc near the top of internal/config_loader/accessors.go and add "config" to the list (ensure the comment text and the builtinVariables symbol remain consistent).internal/config_loader/validator.go (1)
582-586:⚠️ Potential issue | 🟠 MajorReport invalid
metadatatypes instead of silently passing.If
metadatais present but not an object, this branch skips validation entirely and no error is recorded.🔧 Suggested patch
```diff
-	if metadata, ok := manifest["metadata"].(map[string]interface{}); ok {
+	if metadata, ok := manifest["metadata"].(map[string]interface{}); ok {
 		if _, hasName := metadata[FieldName]; !hasName {
 			v.errors.Add(path+"."+"metadata", fmt.Sprintf("missing required field %q", FieldName))
 		}
+	} else if _, exists := manifest["metadata"]; exists {
+		v.errors.Add(path+".metadata", "metadata must be an object")
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 582 - 586, The code silently skips validation when manifest["metadata"] exists but is not an object; update the check around manifest and metadata in validator.go to detect when metadata is present and not a map[string]interface{} and call v.errors.Add(path+".metadata", fmt.Sprintf("expected object for field %q but got %T", FieldName, manifest["metadata"])). Keep the existing branch that checks for the missing FieldName inside the map (using metadata and FieldName) so you both report wrong types and still validate missing name when metadata is an object.

test/integration/config-loader/testdata/adapter-config-template.yaml (1)
307-318: ⚠️ Potential issue | 🟠 Major

Guard filtered condition indexing before accessing `[0]`.

The current expressions can fail when no `Available` condition exists, causing payload build failures.

🔧 Suggested fix
```diff
-      resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
@@
-      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
@@
-      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
```

Also applies to: 328-335
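The suggested fix above applies ordinary defensive indexing: filter first, then check the filtered collection is non-empty before reading element `[0]`. A plain Go sketch of the same logic (the `Condition` shape and function names here are illustrative, not the project's actual types):

```go
package main

import "fmt"

// Condition is a hypothetical stand-in for a Kubernetes status condition.
type Condition struct {
	Type, Status, Reason string
}

// availableStatus mirrors the guarded CEL expression: filter the conditions,
// then verify the filtered slice is non-empty before indexing [0].
func availableStatus(conds []Condition) (string, bool) {
	var filtered []Condition
	for _, c := range conds {
		if c.Type == "Available" {
			filtered = append(filtered, c)
		}
	}
	if len(filtered) == 0 { // the guard: no Available condition present
		return "", false
	}
	return filtered[0].Status, true
}

func main() {
	// Without the guard, indexing [0] on an empty filter result would panic.
	_, ok := availableStatus(nil)
	fmt.Println(ok)
	status, ok := availableStatus([]Condition{{Type: "Available", Status: "True"}})
	fmt.Println(status, ok)
}
```

The same shape carries over to CEL: `filter(...).size() > 0 && filter(...)[0].status == "True"` short-circuits before the index is evaluated.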
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 307 - 318, The expressions access resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0] without ensuring the filtered array contains an element; change the reason and message expressions to first capture the filtered array (e.g., available = resources.clusterController.status.conditions.filter(c, c.type == 'Available')) and then guard access with a presence check (e.g., has(available[0]) or length(available) > 0) before reading available[0].reason / available[0].message, otherwise return the existing defaults ("ResourcesCreated" and "All Kubernetes resources created successfully"); apply the same fix to the other occurrence (the message/reason block referenced in the comment).

test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml (1)
293-295: ⚠️ Potential issue | 🟡 Minor

Normalize ConfigMap identifiers to match the lowercased namespace convention.

These fields still use raw `clusterId`, which can break for mixed-case IDs and drift from the normalized naming already used in this file.

🔧 Suggested fix
```diff
-        name: '{{ .clusterId }}-config-symbol2222'
-        namespace: '{{ .clusterId }}'
+        name: '{{ .clusterId | lower }}-config-symbol2222'
+        namespace: '{{ .clusterId | lower }}'
@@
-        by_name: '{{ .clusterId }}-config-symbol2222'
+        by_name: '{{ .clusterId | lower }}-config-symbol2222'
```

Also applies to: 306-307
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml` around lines 293 - 295, The ConfigMap identifiers use raw {{ .clusterId }} which breaks the lowercase namespace convention; update the name and namespace fields (the lines assigning name: '{{ .clusterId }}-config-symbol2222' and namespace: '{{ .clusterId }}', and the duplicate occurrences around the later block) to use a lowercased clusterId (e.g., use the templating lowercase function on .clusterId) so both the namespace and the derived name (config-symbol2222) are generated from the normalized, lowercased .clusterId.
🧹 Nitpick comments (4)
internal/executor/executor.go (1)
214-223: Avoid marshaling config on every event execution.

`configToMap` runs for each event even though config is static. Caching it in the executor reduces per-event overhead and removes repeated runtime failure points.

Suggested direction
```diff
 func (e *Executor) executeParamExtraction(execCtx *ExecutionContext) error {
-	// Pre-compute the config map once; shared by param extraction and adapter params
-	configMap, err := configToMap(e.config.Config)
-	if err != nil {
-		return NewExecutorError(PhaseParamExtraction, "config", "failed to marshal config", err)
-	}
-
-	// Add adapter params first so config. param sources can reuse the pre-computed map
-	addAdapterParams(e.config.Config, execCtx, configMap)
-
-	return extractConfigParams(e.config.Config, execCtx, configMap)
+	// Use precomputed configMap stored on Executor at construction time
+	addAdapterParams(e.config.Config, execCtx, e.configMap)
+	return extractConfigParams(e.config.Config, execCtx, e.configMap)
 }
```

(Initialize `e.configMap` once in `NewExecutor`.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/executor/executor.go` around lines 214 - 223, Move the configToMap call out of the per-event execution path and initialize e.configMap once during executor construction (e.g., in NewExecutor): call configToMap(e.config.Config) in NewExecutor, store the result on the Executor struct as e.configMap, and surface any error there instead of returning NewExecutorError(PhaseParamExtraction, ... ) at runtime; then update the execution path to remove the configToMap call and pass the cached e.configMap into addAdapterParams and extractConfigParams (which currently use configMap) so per-event execution no longer re-marshals the static config.

configs/adapter-task-config-template.yaml (1)
52-65: Prefer relative API paths and a single source of truth for API config.

This template duplicates base URL/version in params and builds absolute URLs. Using relative paths (`/clusters/...`) keeps behavior aligned with `clients.hyperfleet_api` and avoids config drift.

Suggested simplification
```diff
-      url: "{{ .hyperfleetApiBaseUrl }}/api/hyperfleet/{{ .hyperfleetApiVersion }}/clusters/{{ .clusterId }}"
+      url: "/clusters/{{ .clusterId }}"
...
-      url: "{{ .hyperfleetApiBaseUrl }}/api/hyperfleet/{{ .hyperfleetApiVersion }}/clusters/{{ .clusterId }}/statuses"
+      url: "/clusters/{{ .clusterId }}/statuses"
```

Also applies to: 113-114, 280-281
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@configs/adapter-task-config-template.yaml` around lines 52 - 65, The params block duplicates API addressing; remove or consolidate hyperfleetApiBaseUrl/hyperfleetApiVersion and switch any absolute URL construction to use relative paths (e.g., "/clusters/...") so the service uses the single source of truth from clients.hyperfleet_api; update references that build absolute URLs to instead read clients.hyperfleet_api base/version (or omit version) and join with relative endpoints, ensuring params only expose a single configurable host or none if clients.hyperfleet_api already provides it (apply same change for the other occurrences at lines referenced by the reviewer).

internal/config_loader/validator_test.go (1)
295-318: Add a regression test for non-object `metadata`.

This test block covers missing metadata/name, but not the case where `metadata` is present with the wrong type.

🧪 Suggested test case
```diff
 t.Run("missing metadata in manifest", func(t *testing.T) {
 	cfg := withResource(map[string]interface{}{
 		"apiVersion": "v1",
 		"kind":       "Namespace",
 	})
 	v := newTaskValidator(cfg)
 	_ = v.ValidateStructure()
 	err := v.ValidateSemantic()
 	require.Error(t, err)
 	assert.Contains(t, err.Error(), "missing required Kubernetes field \"metadata\"")
 })
+
+t.Run("invalid metadata type in manifest", func(t *testing.T) {
+	cfg := withResource(map[string]interface{}{
+		"apiVersion": "v1",
+		"kind":       "Namespace",
+		"metadata":   "not-an-object",
+	})
+	v := newTaskValidator(cfg)
+	_ = v.ValidateStructure()
+	err := v.ValidateSemantic()
+	require.Error(t, err)
+	assert.Contains(t, err.Error(), "metadata must be an object")
+})
```

test/integration/executor/executor_integration_test.go (1)
934-937: Align `hyperfleetApiVersion` test params with schema by setting `source`.

These test configs rely on an empty `source` + default fallback. The production schema requires `source`, so this creates a validation gap in integration tests.

♻️ Suggested fix
```diff
-	{Name: "hyperfleetApiVersion", Default: "v1"},
+	{Name: "hyperfleetApiVersion", Source: "env.HYPERFLEET_API_VERSION", Default: "v1"},
```

Apply this in all three occurrences.
Also applies to: 1181-1184, 1352-1355
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/executor/executor_integration_test.go` around lines 934 - 937, The test Parameter entries for "hyperfleetApiVersion" in the Params slices lack a Source and rely on default fallback, which diverges from production schema; update each occurrence of the Parameter with Name: "hyperfleetApiVersion" (three places around the current diff spans) to include an explicit Source (e.g., Source: "env.HYPERFLEET_API_VERSION" or the appropriate source used in production) so the test validation matches the schema; ensure you modify all three occurrences (around the shown block and the other two at ~1181-1184 and ~1352-1355) so they are consistent.
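The executor.go caching nitpick above amounts to moving a JSON round-trip from the per-event hot path into construction, where a failure surfaces once at startup. A minimal, self-contained sketch (type and function names are illustrative, not the project's actual ones):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Config is a hypothetical stand-in for the adapter's static configuration.
type Config struct {
	Adapter string            `json:"adapter"`
	Params  map[string]string `json:"params"`
}

// Executor caches the map form of the config once at construction, so
// per-event execution never re-marshals the static config.
type Executor struct {
	cfg       Config
	configMap map[string]interface{}
}

// configToMap converts the config to a generic map via a JSON round-trip.
func configToMap(cfg Config) (map[string]interface{}, error) {
	raw, err := json.Marshal(cfg)
	if err != nil {
		return nil, err
	}
	var m map[string]interface{}
	err = json.Unmarshal(raw, &m)
	return m, err
}

// NewExecutor surfaces marshaling errors once, at startup, instead of
// turning them into a repeated runtime failure point on every event.
func NewExecutor(cfg Config) (*Executor, error) {
	m, err := configToMap(cfg)
	if err != nil {
		return nil, err
	}
	return &Executor{cfg: cfg, configMap: m}, nil
}

func main() {
	e, err := NewExecutor(Config{Adapter: "demo", Params: map[string]string{"region": "us-east-1"}})
	if err != nil {
		panic(err)
	}
	// Per-event code reads the cached map instead of re-marshaling.
	fmt.Println(e.configMap["adapter"])
}
```

Because the config is immutable after load, callers can share the cached map freely; if the project ever mutates config at runtime, the cache would need invalidation, which is why this is a sketch rather than a drop-in change.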
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@configs/adapter-task-config-template.yaml`:
- Around line 241-252: The Health condition currently uses adapter.?errorReason
and adapter.?errorMessage only, which can mark runs with resourcesSkipped or
executionError as "Healthy"; update the expressions for status, reason and
message (the block referencing adapter.?executionStatus, adapter.?errorReason,
adapter.?errorMessage) to also consider adapter.?resourcesSkipped (orValue(0))
and adapter.?executionError (orValue("")): set status to "False" when
executionStatus == "failed" OR executionError is non-empty OR resourcesSkipped >
0 (keep "Unknown" for empty/other states), set reason to prefer
adapter.executionError if present, else adapter.errorReason, else
"ResourcesSkipped" when resourcesSkipped > 0, else "Healthy", and set message to
include adapter.executionError or adapter.errorMessage or a generated message
noting how many resources were skipped when resourcesSkipped > 0.
In `@test/testdata/adapter_config_valid.yaml`:
- Around line 71-73: The observed_generation field is currently mapped to '{{
.resourceId }}' instead of the numeric generation; update the mapping for
observed_generation to use the generation parameter (e.g., '{{ .generation }}')
and ensure generation is populated from event.generation in the fixture
parameters so observed_generation is a numeric value for proper status
reconciliation (check where generation is defined/propagated and add it if
missing).
- Around line 79-80: The Authorization header template references an undefined
template variable `hyperfleetApiToken`; add a definition/capture for
`.hyperfleetApiToken` so rendering provides a value. Locate the adapter config
that contains the header entry (the block with "- name: Authorization" and
"value: Bearer {{ .hyperfleetApiToken }}") and add the corresponding
parameter/capture/key (e.g., a param entry, environment capture, or secrets
reference) that populates `hyperfleetApiToken` at runtime so the template
renders a non-empty bearer token.
In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml`:
- Around line 35-38: The parameter "generation" (source: "event.generation") is
declared with type "string" but is used to populate observed_generation, which
expects a numeric value; update the parameter type for name "generation" from
"string" to a numeric type (e.g., "int" or "number") so numeric payloads are
produced, and make the same change at the other occurrence referenced (the block
around lines 300-302) to keep both definitions consistent.
In `@test/testdata/dryrun/dryrun-kubernetes-task-config.yaml`:
- Around line 176-177: The observed_generation field currently sets expression:
"generation" which treats it as a literal string; update the
observed_generation.expression to reference the generation variable without
quotes (use unquoted generation) so the captured generation value is used
(locate the observed_generation block and the expression key to change the
quoted "generation" to an unquoted generation).
---
Duplicate comments:
In `@charts/examples/README.md`:
- Line 48: Update the docs example to use the snake_case field name: replace the
incorrect key `nested_Discoveries` with `nested_discoveries` in the README
example so it matches the renamed field and avoids confusion for readers; ensure
any other occurrences in the same example block are updated as well.
In `@internal/config_loader/accessors.go`:
- Around line 27-28: The file's doc comment listing built-in variables is out of
sync with the implementation: update the comment that lists "Built-in variables
(adapter, now, date)" to include "config" so it matches the builtinVariables
declaration; locate the doc near the top of internal/config_loader/accessors.go
and add "config" to the list (ensure the comment text and the builtinVariables
symbol remain consistent).
In `@internal/config_loader/validator.go`:
- Around line 582-586: The code silently skips validation when
manifest["metadata"] exists but is not an object; update the check around
manifest and metadata in validator.go to detect when metadata is present and not
a map[string]interface{} and call v.errors.Add(path+".metadata",
fmt.Sprintf("expected object for field %q but got %T", FieldName,
manifest["metadata"])). Keep the existing branch that checks for the missing
FieldName inside the map (using metadata and FieldName) so you both report wrong
types and still validate missing name when metadata is an object.
In `@test/integration/config-loader/testdata/adapter-config-template.yaml`:
- Around line 307-318: The expressions access
resources.clusterController.status.conditions.filter(c, c.type ==
'Available')[0] without ensuring the filtered array contains an element; change
the reason and message expressions to first capture the filtered array (e.g.,
available = resources.clusterController.status.conditions.filter(c, c.type ==
'Available')) and then guard access with a presence check (e.g.,
has(available[0]) or length(available) > 0) before reading available[0].reason /
available[0].message, otherwise return the existing defaults ("ResourcesCreated"
and "All Kubernetes resources created successfully"); apply the same fix to the
other occurrence (the message/reason block referenced in the comment).
In `@test/integration/maestro_client/client_tls_config_integration_test.go`:
- Line 139: Add explicit nil guards for both cfg.Clients and cfg.Clients.Maestro
before any dereference to avoid panics: replace direct accesses with
require.NotNil(t, cfg.Clients, "...") followed by require.NotNil(t,
cfg.Clients.Maestro, "...") in the test and apply the same pattern at the other
dereference sites noted (around the current checks at lines near 200, 237, 285).
Ensure every place that reads cfg.Clients.* or cfg.Clients.Maestro.* is preceded
by these two require.NotNil assertions so failures remain assertion-driven
rather than causing a panic.
In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml`:
- Around line 330-333: The expression for cluster_metadata_region is reading a
non-existent key "configmap0.data.cluster_metadata"; update the expression in
cluster_metadata_region to reference an existing key from configmap0.data (e.g.,
"configmap0.data.cluster_id" or "configmap0.data.cluster_name") or otherwise
handle the optional/empty value (e.g., use a dig into
"configmap0.data.cluster_id" with a fallback) so the field resolves to real data
instead of empty; locate the expression that currently calls dig(resources,
"configmap0.data.cluster_metadata") and replace it with the correct key or add
an explicit fallback.
In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml`:
- Around line 293-295: The ConfigMap identifiers use raw {{ .clusterId }} which
breaks the lowercase namespace convention; update the name and namespace fields
(the lines assigning name: '{{ .clusterId }}-config-symbol2222' and namespace:
'{{ .clusterId }}', and the duplicate occurrences around the later block) to use
a lowercased clusterId (e.g., use the templating lowercase function on
.clusterId) so both the namespace and the derived name (config-symbol2222) are
generated from the normalized, lowercased .clusterId.
---
Nitpick comments:
In `@configs/adapter-task-config-template.yaml`:
- Around line 52-65: The params block duplicates API addressing; remove or
consolidate hyperfleetApiBaseUrl/hyperfleetApiVersion and switch any absolute
URL construction to use relative paths (e.g., "/clusters/...") so the service
uses the single source of truth from clients.hyperfleet_api; update references
that build absolute URLs to instead read clients.hyperfleet_api base/version (or
omit version) and join with relative endpoints, ensuring params only expose a
single configurable host or none if clients.hyperfleet_api already provides it
(apply same change for the other occurrences at lines referenced by the
reviewer).
In `@internal/executor/executor.go`:
- Around line 214-223: Move the configToMap call out of the per-event execution
path and initialize e.configMap once during executor construction (e.g., in
NewExecutor): call configToMap(e.config.Config) in NewExecutor, store the result
on the Executor struct as e.configMap, and surface any error there instead of
returning NewExecutorError(PhaseParamExtraction, ... ) at runtime; then update
the execution path to remove the configToMap call and pass the cached
e.configMap into addAdapterParams and extractConfigParams (which currently use
configMap) so per-event execution no longer re-marshals the static config.
In `@test/integration/executor/executor_integration_test.go`:
- Around line 934-937: The test Parameter entries for "hyperfleetApiVersion" in
the Params slices lack a Source and rely on default fallback, which diverges
from production schema; update each occurrence of the Parameter with Name:
"hyperfleetApiVersion" (three places around the current diff spans) to include
an explicit Source (e.g., Source: "env.HYPERFLEET_API_VERSION" or the
appropriate source used in production) so the test validation matches the
schema; ensure you modify all three occurrences (around the shown block and the
other two at ~1181-1184 and ~1352-1355) so they are consistent.
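The validator.go metadata finding above boils down to distinguishing three cases: metadata absent, metadata present but the wrong type, and metadata an object missing `name`. A standalone sketch of that branching (the error-collection style here is hypothetical, not the project's actual `v.errors` type):

```go
package main

import "fmt"

// validateMetadata distinguishes three cases for a manifest's metadata:
// absent (left to other checks), present but not an object (reported as a
// type error instead of silently passing), and an object missing "name".
func validateMetadata(manifest map[string]interface{}, path string) []string {
	var errs []string
	raw, exists := manifest["metadata"]
	if !exists {
		return errs
	}
	metadata, ok := raw.(map[string]interface{})
	if !ok {
		return append(errs, path+".metadata: must be an object")
	}
	if _, hasName := metadata["name"]; !hasName {
		errs = append(errs, path+`.metadata: missing required field "name"`)
	}
	return errs
}

func main() {
	// Wrong type: previously this shape slipped through semantic validation.
	bad := map[string]interface{}{"metadata": "not-an-object"}
	fmt.Println(validateMetadata(bad, "resources[0]"))

	// Object without a name: still reported, as before.
	noName := map[string]interface{}{"metadata": map[string]interface{}{}}
	fmt.Println(validateMetadata(noName, "resources[0]"))
}
```

The key point is that the type assertion's failure branch appends an error rather than falling through, so a string-valued `metadata` can no longer pass semantic validation unnoticed.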
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f1aa4a96-8148-4c56-a935-d038d2bbabf5
📒 Files selected for processing (68)
- README.md
- charts/examples/README.md
- charts/examples/kubernetes/adapter-config.yaml
- charts/examples/kubernetes/adapter-task-config.yaml
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- charts/examples/kubernetes/values.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro/adapter-config.yaml
- charts/examples/maestro/adapter-task-config.yaml
- charts/examples/maestro/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro/values.yaml
- charts/templates/configmap-broker.yaml
- charts/templates/deployment.yaml
- charts/values.yaml
- cmd/adapter/main.go
- configs/adapter-config-template.yaml
- configs/adapter-deployment-config.yaml
- configs/adapter-task-config-template.yaml
- configs/templates/cluster-status-payload.yaml
- configs/templates/job.yaml
- configuration.md
- docs/adapter-authoring-guide.md
- docs/configuration.md
- internal/config_loader/README.md
- internal/config_loader/accessors.go
- internal/config_loader/constants.go
- internal/config_loader/loader.go
- internal/config_loader/loader_test.go
- internal/config_loader/types.go
- internal/config_loader/validator.go
- internal/config_loader/validator_test.go
- internal/config_loader/viper_loader.go
- internal/criteria/README.md
- internal/criteria/evaluator_test.go
- internal/executor/README.md
- internal/executor/executor.go
- internal/executor/executor_test.go
- internal/executor/param_extractor.go
- internal/executor/types.go
- internal/executor/utils.go
- internal/executor/utils_test.go
- internal/hyperfleet_api/types.go
- internal/manifest/generation.go
- internal/manifest/manifest.go
- test/integration/config-loader/config_criteria_integration_test.go
- test/integration/config-loader/loader_template_test.go
- test/integration/config-loader/testdata/adapter-config-template.yaml
- test/integration/config-loader/testdata/adapter_config_valid.yaml
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- test/integration/executor/executor_integration_test.go
- test/integration/executor/executor_k8s_integration_test.go
- test/integration/executor/testdata/test-adapter-config.yaml
- test/integration/maestro_client/client_tls_config_integration_test.go
- test/integration/testutil/mock_api_server.go
- test/testdata/adapter-config.yaml
- test/testdata/adapter_config_valid.yaml
- test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml
- test/testdata/dryrun/dryrun-kubernetes-task-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml
- test/testdata/task-config.yaml
- test/testdata/templates/deployment.yaml
💤 Files with no reviewable changes (7)
- internal/manifest/generation.go
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- internal/manifest/manifest.go
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- configuration.md
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
🚧 Files skipped from review as they are similar to previous changes (18)
- internal/executor/types.go
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- test/integration/testutil/mock_api_server.go
- README.md
- internal/criteria/evaluator_test.go
- docs/configuration.md
- internal/executor/README.md
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
- configs/templates/cluster-status-payload.yaml
- test/integration/config-loader/config_criteria_integration_test.go
- charts/examples/maestro/adapter-config.yaml
- test/testdata/adapter-config.yaml
- charts/examples/maestro/values.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- internal/criteria/README.md
- internal/executor/utils.go
- charts/templates/deployment.yaml
985f515 to 10555c9
♻️ Duplicate comments (8)
test/integration/maestro_client/client_tls_config_integration_test.go (1)
139-141: ⚠️ Potential issue | 🟡 Minor

Nil-guard coverage is still incomplete before Maestro dereferences.

Line 139 partially addresses this, but Line 200, Line 237, and Line 285 still dereference `cfg.Clients.Maestro` without guarding `cfg`/`cfg.Clients` first. These tests can still panic instead of failing with a clear assertion.

✅ Suggested patch
```diff
@@
 	require.NoError(t, err, "Config loading should succeed")
-	require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
+	require.NotNil(t, cfg, "Loaded config should be present")
+	require.NotNil(t, cfg.Clients, "Clients config should be present")
+	require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 	maestroCfg := cfg.Clients.Maestro
@@
 	require.NoError(t, err)
+	require.NotNil(t, cfg, "Loaded config should be present")
+	require.NotNil(t, cfg.Clients, "Clients config should be present")
+	require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 	clientCfg, err := buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro)
@@
 	require.NoError(t, err)
+	require.NotNil(t, cfg, "Loaded config should be present")
+	require.NotNil(t, cfg.Clients, "Clients config should be present")
 	maestroCfg := cfg.Clients.Maestro
+	require.NotNil(t, maestroCfg, "Maestro config should be present")
@@
 	require.NoError(t, err)
+	require.NotNil(t, cfg, "Loaded config should be present")
+	require.NotNil(t, cfg.Clients, "Clients config should be present")
 	maestroCfg := cfg.Clients.Maestro
+	require.NotNil(t, maestroCfg, "Maestro config should be present")
```

Also applies to: 200-200, 237-237, 285-285
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` around lines 139 - 141, The tests dereference cfg.Clients.Maestro without fully guarding the chain; after the initial require.NotNil(t, cfg.Clients.Maestro) replace direct dereferences (e.g., the maestroCfg assignment and subsequent uses of cfg.Clients.Maestro in the tests) with a nil-safe pattern: first assert cfg and cfg.Clients are non-nil (require.NotNil(t, cfg) and require.NotNil(t, cfg.Clients)), then require.NotNil(t, cfg.Clients.Maestro) before assigning maestroCfg := cfg.Clients.Maestro and using it in the rest of the test (references: cfg, cfg.Clients.Maestro, maestroCfg). Ensure every test block that currently accesses cfg.Clients.Maestro includes these guards to prevent panics.

test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml (1)
293-295: ⚠️ Potential issue | 🟡 Minor

Normalize ConfigMap namespace to match lowercased Namespace naming.

Line 294 uses `{{ .clusterId }}` while Namespace creation uses `{{ .clusterId | lower }}` (Line 282). Mixed-case IDs can miss the target namespace.

Suggested fix
```diff
-        namespace: '{{ .clusterId }}'
+        namespace: '{{ .clusterId | lower }}'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml` around lines 293 - 295, The ConfigMap entry uses inconsistent casing for the namespace: update the ConfigMap fields "name: '{{ .clusterId }}-config-symbol2222'" and "namespace: '{{ .clusterId }}'" to use the lowercased clusterId template (i.e. '{{ .clusterId | lower }}') so they match the Namespace creation which uses '{{ .clusterId | lower }}', ensuring resources target the same lowercased namespace.

test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml (1)
330-334: ⚠️ Potential issue | 🟡 Minor

`dig()` path references a key not produced by `configmap0`.

At Lines 330-334, `configmap0.data.cluster_metadata` does not exist; `configmap0` currently emits `cluster_id` and `cluster_name` (Lines 188-189).

Suggested fix
```diff
-      cluster_metadata_region:
+      cluster_name:
         # Pattern 12b: dig() — traversing into a nested path
         expression: |
-          dig(resources, "configmap0.data.cluster_metadata")
+          dig(resources, "configmap0.data.cluster_name")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml` around lines 330 - 334, The dig() call references a non-existent key "configmap0.data.cluster_metadata"; update the expression in the cluster_metadata_region field to reference an actual key emitted by configmap0 (e.g., "configmap0.data.cluster_id" or "configmap0.data.cluster_name") or adjust configmap0 to emit cluster_metadata; specifically edit the expression using dig(resources, "...") to point to the correct key name produced by configmap0 so the lookup succeeds.

configs/adapter-task-config-template.yaml (1)
241-251: ⚠️ Potential issue | 🟠 Major

Health condition logic can misreport skipped executions as healthy.

Line 245 reports `"True"` on success without checking `adapter.resourcesSkipped`, and Lines 247-251 rely on `adapter.errorReason`/`errorMessage` instead of the richer `adapter.executionError` path.

Suggested fix
```diff
 - type: "Health"
   status:
     expression: |
-      adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
+      adapter.?executionStatus.orValue("") == "success"
+      && !adapter.?resourcesSkipped.orValue(false)
+      ? "True"
+      : "False"
   reason:
     expression: |
-      adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
+      adapter.?executionStatus.orValue("") != "success"
+      ? "ExecutionFailed:" + adapter.?executionError.?phase.orValue("unknown")
+      : adapter.?resourcesSkipped.orValue(false)
+      ? "ResourcesSkipped"
+      : "Healthy"
   message:
     expression: |
-      adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations completed successfully"
+      adapter.?executionStatus.orValue("") != "success"
+      ? "Adapter failed at phase ["
+        + adapter.?executionError.?phase.orValue("unknown")
+        + "] step ["
+        + adapter.?executionError.?step.orValue("unknown")
+        + "]: "
+        + adapter.?executionError.?message.orValue(adapter.?errorMessage.orValue("no details"))
+      : adapter.?resourcesSkipped.orValue(false)
+      ? "Resources skipped: " + adapter.?skipReason.orValue("unknown reason")
+      : "All adapter operations completed successfully"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@configs/adapter-task-config-template.yaml` around lines 241 - 251, The Health condition currently treats any executionStatus == "success" as "True" and uses adapter.errorReason/errorMessage, which misreports runs that skipped resources and ignores the richer executionError path; update the expressions so status becomes "False" (or another non-True value) when adapter.?resourcesSkipped.orValue(0) > 0 even if executionStatus == "success", and change the reason and message expressions to prefer adapter.?executionError.reason / adapter.?executionError.message (falling back to adapter.?errorReason and adapter.?errorMessage and finally default strings) so executionError is used when present; target the expressions referencing adapter.?executionStatus, adapter.?resourcesSkipped, adapter.?executionError, adapter.?errorReason and adapter.?errorMessage.

internal/config_loader/accessors.go (1)
27-28: ⚠️ Potential issue | 🟡 Minor

Keep built-in variable docs synced with implementation.

Line 27 omits `config` even though it is part of `builtinVariables` (Line 13). Please update the comment to avoid confusion.

🔧 Proposed fix
```diff
-// - Built-in variables (adapter, now, date)
+// - Built-in variables (adapter, config, now, date)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/accessors.go` around lines 27 - 28, Update the comment listing built-in variables so it matches the actual builtinVariables declaration: include "config" alongside "adapter, now, date" and note that parameters from params are also included; change the comment near the top of accessors.go that currently reads "- Built-in variables (adapter, now, date)" to "- Built-in variables (adapter, now, date, config)" so the doc and the builtinVariables variable are in sync.

charts/examples/README.md (1)
48-48: ⚠️ Potential issue | 🟡 Minor

Fix naming to snake_case in the feature bullet.

Line 48 uses `nested_Discoveries`; this should be `nested_discoveries` to match the config key naming used elsewhere in this PR.

🔧 Proposed fix
-- Resource discovery by name and by label selectors (`nested_Discoveries`) +- Resource discovery by name and by label selectors (`nested_discoveries`)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/README.md` at line 48, Update the feature bullet that currently reads `nested_Discoveries` to use snake_case `nested_discoveries` so it matches the config key naming used elsewhere in this PR; locate the bullet text in the README where `nested_Discoveries` appears and replace it with `nested_discoveries`.
internal/config_loader/validator.go (1)
582-586: ⚠️ Potential issue | 🟠 Major
Reject non-object `metadata` instead of silently skipping validation.
At Line 582, if `metadata` exists but is not an object, validation currently skips the `name` check and reports no error. This allows malformed manifests through semantic validation.

🔧 Proposed fix

-	if metadata, ok := manifest["metadata"].(map[string]interface{}); ok {
-		if _, hasName := metadata[FieldName]; !hasName {
-			v.errors.Add(path+"."+"metadata", fmt.Sprintf("missing required field %q", FieldName))
-		}
-	}
+	if metadataRaw, exists := manifest["metadata"]; exists {
+		metadata, ok := metadataRaw.(map[string]interface{})
+		if !ok {
+			v.errors.Add(path+".metadata", fmt.Sprintf("field %q must be an object", "metadata"))
+		} else if _, hasName := metadata[FieldName]; !hasName {
+			v.errors.Add(path+".metadata", fmt.Sprintf("missing required field %q", FieldName))
+		}
+	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 582 - 586, The validator currently ignores a present but non-object "metadata" value; change the logic in the manifest metadata check to explicitly detect presence first, then assert its type: if metadata exists but is not a map[string]interface{} add an error via v.errors.Add(path+".metadata", "must be an object"), and only when it is a map perform the existing FieldName check (i.e. when manifest["metadata"] is a map check for FieldName and call v.errors.Add(path+".metadata", fmt.Sprintf("missing required field %q", FieldName)) if absent). This update touches the manifest metadata handling around the metadata, FieldName, and v.errors.Add usage in validator.go.
test/integration/config-loader/testdata/adapter-config-template.yaml (1)
307-318: ⚠️ Potential issue | 🟠 Major
Guard filtered `Available` condition before `[0]` access.
Line 307/311/316 and Line 328/333 index `filter(...)[0]` without checking the filtered list size, which can break payload building when `Available` is missing.

🔧 Suggested fix

 applied:
   status:
     expression: |
       resources.clusterNamespace.status.phase == "Active" &&
-      resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
   reason:
     expression: |
-      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
       ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason
       : "ResourcesCreated"
   message:
     expression: |
-      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
       ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message
       : "All Kubernetes resources created successfully"
@@
 available:
   reason:
     expression: |
-      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
       ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason
       : "DeploymentReady"
   message:
     expression: |
-      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
       ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message
       : "Deployment is available and serving traffic"

Also applies to: 328-335
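The size-guard pattern the suggested CEL fix adds can be mirrored in plain Go. A minimal sketch, assuming a Kubernetes-convention condition shape (the `Condition` type and field names here are illustrative, not taken from this repo):

```go
package main

import "fmt"

// Condition mirrors the status.conditions entries referenced in the CEL
// expressions above (field names follow the Kubernetes convention).
type Condition struct {
	Type    string
	Status  string
	Reason  string
	Message string
}

// firstCondition returns the first condition of the given type plus a
// found flag, so callers never index into an empty filtered list.
func firstCondition(conds []Condition, condType string) (Condition, bool) {
	for _, c := range conds {
		if c.Type == condType {
			return c, true
		}
	}
	return Condition{}, false
}

// reasonOrDefault applies the same fallback the suggested fix uses:
// prefer the matched condition's reason, otherwise a default string.
func reasonOrDefault(conds []Condition, condType, def string) string {
	if c, ok := firstCondition(conds, condType); ok && c.Reason != "" {
		return c.Reason
	}
	return def
}

func main() {
	conds := []Condition{{Type: "Available", Status: "True", Reason: "DeploymentReady"}}
	fmt.Println(reasonOrDefault(conds, "Available", "ResourcesCreated"))
	fmt.Println(reasonOrDefault(nil, "Available", "ResourcesCreated"))
}
```

The key point is that the lookup and the guard are a single operation, so no code path can reach `[0]` on an empty slice.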
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 307 - 318, The templates access resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0] directly in the readiness condition, reason expression, and message expression which can fail if no Available condition exists; update those expressions (the reason and message expression blocks and the initial availability check) to first check that resources.clusterController.status.conditions.filter(c, c.type == 'Available') has at least one element (e.g., length > 0 or use a safe-first helper) before indexing [0], and fall back to the default strings ("ResourcesCreated" and "All Kubernetes resources created successfully") when the filtered list is empty so payload construction never indexes into an empty array.
🧹 Nitpick comments (3)
test/testdata/task-config.yaml (1)
67-68: Update stale inline comments to match ConfigMap logic.
These comments still say "Job" while the conditions/resources now evaluate a ConfigMap, which can confuse future edits.
Also applies to: 82-83
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/task-config.yaml` around lines 67 - 68, Update the stale inline comments that reference "Job" to reflect the current resource type "ConfigMap": replace comments like "# Applied: Job successfully created" adjacent to the "type: Applied" condition with "# Applied: ConfigMap successfully created" (and make the same change for the other occurrence around the "type: Applied" at lines 82-83) so the inline text matches the ConfigMap logic used by the conditions/resources.
test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml (1)
11-14: Prefer numeric `generation` type when feeding `observed_generation`.
`generation` is declared as string at Line 14, but it is used for `observed_generation` at Line 142. Using `int` keeps payload typing aligned with generation semantics.

Suggested fix

 - name: generation
   required: true
   source: event.generation
-  type: string
+  type: int

Also applies to: 141-143
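If the parameter cannot be retyped at the config level, the extraction side can normalize instead. A sketch of coercing a YAML-decoded generation value to `int64` (this helper is hypothetical, not the adapter's actual param extractor):

```go
package main

import (
	"fmt"
	"strconv"
)

// normalizeGeneration coerces a decoded generation value, which may
// arrive as int, int64, float64, or string depending on the source,
// into an int64 suitable for an observed_generation field.
func normalizeGeneration(v interface{}) (int64, error) {
	switch g := v.(type) {
	case int:
		return int64(g), nil
	case int64:
		return g, nil
	case float64:
		return int64(g), nil
	case string:
		return strconv.ParseInt(g, 10, 64)
	default:
		return 0, fmt.Errorf("unsupported generation type %T", v)
	}
}

func main() {
	for _, v := range []interface{}{3, "7", 2.0} {
		n, err := normalizeGeneration(v)
		fmt.Println(n, err)
	}
}
```

Declaring the parameter as `int` in the config remains the cleaner fix; this only shows the defensive alternative.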
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml` around lines 11 - 14, The field named "generation" is declared with type: string but is used to feed "observed_generation" (numeric semantics); change the "generation" parameter's type from string to int in the task config so payload typing matches observed_generation, ensuring any schema/validation for the "generation" field aligns with integer values and update any related examples or comments that assume a string.
test/integration/config-loader/loader_template_test.go (1)
111-124: Strengthen this block to fail when `post` is unexpectedly missing.
At Line 111, the conditional allows the test to pass even if merge logic drops `post`. For this fixture test, require `post` and at least one `post_action` explicitly.

🔧 Suggested test hardening

-	if config.Post != nil {
-		assert.NotEmpty(t, config.Post.Payloads)
-		assert.NotEmpty(t, config.Post.PostActions)
-
-		// Check post action
-		if len(config.Post.PostActions) > 0 {
-			firstAction := config.Post.PostActions[0]
-			assert.NotEmpty(t, firstAction.Name)
-			if firstAction.APICall != nil {
-				assert.NotEmpty(t, firstAction.APICall.Method)
-				assert.NotEmpty(t, firstAction.APICall.URL)
-			}
-		}
-	}
+	require.NotNil(t, config.Post, "post config should exist")
+	require.NotEmpty(t, config.Post.Payloads, "post.payloads should not be empty")
+	require.NotEmpty(t, config.Post.PostActions, "post.post_actions should not be empty")
+
+	// Check first post action
+	firstAction := config.Post.PostActions[0]
+	assert.NotEmpty(t, firstAction.Name)
+	if firstAction.APICall != nil {
+		assert.NotEmpty(t, firstAction.APICall.Method)
+		assert.NotEmpty(t, firstAction.APICall.URL)
+	}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/loader_template_test.go` around lines 111 - 124, Change the permissive conditional that lets the test pass when config.Post is nil into explicit required assertions: use require.NotNil(t, config.Post) (instead of if config.Post != nil) to fail immediately if post was dropped, then require.Greater(t, len(config.Post.PostActions), 0) (or require.NotEmpty) to ensure at least one post action exists, and keep/assert payloads and first action fields (e.g., assert.NotEmpty(t, config.Post.Payloads), assert.NotEmpty(t, firstAction.Name), assert.NotEmpty(t, firstAction.APICall.Method), assert.NotEmpty(t, firstAction.APICall.URL)); reference config.Post, config.Post.PostActions, firstAction and firstAction.APICall when making these changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@charts/examples/README.md`:
- Line 48: Update the feature bullet that currently reads `nested_Discoveries`
to use snake_case `nested_discoveries` so it matches the config key naming used
elsewhere in this PR; locate the bullet text in the README where
`nested_Discoveries` appears and replace it with `nested_discoveries`.
In `@configs/adapter-task-config-template.yaml`:
- Around line 241-251: The Health condition currently treats any executionStatus
== "success" as "True" and uses adapter.errorReason/errorMessage, which
misreports runs that skipped resources and ignores the richer executionError
path; update the expressions so status becomes "False" (or another non-True
value) when adapter.?resourcesSkipped.orValue(0) > 0 even if executionStatus ==
"success", and change the reason and message expressions to prefer
adapter.?executionError.reason / adapter.?executionError.message (falling back
to adapter.?errorReason and adapter.?errorMessage and finally default strings)
so executionError is used when present; target the expressions referencing
adapter.?executionStatus, adapter.?resourcesSkipped, adapter.?executionError,
adapter.?errorReason and adapter.?errorMessage.
In `@internal/config_loader/accessors.go`:
- Around line 27-28: Update the comment listing built-in variables so it matches
the actual builtinVariables declaration: include "config" alongside "adapter,
now, date" and note that parameters from params are also included; change the
comment near the top of accessors.go that currently reads "- Built-in variables
(adapter, now, date)" to "- Built-in variables (adapter, now, date, config)" so
the doc and the builtinVariables variable are in sync.
In `@internal/config_loader/validator.go`:
- Around line 582-586: The validator currently ignores a present but non-object
"metadata" value; change the logic in the manifest metadata check to explicitly
detect presence first, then assert its type: if metadata exists but is not a
map[string]interface{} add an error via v.errors.Add(path+".metadata", "must be
an object"), and only when it is a map perform the existing FieldName check
(i.e. when manifest["metadata"] is a map check for FieldName and call
v.errors.Add(path+".metadata", fmt.Sprintf("missing required field %q",
FieldName)) if absent). This update touches the manifest metadata handling
around the metadata, FieldName, and v.errors.Add usage in validator.go.
In `@test/integration/config-loader/testdata/adapter-config-template.yaml`:
- Around line 307-318: The templates access
resources.clusterController.status.conditions.filter(c, c.type ==
'Available')[0] directly in the readiness condition, reason expression, and
message expression which can fail if no Available condition exists; update those
expressions (the reason and message expression blocks and the initial
availability check) to first check that
resources.clusterController.status.conditions.filter(c, c.type == 'Available')
has at least one element (e.g., length > 0 or use a safe-first helper) before
indexing [0], and fall back to the default strings ("ResourcesCreated" and "All
Kubernetes resources created successfully") when the filtered list is empty so
payload construction never indexes into an empty array.
In `@test/integration/maestro_client/client_tls_config_integration_test.go`:
- Around line 139-141: The tests dereference cfg.Clients.Maestro without fully
guarding the chain; after the initial require.NotNil(t, cfg.Clients.Maestro)
replace direct dereferences (e.g., the maestroCfg assignment and subsequent uses
of cfg.Clients.Maestro in the tests) with a nil-safe pattern: first assert cfg
and cfg.Clients are non-nil (require.NotNil(t, cfg) and require.NotNil(t,
cfg.Clients)), then require.NotNil(t, cfg.Clients.Maestro) before assigning
maestroCfg := cfg.Clients.Maestro and using it in the rest of the test
(references: cfg, cfg.Clients.Maestro, maestroCfg). Ensure every test block that
currently accesses cfg.Clients.Maestro includes these guards to prevent panics.
In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml`:
- Around line 330-334: The dig() call references a non-existent key
"configmap0.data.cluster_metadata"; update the expression in the
cluster_metadata_region field to reference an actual key emitted by configmap0
(e.g., "configmap0.data.cluster_id" or "configmap0.data.cluster_name") or adjust
configmap0 to emit cluster_metadata; specifically edit the expression using
dig(resources, "...") to point to the correct key name produced by configmap0 so
the lookup succeeds.
In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml`:
- Around line 293-295: The ConfigMap entry uses inconsistent casing for the
namespace: update the ConfigMap fields "name: '{{ .clusterId
}}-config-symbol2222'" and "namespace: '{{ .clusterId }}'" to use the lowercased
clusterId template (i.e. '{{ .clusterId | lower }}') so they match the Namespace
creation which uses '{{ .clusterId | lower }}', ensuring resources target the
same lowercased namespace.
---
Nitpick comments:
In `@test/integration/config-loader/loader_template_test.go`:
- Around line 111-124: Change the permissive conditional that lets the test pass
when config.Post is nil into explicit required assertions: use require.NotNil(t,
config.Post) (instead of if config.Post != nil) to fail immediately if post was
dropped, then require.Greater(t, len(config.Post.PostActions), 0) (or
require.NotEmpty) to ensure at least one post action exists, and keep/assert
payloads and first action fields (e.g., assert.NotEmpty(t,
config.Post.Payloads), assert.NotEmpty(t, firstAction.Name), assert.NotEmpty(t,
firstAction.APICall.Method), assert.NotEmpty(t, firstAction.APICall.URL));
reference config.Post, config.Post.PostActions, firstAction and
firstAction.APICall when making these changes.
In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml`:
- Around line 11-14: The field named "generation" is declared with type: string
but is used to feed "observed_generation" (numeric semantics); change the
"generation" parameter's type from string to int in the task config so payload
typing matches observed_generation, ensuring any schema/validation for the
"generation" field aligns with integer values and update any related examples or
comments that assume a string.
In `@test/testdata/task-config.yaml`:
- Around line 67-68: Update the stale inline comments that reference "Job" to
reflect the current resource type "ConfigMap": replace comments like "# Applied:
Job successfully created" adjacent to the "type: Applied" condition with "#
Applied: ConfigMap successfully created" (and make the same change for the other
occurrence around the "type: Applied" at lines 82-83) so the inline text matches
the ConfigMap logic used by the conditions/resources.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ff42e732-a6b4-404d-aa45-31459661cfb6
📒 Files selected for processing (68)
- README.md
- charts/examples/README.md
- charts/examples/kubernetes/adapter-config.yaml
- charts/examples/kubernetes/adapter-task-config.yaml
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- charts/examples/kubernetes/values.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro/adapter-config.yaml
- charts/examples/maestro/adapter-task-config.yaml
- charts/examples/maestro/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro/values.yaml
- charts/templates/configmap-broker.yaml
- charts/templates/deployment.yaml
- charts/values.yaml
- cmd/adapter/main.go
- configs/adapter-config-template.yaml
- configs/adapter-deployment-config.yaml
- configs/adapter-task-config-template.yaml
- configs/templates/cluster-status-payload.yaml
- configs/templates/job.yaml
- configuration.md
- docs/adapter-authoring-guide.md
- docs/configuration.md
- internal/config_loader/README.md
- internal/config_loader/accessors.go
- internal/config_loader/constants.go
- internal/config_loader/loader.go
- internal/config_loader/loader_test.go
- internal/config_loader/types.go
- internal/config_loader/validator.go
- internal/config_loader/validator_test.go
- internal/config_loader/viper_loader.go
- internal/criteria/README.md
- internal/criteria/evaluator_test.go
- internal/executor/README.md
- internal/executor/executor.go
- internal/executor/executor_test.go
- internal/executor/param_extractor.go
- internal/executor/types.go
- internal/executor/utils.go
- internal/executor/utils_test.go
- internal/hyperfleet_api/types.go
- internal/manifest/generation.go
- internal/manifest/manifest.go
- test/integration/config-loader/config_criteria_integration_test.go
- test/integration/config-loader/loader_template_test.go
- test/integration/config-loader/testdata/adapter-config-template.yaml
- test/integration/config-loader/testdata/adapter_config_valid.yaml
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- test/integration/executor/executor_integration_test.go
- test/integration/executor/executor_k8s_integration_test.go
- test/integration/executor/testdata/test-adapter-config.yaml
- test/integration/maestro_client/client_tls_config_integration_test.go
- test/integration/testutil/mock_api_server.go
- test/testdata/adapter-config.yaml
- test/testdata/adapter_config_valid.yaml
- test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml
- test/testdata/dryrun/dryrun-kubernetes-task-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml
- test/testdata/task-config.yaml
- test/testdata/templates/deployment.yaml
💤 Files with no reviewable changes (7)
- internal/manifest/generation.go
- charts/examples/maestro-kubernetes/adapter-config.yaml
- internal/manifest/manifest.go
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- configuration.md
✅ Files skipped from review due to trivial changes (1)
- internal/executor/types.go
🚧 Files skipped from review as they are similar to previous changes (18)
- internal/executor/executor.go
- test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- configs/templates/job.yaml
- internal/config_loader/viper_loader.go
- internal/executor/utils.go
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- internal/hyperfleet_api/types.go
- test/testdata/dryrun/dryrun-maestro-adapter-config.yaml
- charts/templates/deployment.yaml
- internal/executor/executor_test.go
- test/testdata/adapter-config.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- charts/examples/kubernetes/adapter-config.yaml
- configs/templates/cluster-status-payload.yaml
- docs/configuration.md
- internal/criteria/evaluator_test.go
- charts/templates/configmap-broker.yaml
Force-pushed from 10555c9 to 063ad07
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
cmd/adapter/main.go (1)
273-289: ⚠️ Potential issue | 🟡 Minor
Add mutual exclusivity validation for transport clients.
The `createTransportClient` function at line 273 gives implicit precedence to Maestro and silently ignores Kubernetes when both are configured. Since this migration aligns on a single-client design, add validation in the config load pipeline to enforce that exactly one transport client is configured. This validation should fail during `LoadConfig()` rather than silently selecting Maestro at runtime.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/adapter/main.go` around lines 273 - 289, The config currently allows both Maestro and Kubernetes to be set and createTransportClient prefers Maestro silently; update the configuration validation in LoadConfig to enforce mutual exclusivity so exactly one transport client is configured (i.e., exactly one of config.Clients.Maestro and config.Clients.Kubernetes is non-nil). In LoadConfig, add checks to return a validation error if neither is set or if both are set (include clear messages like "must configure exactly one transport client: Maestro or Kubernetes"); keep createTransportClient unchanged to rely on validated config. Also update any callers/tests of LoadConfig to expect the new validation error paths.
internal/config_loader/validator.go (2)
617-623: ⚠️ Potential issue | 🟡 Minor
Guard `ValidateAdapterVersion` against nil config input.
This exported function dereferences `config` directly and will panic on nil input.

🔧 Proposed fix

 func ValidateAdapterVersion(config *AdapterConfig, expectedVersion string) error {
 	if expectedVersion == "" {
 		return nil
 	}
+	if config == nil {
+		return fmt.Errorf("config is nil")
+	}
 	configVersion := config.Adapter.Version

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 617 - 623, ValidateAdapterVersion currently dereferences config and will panic if nil; add an early nil-check at the top of ValidateAdapterVersion to detect a nil *AdapterConfig and return a descriptive error (e.g. "nil config" or similar) instead of proceeding to read config.Adapter.Version; this prevents panics and documents the failure mode of the exported function.
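The guard order matters: the empty-expected short-circuit stays first, and the nil check comes before any field access. A self-contained sketch with a pared-down stand-in type (the version comparison and error wording here are assumptions, not the repo's actual implementation):

```go
package main

import (
	"errors"
	"fmt"
)

// AdapterConfig is a minimal stand-in for the loader's config type;
// only the field the version check reads is modeled here.
type AdapterConfig struct {
	Adapter struct{ Version string }
}

// validateAdapterVersion rejects a nil config before dereferencing it,
// while keeping "no expected version" as an early successful return.
func validateAdapterVersion(config *AdapterConfig, expected string) error {
	if expected == "" {
		return nil // nothing to validate against
	}
	if config == nil {
		return errors.New("config is nil")
	}
	if config.Adapter.Version != expected {
		return fmt.Errorf("adapter version %q does not match expected %q",
			config.Adapter.Version, expected)
	}
	return nil
}

func main() {
	fmt.Println(validateAdapterVersion(nil, "v1")) // error instead of panic
	fmt.Println(validateAdapterVersion(nil, ""))   // nil: no expectation set
}
```

With this ordering, callers that pass no expected version keep their current behavior even with a nil config, which matches the proposed diff.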
370-378: ⚠️ Potential issue | 🟠 Major
Maestro manifest template validation is not actually skipped.
`validateTemplateMap` still runs for every `resource.Manifest`, including Maestro resources, so runtime-provided framework variables can be incorrectly flagged as undefined.

🔧 Proposed fix

 for i, resource := range v.config.Resources {
 	resourcePath := fmt.Sprintf("%s[%d]", FieldResources, i)
 	if manifest, ok := resource.Manifest.(map[string]interface{}); ok {
-		v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
+		if !resource.IsMaestroTransport() {
+			v.validateTemplateMap(manifest, resourcePath+"."+FieldManifest)
+		}
 	}
 	// NOTE: For maestro transport, we skip template variable validation for manifest content.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 370 - 378, The code currently calls validateTemplateMap for every resource.Manifest (in the loop over v.config.Resources), which still runs for Maestro resources; change the logic so that template validation is skipped for Maestro-discovered resources by detecting Maestro in resource.Discovery and NOT calling validateTemplateMap for those entries. Concretely, in the loop that references v.config.Resources, FieldResources, FieldManifest and calls validateTemplateMap(manifest, ...), add a guard that checks resource.Discovery for the Maestro transport (e.g., inspect resource.Discovery.Transport or the appropriate discriminator used in your discovery struct) and only call validateTemplateMap when the transport is not Maestro; alternatively move the validateTemplateMap call below the existing if resource.Discovery != nil block and wrap it with an explicit "if !isMaestroDiscovery(resource.Discovery)" condition. Ensure you use the same field names (resource.Discovery, resource.Manifest) and the validateTemplateMap function name so the change is easy to locate.
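The skip-guard shape the validator fix above describes can be isolated in a small sketch. The `Resource` type, its `Transport` field, and the target-collection helper are all illustrative stand-ins; only the `IsMaestroTransport()` method name comes from the review:

```go
package main

import "fmt"

// Resource is a pared-down stand-in for the loader's resource type; the
// real code exposes IsMaestroTransport() on it.
type Resource struct {
	Name      string
	Transport string
}

func (r Resource) IsMaestroTransport() bool { return r.Transport == "maestro" }

// collectTemplateTargets mirrors the fixed loop shape: only non-Maestro
// resources are handed to template validation, since Maestro manifests
// reference runtime-provided framework variables.
func collectTemplateTargets(resources []Resource) []string {
	var targets []string
	for _, r := range resources {
		if r.IsMaestroTransport() {
			continue // skip template variable validation for maestro transport
		}
		targets = append(targets, r.Name)
	}
	return targets
}

func main() {
	rs := []Resource{
		{Name: "ns", Transport: "kubernetes"},
		{Name: "mw", Transport: "maestro"},
	}
	fmt.Println(collectTemplateTargets(rs))
}
```

Keeping the guard on the same branch that calls the validator (rather than a comment further down the loop) is what prevents the false "undefined variable" findings.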
♻️ Duplicate comments (8)
test/integration/maestro_client/client_tls_config_integration_test.go (1)
200-200: ⚠️ Potential issue | 🟡 Minor
Re-add nil guards before dereferencing `cfg.Clients.Maestro`.
Line 200, Line 237, and Line 285 still assume non-nil and can panic on partial load results; these should fail with clear test assertions instead.

Suggested fix

+	require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 	clientCfg, err := buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro)
@@
 	maestroCfg := cfg.Clients.Maestro
+	require.NotNil(t, maestroCfg, "Maestro config should be present")
 	assert.True(t, maestroCfg.Insecure)
@@
 	maestroCfg := cfg.Clients.Maestro
+	require.NotNil(t, maestroCfg, "Maestro config should be present")
 	assert.Equal(t, env.TLSMaestroGRPCAddr, maestroCfg.GRPCServerAddress, "Env should override YAML")

Also applies to: 237-238, 285-286
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` at line 200, The test dereferences cfg.Clients.Maestro without nil checks which can panic; update the test to assert cfg.Clients is non-nil and cfg.Clients.Maestro is non-nil before calling buildMaestroClientConfigFromLoaded (and before any other access at the other occurrences), and if either is nil call t.Fatalf or require.FailNow with a clear message so the test fails cleanly rather than panicking; locate the checks around the buildMaestroClientConfigFromLoaded call and the other two spots where cfg.Clients.Maestro is used and add the nil-guard assertions referencing cfg.Clients and cfg.Clients.Maestro.test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml (1)
330-334: ⚠️ Potential issue | 🟡 Minor
`cluster_metadata_region` points to a non-existent key.
`configmap0.data.cluster_metadata` is not produced by this config (`cluster_id` and `cluster_name` are), so this expression will resolve empty.

Suggested fix

-cluster_metadata_region:
+cluster_name:
   # Pattern 12b: dig() — traversing into a nested path
   expression: |
-    dig(resources, "configmap0.data.cluster_metadata")
+    dig(resources, "configmap0.data.cluster_name")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml` around lines 330 - 334, The expression for cluster_metadata_region (the dig(...) call) points to a non-existent key "configmap0.data.cluster_metadata" so it will always resolve empty; update the expression inside cluster_metadata_region to dig into an existing key produced by the config (for example "configmap0.data.cluster_id" or "configmap0.data.cluster_name") so the value resolves correctly, i.e., replace the path in the dig(...) expression accordingly.test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml (1)
293-295: ⚠️ Potential issue | 🟡 Minor
ConfigMap namespace should match the normalized Namespace name.
Line 294 uses `{{ .clusterId }}` while Namespace is created as `{{ .clusterId | lower }}`; this can break mixed-case runs.

Suggested fix

-  namespace: '{{ .clusterId }}'
+  namespace: '{{ .clusterId | lower }}'

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml` around lines 293 - 295, The ConfigMap resource with name "resource0" has its namespace set to '{{ .clusterId }}' but the Namespace resource is created with '{{ .clusterId | lower }}', causing a mismatch for mixed-case cluster IDs; update the ConfigMap's namespace field to use '{{ .clusterId | lower }}' so it matches the normalized Namespace name (i.e., change the namespace template in the ConfigMap for resource0 to the lowercased form).configs/adapter-task-config-template.yaml (1)
241-251: ⚠️ Potential issue | 🟠 Major
Health condition still masks skipped/error outcomes.
The current logic can emit healthy defaults for skipped runs or execution errors because it only relies on `executionStatus` + `errorReason`/`errorMessage`.

Suggested fix

 - type: "Health"
   status:
     expression: |
-      adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
+      adapter.?executionStatus.orValue("") == "success"
+      && !adapter.?resourcesSkipped.orValue(false)
+      ? "True"
+      : "False"
   reason:
     expression: |
-      adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
+      adapter.?executionStatus.orValue("") != "success"
+      ? "ExecutionFailed:" + adapter.?executionError.?phase.orValue("unknown")
+      : adapter.?resourcesSkipped.orValue(false)
+        ? "ResourcesSkipped"
+        : "Healthy"
   message:
     expression: |
-      adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations completed successfully"
+      adapter.?executionStatus.orValue("") != "success"
+      ? "Adapter failed at phase [" + adapter.?executionError.?phase.orValue("unknown")
+        + "] step [" + adapter.?executionError.?step.orValue("unknown")
+        + "]: " + adapter.?executionError.?message.orValue(adapter.?errorMessage.orValue("no details"))
+      : adapter.?resourcesSkipped.orValue(false)
+        ? "Resources skipped: " + adapter.?skipReason.orValue("unknown reason")
+        : "All adapter operations completed successfully"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@configs/adapter-task-config-template.yaml` around lines 241 - 251, Health logic currently only inspects adapter.?executionStatus and falls back to default "Healthy"/success messages, which can mask skipped or errored runs; update the expressions for status, reason, and message (referencing adapter.?executionStatus, adapter.?errorReason, adapter.?errorMessage) to explicitly handle "skipped" and "failed" outcomes: if adapter.?executionStatus == "success" set status "True" with appropriate success reason/message; if "failed" set status "False" and use adapter.?errorReason and adapter.?errorMessage (no healthy defaults); if "skipped" set status "Unknown" and a "Skipped" reason/message (or use errorMessage if present); ensure no path falls back to a healthy message when executionStatus is not success.charts/examples/README.md (1)
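The `orValue(...)` fallback chains in the suggested fix reduce to a first-non-empty coalesce. A minimal Go sketch of that precedence (the variable names `executionErrorReason` and `errorReason` are illustrative, not guaranteed adapter payload fields):

```go
package main

import "fmt"

// firstNonEmpty mirrors the fallback chain the CEL fix encodes:
// executionError.reason, then errorReason, then a default string.
// It returns the first value that is non-empty.
func firstNonEmpty(values ...string) string {
	for _, v := range values {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	executionErrorReason, errorReason := "", "QuotaExceeded"
	// The richer executionError path wins when present; otherwise the
	// legacy errorReason, and finally the default.
	fmt.Println(firstNonEmpty(executionErrorReason, errorReason, "Healthy"))
}
```

Expressing the chain this way makes the precedence auditable: the default string can only surface when every richer source is empty.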
48-48: ⚠️ Potential issue | 🟡 Minor
Fix docs key typo: use `nested_discoveries`.
Line 48 currently says `nested_Discoveries`, which is inconsistent with the migrated snake_case field name.

✏️ Proposed fix

-- Resource discovery by name and by label selectors (`nested_Discoveries`)
+- Resource discovery by name and by label selectors (`nested_discoveries`)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/README.md` at line 48, Update the docs to use the correct snake_case key: replace the incorrect `nested_Discoveries` occurrence with `nested_discoveries` in the README example so it matches the migrated field name; ensure any other references in the same file or examples also use `nested_discoveries` for consistency.

internal/config_loader/accessors.go (1)
27-28: ⚠️ Potential issue | 🟡 Minor

Keep built-in variable docs in sync with implementation.

Line 27 still lists only `adapter, now, date`, but Line 13 includes `config` as a built-in.

📝 Suggested fix

```diff
-// - Built-in variables (adapter, now, date)
+// - Built-in variables (adapter, config, now, date)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/accessors.go` around lines 27 - 28, Update the top-of-file built-in variables doc so it matches the implementation: change the existing comment line that reads "Built-in variables (adapter, now, date)" to include config as well (i.e., "adapter, now, date, config") so the documented built-ins match the actual built-in `config` used by the accessor implementation.

internal/config_loader/validator.go (1)
582-586: ⚠️ Potential issue | 🟡 Minor

Report invalid `metadata` types instead of silently passing them.

If `metadata` exists but is not an object, Line 582 skips validation and no type error is reported.

🔧 Proposed fix

```diff
 if metadata, ok := manifest["metadata"].(map[string]interface{}); ok {
     if _, hasName := metadata[FieldName]; !hasName {
         v.errors.Add(path+"."+"metadata", fmt.Sprintf("missing required field %q", FieldName))
     }
+} else if _, exists := manifest["metadata"]; exists {
+    v.errors.Add(path+".metadata", "metadata must be an object")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 582 - 586, The code currently ignores non-object `metadata` values; update the validation to detect when `metadata` exists but is not a map and report a type error: read the raw value from manifest["metadata"], if it is present and is a map[string]interface{} then keep the existing check for FieldName, otherwise call v.errors.Add(path+".metadata", "expected object") (use the same v.errors.Add mechanism and reference FieldName, manifest and v.errors.Add to locate the logic).

test/integration/config-loader/testdata/adapter-config-template.yaml (1)
307-318: ⚠️ Potential issue | 🟠 Major

Guard filtered-condition indexing before accessing `[0]`.

These expressions can fail when no `Available` condition exists, breaking payload build at runtime.

🔧 Proposed fix

```diff
-resources.clusterNamespace.status.phase == "Active" &&
-resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
+resources.clusterNamespace.status.phase == "Active" &&
+resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
 ...
-has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
+resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
 ...
-has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
+resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
```

```shell
#!/bin/bash
set -euo pipefail
file="test/integration/config-loader/testdata/adapter-config-template.yaml"
# Verify all direct [0] accesses after Available-condition filters.
rg -n "filter\(c, c.type == 'Available'\)\[0\]" "$file"
```

Also applies to: 328-335
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 307 - 318, Guard the filtered "Available" condition before indexing into [0]: compute the filtered array once (e.g. available = resources.clusterController.status.conditions.filter(c, c.type == 'Available')) and then use a conditional that checks available.length > 0 (or has(available[0])) before accessing available[0].status / .reason / .message, returning the defaults ("ResourcesCreated" and "All Kubernetes resources created successfully") when the array is empty; apply this change to the status check and the reason/message expression blocks that reference filter(c, c.type == 'Available')[0] (including the other occurrences noted).
🧹 Nitpick comments (2)
charts/examples/kubernetes/adapter-task-config.yaml (1)
64-64: Normalize Namespace name with `| lower` for safety.

Using raw `clusterId` can produce invalid namespace names for mixed-case input.

Suggested fix

```diff
-  name: "{{ .clusterId }}"
+  name: "{{ .clusterId | lower }}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/kubernetes/adapter-task-config.yaml` at line 64, The namespace name is created from the template variable clusterId using the line containing name: "{{ .clusterId }}", which can produce invalid mixed-case namespaces; update that template to normalize to lowercase by applying the template filter (use .clusterId | lower) so the rendered namespace name is always lowercase and Kubernetes-safe.

test/integration/executor/executor_k8s_integration_test.go (1)
177-180: Align test params with schema by providing `source` fields.

Line 177 and Line 886 define params without `Source`, while the schema requires it. This makes tests drift from real-world validated config behavior.

Suggested fix

```diff
- {
-     Name:     "testNamespace",
-     Default:  testNamespace,
-     Required: false,
- },
+ {
+     Name:     "testNamespace",
+     Source:   "env.TEST_NAMESPACE",
+     Default:  testNamespace,
+     Required: false,
+ },
@@
- {Name: "hyperfleetApiVersion", Default: "v1"},
+ {Name: "hyperfleetApiVersion", Source: "env.HYPERFLEET_API_VERSION", Default: "v1"},
```

Also applies to: 886-887
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/executor/executor_k8s_integration_test.go` around lines 177 - 180, The test params are missing the required Source field; update the param struct(s) where Name: "testNamespace" (and the other similar param around the later occurrence) in executor_k8s_integration_test.go to include the Source field set to the schema-appropriate value (for example Source: schema.SourceRuntime or the correct enum/value your codebase uses), so the test params match the validated config schema; ensure you add Source to every param literal that currently has Name/Default/Required only.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@charts/examples/kubernetes/adapter-task-config.yaml`:
- Around line 182-191: The health condition is too optimistic: update the
expressions under type "Health" to consider resourcesSkipped and structured
executionError in addition to adapter.?executionStatus, adapter.?errorReason and
adapter.?errorMessage so skipped or errored runs are not reported as healthy;
specifically, change the status expression to return "False" if
adapter.?resourcesSkipped.orValue(0) > 0 or adapter.?executionError != null or
adapter.?executionStatus.orValue("") != "success", update reason to prefer
adapter.?executionError.reason or adapter.?errorReason when present, and update
message to include adapter.?executionError.message or adapter.?errorMessage and
mention resourcesSkipped when >0 so the Health condition accurately reflects
skipped/error paths.
In `@configs/adapter-task-config-template.yaml`:
- Around line 132-133: The template currently maps generationId only from the
precondition capture (field "generation"), which leaves generationId unset if
the precondition API fails; change the mapping for generationId so it falls back
to observed_generation when precondition.generation is missing (e.g., set
generationId to observed_generation or use a conditional/default expression that
uses precondition.generation if present otherwise observed_generation); apply
the same fallback change to the other occurrence of generationId in the template
(the second mapping block using "generation") so post payload construction
always has a populated generationId for error reporting.
- Around line 54-58: The template defines a required env param
"hyperfleetApiBaseUrl" that is unused (while clients.hyperfleet_api.base_url is
used with a relative URL), causing config load to fail; either remove the
"hyperfleetApiBaseUrl" entry or mark it not required, or wire it into the
clients.hyperfleet_api.base_url setting so clients.hyperfleet_api.base_url reads
from env.HYPERFLEET_API_BASE_URL; update the "name: hyperfleetApiBaseUrl" entry
or the "clients.hyperfleet_api.base_url" reference accordingly to ensure the
required env var is actually consumed.
In `@internal/config_loader/viper_loader.go`:
- Around line 80-88: The code that picks a fallback config file (variable
filePath in the viper loader) can pick a path from standardConfigPaths but that
resolved path is not propagated to LoadConfig, so adapterBaseDir inside
LoadConfig is computed without the fallback path; update the call/site so the
resolved filePath is forwarded into LoadConfig (or change LoadConfig signature
to accept an explicit configPath param) and then compute adapterBaseDir from
that passed-in path instead of deriving it only from options/env; adjust
references to filePath and adapterBaseDir (and any callers) so the selected
config path is consistently used.
In `@test/integration/config-loader/testdata/adapter_config_valid.yaml`:
- Around line 7-14: The fixture places hyperfleet_api and kubernetes at the root
but the loader/schema expect clients.hyperfleet_api and clients.kubernetes;
update the YAML so both entries are nested under a top-level clients mapping
(i.e., move the existing hyperfleet_api and kubernetes blocks under a new
clients key) so the adapter config loader will recognize them.
---
Outside diff comments:
In `@cmd/adapter/main.go`:
- Around line 273-289: The config currently allows both Maestro and Kubernetes
to be set and createTransportClient prefers Maestro silently; update the
configuration validation in LoadConfig to enforce mutual exclusivity so exactly
one transport client is configured (i.e., exactly one of config.Clients.Maestro
and config.Clients.Kubernetes is non-nil). In LoadConfig, add checks to return a
validation error if neither is set or if both are set (include clear messages
like "must configure exactly one transport client: Maestro or Kubernetes"); keep
createTransportClient unchanged to rely on validated config. Also update any
callers/tests of LoadConfig to expect the new validation error paths.
In `@internal/config_loader/validator.go`:
- Around line 617-623: ValidateAdapterVersion currently dereferences config and
will panic if nil; add an early nil-check at the top of ValidateAdapterVersion
to detect a nil *AdapterConfig and return a descriptive error (e.g. "nil config"
or similar) instead of proceeding to read config.Adapter.Version; this prevents
panics and documents the failure mode of the exported function.
- Around line 370-378: The code currently calls validateTemplateMap for every
resource.Manifest (in the loop over v.config.Resources), which still runs for
Maestro resources; change the logic so that template validation is skipped for
Maestro-discovered resources by detecting Maestro in resource.Discovery and NOT
calling validateTemplateMap for those entries. Concretely, in the loop that
references v.config.Resources, FieldResources, FieldManifest and calls
validateTemplateMap(manifest, ...), add a guard that checks resource.Discovery
for the Maestro transport (e.g., inspect resource.Discovery.Transport or the
appropriate discriminator used in your discovery struct) and only call
validateTemplateMap when the transport is not Maestro; alternatively move the
validateTemplateMap call below the existing if resource.Discovery != nil block
and wrap it with an explicit "if !isMaestroDiscovery(resource.Discovery)"
condition. Ensure you use the same field names (resource.Discovery,
resource.Manifest) and the validateTemplateMap function name so the change is
easy to locate.
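The transport mutual-exclusivity rule described above can be sketched as follows. The type and function names here are assumptions for illustration, not the real `LoadConfig` code:

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the loaded client config types (shapes assumed).
type MaestroConfig struct{ Server string }
type KubernetesConfig struct{ Kubeconfig string }

type Clients struct {
	Maestro    *MaestroConfig
	Kubernetes *KubernetesConfig
}

// validateTransport enforces that exactly one transport client is configured,
// so createTransportClient never has to silently prefer one over the other.
func validateTransport(c Clients) error {
	if c.Maestro == nil && c.Kubernetes == nil {
		return errors.New("must configure exactly one transport client: Maestro or Kubernetes")
	}
	if c.Maestro != nil && c.Kubernetes != nil {
		return errors.New("must configure exactly one transport client: both Maestro and Kubernetes are set")
	}
	return nil
}

func main() {
	fmt.Println(validateTransport(Clients{}))                          // error: neither set
	fmt.Println(validateTransport(Clients{Maestro: &MaestroConfig{}})) // nil: exactly one set
}
```

Running this check inside config loading (rather than at client creation) keeps the fail-fast behavior in one place and makes the error paths easy to test.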
---
Duplicate comments:
In `@charts/examples/README.md`:
- Line 48: Update the docs to use the correct snake_case key: replace the
incorrect `nested_Discoveries` occurrence with `nested_discoveries` in the
README example so it matches the migrated field name; ensure any other
references in the same file or examples also use `nested_discoveries` for
consistency.
In `@configs/adapter-task-config-template.yaml`:
- Around line 241-251: Health logic currently only inspects
adapter.?executionStatus and falls back to default "Healthy"/success messages,
which can mask skipped or errored runs; update the expressions for status,
reason, and message (referencing adapter.?executionStatus, adapter.?errorReason,
adapter.?errorMessage) to explicitly handle "skipped" and "failed" outcomes: if
adapter.?executionStatus == "success" set status "True" with appropriate success
reason/message; if "failed" set status "False" and use adapter.?errorReason and
adapter.?errorMessage (no healthy defaults); if "skipped" set status "Unknown"
and a "Skipped" reason/message (or use errorMessage if present); ensure no path
falls back to a healthy message when executionStatus is not success.
In `@internal/config_loader/accessors.go`:
- Around line 27-28: Update the top-of-file built-in variables doc so it matches
the implementation: change the existing comment line that reads "Built-in
variables (adapter, now, date)" to include config as well (i.e., "adapter, now,
date, config") so the documented built-ins match the actual built-in `config`
used by the accessor implementation.
In `@internal/config_loader/validator.go`:
- Around line 582-586: The code currently ignores non-object `metadata` values;
update the validation to detect when `metadata` exists but is not a map and
report a type error: read the raw value from manifest["metadata"], if it is
present and is a map[string]interface{} then keep the existing check for
FieldName, otherwise call v.errors.Add(path+".metadata", "expected object") (use
the same v.errors.Add mechanism and reference FieldName, manifest and
v.errors.Add to locate the logic).
In `@test/integration/config-loader/testdata/adapter-config-template.yaml`:
- Around line 307-318: Guard the filtered "Available" condition before indexing
into [0]: compute the filtered array once (e.g. available =
resources.clusterController.status.conditions.filter(c, c.type == 'Available'))
and then use a conditional that checks available.length > 0 (or
has(available[0])) before accessing available[0].status / .reason / .message,
returning the defaults ("ResourcesCreated" and "All Kubernetes resources created
successfully") when the array is empty; apply this change to the status check
and the reason/message expression blocks that reference filter(c, c.type ==
'Available')[0] (including the other occurrences noted).
In `@test/integration/maestro_client/client_tls_config_integration_test.go`:
- Line 200: The test dereferences cfg.Clients.Maestro without nil checks which
can panic; update the test to assert cfg.Clients is non-nil and
cfg.Clients.Maestro is non-nil before calling buildMaestroClientConfigFromLoaded
(and before any other access at the other occurrences), and if either is nil
call t.Fatalf or require.FailNow with a clear message so the test fails cleanly
rather than panicking; locate the checks around the
buildMaestroClientConfigFromLoaded call and the other two spots where
cfg.Clients.Maestro is used and add the nil-guard assertions referencing
cfg.Clients and cfg.Clients.Maestro.
In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml`:
- Around line 330-334: The expression for cluster_metadata_region (the dig(...)
call) points to a non-existent key "configmap0.data.cluster_metadata" so it will
always resolve empty; update the expression inside cluster_metadata_region to
dig into an existing key produced by the config (for example
"configmap0.data.cluster_id" or "configmap0.data.cluster_name") so the value
resolves correctly, i.e., replace the path in the dig(...) expression
accordingly.
In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml`:
- Around line 293-295: The ConfigMap resource with name "resource0" has its
namespace set to '{{ .clusterId }}' but the Namespace resource is created with
'{{ .clusterId | lower }}', causing a mismatch for mixed-case cluster IDs;
update the ConfigMap's namespace field to use '{{ .clusterId | lower }}' so it
matches the normalized Namespace name (i.e., change the namespace template in
the ConfigMap for resource0 to the lowercased form).
---
Nitpick comments:
In `@charts/examples/kubernetes/adapter-task-config.yaml`:
- Line 64: The namespace name is created from the template variable clusterId
using the line containing name: "{{ .clusterId }}", which can produce invalid
mixed-case namespaces; update that template to normalize to lowercase by
applying the template filter (use .clusterId | lower) so the rendered namespace
name is always lowercase and Kubernetes-safe.
In `@test/integration/executor/executor_k8s_integration_test.go`:
- Around line 177-180: The test params are missing the required Source field;
update the param struct(s) where Name: "testNamespace" (and the other similar
param around the later occurrence) in executor_k8s_integration_test.go to
include the Source field set to the schema-appropriate value (for example
Source: schema.SourceRuntime or the correct enum/value your codebase uses), so
the test params match the validated config schema; ensure you add Source to
every param literal that currently has Name/Default/Required only.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c39643e4-7fd1-4abd-b766-5f9eb2cbe913
📒 Files selected for processing (68)
- README.md
- charts/examples/README.md
- charts/examples/kubernetes/adapter-config.yaml
- charts/examples/kubernetes/adapter-task-config.yaml
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- charts/examples/kubernetes/values.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro/adapter-config.yaml
- charts/examples/maestro/adapter-task-config.yaml
- charts/examples/maestro/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro/values.yaml
- charts/templates/configmap-broker.yaml
- charts/templates/deployment.yaml
- charts/values.yaml
- cmd/adapter/main.go
- configs/adapter-config-template.yaml
- configs/adapter-deployment-config.yaml
- configs/adapter-task-config-template.yaml
- configs/templates/cluster-status-payload.yaml
- configs/templates/job.yaml
- configuration.md
- docs/adapter-authoring-guide.md
- docs/configuration.md
- internal/config_loader/README.md
- internal/config_loader/accessors.go
- internal/config_loader/constants.go
- internal/config_loader/loader.go
- internal/config_loader/loader_test.go
- internal/config_loader/types.go
- internal/config_loader/validator.go
- internal/config_loader/validator_test.go
- internal/config_loader/viper_loader.go
- internal/criteria/README.md
- internal/criteria/evaluator_test.go
- internal/executor/README.md
- internal/executor/executor.go
- internal/executor/executor_test.go
- internal/executor/param_extractor.go
- internal/executor/types.go
- internal/executor/utils.go
- internal/executor/utils_test.go
- internal/hyperfleet_api/types.go
- internal/manifest/generation.go
- internal/manifest/manifest.go
- test/integration/config-loader/config_criteria_integration_test.go
- test/integration/config-loader/loader_template_test.go
- test/integration/config-loader/testdata/adapter-config-template.yaml
- test/integration/config-loader/testdata/adapter_config_valid.yaml
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- test/integration/executor/executor_integration_test.go
- test/integration/executor/executor_k8s_integration_test.go
- test/integration/executor/testdata/test-adapter-config.yaml
- test/integration/maestro_client/client_tls_config_integration_test.go
- test/integration/testutil/mock_api_server.go
- test/testdata/adapter-config.yaml
- test/testdata/adapter_config_valid.yaml
- test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml
- test/testdata/dryrun/dryrun-kubernetes-task-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml
- test/testdata/task-config.yaml
- test/testdata/templates/deployment.yaml
💤 Files with no reviewable changes (7)
- internal/manifest/manifest.go
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- internal/manifest/generation.go
- configuration.md
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
✅ Files skipped from review due to trivial changes (1)
- internal/executor/types.go
🚧 Files skipped from review as they are similar to previous changes (14)
- docs/configuration.md
- internal/hyperfleet_api/types.go
- internal/executor/README.md
- test/integration/testutil/mock_api_server.go
- internal/executor/utils.go
- internal/executor/executor_test.go
- configs/templates/job.yaml
- internal/executor/utils_test.go
- charts/templates/deployment.yaml
- configs/templates/cluster-status-payload.yaml
- charts/examples/maestro/values.yaml
- test/testdata/adapter-config.yaml
- README.md
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
```yaml
- name: "hyperfleetApiBaseUrl"
  source: "env.HYPERFLEET_API_BASE_URL"
  type: "string"
  description: "Base URL for the HyperFleet API"
  required: true
```
Remove or wire the required hyperfleetApiBaseUrl param.
Line 54 defines a required env param that is never used because Line 113 uses a relative URL. This can fail config loading even when clients.hyperfleet_api.base_url is correctly set.
Suggested fix
```diff
-  - name: "hyperfleetApiBaseUrl"
-    source: "env.HYPERFLEET_API_BASE_URL"
-    type: "string"
-    description: "Base URL for the HyperFleet API"
-    required: true
```

Based on learnings: environment variable validation for required env.* params is enforced at config load time (fail-fast).
Also applies to: 113-114
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@configs/adapter-task-config-template.yaml` around lines 54 - 58, The template
defines a required env param "hyperfleetApiBaseUrl" that is unused (while
clients.hyperfleet_api.base_url is used with a relative URL), causing config
load to fail; either remove the "hyperfleetApiBaseUrl" entry or mark it not
required, or wire it into the clients.hyperfleet_api.base_url setting so
clients.hyperfleet_api.base_url reads from env.HYPERFLEET_API_BASE_URL; update
the "name: hyperfleetApiBaseUrl" entry or the "clients.hyperfleet_api.base_url"
reference accordingly to ensure the required env var is actually consumed.
```yaml
- name: "generationId"
  field: "generation"
```
observed_generation should not depend solely on precondition capture.
If the precondition API call fails, generationId may be unset, and post payload construction can fail right when we need error reporting most.
Suggested fix
```diff
 params:
+  - name: "generation"
+    source: "event.generation"
+    type: "int"
+    required: true
@@
-observed_generation:
-  expression: "generationId"
+observed_generation:
+  expression: "generation"
```

Also applies to: 254-256
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@configs/adapter-task-config-template.yaml` around lines 132 - 133, The
template currently maps generationId only from the precondition capture (field
"generation"), which leaves generationId unset if the precondition API fails;
change the mapping for generationId so it falls back to observed_generation when
precondition.generation is missing (e.g., set generationId to
observed_generation or use a conditional/default expression that uses
precondition.generation if present otherwise observed_generation); apply the
same fallback change to the other occurrence of generationId in the template
(the second mapping block using "generation") so post payload construction
always has a populated generationId for error reporting.
```yaml
hyperfleet_api:
  timeout: 2s
  retry_attempts: 3
  retry_backoff: exponential

kubernetes:
  api_version: "v1"
```
Wrap client settings under clients in this “valid” fixture.
Line 7-Line 14 places hyperfleet_api and kubernetes at the root, but the current adapter config schema and loaders use clients.hyperfleet_api and clients.kubernetes. This fixture can be silently misread/ignored as currently structured.
🔧 Suggested fix
```diff
-hyperfleet_api:
-  timeout: 2s
-  retry_attempts: 3
-  retry_backoff: exponential
-
-kubernetes:
-  api_version: "v1"
+clients:
+  hyperfleet_api:
+    timeout: 2s
+    retry_attempts: 3
+    retry_backoff: exponential
+
+  kubernetes:
+    api_version: "v1"
```
+ api_version: "v1"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
clients:
  hyperfleet_api:
    timeout: 2s
    retry_attempts: 3
    retry_backoff: exponential

  kubernetes:
    api_version: "v1"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/config-loader/testdata/adapter_config_valid.yaml` around
lines 7 - 14, The fixture places hyperfleet_api and kubernetes at the root but
the loader/schema expect clients.hyperfleet_api and clients.kubernetes; update
the YAML so both entries are nested under a top-level clients mapping (i.e.,
move the existing hyperfleet_api and kubernetes blocks under a new clients key)
so the adapter config loader will recognize them.
ciaranRoche
left a comment
Really solid work — the config flattening and snake_case migration is consistently applied across all 68 files, and all CI is green. I went through the config loader, executor, charts/docs, and test coverage in detail. A few things worth addressing below.
One thing I couldn't leave an inline comment on: the OwnerReferences rename in internal/executor/types.go:55 — the JSON tag changed from owned_reference to owner_references. If the upstream event producers (Sentinel?) are still sending owned_reference in their CloudEvent payloads, this field will silently be nil and the owner reference logging at executor.go:86 will be skipped. Worth checking that the upstream schema matches, or adding a temporary alias so both work during the transition.
```diff
 case strings.HasPrefix(source, "event."):
-    return extractFromEvent(source[6:], eventData)
+    return utils.GetNestedValue(eventData, source[6:])
 case strings.HasPrefix(source, "config."):
```
Nice feature — being able to pull config values into params is really useful. I couldn't find any test coverage for this `config.` source path though. The existing param extractor tests only cover `event.` and `env.` sources.

Would be good to add at least one test case, something like:

```go
{Name: "adapterName", Source: "config.adapter.name"}
```

Just to make sure the path resolution works end-to-end.
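An end-to-end style sketch of what such a test could exercise is below. The nested-path walker here is a simplified stand-in for `utils.GetNestedValue`, and the map shape mirrors the `config` param the executor injects (both assumptions):

```go
package main

import (
	"fmt"
	"strings"
)

// getNestedValue walks a dotted path through nested string-keyed maps,
// reporting whether every segment of the path was found.
func getNestedValue(data map[string]interface{}, path string) (interface{}, bool) {
	var cur interface{} = data
	for _, part := range strings.Split(path, ".") {
		m, ok := cur.(map[string]interface{})
		if !ok {
			return nil, false
		}
		cur, ok = m[part]
		if !ok {
			return nil, false
		}
	}
	return cur, true
}

func main() {
	// The config map exposed as the "config" param (shape assumed).
	configMap := map[string]interface{}{
		"adapter": map[string]interface{}{"name": "demo-adapter", "version": "0.1.0"},
	}
	// A param with Source: "config.adapter.name" resolves via the map.
	v, ok := getNestedValue(configMap, "adapter.name")
	fmt.Println(v, ok) // demo-adapter true
}
```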
Added tests for config
```go
    "name":    config.Adapter.Name,
    "version": config.Adapter.Version,
}
execCtx.Params["config"] = configMap
```
This exposes the full unredacted config as a template variable — so templates could access things like {{ .config.clients.maestro.auth.tls_config.ca_file }}. Probably fine since adapter authors are trusted, but worth being aware of. The Redacted() method exists on Config if you ever want to limit what's visible here.
The injected config is redacted.
My concern is that if not careful configs could be logged and leaked
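A sketch of the kind of key-based masking a redaction pass could apply before the map is ever logged. The key heuristics here are assumptions for illustration, not the real `Redacted()` logic:

```go
package main

import (
	"fmt"
	"strings"
)

// redact returns a deep copy of the map with secret-looking string values
// masked, so the injected config can be logged without leaking credentials.
func redact(m map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(m))
	for k, v := range m {
		switch val := v.(type) {
		case map[string]interface{}:
			out[k] = redact(val) // recurse into nested sections like clients.maestro
		case string:
			lk := strings.ToLower(k)
			if strings.Contains(lk, "token") || strings.Contains(lk, "secret") ||
				strings.Contains(lk, "password") || strings.HasSuffix(lk, "_file") {
				out[k] = "[REDACTED]"
			} else {
				out[k] = val
			}
		default:
			out[k] = v
		}
	}
	return out
}

func main() {
	cfg := map[string]interface{}{
		"adapter": map[string]interface{}{"name": "demo"},
		"clients": map[string]interface{}{
			"maestro": map[string]interface{}{"ca_file": "/etc/tls/ca.crt", "token": "s3cr3t"},
		},
	}
	fmt.Println(redact(cfg))
}
```

Masking by key name is a coarse but safe default; an allowlist of known-safe keys would be stricter if leakage via logs is the main worry.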
test/testdata/task-config.yaml
Outdated
```yaml
method: POST
url: /clusters/{{ .clusterId }}/statuses
name: updateStatus
when:
```
This when: block is silently ignored — the PostAction struct doesn't have a When field. Is this a leftover from a planned feature, or should it be cleaned up? As-is it's a bit misleading since someone reading the testdata might think when is supported.
Removed, was a leftover, I also removed some files in config that where old
```yaml
data:
clusterReady:
version: 0.1.0
hyperfleet_api:
```
Heads up — this file has hyperfleet_api: at the root level instead of nested under clients:. With KnownFields(true) in the viper loader, this would fail to parse as an AdapterConfig. Might need updating to match the new schema, or if it's not actually loaded as an adapter config anywhere, maybe just add a comment saying so.
I added the KnownFields option to validate that all config keys should exist in files
charts/examples/README.md
Outdated
```diff
 - Maestro transport client configuration (gRPC + HTTP)
 - ManifestWork template with external file reference (`ref:`)
-- Resource discovery by name and by label selectors (`nestedDiscoveries`)
+- Resource discovery by name and by label selectors (`nested_Discoveries`)
```
Tiny typo: nested_Discoveries should be nested_discoveries (lowercase D).
charts/values.yaml
Outdated
```yaml
# key: secret-key

# envFrom for loading entire ConfigMaps/Secrets as environment variables
envFrom: []
```
envFrom is defined here but never referenced in deployment.yaml. It'll just sit here doing nothing — either wire it up in the deployment template or remove it to keep things tidy.
I removed it. If we need more flexibility in helm charts, we should do for all servives the same
docs/configuration.md
Outdated
```markdown
### Top-level fields

- `adapter.name` (string, required): Adapter name.
- `adapter.version` (string, required): Adapter version expected by the binary.
```
This says adapter.version is required, but the struct tag in types.go is yaml:"version,omitempty" with no validate:"required" — and ValidateAdapterVersion early-returns nil when the version is empty. So it's actually optional now.
Might want to say something like "optional — when set, the binary validates it matches the running version."
Also server_healthiness_timeout for the Maestro client exists in the struct but isn't mentioned in this doc.
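A sketch of how the documented behavior could look once the nil guard is added: version is optional, but when set it must match the running binary. The struct shapes are assumptions, not the actual validator code:

```go
package main

import (
	"errors"
	"fmt"
)

type Adapter struct{ Version string }
type AdapterConfig struct{ Adapter Adapter }

// ValidateAdapterVersion fails fast on a nil config instead of panicking,
// treats an empty version as "not configured", and otherwise requires an
// exact match with the binary's version.
func ValidateAdapterVersion(config *AdapterConfig, binaryVersion string) error {
	if config == nil {
		return errors.New("adapter config is nil")
	}
	if config.Adapter.Version == "" {
		return nil // version is optional; nothing to compare
	}
	if config.Adapter.Version != binaryVersion {
		return fmt.Errorf("config expects adapter version %q, binary is %q",
			config.Adapter.Version, binaryVersion)
	}
	return nil
}

func main() {
	fmt.Println(ValidateAdapterVersion(nil, "1.0.0"))
	fmt.Println(ValidateAdapterVersion(&AdapterConfig{Adapter: Adapter{Version: "1.0.0"}}, "1.0.0"))
}
```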
```diff
@@ -25,8 +24,8 @@ func BuiltinVariables() []string {

// GetDefinedVariables returns all variables defined in the config that can be used
```
Good that this is updated to adapter and config. Just worth calling out in the PR description or migration guide that any existing task configs using {{ .metadata.name }} or {{ .metadata.labels.X }} need to switch to {{ .adapter.name }}. The labels field has no replacement, so anyone relying on it will need to find another approach.
I don't think there is a need for a migration for now; I will add a note to the PR description.
Together with this PR, we need to merge openshift-hyperfleet/hyperfleet-infra#18, which introduces the correct message_data payload
063ad07 to bf1a628
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml (1)
1-6: ⚠️ Potential issue | 🟡 Minor — Typo in filename: `adatepr` should be `adapter`.

The filename `dryrun-kubernetes-adatepr-task-config-invalid.yaml` contains a typo. This could cause confusion and may affect test discovery or documentation references.

Suggested rename:

```shell
mv test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml \
   test/testdata/dryrun/dryrun-kubernetes-adapter-task-config-invalid.yaml
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml` around lines 1 - 6, The test data filename contains a typo: "dryrun-kubernetes-adatepr-task-config-invalid.yaml" should be "dryrun-kubernetes-adapter-task-config-invalid.yaml"; rename the file accordingly and update any references to the misnamed file string (e.g., test harness, fixtures, or docs) so all usages of "dryrun-kubernetes-adatepr-task-config-invalid.yaml" are replaced with "dryrun-kubernetes-adapter-task-config-invalid.yaml" to ensure tests and documentation discover the correct file.

test/integration/config-loader/loader_template_test.go (1)
78-117: ⚠️ Potential issue | 🟡 Minor — Use `require` before dereferencing merged sections.

`assert.NotEmpty` does not stop the test, so Lines 82 and 105 can still panic on `[0]`, and the `if config.Post != nil` guard means a broken post merge would quietly pass. `require` will turn these into clear merge-regression failures.

🧪 Suggested test tightening

```diff
-	assert.NotEmpty(t, config.Preconditions)
-	assert.GreaterOrEqual(t, len(config.Preconditions), 1, "should have at least 1 precondition")
+	require.NotEmpty(t, config.Preconditions, "should have at least 1 precondition")
 	firstPrecond := config.Preconditions[0]
 	...
-	assert.NotEmpty(t, config.Resources)
-	assert.GreaterOrEqual(t, len(config.Resources), 1, "should have at least 1 resource")
+	require.NotEmpty(t, config.Resources, "should have at least 1 resource")
 	firstResource := config.Resources[0]
 	...
-	if config.Post != nil {
-		assert.NotEmpty(t, config.Post.Payloads)
-		assert.NotEmpty(t, config.Post.PostActions)
-
-		// Check post action
-		if len(config.Post.PostActions) > 0 {
-			firstAction := config.Post.PostActions[0]
-			assert.NotEmpty(t, firstAction.Name)
-			if firstAction.APICall != nil {
-				assert.NotEmpty(t, firstAction.APICall.Method)
-				assert.NotEmpty(t, firstAction.APICall.URL)
-			}
-		}
-	}
+	require.NotNil(t, config.Post)
+	require.NotEmpty(t, config.Post.Payloads)
+	require.NotEmpty(t, config.Post.PostActions)
+
+	firstAction := config.Post.PostActions[0]
+	assert.NotEmpty(t, firstAction.Name)
+	if firstAction.APICall != nil {
+		assert.NotEmpty(t, firstAction.APICall.Method)
+		assert.NotEmpty(t, firstAction.APICall.URL)
+	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/loader_template_test.go` around lines 78 - 117, The test dereferences merged sections without halting on failures: replace non-fatal asserts with require checks before any [0] or dereference access—use require.NotEmpty/require.GreaterOrEqual for config.Preconditions before using firstPrecond := config.Preconditions[0], require.NotEmpty/require.GreaterOrEqual for config.Resources before firstResource := config.Resources[0], and replace the if config.Post != nil guard with require.NotNil(t, config.Post) (then require.NotEmpty on config.Post.Payloads and config.Post.PostActions) so failures fail the test immediately and avoid panics when inspecting APICall, Capture, Conditions, Manifest, Discovery, and PostActions.

test/integration/executor/executor_integration_test.go (1)
279-290: ⚠️ Potential issue | 🟠 Major — Use a `Ready=False` condition here instead of `status.phase`.

`createTestConfig()` now derives `readyConditionStatus` from `status.conditions[...]`, so these mocks no longer represent a plain "precondition not met" case. They now depend on whatever the executor does with missing `status.conditions`, which can turn this into a capture/API failure instead of a deterministic `Matched=false`.

🛠️ Suggested update

```diff
-	"status": map[string]interface{}{
-		"phase": "Terminating",
-	},
+	"status": map[string]interface{}{
+		"conditions": []map[string]interface{}{
+			{
+				"type":   "Ready",
+				"status": "False",
+			},
+		},
+	},
```

Also applies to: 664-670
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/executor/executor_integration_test.go` around lines 279 - 290, The test's mock cluster uses status.phase to simulate a non-ready cluster but createTestConfig() now derives readyConditionStatus from status.conditions, so update the mock responses passed to mockAPI.SetClusterResponse to include a conditions array containing a Ready condition with status "False" (e.g., "status": {"conditions": [{"type":"Ready","status":"False", ...}]}) instead of or in addition to "phase"; ensure both occurrences referenced in the diff (around the first SetClusterResponse and the other at lines ~664-670) are changed so the executor's readyConditionStatus logic (used by createTestConfig) deterministically yields Matched=false.
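The point of this finding — deriving readiness from `status.conditions` rather than `status.phase` — can be sketched as follows (the types and helper are hypothetical illustrations, not the executor's real code):

```go
package main

import "fmt"

type Condition struct {
	Type   string
	Status string
}

// readyStatus returns the status of the first Ready condition, or ""
// when no such condition exists — the case mocks that only set
// status.phase now fall into.
func readyStatus(conds []Condition) string {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status
		}
	}
	return ""
}

func main() {
	// An explicit Ready=False condition yields a deterministic mismatch.
	notReady := []Condition{{Type: "Ready", Status: "False"}}
	fmt.Println(readyStatus(notReady) == "True") // false

	// A mock with no conditions at all yields an empty capture instead
	// of a clean "precondition not met" result.
	fmt.Println(readyStatus(nil) == "") // true
}
```

This is why the suggested mock adds a `Ready=False` entry: the capture resolves to a concrete value and the precondition check fails predictably.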
♻️ Duplicate comments (13)
charts/examples/kubernetes/adapter-task-config.yaml (1)
182-191: ⚠️ Potential issue | 🟠 Major — Health condition does not account for `resourcesSkipped` or structured `executionError`.

The current Health condition implementation only checks `adapter.?executionStatus` and `adapter.?errorReason`/`adapter.?errorMessage`. When resources are skipped due to unmet preconditions, this condition may still report `"True"` status and `"Healthy"` reason, which is misleading.

Consider aligning with the standard Health condition boilerplate documented in the adapter authoring guide (Section 8), which properly handles:

- `adapter.resourcesSkipped` → reports `"False"` with reason `"ResourcesSkipped"`
- `adapter.executionError.phase/step/message` → provides detailed failure context

🛠️ Suggested fix aligning with the documented boilerplate

```diff
 - type: "Health"
   status:
     expression: |
-      adapter.?executionStatus.orValue("") == "success" ? "True" : "False"
+      adapter.?executionStatus.orValue("") == "success"
+      && !adapter.?resourcesSkipped.orValue(false)
+      ? "True"
+      : "False"
   reason:
     expression: |
-      adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
+      adapter.?executionStatus.orValue("") != "success"
+      ? "ExecutionFailed:" + adapter.?executionError.?phase.orValue("unknown")
+      : adapter.?resourcesSkipped.orValue(false)
+      ? "ResourcesSkipped"
+      : "Healthy"
   message:
     expression: |
-      adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations in progress or completed successfully"
+      adapter.?executionStatus.orValue("") != "success"
+      ? "Adapter failed at phase ["
+        + adapter.?executionError.?phase.orValue("unknown")
+        + "] step ["
+        + adapter.?executionError.?step.orValue("unknown")
+        + "]: "
+        + adapter.?executionError.?message.orValue(adapter.?errorMessage.orValue("no details"))
+      : adapter.?resourcesSkipped.orValue(false)
+      ? "Resources skipped: " + adapter.?skipReason.orValue("unknown reason")
+      : "All adapter operations completed successfully"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/kubernetes/adapter-task-config.yaml` around lines 182 - 191, Health condition only checks adapter.?executionStatus and errorReason/errorMessage and thus misses resourcesSkipped and structured executionError; update the Health expressions to also evaluate adapter.resourcesSkipped (return "False" status with reason "ResourcesSkipped" when non-empty) and to surface adapter.executionError.phase/step/message (use those fields as reason/message when present) while preserving the existing executionStatus fallback (adapter.?executionStatus.orValue("") == "success" ? "True" : "False")—locate and modify the Health block using identifiers adapter.resourcesSkipped and adapter.executionError.phase|step|message alongside the existing adapter.?executionStatus, adapter.?errorReason and adapter.?errorMessage expressions so the condition matches the adapter authoring guide boilerplate.

configs/adapter-task-config-template.yaml (3)
132-133: ⚠️ Potential issue | 🟠 Major — `observed_generation` depends solely on precondition capture, which may fail.

`generationId` is captured from the precondition API response (line 132-133). If the precondition API call fails, `generationId` will be undefined when building the post payload at line 254-255, potentially causing payload construction to fail during error reporting.

Consider extracting `generation` directly from the CloudEvent as a param to ensure it's always available:

🔧 Suggested fix

```diff
 params:
+  - name: "generation"
+    source: "event.generation"
+    type: "int"
+    required: true
   - name: "hyperfleetApiBaseUrl"
```

Then update the observed_generation expression:

```diff
 observed_generation:
-  expression: "generationId"
+  expression: "generation"
```

Also applies to: 254-256
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@configs/adapter-task-config-template.yaml` around lines 132 - 133, generationId is currently sourced only from the precondition API response so if that call fails the post payload’s observed_generation will be undefined; update the template to accept generation as an explicit parameter sourced from the CloudEvent and fall back to the precondition value: add a new parameter (e.g., generationParam) to capture CloudEvent.metadata.generation and change the mapping that sets generationId/observed_generation to use generationParam first, then the precondition-derived value, ensuring the post payload construction (observed_generation) always has a value even when the precondition API fails.
54-58: ⚠️ Potential issue | 🟠 Major — Required `hyperfleetApiBaseUrl` param appears unused.

The `hyperfleetApiBaseUrl` param is marked as required (line 58), but the `api_call` at line 113 uses a relative URL (`/clusters/{{ .clusterId }}`). The base URL is configured in `clients.hyperfleet_api.base_url` in the adapter config, not from this task param.

This required param will cause config loading to fail if `HYPERFLEET_API_BASE_URL` is not set, even though it's not actually used. Consider either:

- Removing this param, or
- Marking it as `required: false`

🔧 Suggested fix

```diff
-  - name: "hyperfleetApiBaseUrl"
-    source: "env.HYPERFLEET_API_BASE_URL"
-    type: "string"
-    description: "Base URL for the HyperFleet API"
-    required: true
```

Based on learnings: environment variable validation for required `env.*` params is enforced at config load time (fail-fast).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@configs/adapter-task-config-template.yaml` around lines 54 - 58, The task param "hyperfleetApiBaseUrl" is declared required but never used by the api_call (which uses a relative path like "/clusters/{{ .clusterId }}") because the adapter uses clients.hyperfleet_api.base_url from the adapter config; update the task param declaration in configs/adapter-task-config-template.yaml by either removing the "hyperfleetApiBaseUrl" entry entirely or changing its "required: true" to "required: false" so config loading won't fail when HYPERFLEET_API_BASE_URL is unset.
241-252: ⚠️ Potential issue | 🟠 Major — Health condition can misreport skipped runs as healthy.

The current Health condition only checks `adapter.executionStatus` and `adapter.errorReason`/`adapter.errorMessage`. It does not account for:

- `adapter.resourcesSkipped` — when preconditions aren't met
- `adapter.executionError` — structured error information

This can result in reporting `"Healthy"` when resources were actually skipped.

🛠️ Suggested fix aligning with documented boilerplate

```diff
 - type: "Health"
   status:
     expression: |
-      adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
+      adapter.?executionStatus.orValue("") == "success"
+      && !adapter.?resourcesSkipped.orValue(false)
+      ? "True"
+      : "False"
   reason:
     expression: |
-      adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
+      adapter.?executionStatus.orValue("") != "success"
+      ? "ExecutionFailed:" + adapter.?executionError.?phase.orValue("unknown")
+      : adapter.?resourcesSkipped.orValue(false)
+      ? "ResourcesSkipped"
+      : "Healthy"
   message:
     expression: |
-      adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations completed successfully"
+      adapter.?executionStatus.orValue("") != "success"
+      ? "Adapter failed at phase ["
+        + adapter.?executionError.?phase.orValue("unknown")
+        + "] step ["
+        + adapter.?executionError.?step.orValue("unknown")
+        + "]: "
+        + adapter.?executionError.?message.orValue(adapter.?errorMessage.orValue("no details"))
+      : adapter.?resourcesSkipped.orValue(false)
+      ? "Resources skipped: " + adapter.?skipReason.orValue("unknown reason")
+      : "All adapter operations completed successfully"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@configs/adapter-task-config-template.yaml` around lines 241 - 252, Update the Health condition expressions to treat skipped resources and structured execution errors as non‑healthy: change the status expression to first check for adapter.executionError (present => "False"), then check adapter.resourcesSkipped.orValue(0) > 0 (=> "False"), and otherwise fall back to the existing adapter.?executionStatus mapping; update the reason expression to prefer adapter.executionError.message or adapter.errorReason; update the message expression to prefer adapter.executionError.details or adapter.errorMessage and include the resourcesSkipped count when >0 so skipped runs are not reported as "Healthy" (references: adapter.executionStatus, adapter.errorReason, adapter.errorMessage, adapter.resourcesSkipped, adapter.executionError, Health).

test/integration/maestro_client/client_tls_config_integration_test.go (3)
285-286: ⚠️ Potential issue | 🟡 Minor — Add nil guard before accessing `cfg.Clients.Maestro`.

Same issue — line 285 accesses `cfg.Clients.Maestro` without a nil check.

🛡️ Suggested fix

```diff
 require.NoError(t, err)
+require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 maestroCfg := cfg.Clients.Maestro
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` around lines 285 - 286, Add a nil guard before accessing cfg.Clients.Maestro: ensure cfg.Clients is not nil and cfg.Clients.Maestro is not nil before assigning maestroCfg (or use require.NotNil(t, cfg.Clients) and require.NotNil(t, cfg.Clients.Maestro) at the start of the test) so the line assigning maestroCfg and the subsequent assert.Equal call do not panic; reference the maestroCfg variable and cfg.Clients.Maestro when adding the checks.
200-201: ⚠️ Potential issue | 🟡 Minor — Add nil guard before accessing `cfg.Clients.Maestro`.

Line 200 dereferences `cfg.Clients.Maestro` without a prior nil assertion. If config loading returns a partial object, this test will panic instead of failing with a clear assertion message.

🛡️ Suggested fix

```diff
 require.NoError(t, err)
+require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 clientCfg, err := buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` around lines 200 - 201, The test dereferences cfg.Clients.Maestro without a nil check; add guards before calling buildMaestroClientConfigFromLoaded. Specifically, assert that cfg and cfg.Clients and cfg.Clients.Maestro are non-nil (e.g., require.NotNil(t, cfg); require.NotNil(t, cfg.Clients); require.NotNil(t, cfg.Clients.Maestro)) or add an explicit if-check that fails the test with t.Fatalf if any are nil, then call buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro).
237-238: ⚠️ Potential issue | 🟡 Minor — Add nil guard before accessing `cfg.Clients.Maestro`.

Same issue as above — line 237 accesses `cfg.Clients.Maestro` without a nil check.

🛡️ Suggested fix

```diff
 require.NoError(t, err)
+require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 maestroCfg := cfg.Clients.Maestro
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` around lines 237 - 238, Add nil guards before dereferencing cfg.Clients.Maestro: ensure cfg, cfg.Clients and cfg.Clients.Maestro are non-nil (e.g., using require.NotNil(t, cfg), require.NotNil(t, cfg.Clients) and require.NotNil(t, cfg.Clients.Maestro)) before assigning maestroCfg := cfg.Clients.Maestro and calling assert.True(t, maestroCfg.Insecure); alternatively use an if-check and t.Fatal/t.Fatalf to fail early if any are nil.

internal/config_loader/accessors.go (1)
25-31: ⚠️ Potential issue | 🟡 Minor — Comment still missing "config" in built-in variables list.

Line 13 correctly includes `"adapter", "config", "now", "date"` in `builtinVariables`, but the comment on line 27 only lists "adapter, now, date". This inconsistency was flagged in a previous review and marked as addressed, but the comment appears to still be incomplete.

📝 Suggested fix

```diff
 // GetDefinedVariables returns all variables defined in the config that can be used
 // in templates and CEL expressions. This includes:
-// - Built-in variables (adapter, now, date)
+// - Built-in variables (adapter, config, now, date)
 // - Parameters from params
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/accessors.go` around lines 25 - 31, The function GetDefinedVariables has a mismatched comment: builtinVariables actually includes "config" but the comment above GetDefinedVariables only lists "adapter, now, date"; update that comment to include "config" so it matches the builtinVariables definition (refer to builtinVariables and GetDefinedVariables in accessors.go) — change the comment line that enumerates built-in variables to read "adapter, config, now, date".

internal/config_loader/viper_loader.go (1)
109-117: ⚠️ Potential issue | 🟠 Major — Return the resolved fallback path to the caller.

When Lines 109-117 select a path from `standardConfigPaths`, the caller still only receives the original input. That leaves base-dir-dependent validation anchored to the empty/original path whenever the fallback is used, so relative refs resolve incorrectly.

🔧 Suggested direction

```diff
-func loadAdapterConfigWithViper(filePath string, flags *pflag.FlagSet) (*AdapterConfig, error) {
+func loadAdapterConfigWithViper(filePath string, flags *pflag.FlagSet) (*AdapterConfig, string, error) {
 	...
-	return nil, fmt.Errorf(...)
+	return nil, "", fmt.Errorf(...)
 	...
-	return &config, nil
+	return &config, filePath, nil
 }
```

Then thread the resolved path through `loadAdapterConfigWithViperGeneric`/`LoadConfig` when computing the adapter base dir.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/viper_loader.go` around lines 109 - 117, The fallback selection loop sets filePath from standardConfigPaths but never returns or propagates the resolved path; update the code so the resolved filePath is returned to the caller and threaded into the caller functions (specifically ensure loadAdapterConfigWithViperGeneric and LoadConfig receive the resolved filePath when computing the adapter base dir) so relative references use the actual resolved config path; locate the selection logic around the filePath/standardConfigPaths block and ensure the function signature and call sites (LoadConfig and loadAdapterConfigWithViperGeneric) are adjusted to accept and pass the resolved path through when computing base-dir-dependent validation.

internal/config_loader/validator.go (1)
582-586: ⚠️ Potential issue | 🟡 Minor — Reject non-object `metadata` values.

If `metadata` is present but not a map, the current branch skips the `name` check instead of reporting a type error, so malformed manifests can pass semantic validation.

🔧 Suggested fix

```diff
 if metadata, ok := manifest["metadata"].(map[string]interface{}); ok {
 	if _, hasName := metadata[FieldName]; !hasName {
-		v.errors.Add(path+"."+"metadata", fmt.Sprintf("missing required field %q", FieldName))
+		v.errors.Add(path+".metadata", fmt.Sprintf("missing required field %q", FieldName))
 	}
+} else if _, exists := manifest["metadata"]; exists {
+	v.errors.Add(path+".metadata", fmt.Sprintf("field %q must be an object", "metadata"))
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 582 - 586, The validator currently ignores non-object metadata values; update the metadata handling so that if manifest has a "metadata" key but its value is not a map[string]interface{} you call v.errors.Add(path+"."+"metadata", fmt.Sprintf("field %q must be an object", FieldName)) to report a type error, and retain the existing branch that checks for the missing name when metadata is a map (variables to reference: manifest, metadata, FieldName, v.errors.Add, path).

test/integration/config-loader/testdata/adapter-config-template.yaml (2)
324-333: ⚠️ Potential issue | 🟡 Minor — Same guard needed for the `available` condition block.

The `available.reason` and `available.message` expressions also filter for `Available` conditions and access `[0]` without size checks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 324 - 333, The available.reason and available.message expressions access resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0] without guarding for empty results; update the available block to mirror the existing guard used elsewhere (e.g., use has(...) or a size check) so the expression returns the filtered [0].reason or a default string (like "DeploymentReady") and similarly for .message (default "Deployment is available and serving traffic"), referencing the available.reason and available.message keys and the resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason/message expressions to locate and update the logic.
304-316: ⚠️ Potential issue | 🟡 Minor — Guard filtered condition indexing before accessing `[0]`.

The CEL expressions access the first filtered `Available` condition without verifying the filtered list is non-empty. If no `Available` condition exists, accessing `[0]` will fail at runtime.

🔧 Suggested fix pattern

```diff
 status:
   expression: |
-    resources.clusterNamespace.status.phase == "Active" &&
-    resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
+    resources.clusterNamespace.status.phase == "Active" &&
+    resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+    resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
```

Apply similar guards to reason (lines 308-311) and message (lines 313-316) expressions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 304 - 316, The CEL expressions access resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0] without ensuring the filtered list is non-empty; update the three expressions (the main expression that checks .status == "True", and the reason and message ternaries) to first check the filtered list size (e.g. resources.clusterController.status.conditions.filter(...).size() > 0) before indexing [0], and only then read .status, .reason or .message, so replace direct [0] accesses with guarded checks around resources.clusterController.status.conditions.filter(c, c.type == 'Available').

test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml (1)
330-333: ⚠️ Potential issue | 🟡 Minor — `cluster_metadata_region` references a key not produced by `configmap0`.

The expression `dig(resources, "configmap0.data.cluster_metadata")` references a path that doesn't exist. The `configmap0.data` only contains `cluster_id` and `cluster_name` (defined at lines 188-189), not `cluster_metadata`. This reduces the showcase value of this pattern demonstration.

💡 Suggested fix

```diff
-      cluster_metadata_region:
+      cluster_name:
         # Pattern 12b: dig() — traversing into a nested path
         expression: |
-          dig(resources, "configmap0.data.cluster_metadata")
+          dig(resources, "configmap0.data.cluster_name")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml` around lines 330 - 333, The expression for cluster_metadata_region uses a non-existent key; update the dig call in cluster_metadata_region to reference an actual key from configmap0.data (e.g., "configmap0.data.cluster_id" or "configmap0.data.cluster_name") or add a matching cluster_metadata entry to configmap0.data so the path dig(resources, "configmap0.data.cluster_metadata") resolves; locate the cluster_metadata_region entry and the configmap0.data block to make the change (look for the dig(...) expression and the configmap0.data definitions).
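For reference, the dot-path traversal that the showcase's `dig()` performs over the `resources` map can be sketched in Go (a hypothetical helper for illustration, not the project's CEL implementation, whose missing-key behavior may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// dig walks a dot-separated path through nested map[string]any values,
// returning nil when any segment is missing.
func dig(m map[string]any, path string) any {
	cur := any(m)
	for _, seg := range strings.Split(path, ".") {
		node, ok := cur.(map[string]any)
		if !ok {
			return nil
		}
		cur, ok = node[seg]
		if !ok {
			return nil
		}
	}
	return cur
}

func main() {
	resources := map[string]any{
		"configmap0": map[string]any{
			"data": map[string]any{
				"cluster_id":   "c-123",
				"cluster_name": "demo",
			},
		},
	}
	fmt.Println(dig(resources, "configmap0.data.cluster_name"))     // demo
	fmt.Println(dig(resources, "configmap0.data.cluster_metadata")) // <nil>
}
```

The second lookup shows why referencing `cluster_metadata` undermines the demonstration: the path never resolves to a value.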
🧹 Nitpick comments (2)
charts/examples/kubernetes/adapter-task-config.yaml (1)
193-194: Align `generation` param sourcing with test configuration patterns.

The chart extracts `generation` from the precondition capture (line 37-38), while test files (dryrun-kubernetes-task-config.yaml) source it directly from the event with `source: "event.generation"` and mark it `required: true`. To match the established pattern and ensure `generation` is always available for status reporting, add it as a top-level param:

```yaml
params:
  # ... existing params ...
  - name: "generation"
    source: "event.generation"
    type: "int"
    required: true
```

This approach is more direct and resilient than relying on precondition capture.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/kubernetes/adapter-task-config.yaml` around lines 193 - 194, The chart currently sets observed_generation via expression "generation" but tests source generation from the event and mark it required; update the chart's top-level params to declare a param named "generation" with source "event.generation", type "int", and required:true so observed_generation will resolve reliably; modify the params block (the same section where other top-level params are defined) so the generation param is present, and ensure observed_generation (used in status) continues to reference that param rather than relying on precondition capture.

pkg/health/server.go (1)
49-52: Consider documenting thread-safety guarantees for the new field.

The `configYAML` field is correctly protected by `mu`, but the comment only mentions "set only when debug_config is true". Consider adding a brief thread-safety note for maintainers.

📝 Suggested documentation improvement

```diff
 	mu         sync.RWMutex
 	checks     map[string]CheckStatus
-	configYAML []byte // set only when debug_config is true
+	configYAML []byte // guarded by mu; set only when debug_config is true
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/health/server.go` around lines 49 - 52, Update the inline comment for the struct field configYAML to document its thread-safety: note that configYAML is only populated when debug_config is true and must be accessed/modified while holding the mutex mu (or under mu.RLock()/mu.Lock() as appropriate); reference the mu field and the CheckStatus usage in the same struct to make clear that all mutable state in this struct (including checks and configYAML) is protected by mu.
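The guarded-field pattern the comment asks to document can be sketched as a minimal example (simplified and hypothetical; the real health server has more fields and serves this over HTTP):

```go
package main

import (
	"fmt"
	"sync"
)

// server holds redacted config bytes for a debug /config endpoint.
// All mutable fields are guarded by mu.
type server struct {
	mu         sync.RWMutex
	configYAML []byte // guarded by mu; set only when debug_config is true
}

// setConfig stores the redacted config under the write lock.
func (s *server) setConfig(b []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.configYAML = b
}

// config reads under the read lock and returns a copy so callers
// cannot mutate the guarded slice after the lock is released.
func (s *server) config() []byte {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return append([]byte(nil), s.configYAML...)
}

func main() {
	s := &server{}
	s.setConfig([]byte("adapter:\n  name: demo\n"))
	fmt.Printf("%q\n", s.config())
}
```

Returning a copy under `RLock` is the detail worth spelling out in the field comment: it keeps concurrent `/config` reads safe against later `setConfig` calls.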
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/config_loader/loader_test.go`:
- Around line 1140-1155: The test fixtures named "resource with maestro
transport" are using transport.maestro with a nested manifestWork block, but
transport.maestro now only exposes target_cluster; move the existing
manifestWork structure up under a top-level manifest key (replace
transport.maestro.manifestWork → manifest) so the test actually covers the
ManifestWork path; update both occurrences of this fixture (the one with name
"resource with maestro transport" and the other similar fixture later) to use
manifest: { apiVersion, kind, metadata, ... } at top-level instead of under
transport.maestro.
In `@test/integration/config-loader/testdata/adapter_config_valid.yaml`:
- Around line 32-45: The template references observed_generation via "{{
.generation }}" but the fixture never defines generation; add a params entry to
the YAML (e.g., under params) that maps generation from the event (source:
"event.generation") or add a capture that extracts generation so the template
has a defined .generation value; update the params/capture section near the
existing capture entries (e.g., alongside clusterName, readyConditionStatus,
region, cloudProvider, vpcId) to provide the missing symbol used by
observed_generation.
In `@test/integration/executor/executor_integration_test.go`:
- Around line 939-959: The two in-memory preconditions
(config_loader.Precondition entries with ActionBase Name "logStart" and
"logBeforeAPICall") are not valid when passed through LoadConfig because they
only set Log and lack api_call, conditions, or expression; update those
Precondition objects to include a trivial loadable predicate (e.g., add a
non-nil Expression or a simple Conditions entry that always evaluates true) so
they satisfy the public validator in the real config path while keeping the Log
action intact.
- Around line 983-990: The post-action LogAction "logCompletion" uses template
variable {{ .resourceId }} which is never set in this test, so the rendered
message can't match the asserted "log-test-resource"; either change the template
to only use an existing field (e.g., remove "{{ .resourceId }}" and keep "{{
.clusterId }}") or explicitly set resourceId in the test setup (e.g., add a
resource, param, or capture named "resourceId" in the fixture before executing
the post-actions) so ActionBase/LogAction can render the expected value.
In `@test/integration/executor/testdata/test-adapter-config.yaml`:
- Around line 5-8: The fixture currently defines a top-level hyperfleet_api
block (keys: timeout, retry_attempts, retry_backoff) but the app now expects
client settings under clients.hyperfleet_api; move the entire hyperfleet_api
mapping to be nested under a new clients key (i.e., clients: { hyperfleet_api: {
timeout: ..., retry_attempts: ..., retry_backoff: ... } }) so the test uses the
correct post-migration schema, or if this fixture isn't used as an adapter/full
config remove the unused hyperfleet_api block entirely; update the
hyperfleet_api mapping location referenced in tests that load this fixture
(search for hyperfleet_api and clients) to match the new structure.
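The post-migration nesting described above would look roughly like this (timeout and retry values are illustrative, not required defaults):

```yaml
# Pre-migration: client settings at the top level
# hyperfleet_api:
#   timeout: 30s
#   retry_attempts: 3
#   retry_backoff: exponential

# Post-migration: the same block nested under clients
clients:
  hyperfleet_api:
    timeout: 30s
    retry_attempts: 3
    retry_backoff: exponential
```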
---
Outside diff comments:
In `@test/integration/config-loader/loader_template_test.go`:
- Around line 78-117: The test dereferences merged sections without halting on
failures: replace non-fatal asserts with require checks before any [0] or
dereference access—use require.NotEmpty/require.GreaterOrEqual for
config.Preconditions before using firstPrecond := config.Preconditions[0],
require.NotEmpty/require.GreaterOrEqual for config.Resources before
firstResource := config.Resources[0], and replace the if config.Post != nil
guard with require.NotNil(t, config.Post) (then require.NotEmpty on
config.Post.Payloads and config.Post.PostActions) so failures fail the test
immediately and avoid panics when inspecting APICall, Capture, Conditions,
Manifest, Discovery, and PostActions.
In `@test/integration/executor/executor_integration_test.go`:
- Around line 279-290: The test's mock cluster uses status.phase to simulate a
non-ready cluster but createTestConfig() now derives readyConditionStatus from
status.conditions, so update the mock responses passed to
mockAPI.SetClusterResponse to include a conditions array containing a Ready
condition with status "False" (e.g., "status": {"conditions":
[{"type":"Ready","status":"False", ...}]}) instead of or in addition to "phase";
ensure both occurrences referenced in the diff (around the first
SetClusterResponse and the other at lines ~664-670) are changed so the
executor's readyConditionStatus logic (used by createTestConfig)
deterministically yields Matched=false.
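For illustration, the readiness derivation can be modeled in a few lines of Go: the mock response must carry a `conditions` array, since `phase` alone is no longer consulted (the types here are simplified stand-ins, not the test's real structs):

```go
package main

import "fmt"

// Condition is a stand-in for a cluster status condition as returned by
// the mock API server; only the fields the capture needs are modeled.
type Condition struct {
	Type   string
	Status string
}

// readyStatus mirrors the capture expression from the config: take the
// Ready condition's status, defaulting to "False" when none is present.
func readyStatus(conditions []Condition) string {
	for _, c := range conditions {
		if c.Type == "Ready" {
			return c.Status
		}
	}
	return "False"
}

func main() {
	// A mock that only sets status.phase exposes no conditions at all, so
	// the capture falls back to "False" by accident. To deterministically
	// simulate a non-ready cluster, include an explicit Ready=False condition.
	fmt.Println(readyStatus(nil))
	fmt.Println(readyStatus([]Condition{{Type: "Ready", Status: "False"}}))
	fmt.Println(readyStatus([]Condition{{Type: "Ready", Status: "True"}}))
}
```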
In `@test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml`:
- Around line 1-6: The test data filename contains a typo:
"dryrun-kubernetes-adatepr-task-config-invalid.yaml" should be
"dryrun-kubernetes-adapter-task-config-invalid.yaml"; rename the file
accordingly and update any references to the misnamed file string (e.g., test
harness, fixtures, or docs) so all usages of
"dryrun-kubernetes-adatepr-task-config-invalid.yaml" are replaced with
"dryrun-kubernetes-adapter-task-config-invalid.yaml" to ensure tests and
documentation discover the correct file.
---
Duplicate comments:
In `@charts/examples/kubernetes/adapter-task-config.yaml`:
- Around line 182-191: Health condition only checks adapter.?executionStatus and
errorReason/errorMessage and thus misses resourcesSkipped and structured
executionError; update the Health expressions to also evaluate
adapter.resourcesSkipped (return "False" status with reason "ResourcesSkipped"
when non-empty) and to surface adapter.executionError.phase/step/message (use
those fields as reason/message when present) while preserving the existing
executionStatus fallback (adapter.?executionStatus.orValue("") == "success" ?
"True" : "False")—locate and modify the Health block using identifiers
adapter.resourcesSkipped and adapter.executionError.phase|step|message alongside
the existing adapter.?executionStatus, adapter.?errorReason and
adapter.?errorMessage expressions so the condition matches the adapter authoring
guide boilerplate.
In `@configs/adapter-task-config-template.yaml`:
- Around line 132-133: generationId is currently sourced only from the
precondition API response so if that call fails the post payload’s
observed_generation will be undefined; update the template to accept generation
as an explicit parameter sourced from the CloudEvent and fall back to the
precondition value: add a new parameter (e.g., generationParam) to capture
CloudEvent.metadata.generation and change the mapping that sets
generationId/observed_generation to use generationParam first, then the
precondition-derived value, ensuring the post payload construction
(observed_generation) always has a value even when the precondition API fails.
- Around line 54-58: The task param "hyperfleetApiBaseUrl" is declared required
but never used by the api_call (which uses a relative path like "/clusters/{{
.clusterId }}") because the adapter uses clients.hyperfleet_api.base_url from
the adapter config; update the task param declaration in
configs/adapter-task-config-template.yaml by either removing the
"hyperfleetApiBaseUrl" entry entirely or changing its "required: true" to
"required: false" so config loading won't fail when HYPERFLEET_API_BASE_URL is
unset.
- Around line 241-252: Update the Health condition expressions to treat skipped
resources and structured execution errors as non‑healthy: change the status
expression to first check for adapter.executionError (present => "False"), then
check adapter.resourcesSkipped.orValue(0) > 0 (=> "False"), and otherwise fall
back to the existing adapter.?executionStatus mapping; update the reason
expression to prefer adapter.executionError.message or adapter.errorReason;
update the message expression to prefer adapter.executionError.details or
adapter.errorMessage and include the resourcesSkipped count when >0 so skipped
runs are not reported as "Healthy" (references: adapter.executionStatus,
adapter.errorReason, adapter.errorMessage, adapter.resourcesSkipped,
adapter.executionError, Health).
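A sketch of the reordered Health expressions these comments call for, in the config's existing CEL style; the field names (`adapter.executionError.message`, `adapter.resourcesSkipped`) are taken from the review text and should be checked against the real adapter variables before use:

```yaml
- type: "Health"
  status:
    expression: |
      has(adapter.executionError) ? "False"
        : adapter.?resourcesSkipped.orValue(0) > 0 ? "False"
        : (adapter.?executionStatus.orValue("") == "success" ? "True" : "False")
  reason:
    expression: |
      has(adapter.executionError)
        ? adapter.?executionError.?message.orValue(adapter.?errorReason.orValue("ExecutionError"))
        : (adapter.?resourcesSkipped.orValue(0) > 0 ? "ResourcesSkipped" : "Healthy")
```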
In `@internal/config_loader/accessors.go`:
- Around line 25-31: The function GetDefinedVariables has a mismatched comment:
builtinVariables actually includes "config" but the comment above
GetDefinedVariables only lists "adapter, now, date"; update that comment to
include "config" so it matches the builtinVariables definition (refer to
builtinVariables and GetDefinedVariables in accessors.go) — change the comment
line that enumerates built-in variables to read "adapter, config, now, date".
In `@internal/config_loader/validator.go`:
- Around line 582-586: The validator currently ignores non-object metadata
values; update the metadata handling so that if manifest has a "metadata" key
but its value is not a map[string]interface{} you call
v.errors.Add(path+"."+"metadata", fmt.Sprintf("field %q must be an object",
FieldName)) to report a type error, and retain the existing branch that checks
for the missing name when metadata is a map (variables to reference: manifest,
metadata, FieldName, v.errors.Add, path).
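A self-contained sketch of the stricter metadata check (the error collector and `FieldName` constant are simplified stand-ins for the validator's real types):

```go
package main

import "fmt"

const FieldName = "name" // assumed constant; the real one lives in the loader package

type validationErrors struct{ msgs []string }

func (e *validationErrors) Add(path, msg string) { e.msgs = append(e.msgs, path+": "+msg) }

// validateMetadata reports a type error when metadata exists but is not an
// object, and keeps the existing missing-name check when it is a map.
func validateMetadata(manifest map[string]interface{}, path string, errs *validationErrors) {
	raw, exists := manifest["metadata"]
	if !exists {
		return
	}
	metadata, ok := raw.(map[string]interface{})
	if !ok {
		errs.Add(path+".metadata", fmt.Sprintf("field %q must be an object", "metadata"))
		return
	}
	if _, hasName := metadata[FieldName]; !hasName {
		errs.Add(path+".metadata", fmt.Sprintf("missing required field %q", FieldName))
	}
}

func main() {
	var errs validationErrors
	validateMetadata(map[string]interface{}{"metadata": "oops"}, "resources[0]", &errs)
	validateMetadata(map[string]interface{}{"metadata": map[string]interface{}{}}, "resources[1]", &errs)
	fmt.Println(len(errs.msgs)) // both malformed manifests are now reported
}
```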
In `@internal/config_loader/viper_loader.go`:
- Around line 109-117: The fallback selection loop sets filePath from
standardConfigPaths but never returns or propagates the resolved path; update
the code so the resolved filePath is returned to the caller and threaded into
the caller functions (specifically ensure loadAdapterConfigWithViperGeneric and
LoadConfig receive the resolved filePath when computing the adapter base dir) so
relative references use the actual resolved config path; locate the selection
logic around the filePath/standardConfigPaths block and ensure the function
signature and call sites (LoadConfig and loadAdapterConfigWithViperGeneric) are
adjusted to accept and pass the resolved path through when computing
base-dir-dependent validation.
In `@test/integration/config-loader/testdata/adapter-config-template.yaml`:
- Around line 324-333: The available.reason and available.message expressions
access resources.clusterController.status.conditions.filter(c, c.type ==
'Available')[0] without guarding for empty results; update the available block
to mirror the existing guard used elsewhere (e.g., use has(...) or a size check)
so the expression returns the filtered [0].reason or a default string (like
"DeploymentReady") and similarly for .message (default "Deployment is available
and serving traffic"), referencing the available.reason and available.message
keys and the resources.clusterController.status.conditions.filter(c, c.type ==
'Available')[0].reason/message expressions to locate and update the logic.
- Around line 304-316: The CEL expressions access
resources.clusterController.status.conditions.filter(c, c.type ==
'Available')[0] without ensuring the filtered list is non-empty; update the
three expressions (the main expression that checks .status == "True", and the
reason and message ternaries) to first check the filtered list size (e.g.
resources.clusterController.status.conditions.filter(...).size() > 0) before
indexing [0], and only then read .status, .reason or .message, so replace direct
[0] accesses with guarded checks around
resources.clusterController.status.conditions.filter(c, c.type == 'Available').
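The guard pattern both findings ask for, size-checking the filtered list before indexing, can be sketched like this (the default strings are taken from the review text):

```yaml
available:
  reason:
    expression: |
      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0
        ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason
        : "DeploymentReady"
  message:
    expression: |
      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0
        ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message
        : "Deployment is available and serving traffic"
```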
In `@test/integration/maestro_client/client_tls_config_integration_test.go`:
- Around line 285-286: Add a nil guard before accessing cfg.Clients.Maestro:
ensure cfg.Clients is not nil and cfg.Clients.Maestro is not nil before
assigning maestroCfg (or use require.NotNil(t, cfg.Clients) and
require.NotNil(t, cfg.Clients.Maestro) at the start of the test) so the line
assigning maestroCfg and the subsequent assert.Equal call do not panic;
reference the maestroCfg variable and cfg.Clients.Maestro when adding the
checks.
- Around line 200-201: The test dereferences cfg.Clients.Maestro without a nil
check; add guards before calling buildMaestroClientConfigFromLoaded.
Specifically, assert that cfg and cfg.Clients and cfg.Clients.Maestro are
non-nil (e.g., require.NotNil(t, cfg); require.NotNil(t, cfg.Clients);
require.NotNil(t, cfg.Clients.Maestro)) or add an explicit if-check that fails
the test with t.Fatalf if any are nil, then call
buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro).
- Around line 237-238: Add nil guards before dereferencing cfg.Clients.Maestro:
ensure cfg, cfg.Clients and cfg.Clients.Maestro are non-nil (e.g., using
require.NotNil(t, cfg), require.NotNil(t, cfg.Clients) and require.NotNil(t,
cfg.Clients.Maestro)) before assigning maestroCfg := cfg.Clients.Maestro and
calling assert.True(t, maestroCfg.Insecure); alternatively use an if-check and
t.Fatal/t.Fatalf to fail early if any are nil.
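The guard these three comments describe boils down to checking every link in the pointer chain before dereferencing; a minimal stand-alone sketch (the struct names mirror the config shape but are simplified):

```go
package main

import "fmt"

type MaestroConfig struct{ Insecure bool }
type Clients struct{ Maestro *MaestroConfig }
type Config struct{ Clients *Clients }

// maestroFrom safely walks cfg.Clients.Maestro, returning false when any
// link in the chain is nil instead of panicking.
func maestroFrom(cfg *Config) (*MaestroConfig, bool) {
	if cfg == nil || cfg.Clients == nil || cfg.Clients.Maestro == nil {
		return nil, false
	}
	return cfg.Clients.Maestro, true
}

func main() {
	if _, ok := maestroFrom(&Config{}); !ok {
		fmt.Println("nil Clients detected, no panic")
	}
	cfg := &Config{Clients: &Clients{Maestro: &MaestroConfig{Insecure: true}}}
	if m, ok := maestroFrom(cfg); ok {
		fmt.Println("maestro insecure:", m.Insecure)
	}
}
```

In a test, the same checks become `require.NotNil` calls so a nil link fails the test with an assertion message rather than a panic.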
In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml`:
- Around line 330-333: The expression for cluster_metadata_region uses a
non-existent key; update the dig call in cluster_metadata_region to reference an
actual key from configmap0.data (e.g., "configmap0.data.cluster_id" or
"configmap0.data.cluster_name") or add a matching cluster_metadata entry to
configmap0.data so the path dig(resources, "configmap0.data.cluster_metadata")
resolves; locate the cluster_metadata_region entry and the configmap0.data block
to make the change (look for the dig(...) expression and the configmap0.data
definitions).
---
Nitpick comments:
In `@charts/examples/kubernetes/adapter-task-config.yaml`:
- Around line 193-194: The chart currently sets observed_generation via
expression "generation" but tests source generation from the event and mark it
required; update the chart's top-level params to declare a param named
"generation" with source "event.generation", type "int", and required:true so
observed_generation will resolve reliably; modify the params block (the same
section where other top-level params are defined) so the generation param is
present, and ensure observed_generation (used in status) continues to reference
that param rather than relying on precondition capture.
In `@pkg/health/server.go`:
- Around line 49-52: Update the inline comment for the struct field configYAML
to document its thread-safety: note that configYAML is only populated when
debug_config is true and must be accessed/modified while holding the mutex mu
(or under mu.RLock()/mu.Lock() as appropriate); reference the mu field and the
CheckStatus usage in the same struct to make clear that all mutable state in
this struct (including checks and configYAML) is protected by mu.
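The commenting convention being requested, where every mutable field names the mutex that guards it, looks like this in reduced form (names are assumed, not copied from pkg/health):

```go
package main

import (
	"fmt"
	"sync"
)

// healthServer sketches the documentation pattern: each mutable field's
// comment states that it is protected by mu.
type healthServer struct {
	mu         sync.RWMutex
	checks     map[string]bool // guarded by mu
	configYAML string          // guarded by mu; only populated when debug_config is true
}

func (s *healthServer) SetConfigYAML(y string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.configYAML = y
}

func (s *healthServer) ConfigYAML() string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.configYAML
}

func main() {
	s := &healthServer{checks: map[string]bool{}}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); s.SetConfigYAML("adapter: {}") }()
	}
	wg.Wait()
	fmt.Println(s.ConfigYAML() == "adapter: {}")
}
```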
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 61edd7a9-a01f-4a5d-8d08-a6f86e7cfe5d
📒 Files selected for processing (70)
- README.md
- charts/examples/README.md
- charts/examples/kubernetes/adapter-config.yaml
- charts/examples/kubernetes/adapter-task-config.yaml
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- charts/examples/kubernetes/values.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro/adapter-config.yaml
- charts/examples/maestro/adapter-task-config.yaml
- charts/examples/maestro/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro/values.yaml
- charts/templates/configmap-broker.yaml
- charts/templates/deployment.yaml
- charts/values.yaml
- cmd/adapter/main.go
- configs/adapter-config-template.yaml
- configs/adapter-deployment-config.yaml
- configs/adapter-task-config-template.yaml
- configs/templates/cluster-status-payload.yaml
- configs/templates/job.yaml
- configuration.md
- docs/adapter-authoring-guide.md
- docs/configuration.md
- internal/config_loader/README.md
- internal/config_loader/accessors.go
- internal/config_loader/constants.go
- internal/config_loader/loader.go
- internal/config_loader/loader_test.go
- internal/config_loader/types.go
- internal/config_loader/validator.go
- internal/config_loader/validator_test.go
- internal/config_loader/viper_loader.go
- internal/criteria/README.md
- internal/criteria/evaluator_test.go
- internal/executor/README.md
- internal/executor/executor.go
- internal/executor/executor_test.go
- internal/executor/param_extractor.go
- internal/executor/types.go
- internal/executor/utils.go
- internal/executor/utils_test.go
- internal/hyperfleet_api/types.go
- internal/manifest/generation.go
- internal/manifest/manifest.go
- pkg/health/server.go
- scripts/test-config-loading.sh
- test/integration/config-loader/config_criteria_integration_test.go
- test/integration/config-loader/loader_template_test.go
- test/integration/config-loader/testdata/adapter-config-template.yaml
- test/integration/config-loader/testdata/adapter_config_valid.yaml
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- test/integration/executor/executor_integration_test.go
- test/integration/executor/executor_k8s_integration_test.go
- test/integration/executor/testdata/test-adapter-config.yaml
- test/integration/maestro_client/client_tls_config_integration_test.go
- test/integration/testutil/mock_api_server.go
- test/testdata/adapter-config.yaml
- test/testdata/adapter_config_valid.yaml
- test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml
- test/testdata/dryrun/dryrun-kubernetes-task-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml
- test/testdata/task-config.yaml
- test/testdata/templates/deployment.yaml
💤 Files with no reviewable changes (7)
- internal/manifest/manifest.go
- charts/examples/maestro-kubernetes/adapter-config.yaml
- configuration.md
- internal/manifest/generation.go
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
🚧 Files skipped from review as they are similar to previous changes (19)
- configs/templates/job.yaml
- charts/templates/deployment.yaml
- internal/executor/README.md
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- internal/config_loader/README.md
- test/integration/testutil/mock_api_server.go
- charts/templates/configmap-broker.yaml
- internal/executor/types.go
- docs/configuration.md
- charts/examples/kubernetes/adapter-config.yaml
- internal/criteria/README.md
- internal/hyperfleet_api/types.go
- test/integration/config-loader/testdata/templates/job.yaml
- test/integration/config-loader/config_criteria_integration_test.go
- charts/examples/maestro/values.yaml
- README.md
- internal/executor/utils.go
- configs/templates/cluster-status-payload.yaml
- test/testdata/templates/deployment.yaml
capture:
  - name: "clusterName"
    field: "name"
  - name: "readyConditionStatus"
    expression: |
      status.conditions.filter(c, c.type == "Ready").size() > 0
        ? status.conditions.filter(c, c.type == "Ready")[0].status
        : "False"
  - name: "region"
    field: "spec.region"
  - name: "cloudProvider"
    field: "spec.provider"
  - name: "vpcId"
    field: "spec.vpc_id"
observed_generation references a variable this fixture never defines.
{{ .generation }} is used at Line 152, but there is no matching param or capture in the fixture. That makes this “valid” config fail template validation/rendering once observed_generation is evaluated.
🔧 Suggested fix
- name: "region"
field: "spec.region"
+ - name: "generation"
+ field: "generation"
  - name: "cloudProvider"
    field: "spec.provider"

If generation should come from the event instead, add it under params with source: "event.generation" and keep the template as-is.
Also applies to: 152-154
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/config-loader/testdata/adapter_config_valid.yaml` around
lines 32 - 45, The template references observed_generation via "{{ .generation
}}" but the fixture never defines generation; add a params entry to the YAML
(e.g., under params) that maps generation from the event (source:
"event.generation") or add a capture that extracts generation so the template
has a defined .generation value; update the params/capture section near the
existing capture entries (e.g., alongside clusterName, readyConditionStatus,
region, cloudProvider, vpcId) to provide the missing symbol used by
observed_generation.
{
-	// Log action before API call
+	// Log action in post-actions
	ActionBase: config_loader.ActionBase{
-		Name: "logBeforeAPICall",
+		Name: "logCompletion",
		Log: &config_loader.LogAction{
-			Message: "About to check cluster status for {{ .clusterId }}",
-			Level: "debug",
+			Message: "Completed processing cluster {{ .clusterId }} with resource {{ .resourceId }}",
+			Level: "info",
		},
resourceId has no source in this test setup.
This config defines no resources and no param/capture named resourceId, so the post-action log template cannot render the asserted log-test-resource value. Use an existing field here or populate resourceId explicitly before asserting on the rendered message.
🛠️ Suggested update
Log: &config_loader.LogAction{
- Message: "Completed processing cluster {{ .clusterId }} with resource {{ .resourceId }}",
+ Message: "Completed processing cluster {{ .clusterId }}",
Level: "info",
},

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
{
	// Log action in post-actions
	ActionBase: config_loader.ActionBase{
		Name: "logCompletion",
		Log: &config_loader.LogAction{
			Message: "Completed processing cluster {{ .clusterId }}",
			Level: "info",
		},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/integration/executor/executor_integration_test.go` around lines 983 -
990, The post-action LogAction "logCompletion" uses template variable {{
.resourceId }} which is never set in this test, so the rendered message can't
match the asserted "log-test-resource"; either change the template to only use
an existing field (e.g., remove "{{ .resourceId }}" and keep "{{ .clusterId }}")
or explicitly set resourceId in the test setup (e.g., add a resource, param, or
capture named "resourceId" in the fixture before executing the post-actions) so
ActionBase/LogAction can render the expected value.
0c049ac to dbd42de (Compare)

/retest
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
charts/README.md (1)
90-97: ⚠️ Potential issue | 🟡 Minor

Document `broker.googlepubsub.dead_letter_topic` here too.

This table now covers the renamed Pub/Sub keys, but it omits `broker.googlepubsub.dead_letter_topic`, which is already shown in `charts/examples/maestro/values.yaml` and `charts/examples/maestro/README.md`. That leaves the public chart surface incomplete.

📝 Proposed fix

 | `broker.googlepubsub.project_id` | Google Cloud project ID | `""` |
 | `broker.googlepubsub.subscription_id` | Subscription ID override (HYPERFLEET_BROKER_SUBSCRIPTION_ID) | `""` |
 | `broker.googlepubsub.topic` | Topic name override (HYPERFLEET_BROKER_TOPIC) | `""` |
+| `broker.googlepubsub.dead_letter_topic` | Dead-letter topic override | `""` |
 | `broker.yaml` | Broker YAML config content | `""` |

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/README.md` around lines 90 - 97: add documentation for the missing parameter `broker.googlepubsub.dead_letter_topic` to the parameters table so the public chart surface matches the examples; update the table block that contains `broker.googlepubsub.project_id`, `broker.googlepubsub.subscription_id`, and `broker.googlepubsub.topic` to include a row for `broker.googlepubsub.dead_letter_topic` with a brief description (e.g., "Dead letter topic name override (HYPERFLEET_BROKER_DEAD_LETTER_TOPIC)") and the appropriate default (likely `""`), ensuring consistency with `charts/examples/maestro/values.yaml` and `charts/examples/maestro/README.md`.

charts/examples/maestro/README.md (1)
141-149: ⚠️ Potential issue | 🟡 Minor

Fix the broken Helm install example.

Line 142 points to `charts/examples/values.yaml` instead of this example's `charts/examples/maestro/values.yaml`, and Line 146 has a malformed placeholder. As written, the command is not copy/pasteable.

🛠️ Proposed fix

-helm install <name> ./charts -f charts/examples/values.yaml \
+helm install <name> ./charts -f charts/examples/maestro/values.yaml \
   --namespace <namespace> \
   --set image.registry=quay.io/<developer-registry> \
   --set broker.googlepubsub.project_id=<gcp-project> \
-  --set broker.googlepubsub.subscription_id=<gcp-subscription? \
+  --set broker.googlepubsub.subscription_id=<gcp-subscription> \
   --set broker.googlepubsub.topic=<gcp-topic> \
   --set broker.googlepubsub.dead_letter_topic=<gcp-dlq-topic>

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@charts/examples/maestro/README.md` around lines 141 - 149: the Helm install example has a wrong values file path and a malformed placeholder; update the command to reference `charts/examples/maestro/values.yaml` (not `charts/examples/values.yaml`) and fix the `--set broker.googlepubsub.subscription_id` placeholder to a proper form like `<gcp-subscription>` (remove the stray question mark and ensure all angle brackets close), keeping the other placeholders (e.g., `image.registry=quay.io/<developer-registry>`, `broker.googlepubsub.project_id`, `broker.googlepubsub.topic`, `broker.googlepubsub.dead_letter_topic`) intact so the command is copy/pasteable.
♻️ Duplicate comments (12)
test/integration/maestro_client/client_tls_config_integration_test.go (1)
139-141: ⚠️ Potential issue | 🟡 Minor

Guard `cfg` and `cfg.Clients` before dereferencing.

Line 139 still evaluates `cfg.Clients.Maestro` before `require.NotNil` runs, and Lines 200, 237, and 285 dereference the same path with no guard at all. If `LoadConfig` returns a nil config or nil `Clients`, these tests panic instead of failing with a clear assertion. This is the same failure mode that was raised on the earlier revision.

Suggested patch

 @@
   require.NoError(t, err, "Config loading should succeed")
-  require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
+  require.NotNil(t, cfg, "Loaded config should be present")
+  require.NotNil(t, cfg.Clients, "Clients config should be present")
+  require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
 @@
   require.NoError(t, err)
+  require.NotNil(t, cfg, "Loaded config should be present")
+  require.NotNil(t, cfg.Clients, "Clients config should be present")
+  require.NotNil(t, cfg.Clients.Maestro, "Maestro config should be present")
   clientCfg, err := buildMaestroClientConfigFromLoaded(cfg.Clients.Maestro)
 @@
   require.NoError(t, err)
+  require.NotNil(t, cfg, "Loaded config should be present")
+  require.NotNil(t, cfg.Clients, "Clients config should be present")
   maestroCfg := cfg.Clients.Maestro
+  require.NotNil(t, maestroCfg, "Maestro config should be present")
 @@
   require.NoError(t, err)
+  require.NotNil(t, cfg, "Loaded config should be present")
+  require.NotNil(t, cfg.Clients, "Clients config should be present")
   maestroCfg := cfg.Clients.Maestro
+  require.NotNil(t, maestroCfg, "Maestro config should be present")

Also applies to: 200-200, 237-237, 285-285

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/maestro_client/client_tls_config_integration_test.go` around lines 139 - 141: the tests dereference `cfg.Clients.Maestro` before asserting `cfg` and `cfg.Clients` are non-nil; modify the tests to first assert `cfg` is not nil and `cfg.Clients` is not nil using `require.NotNil(t, cfg)` and `require.NotNil(t, cfg.Clients)` before accessing `cfg.Clients.Maestro` (in the setup where `LoadConfig` is called and in the other places referencing `cfg.Clients.Maestro`), then proceed to `require.NotNil(t, cfg.Clients.Maestro)` and assign `maestroCfg := cfg.Clients.Maestro` so failures produce assertions instead of panics.

test/integration/config-loader/testdata/adapter_config_valid.yaml (2)
7-14: ⚠️ Potential issue | 🟠 Major

`hyperfleet_api` and `kubernetes` should be nested under `clients`.

This fixture places `hyperfleet_api` and `kubernetes` at the root level, but the adapter config schema expects them under `clients.hyperfleet_api` and `clients.kubernetes`. Since the PR enables unknown property validation, this "valid" fixture may fail to load or these settings may be silently ignored.

🔧 Proposed fix

 adapter:
   name: example-adapter
   version: "0.1.0"
-hyperfleet_api:
-  timeout: 2s
-  retry_attempts: 3
-  retry_backoff: exponential
-
-kubernetes:
-  api_version: "v1"
+clients:
+  hyperfleet_api:
+    timeout: 2s
+    retry_attempts: 3
+    retry_backoff: exponential
+
+  kubernetes:
+    api_version: "v1"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter_config_valid.yaml` around lines 7 - 14: the fixture places `hyperfleet_api` and `kubernetes` at the root instead of under `clients`, which will fail schema validation; move the `hyperfleet_api` and `kubernetes` mappings inside a top-level `clients` mapping so the config contains `clients.hyperfleet_api` (with timeout, retry_attempts, retry_backoff) and `clients.kubernetes` (with api_version) to match the adapter config schema.
152-154: ⚠️ Potential issue | 🟠 Major

`observed_generation` references undefined variable `{{ .generation }}`.

The template uses `{{ .generation }}` but there's no parameter with `source: "event.generation"` or precondition capture that defines `generation`. This will cause template rendering to fail or produce an empty value.

🔧 Proposed fix — add a capture for generation

 capture:
   - name: "clusterName"
     field: "name"
+  - name: "generation"
+    field: "generation"
   - name: "readyConditionStatus"

Or add a parameter if generation comes from the event:

 params:
   - name: "clusterId"
     source: "event.id"
     type: "string"
     required: true
+  - name: "generation"
+    source: "event.generation"
+    type: "string"
+    required: true

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter_config_valid.yaml` around lines 152 - 154: the template field `observed_generation` uses `{{ .generation }}` but no capture or parameter defines `generation`; fix by adding a capture named `generation` (e.g., in the template's captures block) or a parameter with `source: "event.generation"` so the template context provides `generation` for `observed_generation`; update the capture/parameter entries referenced by the template rather than changing the `observed_generation` key.

test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml (1)
154-166: ⚠️ Potential issue | 🟡 Minor

ConfigMap namespace not normalized to lowercase.

The Namespace resource at line 154 uses `{{ .clusterId | lower }}`, but the ConfigMap at line 166 uses `{{ .clusterId }}` without lowercasing. This inconsistency will cause the ConfigMap to target a non-existent namespace for mixed-case cluster IDs.

🔧 Proposed fix

 kind: ConfigMap
 metadata:
   annotations:
     hyperfleet.io/generation: '{{ .generation }}'
   labels:
     hyperfleet.io/cluster-id: '{{ .clusterId }}'
   name: '{{ .clusterId }}-config-symbol2222'
-  namespace: '{{ .clusterId }}'
+  namespace: '{{ .clusterId | lower }}'

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml` around lines 154 - 166: the ConfigMap's `namespace` field uses `'{{ .clusterId }}'`, which is inconsistent with the Namespace resource using `'{{ .clusterId | lower }}'`; update the ConfigMap template so its namespace uses `'{{ .clusterId | lower }}'` to ensure the ConfigMap targets the same lowercase namespace as the Namespace resource.

internal/config_loader/validator.go (1)
582-586: ⚠️ Potential issue | 🟡 Minor

Fail when `metadata` is present but not an object.

The type assertion at line 582 silently skips validation when `metadata` exists but isn't a `map[string]interface{}`. This allows malformed manifests to pass validation.

🔧 Proposed fix

 if metadata, ok := manifest["metadata"].(map[string]interface{}); ok {
 	if _, hasName := metadata[FieldName]; !hasName {
 		v.errors.Add(path+"."+"metadata", fmt.Sprintf("missing required field %q", FieldName))
 	}
+} else if _, exists := manifest["metadata"]; exists {
+	v.errors.Add(path+".metadata", "metadata must be an object")
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/validator.go` around lines 582 - 586: the validator uses a type assertion on `manifest["metadata"]` and silently skips validation if metadata exists but is not a map; first test for the existence of `manifest["metadata"]`, and if it is present but not a `map[string]interface{}` add an error via `v.errors.Add(path+".metadata", ...)` stating the field must be an object; otherwise, when it is a map, keep the existing check for the required `FieldName` and call `v.errors.Add(path+".metadata", fmt.Sprintf("missing required field %q", FieldName))` if missing.

configs/adapter-task-config-template.yaml (2)
235-245: ⚠️ Potential issue | 🟡 Minor

Health condition may misreport skipped runs as healthy.

The current `reason`/`message` expressions only check `adapter.?errorReason` and `adapter.?errorMessage`, ignoring `adapter.?resourcesSkipped` and `adapter.?executionError`. This can produce false "Healthy" signals when resources were skipped.

Proposed fix

 - type: "Health"
   status:
     expression: |
-      adapter.?executionStatus.orValue("") == "success" ? "True" : (adapter.?executionStatus.orValue("") == "failed" ? "False" : "Unknown")
+      adapter.?executionStatus.orValue("") == "success"
+      && !adapter.?resourcesSkipped.orValue(false)
+      ? "True"
+      : "False"
   reason:
     expression: |
-      adapter.?errorReason.orValue("") != "" ? adapter.?errorReason.orValue("") : "Healthy"
+      adapter.?executionStatus.orValue("") != "success"
+      ? "ExecutionFailed:" + adapter.?executionError.?phase.orValue("unknown")
+      : adapter.?resourcesSkipped.orValue(false)
+      ? "ResourcesSkipped"
+      : "Healthy"
   message:
     expression: |
-      adapter.?errorMessage.orValue("") != "" ? adapter.?errorMessage.orValue("") : "All adapter operations completed successfully"
+      adapter.?executionStatus.orValue("") != "success"
+      ? "Adapter failed at phase [" + adapter.?executionError.?phase.orValue("unknown") + "] step [" + adapter.?executionError.?step.orValue("unknown") + "]: " + adapter.?executionError.?message.orValue(adapter.?errorMessage.orValue("no details"))
+      : adapter.?resourcesSkipped.orValue(false)
+      ? "Resources skipped: " + adapter.?skipReason.orValue("unknown reason")
+      : "All adapter operations completed successfully"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@configs/adapter-task-config-template.yaml` around lines 235 - 245: the Health condition's `reason` and `message` expressions currently only check `adapter.?errorReason` and `adapter.?errorMessage`, which can mark runs with skipped resources or execution errors as "Healthy"; update the `reason` and `message` expressions to also check `adapter.?resourcesSkipped` and `adapter.?executionError` (and `adapter.?executionStatus` if needed), prefer returning `adapter.?executionError` or a non-empty `adapter.?errorReason` when present, return a descriptive message when resources were skipped, and only default to "Healthy"/success text when none of those fields are set and `adapter.?executionStatus == "success"`.
46-52: ⚠️ Potential issue | 🟡 Minor

Remove or wire the required `hyperfleetApiBaseUrl` param.

The param is defined as `required: true` but is never used; the API calls at lines 107 and 274 use relative URLs. This will fail config loading even when `clients.hyperfleet_api.base_url` is correctly set.

Suggested fix

Either remove the unused param or mark it optional:

```diff
 - name: "hyperfleetApiBaseUrl"
   source: "env.HYPERFLEET_API_BASE_URL"
   type: "string"
   description: "Base URL for the HyperFleet API"
-  required: true
+  required: false
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@configs/adapter-task-config-template.yaml` around lines 46-52: the template declares a required param named "hyperfleetApiBaseUrl" but it's unused (API calls rely on clients.hyperfleet_api.base_url), causing config validation failures. Either remove the "hyperfleetApiBaseUrl" param entry from the template, change its "required: true" to optional/false, or wire it in where the adapter builds HyperFleet requests (ensure code reads this param into clients.hyperfleet_api.base_url or the request base URL if the intent was to override it).

test/integration/config-loader/testdata/adapter-config-template.yaml (2)
304-316: ⚠️ Potential issue | 🟡 Minor

CEL expressions access filtered array index without size guard.

These expressions can throw an index-out-of-bounds error when no `Available` condition exists in the array. The `has()` function doesn't protect against accessing `[0]` on an empty filter result.

Suggested fix

```diff
 applied:
   status:
     expression: |
       resources.clusterNamespace.status.phase == "Active" &&
-      resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0 &&
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
   reason:
     expression: |
-      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0
+      && has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
       ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason
       : "ResourcesCreated"
   message:
     expression: |
-      has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
+      resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0
+      && has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
       ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message
       : "All Kubernetes resources created successfully"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 304 - 316, The CEL expressions read resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0] directly which can raise an index-out-of-bounds when the filter returns an empty array; update the expressions used in reason and message to first check the filtered array's size or existence (e.g., assign or re-evaluate the filtered result and assert its size > 0 or use a safe-existence check) before accessing [0], and then use that guarded element for .reason and .message; ensure the same guard is applied to the has(...) calls and the resources.clusterNamespace.status.phase check remains unchanged.
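For reference, the fully guarded status expression reads like this when written out (a sketch assembled from the suggested diff; field paths come from the fixture and are not re-verified here). CEL's short-circuiting `&&` ensures `[0]` is only evaluated when the filter result is non-empty:

```yaml
applied:
  status:
    expression: |
      resources.clusterNamespace.status.phase == "Active"
      && resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0
      && resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].status == "True"
```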
324-333: ⚠️ Potential issue | 🟡 Minor

Same unguarded CEL filter indexing in the `available` condition.

Apply the same size guard pattern to prevent index-out-of-bounds errors.

Suggested fix

```diff
 reason:
   expression: |
-    has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
+    resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0
+    && has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason)
     ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason
     : "DeploymentReady"
 message:
   expression: |
-    has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
+    resources.clusterController.status.conditions.filter(c, c.type == 'Available').size() > 0
+    && has(resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message)
     ? resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].message
     : "Deployment is available and serving traffic"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/config-loader/testdata/adapter-config-template.yaml` around lines 324-333: the CEL expressions for the Available condition unguardedly index filter results (resources.clusterController.status.conditions.filter(c, c.type == 'Available')[0].reason and ...[0].message), which can cause out-of-bounds errors. Update both expressions to first guard the filter with a size/length check (e.g., size(resources.clusterController.status.conditions.filter(c, c.type == 'Available')) > 0) before accessing [0], and only use the ternary to return the .reason or .message when that size guard succeeds, otherwise fall back to the default strings ("DeploymentReady" / "Deployment is available and serving traffic").

test/integration/executor/testdata/test-adapter-config.yaml (1)
5-8: ⚠️ Potential issue | 🟡 Minor

Structural inconsistency: `hyperfleet_api` should be nested under `clients`.

The new config schema expects client settings under `clients.hyperfleet_api`, but this fixture has `hyperfleet_api` at the root level. This was flagged in a previous review and appears unresolved. If this file is meant to represent a merged config for testing, the structure should match the production schema:

Suggested fix

```diff
 adapter:
   name: test-adapter
   version: "1.0.0"
-hyperfleet_api:
-  timeout: 10s
-  retry_attempts: 1
-  retry_backoff: constant
+clients:
+  hyperfleet_api:
+    timeout: 10s
+    retry_attempts: 1
+    retry_backoff: constant
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/executor/testdata/test-adapter-config.yaml` around lines 5-8: the YAML fixture places the hyperfleet_api settings at the root instead of under clients. Move the hyperfleet_api mapping so it is nested beneath a top-level clients key, preserving its child keys (timeout, retry_attempts, retry_backoff) and values so the config matches the expected schema (clients -> hyperfleet_api -> {timeout, retry_attempts, retry_backoff}).

test/integration/executor/executor_integration_test.go (1)
983-990: ⚠️ Potential issue | 🟡 Minor

`resourceId` is undefined in the test setup.

The log message template references `{{ .resourceId }}`, but no param, capture, or resource with that name exists in this test config. The assertion at line 1036 expects `"log-test-resource"`, which cannot be rendered.

Proposed fix: remove the undefined variable from the template

```diff
 Log: &config_loader.LogAction{
-    Message: "Completed processing cluster {{ .clusterId }} with resource {{ .resourceId }}",
+    Message: "Completed processing cluster {{ .clusterId }}",
     Level: "info",
 },
```

Also update the expected log at line 1036:

```diff
-"[config] Completed processing cluster log-test-cluster with resource log-test-resource",
+"[config] Completed processing cluster log-test-cluster",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/executor/executor_integration_test.go` around lines 983-990: the log template used in the post-action "logCompletion" (ActionBase with Name "logCompletion" and LogAction.Message) references an undefined variable {{ .resourceId }}. Remove that placeholder from LogAction.Message (e.g., change to "Completed processing cluster {{ .clusterId }}" or include a defined variable) and update the test's expected log string to match the new rendered message ("Completed processing cluster log-test-cluster" or the chosen output) so the assertion for the log entry passes.

test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml (1)
330-334: ⚠️ Potential issue | 🟡 Minor

`cluster_metadata_region` references a key not produced by `configmap0`.

The expression `dig(resources, "configmap0.data.cluster_metadata")` references a path that doesn't exist. Looking at lines 187-189, `configmap0.data` only contains `cluster_id` and `cluster_name`. This will always return an empty/default value. Consider updating to reference an existing key:

Proposed fix

```diff
 cluster_metadata_region:
   # Pattern 12b: dig() — traversing into a nested path
   expression: |
-    dig(resources, "configmap0.data.cluster_metadata")
+    dig(resources, "configmap0.data.cluster_name")
```

Or rename the field to match what it actually retrieves.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml` around lines 330 - 334, The cluster_metadata_region mapping uses dig(resources, "configmap0.data.cluster_metadata") but configmap0.data only contains cluster_id and cluster_name, so the path is invalid; update the expression in cluster_metadata_region to reference an existing key (e.g., dig(resources, "configmap0.data.cluster_id") or dig(resources, "configmap0.data.cluster_name")) or rename the target field to match the actual key produced by configmap0.data so the value is not always empty.
🧹 Nitpick comments (5)
scripts/test-config-loading.sh (3)
63-64: Consider trapping additional signals for robust cleanup.

The cleanup trap only handles `EXIT`. If the script is killed via `SIGINT` (Ctrl+C) or `SIGTERM`, the temporary files may not be cleaned up on some shells/versions.

Proposed fix

```diff
-trap cleanup EXIT
+trap cleanup EXIT INT TERM
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test-config-loading.sh` around lines 63 - 64, The cleanup function is only registered for EXIT so SIGINT/SIGTERM can skip cleanup; update the trap registration that references cleanup() (the trap line that currently reads "trap cleanup EXIT") to also trap SIGINT and SIGTERM (and optionally SIGHUP) so cleanup() runs on interrupt/termination signals; locate the cleanup() function and the trap invocation in scripts/test-config-loading.sh and change the trap to include the additional signals.
411-413: Redundant test assertions.

Lines 411 and 413 test the exact same scenario ("cli > env > file" vs "env does not override cli") with identical inputs and expected outputs. Consider removing the duplicate or testing a distinct scenario.

Proposed fix: remove redundant assertion

```diff
 assert_contains "priority: cli > env > file" "$(HYPERFLEET_API_BASE_URL=https://env.example.com cfg_dump "$CFG" --hyperfleet-api-base-url=https://cli.example.com)" "base_url: https://cli.example.com"
-# Verify env does NOT bleed into CLI-set value
-assert_contains "priority: env does not override cli" "$(HYPERFLEET_API_BASE_URL=https://env.example.com cfg_dump "$CFG" --hyperfleet-api-base-url=https://cli.example.com)" "base_url: https://cli.example.com"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test-config-loading.sh` around lines 411 - 413, Duplicate assertions using assert_contains call cfg_dump with the same inputs (HYPERFLEET_API_BASE_URL and --hyperfleet-api-base-url) are present; remove the redundant assertion (the second assert_contains referencing "env does not override cli") or replace it with a distinct test (e.g., test env > file without CLI by calling cfg_dump without the --hyperfleet-api-base-url flag) to validate environment-only precedence; locate the assertions by the assert_contains invocation and the cfg_dump invocation using CFG and update accordingly.
79-82: Suppressing stderr may hide real errors.

Redirecting with `2>/dev/null` silently discards all error output from the adapter binary. If the binary fails to start, has a panic, or encounters a config error, tests will fail with misleading "pattern not found" messages instead of the actual error. Consider capturing stderr separately or only suppressing expected warnings.

Proposed approach

```diff
 cfg_dump() {
     local config="$1"; shift
-    "$ADAPTER_BIN" config-dump -c "$config" -t "$TASK_CONFIG" "$@" 2>/dev/null
+    "$ADAPTER_BIN" config-dump -c "$config" -t "$TASK_CONFIG" "$@" 2>&1
 }
```

Alternatively, capture stderr to a variable and print it on failure in the `fail()` function for debugging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/test-config-loading.sh` around lines 79-82: the cfg_dump helper currently discards all stderr from the adapter binary (via `2>/dev/null`), which hides real failures. Change cfg_dump (and calls using ADAPTER_BIN config-dump) to capture stderr (e.g., into a variable or temp file) instead of discarding it, check the adapter exit status, and on failure call the existing fail() function including the captured stderr output (or only suppress known benign warnings), so test failures surface real adapter errors rather than silent "pattern not found" messages.

configs/adapter-config-template.yaml (1)
128: Consider using a more explicit placeholder or comment for `kube_config_path`.

The value `PATH_TO_KUBECONFIG_FILE` could be mistaken for a valid path. Consider using an empty string with a comment, or a clearly invalid placeholder, e.g. `# kube_config_path: "" # Set for out-of-cluster access`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@configs/adapter-config-template.yaml` at line 128: replace the ambiguous placeholder value for kube_config_path with a clearer default and guidance. Either set kube_config_path to an empty string ("") by default and add an inline comment showing when to populate it (e.g., for out-of-cluster access), or replace the value with an explicitly invalid placeholder like "<SET_KUBECONFIG_PATH>" and include a comment; update the kube_config_path line and its surrounding comment so users cannot mistake the template value for a real path.

internal/config_loader/viper_loader.go (1)
177-186: Document the `LOG_*` environment variable convention.

The `LOG_LEVEL`, `LOG_FORMAT`, and `LOG_OUTPUT` env vars are handled without the `HYPERFLEET_` prefix, which differs from other config vars. The comment at line 177 mentions consistency with serve mode, but this convention should be documented in the configuration docs to avoid user confusion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config_loader/viper_loader.go` around lines 177 - 186, Update the configuration docs to document the LOG_* environment variable convention: note that LOG_LEVEL, LOG_FORMAT, and LOG_OUTPUT are read without the HYPERFLEET_ prefix (as implemented by the os.Getenv calls and v.Set("log::level"/"log::format"/"log::output") in viper_loader.go) and explain that this matches serve mode behavior; add a short note in the configuration/environment variables section describing the LOG_* precedence and accepted values (e.g., level values, format, output) and reference these exact env var names so users aren’t confused by the differing prefix convention.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/config_loader/accessors.go`:
- Around line 13-14: The doc comment that describes built-in variables is out of
sync with the actual slice builtinVariables (which currently includes "adapter",
"config", "now", "date"); update the comment near the builtinVariables
declaration so it lists the same entries (include "config") or make the comment
generic to avoid future drift—locate the builtinVariables symbol and the
adjacent comment block (the comment at ~line 27) and make them consistent.
---
Outside diff comments:
In `@charts/examples/maestro/README.md`:
- Around line 141-149: The Helm install example in the README has a wrong values
file path and a malformed placeholder; update the command to reference
charts/examples/maestro/values.yaml (not charts/examples/values.yaml) and fix
the --set broker.googlepubsub.subscription_id placeholder to a proper form like
<gcp-subscription> (remove the stray question mark and ensure all angle brackets
close), keeping the other placeholders (e.g.,
image.registry=quay.io/<developer-registry>, broker.googlepubsub.project_id,
broker.googlepubsub.topic, broker.googlepubsub.dead_letter_topic) intact so the
command is copy/pasteable.
In `@charts/README.md`:
- Around line 90-97: Add documentation for the missing parameter
broker.googlepubsub.dead_letter_topic to the parameters table in
charts/README.md so the public chart surface matches examples; update the table
row list (the same block that contains broker.googlepubsub.project_id,
broker.googlepubsub.subscription_id, and broker.googlepubsub.topic) to include a
row for broker.googlepubsub.dead_letter_topic with a brief description (e.g.,
"Dead letter topic name override (HYPERFLEET_BROKER_DEAD_LETTER_TOPIC)") and the
appropriate default (likely `""`), ensuring consistency with
charts/examples/maestro/values.yaml and charts/examples/maestro/README.md.
---
Duplicate comments:
In `@configs/adapter-task-config-template.yaml`:
- Around line 235-245: The Health condition's reason and message expressions
currently only check adapter.?errorReason and adapter.?errorMessage which can
mark runs with skipped resources or execution errors as "Healthy"; update the
reason and message expressions (the blocks that compute reason and message for
type "Health") to also check adapter.?resourcesSkipped and
adapter.?executionError (and adapter.?executionStatus if needed) and prefer
returning adapter.?executionError or a non-empty adapter.?errorReason when
present, return a descriptive message when adapter.?resourcesSkipped > 0 (e.g.,
indicating how many/skipped resources), and only default to "Healthy" / success
text when none of those fields are set and adapter.?executionStatus ==
"success".
- Around line 46-52: The template declares a required param named
"hyperfleetApiBaseUrl" but it's unused (API calls rely on
clients.hyperfleet_api.base_url), causing config validation failures; either
remove the "hyperfleetApiBaseUrl" param entry from the template, or change its
"required: true" to optional/false, or wire it by using hyperfleetApiBaseUrl
where the adapter builds HyperFleet requests (ensure code reads this param into
clients.hyperfleet_api.base_url or the request base URL if the intent was to
override it).
In `@internal/config_loader/validator.go`:
- Around line 582-586: The validator currently uses a type assertion on
manifest["metadata"] and silently skips validation if metadata exists but is not
a map; update the logic around manifest, metadata, FieldName and v.errors.Add so
that you first test existence of manifest["metadata"] (ok :=
manifest["metadata"]) and if present but not a map[string]interface{} add an
error via v.errors.Add(path+".metadata", "field 'metadata' must be an object"),
otherwise, when it's a map, perform the existing check for the required
FieldName and call v.errors.Add(path+".metadata", fmt.Sprintf("missing required
field %q", FieldName)) if missing.
In `@test/integration/config-loader/testdata/adapter_config_valid.yaml`:
- Around line 7-14: The fixture places hyperfleet_api and kubernetes at the root
instead of under clients, which will fail schema validation; move the
hyperfleet_api and kubernetes mappings inside a top-level clients mapping so the
config contains clients.hyperfleet_api (with timeout, retry_attempts,
retry_backoff) and clients.kubernetes (with api_version) to match the adapter
config schema.
- Around line 152-154: The template field observed_generation uses {{
.generation }} but no capture/parameter defines generation; fix by adding a
capture named generation (e.g., in the template's captures block) that maps to
source "event.generation" or by adding a parameter with source:
"event.generation" so the template context provides generation for
observed_generation; update the capture/parameter entries referenced by the
template rather than changing the observed_generation key.
In `@test/integration/config-loader/testdata/adapter-config-template.yaml`:
- Around line 304-316: The CEL expressions read
resources.clusterController.status.conditions.filter(c, c.type ==
'Available')[0] directly which can raise an index-out-of-bounds when the filter
returns an empty array; update the expressions used in reason and message to
first check the filtered array's size or existence (e.g., assign or re-evaluate
the filtered result and assert its size > 0 or use a safe-existence check)
before accessing [0], and then use that guarded element for .reason and
.message; ensure the same guard is applied to the has(...) calls and the
resources.clusterNamespace.status.phase check remains unchanged.
- Around line 324-333: The CEL expressions for the Available condition
unguardedly index filter results
(resources.clusterController.status.conditions.filter(c, c.type ==
'Available')[0].reason and ...[0].message) which can cause OOB errors; update
both expressions to first guard the filter with a size/length check (e.g.,
size(resources.clusterController.status.conditions.filter(c, c.type ==
'Available')) > 0) before accessing [0], and only use the ternary to return the
.reason or .message when that size guard succeeds, otherwise fall back to the
default strings ("DeploymentReady" / "Deployment is available and serving
traffic").
In `@test/integration/executor/executor_integration_test.go`:
- Around line 983-990: The log template used in the post-action "logCompletion"
(ActionBase with Name "logCompletion" and LogAction.Message) references an
undefined variable {{ .resourceId }}; remove that placeholder from
LogAction.Message (e.g., change to "Completed processing cluster {{ .clusterId
}}" or include a defined variable) and update the test's expected log string to
match the new rendered message ("Completed processing cluster log-test-cluster"
or the chosen output) so the assertion for the log entry passes.
In `@test/integration/executor/testdata/test-adapter-config.yaml`:
- Around line 5-8: The YAML fixture places the hyperfleet_api settings at the
root instead of under clients; move the hyperfleet_api mapping so it is nested
beneath a top-level clients key, preserving its child keys (timeout,
retry_attempts, retry_backoff) and values so the config matches the expected
schema (clients -> hyperfleet_api -> {timeout, retry_attempts, retry_backoff}).
In `@test/integration/maestro_client/client_tls_config_integration_test.go`:
- Around line 139-141: The tests dereference cfg.Clients.Maestro before
asserting cfg and cfg.Clients are non-nil; modify the test to first assert cfg
is not nil and cfg.Clients is not nil using require.NotNil(t, cfg) and
require.NotNil(t, cfg.Clients) before accessing cfg.Clients.Maestro (e.g., in
the setup where LoadConfig is called and in the places referencing
cfg.Clients.Maestro around the earlier lines), then proceed to require.NotNil(t,
cfg.Clients.Maestro) and assign maestroCfg := cfg.Clients.Maestro so failures
produce assertions instead of panics.
In `@test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml`:
- Around line 330-334: The cluster_metadata_region mapping uses dig(resources,
"configmap0.data.cluster_metadata") but configmap0.data only contains cluster_id
and cluster_name, so the path is invalid; update the expression in
cluster_metadata_region to reference an existing key (e.g., dig(resources,
"configmap0.data.cluster_id") or dig(resources, "configmap0.data.cluster_name"))
or rename the target field to match the actual key produced by configmap0.data
so the value is not always empty.
In `@test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml`:
- Around line 154-166: The ConfigMap's namespace field uses '{{ .clusterId }}'
which is inconsistent with the Namespace resource using '{{ .clusterId | lower
}}'; update the ConfigMap template so its namespace uses '{{ .clusterId | lower
}}' (and similarly lowercase any other occurrences like name: '{{ .clusterId
}}-config-symbol2222' if needed) to ensure the ConfigMap targets the same
lowercase namespace as the Namespace resource.
---
Nitpick comments:
In `@configs/adapter-config-template.yaml`:
- Line 128: Replace the ambiguous placeholder value for kube_config_path with a
clearer default and guidance: set kube_config_path to an empty string ("") by
default and add an inline comment showing when to populate it (e.g., for
out-of-cluster access), or alternatively replace the value with an explicitly
invalid placeholder like "<SET_KUBECONFIG_PATH>" and include a comment; update
the kube_config_path line and its surrounding comment so users cannot mistake
the template value for a real path.
In `@internal/config_loader/viper_loader.go`:
- Around line 177-186: Update the configuration docs to document the LOG_*
environment variable convention: note that LOG_LEVEL, LOG_FORMAT, and LOG_OUTPUT
are read without the HYPERFLEET_ prefix (as implemented by the os.Getenv calls
and v.Set("log::level"/"log::format"/"log::output") in viper_loader.go) and
explain that this matches serve mode behavior; add a short note in the
configuration/environment variables section describing the LOG_* precedence and
accepted values (e.g., level values, format, output) and reference these exact
env var names so users aren’t confused by the differing prefix convention.
In `@scripts/test-config-loading.sh`:
- Around line 63-64: The cleanup function is only registered for EXIT so
SIGINT/SIGTERM can skip cleanup; update the trap registration that references
cleanup() (the trap line that currently reads "trap cleanup EXIT") to also trap
SIGINT and SIGTERM (and optionally SIGHUP) so cleanup() runs on
interrupt/termination signals; locate the cleanup() function and the trap
invocation in scripts/test-config-loading.sh and change the trap to include the
additional signals.
- Around line 411-413: Duplicate assertions using assert_contains call cfg_dump
with the same inputs (HYPERFLEET_API_BASE_URL and --hyperfleet-api-base-url) are
present; remove the redundant assertion (the second assert_contains referencing
"env does not override cli") or replace it with a distinct test (e.g., test env
> file without CLI by calling cfg_dump without the --hyperfleet-api-base-url
flag) to validate environment-only precedence; locate the assertions by the
assert_contains invocation and the cfg_dump invocation using CFG and update
accordingly.
- Around line 79-82: The cfg_dump helper currently discards all stderr from the
adapter binary (via 2>/dev/null), which hides real failures; change cfg_dump
(and calls using ADAPTER_BIN config-dump) to capture stderr (e.g., into a
variable or temp file) instead of discarding it, check the adapter exit status,
and on failure call the existing fail() function including the captured stderr
output (or only suppress known benign warnings), so test failures surface real
adapter errors rather than silent "pattern not found" messages.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fcaa5ac3-bc2a-443c-bbb7-81e83ce5de9f
📒 Files selected for processing (75)
- README.md
- charts/README.md
- charts/examples/README.md
- charts/examples/kubernetes/README.md
- charts/examples/kubernetes/adapter-config.yaml
- charts/examples/kubernetes/adapter-task-config.yaml
- charts/examples/kubernetes/adapter-task-resource-job-rolebinding.yaml
- charts/examples/kubernetes/values.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- charts/examples/maestro/README.md
- charts/examples/maestro/adapter-config.yaml
- charts/examples/maestro/adapter-task-config.yaml
- charts/examples/maestro/adapter-task-resource-manifestwork.yaml
- charts/examples/maestro/values.yaml
- charts/templates/configmap-broker.yaml
- charts/templates/deployment.yaml
- charts/values.yaml
- cmd/adapter/main.go
- configs/adapter-config-template.yaml
- configs/adapter-deployment-config.yaml
- configs/adapter-task-config-template.yaml
- configs/templates/cluster-status-payload.yaml
- configs/templates/deployment.yaml
- configs/templates/job.yaml
- configuration.md
- docs/adapter-authoring-guide.md
- docs/configuration.md
- internal/config_loader/README.md
- internal/config_loader/accessors.go
- internal/config_loader/constants.go
- internal/config_loader/loader.go
- internal/config_loader/loader_test.go
- internal/config_loader/types.go
- internal/config_loader/validator.go
- internal/config_loader/validator_test.go
- internal/config_loader/viper_loader.go
- internal/criteria/README.md
- internal/criteria/cel_evaluator_test.go
- internal/criteria/evaluator_test.go
- internal/executor/README.md
- internal/executor/executor.go
- internal/executor/executor_test.go
- internal/executor/param_extractor.go
- internal/executor/types.go
- internal/executor/utils.go
- internal/executor/utils_test.go
- internal/hyperfleet_api/types.go
- internal/manifest/generation.go
- internal/manifest/manifest.go
- pkg/health/server.go
- scripts/test-config-loading.sh
- test/integration/config-loader/config_criteria_integration_test.go
- test/integration/config-loader/loader_template_test.go
- test/integration/config-loader/testdata/adapter-config-template.yaml
- test/integration/config-loader/testdata/adapter_config_valid.yaml
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
- test/integration/config-loader/testdata/templates/job.yaml
- test/integration/executor/executor_integration_test.go
- test/integration/executor/executor_k8s_integration_test.go
- test/integration/executor/testdata/test-adapter-config.yaml
- test/integration/maestro_client/client_tls_config_integration_test.go
- test/integration/testutil/mock_api_server.go
- test/testdata/adapter-config.yaml
- test/testdata/adapter_config_valid.yaml
- test/testdata/dryrun/dryrun-cel-showcase-task-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adapter-config.yaml
- test/testdata/dryrun/dryrun-kubernetes-adatepr-task-config-invalid.yaml
- test/testdata/dryrun/dryrun-kubernetes-task-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-config.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-task-config.yaml
- test/testdata/task-config.yaml
- test/testdata/templates/deployment.yaml
💤 Files with no reviewable changes (12)
- test/testdata/adapter_config_valid.yaml
- configs/templates/job.yaml
- internal/manifest/generation.go
- charts/examples/maestro-kubernetes/adapter-task-resource-namespace.yaml
- internal/manifest/manifest.go
- configuration.md
- configs/templates/cluster-status-payload.yaml
- configs/templates/deployment.yaml
- charts/examples/maestro-kubernetes/adapter-task-config.yaml
- charts/examples/maestro-kubernetes/adapter-config.yaml
- charts/examples/maestro-kubernetes/adapter-task-resource-manifestwork.yaml
- configs/adapter-deployment-config.yaml
✅ Files skipped from review due to trivial changes (1)
- charts/examples/kubernetes/README.md
🚧 Files skipped from review as they are similar to previous changes (14)
- test/integration/config-loader/loader_template_test.go
- test/integration/config-loader/testdata/templates/cluster-status-payload.yaml
- test/integration/testutil/mock_api_server.go
- internal/criteria/README.md
- internal/executor/types.go
- internal/executor/utils.go
- charts/examples/kubernetes/adapter-config.yaml
- charts/templates/configmap-broker.yaml
- test/testdata/dryrun/dryrun-maestro-adapter-config.yaml
- docs/configuration.md
- internal/config_loader/constants.go
- test/integration/config-loader/testdata/templates/job.yaml
- charts/templates/deployment.yaml
- README.md
Force-pushed from dbd42de to 011c961.
```diff
-spec:
-  adapter:
-    version: "0.1.0"
+#version: "0.1.0"
```

I think it can be added back now. And the version should be 0.2.0, I suppose.
```diff
 googlepubsub:
-  projectId: CHANGE_ME
-  subscriptionId: CHANGE_ME
+  project_id: CHANGE_ME
```

Is the broker bumped? Can this work?
```diff
-spec:
-  adapter:
-    version: "0.1.0"
+#version: "0.1.0"
```

So this version can be enabled as 0.2 now, I suppose.
```yaml
# httpCaFile: "/etc/maestro/certs/https/ca.crt"
# Source identifier for CloudEvents routing (must be unique across adapters)
# Environment variable: HYPERFLEET_MAESTRO_SOURCE_ID
source_id: "hyperfleet-adapter"
```

How about naming the source_id from the adapter name parameter?

```yaml
# Client identifier (defaults to source_id if not specified)
# Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
client_id: "hyperfleet-adapter-client"
```
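The comment in the config above says `client_id` defaults to `source_id` when not specified. A minimal sketch of that fallback, assuming a hypothetical `MaestroConfig` struct whose fields mirror the YAML keys (not the adapter's actual types):

```go
package main

import "fmt"

// MaestroConfig is a hypothetical slice of the Maestro client settings;
// field names mirror the snake_case YAML keys.
type MaestroConfig struct {
	SourceID string // source_id
	ClientID string // client_id (optional)
}

// effectiveClientID applies the documented default: use client_id when
// set, otherwise fall back to source_id.
func effectiveClientID(c MaestroConfig) string {
	if c.ClientID != "" {
		return c.ClientID
	}
	return c.SourceID
}

func main() {
	// No client_id: falls back to source_id.
	fmt.Println(effectiveClientID(MaestroConfig{SourceID: "hyperfleet-adapter"}))
	// Explicit client_id wins.
	fmt.Println(effectiveClientID(MaestroConfig{
		SourceID: "hyperfleet-adapter",
		ClientID: "hyperfleet-adapter-client",
	}))
}
```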
Force-pushed from 011c961 to 8ef6227.
/test presubmits-integration
Force-pushed from 5f7fa33 to 8b6c016.
Force-pushed from 8b6c016 to 92e0696.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ciaranRoche

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing `/approve` in a comment.
Merged 0bcbd77 into openshift-hyperfleet:main.
https://issues.redhat.com/browse/HYPERFLEET-551
This PR adopts the new HyperFleet configuration standard across the adapter framework, replacing the Kubernetes-style apiVersion/kind/metadata/spec envelope with a flat, plain-YAML structure.
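The shape change can be sketched as a before/after. The `apiVersion`/`kind` values and the `0.2.0` version below are illustrative, not taken from the actual schema:

```yaml
# Before: CRD-style envelope
apiVersion: hyperfleet.io/v1alpha1   # illustrative value
kind: AdapterConfig                  # illustrative value
metadata:
  name: my-adapter
spec:
  adapter:
    version: "0.1.0"

# After: flat, snake_case structure
adapter:
  name: my-adapter
  version: "0.2.0"
```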
- This is a breaking change for adapters.
- Changes are required in the infra and e2e repositories.
Key changes:
- `metadata.name` is now `adapter.name`
- `1.25.7`, which corresponds to the Go version
- Loading from files now validates against non-existent properties, and the adapter will fail to start on an invalid config
Adds a new endpoint `/config` to the health server to display the current configuration. It is enabled with `debug_log=true`, and sensitive values are redacted.

Test plan
charts/examples/maestro
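The redaction behind the `/config` endpoint described above could look roughly like the following. The function name, the flat-map representation, and the sensitive-key heuristic are all assumptions for illustration, not the adapter's actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// redactConfig returns a copy of a flattened config map with values
// masked for keys that look sensitive. A heuristic sketch; the real
// redaction rules may differ.
func redactConfig(cfg map[string]string) map[string]string {
	out := make(map[string]string, len(cfg))
	for k, v := range cfg {
		lk := strings.ToLower(k)
		if strings.Contains(lk, "secret") ||
			strings.Contains(lk, "token") ||
			strings.Contains(lk, "password") {
			out[k] = "***REDACTED***"
		} else {
			out[k] = v
		}
	}
	return out
}

func main() {
	cfg := map[string]string{
		"adapter.name":             "my-adapter",
		"clients.broker.api_token": "s3cr3t",
	}
	// A /config HTTP handler would serialize the redacted map like this.
	b, _ := json.Marshal(redactConfig(cfg))
	fmt.Println(string(b))
}
```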
Summary by CodeRabbit