
Conversation

@rh-amarin (Contributor) commented Feb 9, 2026

Summary

  • Add Maestro transport client support for delivering resources to remote clusters via ManifestWork
    • Note that there is no shared interface for resource clients yet
    • The executor calls the Kubernetes or Maestro client directly
  • Implement comprehensive config validators with CEL expression parsing, template variable validation, and file reference checks
  • Clean up legacy config templates (removed 1,200+ lines of obsolete configs)
  • Add new transport configuration block supporting both kubernetes (direct) and maestro (ManifestWork-based) client types
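
For orientation, here is a rough sketch of how the three new config types named later in this review (TransportConfig, MaestroTransportConfig, ManifestWorkConfig) might be shaped; the exact fields and yaml tags are assumptions inferred from this description, not the merged code:

  // Hypothetical placement and shapes; only the type and field names
  // mentioned in this PR are taken from the source.
  package config_loader

  type TransportConfig struct {
      // Client selects the delivery path: "kubernetes" (default) or "maestro".
      Client  string                  `yaml:"client"`
      Maestro *MaestroTransportConfig `yaml:"maestro,omitempty"`
  }

  type MaestroTransportConfig struct {
      // TargetCluster supports template rendering, e.g. "{{ .clusterName }}".
      TargetCluster string              `yaml:"targetCluster"`
      ManifestWork  *ManifestWorkConfig `yaml:"manifestWork,omitempty"`
  }

  type ManifestWorkConfig struct {
      // Name supports template rendering.
      Name        string            `yaml:"name"`
      Labels      map[string]string `yaml:"labels,omitempty"`
      Annotations map[string]string `yaml:"annotations,omitempty"`
      // DeleteOption may well be a struct in the real code; a string
      // placeholder keeps this sketch simple.
      DeleteOption string `yaml:"deleteOption,omitempty"`
  }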

Key Changes

Transport Layer

  • New transport.client field: kubernetes (default) or maestro
  • Maestro transport builds ManifestWork resources with configurable name, labels, annotations, and deleteOption
  • Supports template rendering in targetCluster and manifestWork.name fields
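
The template rendering for these fields plausibly goes through Go's text/template; a self-contained sketch follows (the repo's actual renderTemplate helper, referenced elsewhere in this review, may differ):

  package main

  import (
      "bytes"
      "fmt"
      "text/template"
  )

  // renderTemplate shows what a field-level renderer could look like; the
  // repo's helper of the same name may differ.
  func renderTemplate(tmpl string, params map[string]interface{}) (string, error) {
      t, err := template.New("field").Parse(tmpl)
      if err != nil {
          return "", fmt.Errorf("parse template %q: %w", tmpl, err)
      }
      var buf bytes.Buffer
      if err := t.Execute(&buf, params); err != nil {
          return "", fmt.Errorf("render template %q: %w", tmpl, err)
      }
      return buf.String(), nil
  }

  func main() {
      // Render a manifestWork.name the way the transport layer would.
      name, err := renderTemplate("manifests-{{ .clusterName }}", map[string]interface{}{"clusterName": "cluster1"})
      if err != nil {
          panic(err)
      }
      fmt.Println(name) // manifests-cluster1
  }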

Cleanup

  • Removed /configs directory containing outdated templates and documentation
  • Consolidated examples under /charts/examples

Summary by CodeRabbit

  • New Features

    • Maestro added as an alternative transport for delivering resources.
    • Per-resource transport selection (Kubernetes or Maestro) supported.
  • Documentation

    • Clarified configuration priority: CLI flags > env vars > file > defaults.
    • Added and expanded configuration examples, including Maestro and Kubernetes transport samples.
    • Removed several legacy sample templates and deprecated broker documentation.
  • Bug Fixes

    • Improved config validation and manifest-loading robustness.

openshift-ci bot commented Feb 9, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign mischulee for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

coderabbitai bot commented Feb 9, 2026

Walkthrough

Adds optional Maestro transport for resource delivery and per-resource transport configuration types (TransportConfig, MaestroTransportConfig, ManifestWorkConfig). Executor and ResourceExecutor gain Maestro client wiring and a Maestro delivery path alongside existing Kubernetes path. Introduces client_factory helpers to create API, K8s, and Maestro clients and updates main initialization to use them. Config loader now conditionally loads manifest and ManifestWork refs and validates transport-related fields. Removes several legacy example/config template files and broker documentation; README and chart example files updated.

Sequence Diagram(s)

sequenceDiagram
    participant Adapter as Adapter (cmd/adapter/main.go)
    participant Config as Config Loader
    participant Executor as Executor Builder
    participant ResExec as ResourceExecutor
    participant K8s as Kubernetes API
    participant Maestro as Maestro API

    Adapter->>Config: Load adapter & task configs
    Config->>Config: Parse transport and load manifest refs
    Adapter->>Adapter: Create API & K8s clients via client_factory
    Adapter->>Adapter: Optionally create Maestro client via client_factory
    Adapter->>Executor: Build executor with clients (K8s, maybe Maestro)

    Executor->>ResExec: Execute resource
    ResExec->>ResExec: Determine transport client type

    alt transport == kubernetes
        ResExec->>K8s: applyResourceK8s (apply manifest)
        K8s-->>ResExec: result
    else transport == maestro
        ResExec->>ResExec: build ManifestWork from manifest
        ResExec->>Maestro: apply ManifestWork
        Maestro-->>ResExec: result
    end

    ResExec-->>Executor: resource result
    Executor-->>Adapter: execution complete

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Suggested labels

lgtm, approved

Suggested reviewers

  • ciaranRoche
  • crizzo71
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: docstring coverage is 71.43%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: the title accurately describes the main change (implementing the Maestro DSL loader with transport client support for resources), which aligns with the core objective of adding Maestro transport delivery.


@coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
README.md (1)

190-196: ⚠️ Potential issue | 🟡 Minor

Fix typo: “overriden” → “overridden.”
Minor doc polish.

✏️ Proposed fix
-  - Configure topic/subscription in the `adapter-config.yaml` but can be overriden with env variables or cli params
+  - Configure topic/subscription in the `adapter-config.yaml` but can be overridden with env variables or CLI params
🤖 Fix all issues with AI agents
In `@internal/executor/resource_executor.go`:
- Around line 356-366: The generation annotation is being lost because
work.SetAnnotations is called twice and the second call in
applyManifestWorkSettings overwrites all annotations; instead merge annotations:
when copying generation from manifest (using manifestAnnotations and
constants.AnnotationGeneration) read existing work.GetAnnotations(), create a
merged map that preserves existing keys (including any metadata.annotations from
RefContent), set/overwrite the generation key in that merged map, and then call
work.SetAnnotations once with the merged map; similarly, update
applyManifestWorkSettings to merge incoming metadata.annotations with
work.GetAnnotations() rather than replacing them so existing generation and
other annotations are preserved.
🧹 Nitpick comments (3)
internal/executor/types.go (1)

63-67: Add config-time validation for MaestroClient when maestro transport is used.

While applyResourceMaestro already validates at runtime that maestroClient is not nil, validation at config initialization time would catch the issue earlier. Consider adding a check in validateExecutorConfig to iterate over resources and require MaestroClient when any resource uses TransportClientMaestro.
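
A minimal sketch of that suggested check, using stand-in types since the real Resource and TransportConfig live in the repo's internal packages:

  package main

  import "fmt"

  // Stand-ins for the repo's internal types; shapes are assumptions.
  type TransportConfig struct{ Client string }

  func (t *TransportConfig) GetClientType() string {
      if t == nil || t.Client == "" {
          return "kubernetes" // default per the PR description
      }
      return t.Client
  }

  type Resource struct {
      Name      string
      Transport *TransportConfig
  }

  // validateMaestroWiring fails fast when any resource selects maestro
  // transport but no Maestro client was wired in.
  func validateMaestroWiring(resources []Resource, maestroClient interface{}) error {
      for _, r := range resources {
          if r.Transport.GetClientType() == "maestro" && maestroClient == nil {
              return fmt.Errorf("resource %q uses maestro transport but no Maestro client is configured", r.Name)
          }
      }
      return nil
  }

  func main() {
      rs := []Resource{{Name: "maestro", Transport: &TransportConfig{Client: "maestro"}}}
      fmt.Println(validateMaestroWiring(rs, nil))
  }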

internal/executor/resource_executor.go (2)

388-405: Labels from metadata.labels overwrite rather than merge.

On line 403, work.SetLabels(labelMap) overwrites any existing labels. This is inconsistent with the root-level labels handling (lines 436-444) which merges with existing. Consider using the same merge pattern for consistency:

♻️ Suggested fix for consistency
 		if labels, ok := metadata["labels"].(map[string]interface{}); ok {
 			labelMap := make(map[string]string)
 			for k, v := range labels {
 				if str, ok := v.(string); ok {
 					rendered, err := renderTemplate(str, params)
 					if err != nil {
 						return fmt.Errorf("failed to render label value for key %s: %w", k, err)
 					}
 					labelMap[k] = rendered
 				}
 			}
-			work.SetLabels(labelMap)
+			// Merge with existing labels
+			existing := work.GetLabels()
+			if existing == nil {
+				existing = make(map[string]string)
+			}
+			for k, v := range labelMap {
+				existing[k] = v
+			}
+			work.SetLabels(existing)
 			re.log.Debugf(ctx, "Applied labels: %v", labelMap)
 		}

297-299: Operation result is always OperationCreate but the actual operation is an upsert.

The comment correctly states "ManifestWork apply is always an upsert", but the operation is recorded as OperationCreate. This could be misleading in logs and metrics. Consider using a more accurate operation type or adding a new constant like OperationApply or OperationUpsert.
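
One possible shape for such a constant; OperationApply is the reviewer's suggestion, and the surrounding declarations are assumed:

  // Hypothetical sketch; only OperationCreate is confirmed by this review.
  package executor

  type OperationType string

  const (
      OperationCreate OperationType = "create"
      // OperationApply (or OperationUpsert) would record ManifestWork
      // upserts more accurately than reporting every apply as a create.
      OperationApply OperationType = "apply"
  )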

Comment on lines +356 to +366
// Copy the generation annotation from the manifest to the ManifestWork
// This is required by the maestro client for generation-based idempotency
manifestAnnotations := manifest.GetAnnotations()
if manifestAnnotations != nil {
if gen, ok := manifestAnnotations[constants.AnnotationGeneration]; ok {
work.SetAnnotations(map[string]string{
constants.AnnotationGeneration: gen,
})
re.log.Debugf(ctx, "Set ManifestWork generation annotation: %s", gen)
}
}

⚠️ Potential issue | 🟠 Major

Generation annotation may be overwritten by RefContent settings.

The generation annotation is set on lines 361-364, but applyManifestWorkSettings calls work.SetAnnotations() (line 419) which overwrites all annotations. This could lose the generation annotation if metadata.annotations is present in RefContent.

Consider merging annotations instead of overwriting:

🐛 Proposed fix
 		// Apply annotations from metadata
 		if annotations, ok := metadata["annotations"].(map[string]interface{}); ok {
 			annotationMap := make(map[string]string)
 			for k, v := range annotations {
 				if str, ok := v.(string); ok {
 					rendered, err := renderTemplate(str, params)
 					if err != nil {
 						return fmt.Errorf("failed to render annotation value for key %s: %w", k, err)
 					}
 					annotationMap[k] = rendered
 				}
 			}
-			work.SetAnnotations(annotationMap)
+			// Merge with existing annotations to preserve generation annotation
+			existing := work.GetAnnotations()
+			if existing == nil {
+				existing = make(map[string]string)
+			}
+			for k, v := range annotationMap {
+				existing[k] = v
+			}
+			work.SetAnnotations(existing)
 			re.log.Debugf(ctx, "Applied annotations: %v", annotationMap)
 		}

Also applies to: 403-421


}

// Configure TLS if auth type is "tls"
if maestroConfig.Auth.Type == "tls" && maestroConfig.Auth.TLSConfig != nil {
Contributor

This could lead to a confusing scenario where a user
configures auth.type: tls but forgets to provide tlsConfig — the client would silently
connect without mTLS, potentially exposing traffic.

Consider returning an explicit error when the configuration is inconsistent:

  if maestroConfig.Auth.Type == "tls" {
      if maestroConfig.Auth.TLSConfig == nil {
          return nil, fmt.Errorf("maestro auth type is 'tls' but tlsConfig is not provided")
      }
      clientConfig.CAFile = maestroConfig.Auth.TLSConfig.CAFile
      clientConfig.ClientCertFile = maestroConfig.Auth.TLSConfig.CertFile
      clientConfig.ClientKeyFile = maestroConfig.Auth.TLSConfig.KeyFile
  }

}

// Error if env var is not set and no default provided
if envValue == "" && param.Default == nil {
Contributor

Previously, a parameter like:

  - name: myParam
    source: "env.MY_REQUIRED_VAR"
    required: true

would fail validation if MY_REQUIRED_VAR was not set and no default was provided. Now it passes validation regardless of whether the env var exists.

Was this change intentional? If so, it would be good to document why the runtime env check was
removed (e.g., validation runs at build time, not in the target environment). If not, this is
a regression that could lead to silent failures at runtime when required env vars are
missing.
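
For reference, a minimal sketch of the kind of build-time check described above, with Param fields inferred from the YAML shown (illustrative only, not the repo's code):

  package main

  import (
      "fmt"
      "os"
      "strings"
  )

  // Param mirrors the YAML shown above; field shapes are assumptions.
  type Param struct {
      Name     string
      Source   string // e.g. "env.MY_REQUIRED_VAR"
      Required bool
      Default  *string
  }

  // validateEnvParam fails when a required env-sourced param is unset
  // and has no default, the behavior the reviewer describes as removed.
  func validateEnvParam(p Param) error {
      envName := strings.TrimPrefix(p.Source, "env.")
      if os.Getenv(envName) == "" && p.Default == nil && p.Required {
          return fmt.Errorf("param %q: env var %s is not set and no default provided", p.Name, envName)
      }
      return nil
  }

  func main() {
      err := validateEnvParam(Param{Name: "myParam", Source: "env.MY_REQUIRED_VAR", Required: true})
      fmt.Println(err)
  }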

Contributor Author

You are right, I confused the moment when the adapter env gets evaluated.
At runtime, the env for the Adapter instance will not change. That is different for the Adapter Task itself: for example, a Job is created in the resources phase, and its manifest can introduce new env variables.

# Resources with valid K8s manifests
resources:
- name: "maestro"
Contributor

This kind of configuration loses the ManifestWork ref. Also, there should be multiple manifests in the ManifestWork.

Contributor

And we should put manifests that belong together in the same ManifestWork. The number of manifests cannot be changed once the ManifestWork is created.
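
A sketch of grouping several manifests into one ManifestWork using the open-cluster-management work/v1 API, which Maestro is assumed to consume; the work name and ConfigMap payloads are illustrative:

  package main

  import (
      "fmt"

      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/apimachinery/pkg/runtime"
      workv1 "open-cluster-management.io/api/work/v1"
  )

  // buildWork groups raw manifests into a single ManifestWork, as suggested.
  func buildWork(name string, raws [][]byte) *workv1.ManifestWork {
      manifests := make([]workv1.Manifest, 0, len(raws))
      for _, raw := range raws {
          manifests = append(manifests, workv1.Manifest{RawExtension: runtime.RawExtension{Raw: raw}})
      }
      return &workv1.ManifestWork{
          ObjectMeta: metav1.ObjectMeta{Name: name},
          Spec: workv1.ManifestWorkSpec{
              Workload: workv1.ManifestsTemplate{Manifests: manifests},
          },
      }
  }

  func main() {
      work := buildWork("example-work", [][]byte{
          []byte(`{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"a"}}`),
          []byte(`{"apiVersion":"v1","kind":"ConfigMap","metadata":{"name":"b"}}`),
      })
      fmt.Println(len(work.Spec.Workload.Manifests)) // 2
  }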

hyperfleet.io/cluster-name: "{{ .clusterName }}"
annotations:
hyperfleet.io/generation: "{{ .generationSpec }}"
discovery:
Contributor

Discovery is manifest-level, not ManifestWork-level.

Contributor

The discovery can help us locate manifest attributes with a JSONPath.

}

// createMaestroClient creates a Maestro client from the config
func createMaestroClient(ctx context.Context, maestroConfig *config_loader.MaestroClientConfig, log logger.Logger) (*maestro_client.Client, error) {
Contributor

How about moving the createXXXClient functions to another file to simplify main.go?

transport:
client: "maestro"
maestro:
targetCluster: cluster1
Contributor

manifestWork.ref here should be required and have validation.

// Replace manifest with loaded content
resource.Manifest = content
// Load transport.maestro.manifestWork.ref
if resource.Transport != nil && resource.Transport.Maestro != nil && resource.Transport.Maestro.ManifestWork != nil {
Contributor

ManifestWork must not be nil.

}

// Validate transport.maestro.manifestWork.ref in spec.resources
if resource.Transport != nil && resource.Transport.Maestro != nil && resource.Transport.Maestro.ManifestWork != nil {
Contributor

How about a split-out function for Maestro validation only?

}

// validateTransportConfig validates transport configuration for all resources
func (v *TaskConfigValidator) validateTransportConfig() {
Contributor

This function only validates Maestro.

// Step 2: Delegate to applyResource which handles discovery, generation comparison, and operations
return re.applyResource(ctx, resource, manifest, execCtx)
// Step 2: Check transport type and route to appropriate client
clientType := resource.Transport.GetClientType()
Contributor

Can we have resource.Transport.GetClient()? Then we would just return maestroClient or k8sClient. My executor has a transportClient interface defined that can accept both maestroClient and k8sClient.
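
A rough sketch of that interface idea; the method set and names here are assumptions, not the repo's actual API:

  package main

  import (
      "context"
      "fmt"
  )

  // Stand-ins for the repo's types; shapes are assumptions.
  type Resource struct{ Name string }
  type ResourceResult struct{ Operation string }

  // transportClient is the common interface this comment suggests: both the
  // Kubernetes and Maestro clients implement it, and the executor picks one.
  type transportClient interface {
      Apply(ctx context.Context, r Resource) (ResourceResult, error)
  }

  type k8sTransport struct{}

  func (k8sTransport) Apply(_ context.Context, _ Resource) (ResourceResult, error) {
      return ResourceResult{Operation: "apply-k8s"}, nil
  }

  type maestroTransport struct{}

  func (maestroTransport) Apply(_ context.Context, _ Resource) (ResourceResult, error) {
      return ResourceResult{Operation: "apply-manifestwork"}, nil
  }

  func main() {
      var tc transportClient = maestroTransport{}
      res, _ := tc.Apply(context.Background(), Resource{Name: "maestro"})
      fmt.Println(res.Operation) // apply-manifestwork
  }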

}

// Validate maestro transport config is present
if resource.Transport == nil || resource.Transport.Maestro == nil {
Contributor

I assume this condition check should happen outside of this function, before calling applyResourceMaestro.


// applyResourceMaestro handles resource delivery via Maestro ManifestWork.
// It builds a ManifestWork containing the manifest and applies it to the target cluster.
func (re *ResourceExecutor) applyResourceMaestro(ctx context.Context, resource config_loader.Resource, manifest *unstructured.Unstructured, execCtx *ExecutionContext) (ResourceResult, error) {
Contributor

In my PR, I refactored this part. Let me shrink the PR to split this part out and get it merged first, instead of having it stuck behind the steps implementation.

openshift-ci bot commented Feb 10, 2026

@rh-amarin: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Failed tests:
  • ci/prow/unit (commit 0d78626, required): rerun with /test unit
  • ci/prow/presubmits-integration (commit 0d78626, required): rerun with /test presubmits-integration

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
README.md (1)

191-196: ⚠️ Potential issue | 🟡 Minor

Fix minor typo in broker override note.

✏️ Suggested edit
-  - Configure topic/subscription in the `adapter-config.yaml` but can be overriden with env variables or cli params
+  - Configure topic/subscription in the `adapter-config.yaml` but can be overridden with env variables or CLI params
🤖 Fix all issues with AI agents
In `@charts/examples/adapter-config.yaml`:
- Line 48: Update the example to use secure defaults by changing the adapter
configuration key insecure from true to false (replace "insecure: true" with
"insecure: false" in the adapter-config.yaml example) and add a brief comment or
note near the insecure setting explaining that insecure can be enabled
explicitly for local development only; ensure the example still documents how to
opt into plaintext for dev environments so maintainers know how to override
insecure when needed.
- Around line 14-16: Change the example to avoid enabling full merged-config
debug output by setting debugConfig to false (do not log secrets), and keep
log.level at debug if desired; update the adapter-config.yaml entry for the
debugConfig key and add a short inline comment next to debugConfig explaining to
enable it only for local troubleshooting to avoid leaking secrets in real
deployments.

In `@cmd/adapter/main.go`:
- Around line 287-313: The executor can be built even when resources require
Maestro transport but no Maestro client is provided; update
validateExecutorConfig to detect this by scanning config.Spec.Resources (or
wherever resource transports are defined) for any resource with
TransportClientMaestro and return an error if maestroClient is nil, and then
call validateExecutorConfig (pass the config and the maestroClient from
execBuilder or the local maestroClient variable) before calling
execBuilder.Build(); reference validateExecutorConfig(), applyResourceMaestro(),
TransportClientMaestro, MaestroClient, executor.NewBuilder(),
execBuilder.WithMaestroClient(), and execBuilder.Build() when locating the
change.

In `@internal/client_factory/maestro_client.go`:
- Around line 13-39: Add a nil-check at the start of CreateMaestroClient to
validate maestroConfig (and return a clear error) before any field access to
avoid panics; specifically, check if maestroConfig == nil and return an error
like "nil maestroConfig provided" from CreateMaestroClient, and also guard
accesses to maestroConfig.Auth (e.g., check maestroConfig.Auth != nil before
reading Auth.Type or TLSConfig) so clientConfig population and the TLS branch
(TLSConfig, CAFile, CertFile, KeyFile) never dereference a nil pointer.

In `@internal/executor/resource_executor.go`:
- Around line 203-206: The failure logging currently dumps the full manifest
(manifest.Object) via re.log.Debugf which can leak Secret data; update the
ResourceExecutor failure path to either skip logging for sensitive kinds (e.g.,
"Secret") or to redact sensitive fields before rendering (mask common secret
keys and data/strings) and then log the sanitized YAML instead of the raw
manifest, and apply the same change to the Maestro failure logging codepath so
both use the sanitized output rather than dumping manifest.Object or raw
manifests.
🧹 Nitpick comments (2)
charts/examples/adapter-task-config.yaml (1)

79-84: Avoid hardcoding the Maestro target cluster in the example.

Using a static cluster1 makes the example easy to misapply; consider templating it to the event/lookup result so it routes correctly.

✏️ Suggested change
-        maestro:
-          targetCluster: cluster1
+        maestro:
+          targetCluster: "{{ .clusterName }}"
internal/config_loader/validator.go (1)

544-574: Flag maestro config when transport.client isn’t maestro. This avoids silently defaulting to Kubernetes if users set a maestro block but forget the client field.

♻️ Suggested validation tweak
-		// Validate maestro config is present when client=maestro
-		if resource.Transport.Client == TransportClientMaestro {
+		clientType := resource.Transport.GetClientType()
+		// Validate maestro config is present when client=maestro
+		if clientType == TransportClientMaestro {
 			if resource.Transport.Maestro == nil {
 				v.errors.Add(basePath, "maestro configuration is required when client is 'maestro'")
 				continue
 			}
 			// ...
+		} else if resource.Transport.Maestro != nil {
+			v.errors.Add(basePath+"."+FieldMaestro, "maestro config is set but transport.client is not 'maestro'")
 		}

Comment on lines +14 to +16
debugConfig: true
log:
level: debug

⚠️ Potential issue | 🟠 Major

Avoid enabling full config debug logging in the example.

debugConfig: true logs the full merged config and can leak secrets when users copy this example into real deployments. Consider defaulting it to false and leaving a comment for troubleshooting.

🔒 Suggested change
-  debugConfig: true
-  log:
-    level: debug
+  debugConfig: false
+  log:
+    level: info

# Client identifier (defaults to sourceId if not specified)
# Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
clientId: "hyperfleet-adapter-client"
insecure: true

⚠️ Potential issue | 🟡 Minor

Prefer secure defaults for Maestro in the example.

insecure: true makes the example opt into plaintext. Safer to default to false and allow enabling insecure mode explicitly for local dev.

🔧 Suggested change
-      insecure: true
+      insecure: false

Comment on lines +287 to 313
// Create Maestro client if configured
var maestroClient maestro_client.ManifestWorkClient
if config.Spec.Clients.Maestro != nil {
log.Info(ctx, "Creating Maestro client...")
maestroClient, err = client_factory.CreateMaestroClient(ctx, config.Spec.Clients.Maestro, log)
if err != nil {
errCtx := logger.WithErrorField(ctx, err)
log.Errorf(errCtx, "Failed to create Maestro client")
return fmt.Errorf("failed to create Maestro client: %w", err)
}
log.Info(ctx, "Maestro client created successfully")
}

// Create the executor using the builder pattern
log.Info(ctx, "Creating event executor...")
exec, err := executor.NewBuilder().
execBuilder := executor.NewBuilder().
WithConfig(config).
WithAPIClient(apiClient).
WithK8sClient(k8sClient).
WithLogger(log).
Build()
WithLogger(log)

if maestroClient != nil {
execBuilder = execBuilder.WithMaestroClient(maestroClient)
}

exec, err := execBuilder.Build()
if err != nil {

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for Maestro client validation logic
rg -n "WithMaestroClient|MaestroClient|TransportClientMaestro" internal/executor internal/config_loader -C 3

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 8184


🏁 Script executed:

#!/bin/bash
# Check executor Build() method for validation
rg -n "func.*Build\(\)" internal/executor/executor.go -A 20

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 181


🏁 Script executed:

#!/bin/bash
# Check applyResourceMaestro implementation
rg -n "func.*applyResourceMaestro" internal/executor -A 15

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 1449


🏁 Script executed:

#!/bin/bash
# Check NewExecutor function for validation logic
rg -n "func NewExecutor" internal/executor/executor.go -A 30

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 999


🏁 Script executed:

#!/bin/bash
# Get the full validateExecutorConfig function
rg -n "func validateExecutorConfig" internal/executor/executor.go -A 50

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 2081


🏁 Script executed:

#!/bin/bash
# Check if there's any config-level validation linking transport requirements to client initialization
rg -n "TransportClientMaestro" internal/config_loader/validator.go -B 5 -A 10

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 759


Add build-time validation to fail fast when resources require Maestro transport but no Maestro client is configured. The executor currently has a runtime guard in applyResourceMaestro() (lines 235–239) that catches a nil Maestro client, but the executor builds successfully even if resources require maestro transport and no Maestro client is configured. Validate in validateExecutorConfig() that if any resource uses TransportClientMaestro, the MaestroClient must be non-nil, enabling fail-fast at startup rather than during execution.


Comment on lines +13 to +39
// CreateMaestroClient creates a Maestro client from the config
func CreateMaestroClient(ctx context.Context, maestroConfig *config_loader.MaestroClientConfig, log logger.Logger) (*maestro_client.Client, error) {
clientConfig := &maestro_client.Config{
MaestroServerAddr: maestroConfig.HTTPServerAddress,
GRPCServerAddr: maestroConfig.GRPCServerAddress,
SourceID: maestroConfig.SourceID,
Insecure: maestroConfig.Insecure,
}

// Parse timeout if specified
if maestroConfig.Timeout != "" {
timeout, err := time.ParseDuration(maestroConfig.Timeout)
if err != nil {
return nil, fmt.Errorf("invalid maestro timeout %q: %w", maestroConfig.Timeout, err)
}
clientConfig.HTTPTimeout = timeout
}

// Configure TLS if auth type is "tls"
if maestroConfig.Auth.Type == "tls" {
if maestroConfig.Auth.TLSConfig == nil {
return nil, fmt.Errorf("maestro auth type is 'tls' but tlsConfig is not provided")
}
clientConfig.CAFile = maestroConfig.Auth.TLSConfig.CAFile
clientConfig.ClientCertFile = maestroConfig.Auth.TLSConfig.CertFile
clientConfig.ClientKeyFile = maestroConfig.Auth.TLSConfig.KeyFile
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

rg -n "type MaestroClientConfig|type Maestro.*Auth|TLSConfig" internal/config_loader/types.go -C 3

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 338


🏁 Script executed:

cat -n internal/config_loader/types.go | head -100

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 4215


🏁 Script executed:

rg -n "type MaestroClientConfig|type.*Auth|TLSConfig" internal/config_loader/types.go -A 10

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 1633


🏁 Script executed:

rg -n "CreateMaestroClient" internal/ -B 2 -A 2

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 663


🏁 Script executed:

cat -n internal/client_factory/maestro_client.go | head -50

Repository: openshift-hyperfleet/hyperfleet-adapter

Length of output: 1871


Add nil guard for maestroConfig to prevent panic. Exported factory functions should validate input parameters; without this guard, nil config will panic on field access.

Proposed nil check
 func CreateMaestroClient(ctx context.Context, maestroConfig *config_loader.MaestroClientConfig, log logger.Logger) (*maestro_client.Client, error) {
+	if maestroConfig == nil {
+		return nil, fmt.Errorf("maestro config is nil")
+	}
 	clientConfig := &maestro_client.Config{
 		MaestroServerAddr: maestroConfig.HTTPServerAddress,
 		GRPCServerAddr:    maestroConfig.GRPCServerAddress,
 		SourceID:          maestroConfig.SourceID,
 		Insecure:          maestroConfig.Insecure,
 	}

Comment on lines +203 to +206
// Log the full manifest for debugging
if manifestYAML, marshalErr := yaml.Marshal(manifest.Object); marshalErr == nil {
re.log.Debugf(errCtx, "Resource[%s] failed manifest:\n%s", resource.Name, string(manifestYAML))
}

⚠️ Potential issue | 🟠 Major

Avoid logging full manifests that may include secrets.
Dumping full YAML on failure can leak Secret data; the same risk appears in the Maestro failure logs. Consider redaction or skipping for sensitive kinds.

🔒 Suggested guard (apply similarly in the Maestro failure path)
-		if manifestYAML, marshalErr := yaml.Marshal(manifest.Object); marshalErr == nil {
-			re.log.Debugf(errCtx, "Resource[%s] failed manifest:\n%s", resource.Name, string(manifestYAML))
-		}
+		if manifest.GetKind() != "Secret" {
+			if manifestYAML, marshalErr := yaml.Marshal(manifest.Object); marshalErr == nil {
+				re.log.Debugf(errCtx, "Resource[%s] failed manifest:\n%s", resource.Name, string(manifestYAML))
+			}
+		} else {
+			re.log.Debugf(errCtx, "Resource[%s] failed manifest: <redacted Secret>", resource.Name)
+		}