HYPERFLEET-569 - feat: Implement maestro DSL loader #41
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here. Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
e904536 to 8a0c194 (compare)
Walkthrough
Adds optional Maestro transport for resource delivery and per-resource transport configuration types (TransportConfig, MaestroTransportConfig, ManifestWorkConfig). Executor and ResourceExecutor gain Maestro client wiring and a Maestro delivery path alongside the existing Kubernetes path. Introduces client_factory helpers to create API, K8s, and Maestro clients and updates main initialization to use them. The config loader now conditionally loads manifest and ManifestWork refs and validates transport-related fields. Removes several legacy example/config template files and broker documentation; README and chart example files are updated.
Sequence Diagram(s)
sequenceDiagram
participant Adapter as Adapter (cmd/adapter/main.go)
participant Config as Config Loader
participant Executor as Executor Builder
participant ResExec as ResourceExecutor
participant K8s as Kubernetes API
participant Maestro as Maestro API
Adapter->>Config: Load adapter & task configs
Config->>Config: Parse transport and load manifest refs
Adapter->>Adapter: Create API & K8s clients via client_factory
Adapter->>Adapter: Optionally create Maestro client via client_factory
Adapter->>Executor: Build executor with clients (K8s, maybe Maestro)
Executor->>ResExec: Execute resource
ResExec->>ResExec: Determine transport client type
alt transport == kubernetes
ResExec->>K8s: applyResourceK8s (apply manifest)
K8s-->>ResExec: result
else transport == maestro
ResExec->>ResExec: build ManifestWork from manifest
ResExec->>Maestro: apply ManifestWork
Maestro-->>ResExec: result
end
ResExec-->>Executor: resource result
Executor-->>Adapter: execution complete
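For orientation, here is a minimal Go sketch of the per-resource transport types named in the walkthrough (TransportConfig, MaestroTransportConfig, ManifestWorkConfig). Field names beyond transport.client, maestro.targetCluster, and maestro.manifestWork.name/ref, and the kubernetes default constant, are assumptions for illustration rather than the PR's exact definitions.

// Sketch only; exact field names and tags in the PR may differ.
type TransportClientType string

const (
	TransportClientKubernetes TransportClientType = "kubernetes" // assumed default constant
	TransportClientMaestro    TransportClientType = "maestro"
)

// TransportConfig selects how a resource is delivered.
type TransportConfig struct {
	Client  TransportClientType     `yaml:"client,omitempty"`
	Maestro *MaestroTransportConfig `yaml:"maestro,omitempty"`
}

// MaestroTransportConfig carries Maestro-specific delivery settings.
type MaestroTransportConfig struct {
	TargetCluster string              `yaml:"targetCluster"`
	ManifestWork  *ManifestWorkConfig `yaml:"manifestWork,omitempty"`
}

// ManifestWorkConfig names the ManifestWork and optionally references its content.
type ManifestWorkConfig struct {
	Name string `yaml:"name,omitempty"`
	Ref  string `yaml:"ref,omitempty"`
}

// GetClientType falls back to the Kubernetes path when transport is unset.
func (t *TransportConfig) GetClientType() TransportClientType {
	if t == nil || t.Client == "" {
		return TransportClientKubernetes
	}
	return t.Client
}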
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing touches
🧪 Generate unit tests (beta)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
README.md (1)
190-196: ⚠️ Potential issue | 🟡 Minor
Fix typo: “overriden” → “overridden.”
Minor doc polish.
✏️ Proposed fix
- - Configure topic/subscription in the `adapter-config.yaml` but can be overriden with env variables or cli params
+ - Configure topic/subscription in the `adapter-config.yaml` but can be overridden with env variables or CLI params
🤖 Fix all issues with AI agents
In `@internal/executor/resource_executor.go`:
- Around line 356-366: The generation annotation is being lost because
work.SetAnnotations is called twice and the second call in
applyManifestWorkSettings overwrites all annotations; instead merge annotations:
when copying generation from manifest (using manifestAnnotations and
constants.AnnotationGeneration) read existing work.GetAnnotations(), create a
merged map that preserves existing keys (including any metadata.annotations from
RefContent), set/overwrite the generation key in that merged map, and then call
work.SetAnnotations once with the merged map; similarly, update
applyManifestWorkSettings to merge incoming metadata.annotations with
work.GetAnnotations() rather than replacing them so existing generation and
other annotations are preserved.
🧹 Nitpick comments (3)
internal/executor/types.go (1)
63-67: Add config-time validation for MaestroClient when maestro transport is used.
While applyResourceMaestro already validates at runtime that maestroClient is not nil, validation at config initialization time would catch the issue earlier. Consider adding a check in validateExecutorConfig to iterate over resources and require MaestroClient when any resource uses TransportClientMaestro.
internal/executor/resource_executor.go (2)
388-405: Labels from metadata.labels overwrite rather than merge.
On line 403, work.SetLabels(labelMap) overwrites any existing labels. This is inconsistent with the root-level labels handling (lines 436-444), which merges with existing labels. Consider using the same merge pattern for consistency:
♻️ Suggested fix for consistency
 if labels, ok := metadata["labels"].(map[string]interface{}); ok {
 	labelMap := make(map[string]string)
 	for k, v := range labels {
 		if str, ok := v.(string); ok {
 			rendered, err := renderTemplate(str, params)
 			if err != nil {
 				return fmt.Errorf("failed to render label value for key %s: %w", k, err)
 			}
 			labelMap[k] = rendered
 		}
 	}
-	work.SetLabels(labelMap)
+	// Merge with existing labels
+	existing := work.GetLabels()
+	if existing == nil {
+		existing = make(map[string]string)
+	}
+	for k, v := range labelMap {
+		existing[k] = v
+	}
+	work.SetLabels(existing)
 	re.log.Debugf(ctx, "Applied labels: %v", labelMap)
 }
297-299: Operation result is always OperationCreate but the actual operation is an upsert.
The comment correctly states "ManifestWork apply is always an upsert", but the operation is recorded as OperationCreate. This could be misleading in logs and metrics. Consider using a more accurate operation type or adding a new constant like OperationApply or OperationUpsert.
// Copy the generation annotation from the manifest to the ManifestWork
// This is required by the maestro client for generation-based idempotency
manifestAnnotations := manifest.GetAnnotations()
if manifestAnnotations != nil {
	if gen, ok := manifestAnnotations[constants.AnnotationGeneration]; ok {
		work.SetAnnotations(map[string]string{
			constants.AnnotationGeneration: gen,
		})
		re.log.Debugf(ctx, "Set ManifestWork generation annotation: %s", gen)
	}
}
Generation annotation may be overwritten by RefContent settings.
The generation annotation is set on lines 361-364, but applyManifestWorkSettings calls work.SetAnnotations() (line 419) which overwrites all annotations. This could lose the generation annotation if metadata.annotations is present in RefContent.
Consider merging annotations instead of overwriting:
🐛 Proposed fix
// Apply annotations from metadata
if annotations, ok := metadata["annotations"].(map[string]interface{}); ok {
annotationMap := make(map[string]string)
for k, v := range annotations {
if str, ok := v.(string); ok {
rendered, err := renderTemplate(str, params)
if err != nil {
return fmt.Errorf("failed to render annotation value for key %s: %w", k, err)
}
annotationMap[k] = rendered
}
}
- work.SetAnnotations(annotationMap)
+ // Merge with existing annotations to preserve generation annotation
+ existing := work.GetAnnotations()
+ if existing == nil {
+ existing = make(map[string]string)
+ }
+ for k, v := range annotationMap {
+ existing[k] = v
+ }
+ work.SetAnnotations(existing)
re.log.Debugf(ctx, "Applied annotations: %v", annotationMap)
}
Also applies to: 403-421
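For reference, the merge-then-set pattern both call sites need can be factored into a small helper. This is a sketch; the name mergeAnnotations is illustrative, not part of the PR, and it assumes metav1 from k8s.io/apimachinery/pkg/apis/meta/v1.

// mergeAnnotations merges updates into the object's existing annotations so a
// later SetAnnotations call cannot drop previously set keys such as the
// generation annotation. Illustrative helper, not from the PR.
func mergeAnnotations(obj metav1.Object, updates map[string]string) {
	merged := obj.GetAnnotations()
	if merged == nil {
		merged = make(map[string]string, len(updates))
	}
	for k, v := range updates {
		merged[k] = v // incoming values win on key collisions
	}
	obj.SetAnnotations(merged)
}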
cmd/adapter/main.go (Outdated)

}

// Configure TLS if auth type is "tls"
if maestroConfig.Auth.Type == "tls" && maestroConfig.Auth.TLSConfig != nil {
This could lead to a confusing scenario where a user
configures auth.type: tls but forgets to provide tlsConfig — the client would silently
connect without mTLS, potentially exposing traffic.
Consider returning an explicit error when the configuration is inconsistent:
if maestroConfig.Auth.Type == "tls" {
if maestroConfig.Auth.TLSConfig == nil {
return nil, fmt.Errorf("maestro auth type is 'tls' but tlsConfig is not provided")
}
clientConfig.CAFile = maestroConfig.Auth.TLSConfig.CAFile
clientConfig.ClientCertFile = maestroConfig.Auth.TLSConfig.CertFile
clientConfig.ClientKeyFile = maestroConfig.Auth.TLSConfig.KeyFile
}

// Error if env var is not set and no default provided
if envValue == "" && param.Default == nil {
Previously, a parameter like:
- name: myParam
source: "env.MY_REQUIRED_VAR"
required: true
would fail validation if MY_REQUIRED_VAR was not set and no default was provided. Now it
passes validation regardless of whether the env var exists.
Was this change intentional? If so, it would be good to document why the runtime env check was
removed (e.g., validation runs at build time, not in the target environment). If not, this is
a regression that could lead to silent failures at runtime when required env vars are
missing.
You are right, I confused the moment when the adapter env gets evaluated.
At runtime, the env for the Adapter instance will not change. That is different for the Adapter Task itself: for example, a job created in the resources phase can have a manifest that introduces new env variables.
# Resources with valid K8s manifests
resources:
  - name: "maestro"
This kind of configuration loses the ManifestWork ref. Also, there should be support for multiple manifests in the ManifestWork.
And we should group manifests that belong together into the same ManifestWork; the number of manifests cannot be changed once the ManifestWork is created.
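To illustrate that grouping, here is a sketch of building one ManifestWork that carries several manifests, using the open-cluster-management work/v1 types; the function name buildManifestWork and its wiring into the adapter are assumptions, not code from this PR.

// Assumes workv1 "open-cluster-management.io/api/work/v1" and
// "k8s.io/apimachinery/pkg/runtime". Sketch only.
func buildManifestWork(name, namespace string, rawManifests [][]byte) *workv1.ManifestWork {
	manifests := make([]workv1.Manifest, 0, len(rawManifests))
	for _, raw := range rawManifests {
		// Each manifest is embedded as a RawExtension in the same ManifestWork.
		manifests = append(manifests, workv1.Manifest{RawExtension: runtime.RawExtension{Raw: raw}})
	}
	work := &workv1.ManifestWork{}
	work.SetName(name)
	work.SetNamespace(namespace) // hub namespace of the target cluster
	work.Spec.Workload.Manifests = manifests
	return work
}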
hyperfleet.io/cluster-name: "{{ .clusterName }}"
annotations:
  hyperfleet.io/generation: "{{ .generationSpec }}"
discovery:
Discovery is manifest-level, not ManifestWork-level.
The discovery config can help us locate the manifest attributes with jsonPath.
cmd/adapter/main.go (Outdated)

}

// createMaestroClient creates a Maestro client from the config
func createMaestroClient(ctx context.Context, maestroConfig *config_loader.MaestroClientConfig, log logger.Logger) (*maestro_client.Client, error) {
How about moving the createXXXClient functions to another file to simplify main.go?
transport:
  client: "maestro"
  maestro:
    targetCluster: cluster1
manifestWork.ref here should be required and have validation.
// Replace manifest with loaded content
resource.Manifest = content
// Load transport.maestro.manifestWork.ref
if resource.Transport != nil && resource.Transport.Maestro != nil && resource.Transport.Maestro.ManifestWork != nil {
ManifestWork must not be nil.
}

// Validate transport.maestro.manifestWork.ref in spec.resources
if resource.Transport != nil && resource.Transport.Maestro != nil && resource.Transport.Maestro.ManifestWork != nil {
How about having a separate function for maestro validation only?
}

// validateTransportConfig validates transport configuration for all resources
func (v *TaskConfigValidator) validateTransportConfig() {
This function only validates maestro.
- // Step 2: Delegate to applyResource which handles discovery, generation comparison, and operations
- return re.applyResource(ctx, resource, manifest, execCtx)
+ // Step 2: Check transport type and route to appropriate client
+ clientType := resource.Transport.GetClientType()
Can we have resource.Transport.GetClient()? Then we would just return maestroClient or k8sClient. My executor has a transportClient interface defined that can accept both maestroClient and k8sClient.
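A rough sketch of that idea, purely illustrative (the interface and the applier field names are not from either PR):

// transportClient abstracts how a rendered manifest reaches the target;
// both a Kubernetes applier and a Maestro ManifestWork applier could satisfy it.
type transportClient interface {
	Apply(ctx context.Context, resource config_loader.Resource, manifest *unstructured.Unstructured) (ResourceResult, error)
}

// getTransportClient routes by transport type; defaulting to the Kubernetes
// path preserves today's behaviour. The fields re.maestroApplier and
// re.k8sApplier are assumed for illustration.
func (re *ResourceExecutor) getTransportClient(resource config_loader.Resource) transportClient {
	if resource.Transport.GetClientType() == config_loader.TransportClientMaestro {
		return re.maestroApplier
	}
	return re.k8sApplier
}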
}

// Validate maestro transport config is present
if resource.Transport == nil || resource.Transport.Maestro == nil {
I assume this condition check should happen outside of this function, i.e. before calling applyResourceMaestro.
// applyResourceMaestro handles resource delivery via Maestro ManifestWork.
// It builds a ManifestWork containing the manifest and applies it to the target cluster.
func (re *ResourceExecutor) applyResourceMaestro(ctx context.Context, resource config_loader.Resource, manifest *unstructured.Unstructured, execCtx *ExecutionContext) (ResourceResult, error) {
In my PR, I refactored this part. Let me shrink my PR to split this part out and get it merged first instead of having it stuck behind the steps implementation.
8a0c194 to 0d78626 (compare)
@rh-amarin: The following tests failed, say
Full PR test history. Your PR dashboard.
Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
README.md (1)
191-196: ⚠️ Potential issue | 🟡 Minor
Fix minor typo in broker override note.
✏️ Suggested edit
- - Configure topic/subscription in the `adapter-config.yaml` but can be overriden with env variables or cli params
+ - Configure topic/subscription in the `adapter-config.yaml` but can be overridden with env variables or CLI params
🤖 Fix all issues with AI agents
In `@charts/examples/adapter-config.yaml`:
- Line 48: Update the example to use secure defaults by changing the adapter
configuration key insecure from true to false (replace "insecure: true" with
"insecure: false" in the adapter-config.yaml example) and add a brief comment or
note near the insecure setting explaining that insecure can be enabled
explicitly for local development only; ensure the example still documents how to
opt into plaintext for dev environments so maintainers know how to override
insecure when needed.
- Around line 14-16: Change the example to avoid enabling full merged-config
debug output by setting debugConfig to false (do not log secrets), and keep
log.level at debug if desired; update the adapter-config.yaml entry for the
debugConfig key and add a short inline comment next to debugConfig explaining to
enable it only for local troubleshooting to avoid leaking secrets in real
deployments.
In `@cmd/adapter/main.go`:
- Around line 287-313: The executor can be built even when resources require
Maestro transport but no Maestro client is provided; update
validateExecutorConfig to detect this by scanning config.Spec.Resources (or
wherever resource transports are defined) for any resource with
TransportClientMaestro and return an error if maestroClient is nil, and then
call validateExecutorConfig (pass the config and the maestroClient from
execBuilder or the local maestroClient variable) before calling
execBuilder.Build(); reference validateExecutorConfig(), applyResourceMaestro(),
TransportClientMaestro, MaestroClient, executor.NewBuilder(),
execBuilder.WithMaestroClient(), and execBuilder.Build() when locating the
change.
In `@internal/client_factory/maestro_client.go`:
- Around line 13-39: Add a nil-check at the start of CreateMaestroClient to
validate maestroConfig (and return a clear error) before any field access to
avoid panics; specifically, check if maestroConfig == nil and return an error
like "nil maestroConfig provided" from CreateMaestroClient, and also guard
accesses to maestroConfig.Auth (e.g., check maestroConfig.Auth != nil before
reading Auth.Type or TLSConfig) so clientConfig population and the TLS branch
(TLSConfig, CAFile, CertFile, KeyFile) never dereference a nil pointer.
In `@internal/executor/resource_executor.go`:
- Around line 203-206: The failure logging currently dumps the full manifest
(manifest.Object) via re.log.Debugf which can leak Secret data; update the
ResourceExecutor failure path to either skip logging for sensitive kinds (e.g.,
"Secret") or to redact sensitive fields before rendering (mask common secret
keys and data/strings) and then log the sanitized YAML instead of the raw
manifest, and apply the same change to the Maestro failure logging codepath so
both use the sanitized output rather than dumping manifest.Object or raw
manifests.
🧹 Nitpick comments (2)
charts/examples/adapter-task-config.yaml (1)
79-84: Avoid hardcoding the Maestro target cluster in the example.
Using a static cluster1 makes the example easy to misapply; consider templating it to the event/lookup result so it routes correctly.
✏️ Suggested change
- maestro:
-   targetCluster: cluster1
+ maestro:
+   targetCluster: "{{ .clusterName }}"
internal/config_loader/validator.go (1)
544-574: Flag maestro config when transport.client isn't maestro.
This avoids silently defaulting to Kubernetes if users set a maestro block but forget the client field.
♻️ Suggested validation tweak
-	// Validate maestro config is present when client=maestro
-	if resource.Transport.Client == TransportClientMaestro {
+	clientType := resource.Transport.GetClientType()
+	// Validate maestro config is present when client=maestro
+	if clientType == TransportClientMaestro {
 		if resource.Transport.Maestro == nil {
 			v.errors.Add(basePath, "maestro configuration is required when client is 'maestro'")
 			continue
 		}
 		// ...
+	} else if resource.Transport.Maestro != nil {
+		v.errors.Add(basePath+"."+FieldMaestro, "maestro config is set but transport.client is not 'maestro'")
 	}
debugConfig: true
log:
  level: debug
Avoid enabling full config debug logging in the example.
debugConfig: true logs the full merged config and can leak secrets when users copy this example into real deployments. Consider defaulting it to false and leaving a comment for troubleshooting.
🔒 Suggested change
- debugConfig: true
- log:
- level: debug
+ debugConfig: false
+ log:
+ level: info
# Client identifier (defaults to sourceId if not specified)
# Environment variable: HYPERFLEET_MAESTRO_CLIENT_ID
clientId: "hyperfleet-adapter-client"
insecure: true
Prefer secure defaults for Maestro in the example.
insecure: true makes the example opt into plaintext. Safer to default to false and allow enabling insecure mode explicitly for local dev.
🔧 Suggested change
- insecure: true
+ insecure: false
 // Create Maestro client if configured
 var maestroClient maestro_client.ManifestWorkClient
 if config.Spec.Clients.Maestro != nil {
 	log.Info(ctx, "Creating Maestro client...")
 	maestroClient, err = client_factory.CreateMaestroClient(ctx, config.Spec.Clients.Maestro, log)
 	if err != nil {
 		errCtx := logger.WithErrorField(ctx, err)
 		log.Errorf(errCtx, "Failed to create Maestro client")
 		return fmt.Errorf("failed to create Maestro client: %w", err)
 	}
 	log.Info(ctx, "Maestro client created successfully")
 }

 // Create the executor using the builder pattern
 log.Info(ctx, "Creating event executor...")
-exec, err := executor.NewBuilder().
+execBuilder := executor.NewBuilder().
 	WithConfig(config).
 	WithAPIClient(apiClient).
 	WithK8sClient(k8sClient).
-	WithLogger(log).
-	Build()
+	WithLogger(log)

 if maestroClient != nil {
 	execBuilder = execBuilder.WithMaestroClient(maestroClient)
 }

 exec, err := execBuilder.Build()
 if err != nil {
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Search for Maestro client validation logic
rg -n "WithMaestroClient|MaestroClient|TransportClientMaestro" internal/executor internal/config_loader -C 3Repository: openshift-hyperfleet/hyperfleet-adapter
Length of output: 8184
🏁 Script executed:
#!/bin/bash
# Check executor Build() method for validation
rg -n "func.*Build\(\)" internal/executor/executor.go -A 20Repository: openshift-hyperfleet/hyperfleet-adapter
Length of output: 181
🏁 Script executed:
#!/bin/bash
# Check applyResourceMaestro implementation
rg -n "func.*applyResourceMaestro" internal/executor -A 15Repository: openshift-hyperfleet/hyperfleet-adapter
Length of output: 1449
🏁 Script executed:
#!/bin/bash
# Check NewExecutor function for validation logic
rg -n "func NewExecutor" internal/executor/executor.go -A 30Repository: openshift-hyperfleet/hyperfleet-adapter
Length of output: 999
🏁 Script executed:
#!/bin/bash
# Get the full validateExecutorConfig function
rg -n "func validateExecutorConfig" internal/executor/executor.go -A 50Repository: openshift-hyperfleet/hyperfleet-adapter
Length of output: 2081
🏁 Script executed:
#!/bin/bash
# Check if there's any config-level validation linking transport requirements to client initialization
rg -n "TransportClientMaestro" internal/config_loader/validator.go -B 5 -A 10Repository: openshift-hyperfleet/hyperfleet-adapter
Length of output: 759
Add build-time validation to fail fast when resources require Maestro transport but no Maestro client is configured. The executor currently has a runtime guard in applyResourceMaestro() (lines 235–239) that catches a nil Maestro client, but the executor builds successfully even if resources require maestro transport and no Maestro client is configured. Validate in validateExecutorConfig() that if any resource uses TransportClientMaestro, the MaestroClient must be non-nil, enabling fail-fast at startup rather than during execution.
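A sketch of the kind of fail-fast check intended, assuming the resources and their transports are reachable from the executor config; the field paths and helper name are illustrative, not the PR's code.

// validateMaestroWiring returns an error when any resource requests the
// maestro transport but no Maestro client was wired in. Illustrative sketch.
func validateMaestroWiring(resources []config_loader.Resource, maestroClient maestro_client.ManifestWorkClient) error {
	for _, res := range resources {
		if res.Transport.GetClientType() == config_loader.TransportClientMaestro && maestroClient == nil {
			return fmt.Errorf("resource %q uses maestro transport but no maestro client is configured", res.Name)
		}
	}
	return nil
}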
// CreateMaestroClient creates a Maestro client from the config
func CreateMaestroClient(ctx context.Context, maestroConfig *config_loader.MaestroClientConfig, log logger.Logger) (*maestro_client.Client, error) {
	clientConfig := &maestro_client.Config{
		MaestroServerAddr: maestroConfig.HTTPServerAddress,
		GRPCServerAddr:    maestroConfig.GRPCServerAddress,
		SourceID:          maestroConfig.SourceID,
		Insecure:          maestroConfig.Insecure,
	}

	// Parse timeout if specified
	if maestroConfig.Timeout != "" {
		timeout, err := time.ParseDuration(maestroConfig.Timeout)
		if err != nil {
			return nil, fmt.Errorf("invalid maestro timeout %q: %w", maestroConfig.Timeout, err)
		}
		clientConfig.HTTPTimeout = timeout
	}

	// Configure TLS if auth type is "tls"
	if maestroConfig.Auth.Type == "tls" {
		if maestroConfig.Auth.TLSConfig == nil {
			return nil, fmt.Errorf("maestro auth type is 'tls' but tlsConfig is not provided")
		}
		clientConfig.CAFile = maestroConfig.Auth.TLSConfig.CAFile
		clientConfig.ClientCertFile = maestroConfig.Auth.TLSConfig.CertFile
		clientConfig.ClientKeyFile = maestroConfig.Auth.TLSConfig.KeyFile
	}
🧩 Analysis chain
🏁 Script executed:
rg -n "type MaestroClientConfig|type Maestro.*Auth|TLSConfig" internal/config_loader/types.go -C 3Repository: openshift-hyperfleet/hyperfleet-adapter
Length of output: 338
🏁 Script executed:
cat -n internal/config_loader/types.go | head -100

Repository: openshift-hyperfleet/hyperfleet-adapter
Length of output: 4215
🏁 Script executed:
rg -n "type MaestroClientConfig|type.*Auth|TLSConfig" internal/config_loader/types.go -A 10Repository: openshift-hyperfleet/hyperfleet-adapter
Length of output: 1633
🏁 Script executed:
rg -n "CreateMaestroClient" internal/ -B 2 -A 2Repository: openshift-hyperfleet/hyperfleet-adapter
Length of output: 663
🏁 Script executed:
cat -n internal/client_factory/maestro_client.go | head -50

Repository: openshift-hyperfleet/hyperfleet-adapter
Length of output: 1871
Add nil guard for maestroConfig to prevent panic. Exported factory functions should validate input parameters; without this guard, nil config will panic on field access.
Proposed nil check
func CreateMaestroClient(ctx context.Context, maestroConfig *config_loader.MaestroClientConfig, log logger.Logger) (*maestro_client.Client, error) {
+ if maestroConfig == nil {
+ return nil, fmt.Errorf("maestro config is nil")
+ }
clientConfig := &maestro_client.Config{
MaestroServerAddr: maestroConfig.HTTPServerAddress,
GRPCServerAddr: maestroConfig.GRPCServerAddress,
SourceID: maestroConfig.SourceID,
Insecure: maestroConfig.Insecure,
}
// Log the full manifest for debugging
if manifestYAML, marshalErr := yaml.Marshal(manifest.Object); marshalErr == nil {
	re.log.Debugf(errCtx, "Resource[%s] failed manifest:\n%s", resource.Name, string(manifestYAML))
}
Avoid logging full manifests that may include secrets.
Dumping full YAML on failure can leak Secret data; the same risk appears in the Maestro failure logs. Consider redaction or skipping for sensitive kinds.
🔒 Suggested guard (apply similarly in the Maestro failure path)
- if manifestYAML, marshalErr := yaml.Marshal(manifest.Object); marshalErr == nil {
- re.log.Debugf(errCtx, "Resource[%s] failed manifest:\n%s", resource.Name, string(manifestYAML))
- }
+ if manifest.GetKind() != "Secret" {
+ if manifestYAML, marshalErr := yaml.Marshal(manifest.Object); marshalErr == nil {
+ re.log.Debugf(errCtx, "Resource[%s] failed manifest:\n%s", resource.Name, string(manifestYAML))
+ }
+ } else {
+ re.log.Debugf(errCtx, "Resource[%s] failed manifest: <redacted Secret>", resource.Name)
+ }🤖 Prompt for AI Agents
In `@internal/executor/resource_executor.go` around lines 203 - 206, The failure
logging currently dumps the full manifest (manifest.Object) via re.log.Debugf
which can leak Secret data; update the ResourceExecutor failure path to either
skip logging for sensitive kinds (e.g., "Secret") or to redact sensitive fields
before rendering (mask common secret keys and data/strings) and then log the
sanitized YAML instead of the raw manifest, and apply the same change to the
Maestro failure logging codepath so both use the sanitized output rather than
dumping manifest.Object or raw manifests.
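If redaction is preferred over skipping, here is a sketch of the idea; redactManifestForLog is an illustrative name, and the snippet assumes the unstructured helpers from k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.

// redactManifestForLog returns a copy of the manifest that is safe to marshal
// for debug logs: Secret payloads are masked. Illustrative sketch only.
func redactManifestForLog(manifest *unstructured.Unstructured) map[string]interface{} {
	obj := manifest.DeepCopy().Object
	if manifest.GetKind() == "Secret" {
		for _, field := range []string{"data", "stringData"} {
			if _, found, _ := unstructured.NestedMap(obj, field); found {
				_ = unstructured.SetNestedField(obj, "<redacted>", field)
			}
		}
	}
	return obj
}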
Summary
Resources can now be delivered via kubernetes or maestro instead of only applying manifests with the Kubernetes client directly; a per-resource transport configuration block supports both kubernetes (direct) and maestro (ManifestWork-based) client types.
Key Changes
Transport Layer
New transport.client field: kubernetes (default) or maestro; the maestro block carries targetCluster and manifestWork.name fields.
/configsdirectory containing outdated templates and documentation/charts/examplesSummary by CodeRabbit
New Features
Documentation
Bug Fixes