HYPERFLEET-590 - test: automate Clusters Resource Type - Workflow Validation #21
Conversation
Walkthrough

This PR replaces a GCP-specific cluster creation test with a generalized end-to-end cluster workflow validation. The test now: submits a Cluster create API request (using a testdata payload), verifies the initial status (Ready/Available false), polls and validates adapter-driven readiness including per-adapter conditions and metadata (reason, message, timestamps, observedGeneration), confirms the final Ready/Available true state with observedGeneration checks, and performs cleanup by deleting the Kubernetes namespace for the created cluster. Supporting changes add adapter configuration defaults and expose adapters in the config; helper cleanup now runs kubectl namespace delete with a timeout and logs the output.

Sequence Diagram(s)

sequenceDiagram
participant TestSuite as Test Suite
participant API as Cluster API
participant Adapter as Adapter(s)
participant K8s as Kubernetes
TestSuite->>API: POST /clusters (cluster-request.json)
API-->>TestSuite: 201 Created (clusterID)
TestSuite->>API: GET /clusters/{id} (verify initial status)
API-->>TestSuite: status Ready=false, Available=false
loop Poll adapters and status
TestSuite->>API: GET /clusters/{id}/adapters
API-->>TestSuite: adapter list & conditions (Applied, Available, Health, metadata)
TestSuite->>Adapter: (implicit) adapter reconciliation runs
Adapter-->>API: updates adapter status & metadata
API-->>TestSuite: updated adapter states
end
TestSuite->>API: GET /clusters/{id} (final state)
API-->>TestSuite: status Ready=true, Available=true (observedGeneration checks)
TestSuite->>K8s: kubectl delete namespace {clusterID}
K8s-->>TestSuite: namespace deleted (output/log)
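For reference, a rough sketch of the namespace-cleanup behaviour the walkthrough describes. The timeout value, kubectl flags, and error handling here are assumptions for illustration; only the function signature and the first log message mirror the helper shown in the review comments below.

```go
// CleanupTestCluster removes the per-cluster namespace as a cleanup workaround.
// Sketch only: the real helper may differ in timeout, flags, and error handling.
func (h *Helper) CleanupTestCluster(ctx context.Context, clusterID string) error {
	logger.Info("deleting cluster namespace (workaround)", "cluster_id", clusterID, "namespace", clusterID)

	// Bound the kubectl call so a stuck namespace deletion cannot hang the suite.
	ctx, cancel := context.WithTimeout(ctx, 2*time.Minute) // assumed timeout
	defer cancel()

	cmd := exec.CommandContext(ctx, "kubectl", "delete", "namespace", clusterID, "--ignore-not-found")
	out, err := cmd.CombinedOutput()
	logger.Info("kubectl delete namespace output", "output", string(out))
	if err != nil {
		return fmt.Errorf("failed to delete namespace %s: %w", clusterID, err)
	}
	return nil
}
```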
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@e2e/cluster/creation.go`:
- Around line 51-54: Replace the hardcoded "Available" string with the
client.ConditionTypeAvailable constant in calls that check cluster conditions:
update the HasResourceCondition invocation that currently passes "Available"
(where you call h.HasResourceCondition(cluster.Status.Conditions, "Available",
openapi.ResourceConditionStatusFalse)) to use client.ConditionTypeAvailable, and
likewise replace the other comparison/Expect that uses "Available" (the one
checking for ResourceConditionStatusTrue/False further down) to use
client.ConditionTypeAvailable so both condition checks consistently reference
the constant.
In `@test-design/testcases/cluster.md`:
- Line 111: Duplicate heading number: change the second "Step 5: Cleanup
resources" heading to "Step 6: Cleanup resources" so steps are sequential;
locate the heading text "Step 5: Cleanup resources" in the cluster.md content
and update the numeral to 6, and also scan for any in-document references or
cross-links that point to "Step 5: Cleanup resources" (or to step numbers after
it) and update them accordingly to preserve correct ordering.
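For the first item above (e2e/cluster/creation.go), a minimal sketch of the intended change, assuming client.ConditionTypeAvailable is the same constant the other condition checks already use:

```go
// After the change, both condition checks reference the shared constant
// instead of the raw "Available" string (sketch; surrounding test code omitted).
hasAvailable := h.HasResourceCondition(cluster.Status.Conditions,
	client.ConditionTypeAvailable, openapi.ResourceConditionStatusFalse)
Expect(hasAvailable).To(BeTrue(), "cluster should initially report Available=False")
```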
🧹 Nitpick comments (2)
pkg/helper/helper.go (1)
41-57: Consider validating clusterID before shell execution.

While this is test code, passing clusterID directly to exec.CommandContext could be a concern if the value ever contains shell metacharacters. The ID comes from the API response, so it's likely safe, but a defensive validation could prevent issues if the ID format changes.

🛡️ Optional: Add basic validation for clusterID

      func (h *Helper) CleanupTestCluster(ctx context.Context, clusterID string) error {
    +     // Basic validation to ensure clusterID is a valid Kubernetes namespace name
    +     if clusterID == "" || strings.ContainsAny(clusterID, " \t\n;|&$`") {
    +         return fmt.Errorf("invalid cluster ID: %q", clusterID)
    +     }
    +
          logger.Info("deleting cluster namespace (workaround)", "cluster_id", clusterID, "namespace", clusterID)

This would require adding "strings" to the imports.

e2e/cluster/creation.go (1)
81-82: The ObservedGeneration assertion assumes generation is always 1 for new clusters.

This is reasonable for a creation test but could become brittle if the system behavior changes (e.g., if generation starts at 0 or is incremented differently). Consider documenting this assumption or making the assertion more flexible.

💡 Alternative: Assert generation is positive instead of exactly 1

    - g.Expect(adapter.ObservedGeneration).To(Equal(int32(1)),
    -     "adapter %s should have observed_generation=1 for new creation request", adapter.Adapter)
    + g.Expect(adapter.ObservedGeneration).To(BeNumerically(">=", int32(1)),
    +     "adapter %s should have observed_generation >= 1 for creation request", adapter.Adapter)
e2e/cluster/creation.go (outdated)

    // 4. Final cluster state verification
    ginkgo.It("should validate complete workflow for clusters resource type from creation to Ready state",
        func(ctx context.Context) {
            ginkgo.By("Submit a \"clusters\" resource type request via API")
Would it be better as "Submit an API request to create a Cluster resource"?
Good point! Updated.
e2e/cluster/creation.go (outdated)

    Expect(cluster.Id).NotTo(BeNil(), "cluster ID should be generated")
    clusterID = *cluster.Id
    ginkgo.GinkgoWriter.Printf("Created cluster ID: %s\n", clusterID)

There is a redundant blank line here.
Removed it.
    hasAvailable := h.HasResourceCondition(finalCluster.Status.Conditions,
        "Available", openapi.ResourceConditionStatusTrue)
    Expect(hasAvailable).To(BeTrue(), "cluster should have Available=True condition")
The verification steps are somewhat reversed. We should verify that the status of all adapters is true, then verify that the Available condition of the cluster resource is true, and then the Ready condition.
Actually, since we are only validating the two cluster-level conditions, we could validate them before the adapter status. In that case, I do not think we need the "Verify final cluster state" step.
Good point! I've adjusted the verification order and added new input configs to define the required adapters for clusters and nodepools, so E2E test cases know which adapters they should check.
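A rough sketch of what that new config section might look like on the Go side. The type and tag names are assumptions inferred from configs/config.yaml and the HYPERFLEET_ADAPTERS_* overrides discussed below, not the PR's exact definitions.

```go
// AdaptersConfig lists the adapter names that must be present and report
// healthy conditions for each resource type during E2E validation (sketch).
type AdaptersConfig struct {
	Cluster  []string `yaml:"cluster"`  // overridable via HYPERFLEET_ADAPTERS_CLUSTER (comma-separated)
	NodePool []string `yaml:"nodepool"` // overridable via HYPERFLEET_ADAPTERS_NODEPOOL (comma-separated)
}
```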
…dapters for clusters and nodepools and refine with review comments
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@configs/config.yaml`:
- Around line 93-108: The adapter names in the cluster and nodepool lists are
incorrect: replace "cl-namespace" with the full expected adapter names (e.g.,
"clusters-namespace") and "np-configmap" with "nodepools-configmap", and ensure
the cluster list includes the other expected adapters ("clusters-job" and
"clusters-deployment") so they match code defaults and test expectations; update
the entries under the cluster and nodepool keys (and verify compatibility with
HYPERFLEET_ADAPTERS_CLUSTER / HYPERFLEET_ADAPTERS_NODEPOOL overrides) so adapter
validation uses the correct names.
In `@e2e/cluster/creation.go`:
- Around line 124-146: The test waits for Ready but then immediately asserts
Available, which can flake; call h.WaitForClusterCondition for
client.ConditionTypeAvailable (with openapi.ResourceConditionStatusTrue and an
appropriate timeout, e.g., h.Cfg.Timeouts.Cluster.Available or the Ready
timeout) before calling h.Client.GetCluster and before using
h.HasResourceCondition to check Available, so the Available condition is waited
for and settled prior to the final assertions.
- Around line 57-120: The Eventually block loops over adapter.Conditions and
currently dereferences condition.Reason and condition.Message after non-fatal
g.Expect checks, which can panic during retries; keep the existing
g.Expect(condition.Reason).NotTo(BeNil(), ...) and
g.Expect(condition.Message).NotTo(BeNil(), ...) but only dereference inside a
guarded branch (e.g., if condition.Reason != nil {
g.Expect(*condition.Reason).NotTo(BeEmpty(), ...) } and if condition.Message !=
nil { g.Expect(*condition.Message).NotTo(BeEmpty(), ...) }) within the same loop
so nil pointers are never dereferenced during the polling in the Eventually
block (refer to the Eventually block, the loop over adapter.Conditions, and the
fields condition.Reason / condition.Message).
In `@pkg/config/config.go`:
- Around line 276-281: The code assigns DefaultClusterAdapters and
DefaultNodePoolAdapters directly to c.Adapters.Cluster/NodePool which shares the
underlying slice memory; change these assignments to create a new slice copy
(e.g., make a new slice and copy elements or use append([]T(nil), Default...))
so that c.Adapters.Cluster and c.Adapters.NodePool own their backing arrays and
mutations won’t affect DefaultClusterAdapters/DefaultNodePoolAdapters.
    adapters:
      # Required adapters for cluster resources
      # List of adapter names that must be present and have correct conditions
      # when validating cluster adapter execution
      #
      # Can be overridden by: HYPERFLEET_ADAPTERS_CLUSTER (comma-separated)
      cluster:
        - "cl-namespace"

      # Required adapters for nodepool resources
      # List of adapter names that must be present and have correct conditions
      # when validating nodepool adapter execution
      #
      # Can be overridden by: HYPERFLEET_ADAPTERS_NODEPOOL (comma-separated)
      nodepool:
        - "np-configmap"
Adapter names here don’t match code defaults and test expectations.
Line 100 and Line 108 use abbreviated names (cl-namespace, np-configmap), while the defaults and test design reference clusters-namespace, clusters-job, clusters-deployment, and nodepools-configmap. Because this file overrides defaults, adapter validation will likely fail unless names align.
🔧 Suggested fix
adapters:
# Required adapters for cluster resources
@@
cluster:
- - "cl-namespace"
+ - "clusters-namespace"
+ - "clusters-job"
+ - "clusters-deployment"
@@
nodepool:
- - "np-configmap"
+ - "nodepools-configmap"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    adapters:
      # Required adapters for cluster resources
      # List of adapter names that must be present and have correct conditions
      # when validating cluster adapter execution
      #
      # Can be overridden by: HYPERFLEET_ADAPTERS_CLUSTER (comma-separated)
      cluster:
        - "clusters-namespace"
        - "clusters-job"
        - "clusters-deployment"

      # Required adapters for nodepool resources
      # List of adapter names that must be present and have correct conditions
      # when validating nodepool adapter execution
      #
      # Can be overridden by: HYPERFLEET_ADAPTERS_NODEPOOL (comma-separated)
      nodepool:
        - "nodepools-configmap"
🤖 Prompt for AI Agents
In `@configs/config.yaml` around lines 93 - 108, The adapter names in the cluster
and nodepool lists are incorrect: replace "cl-namespace" with the full expected
adapter names (e.g., "clusters-namespace") and "np-configmap" with
"nodepools-configmap", and ensure the cluster list includes the other expected
adapters ("clusters-job" and "clusters-deployment") so they match code defaults
and test expectations; update the entries under the cluster and nodepool keys
(and verify compatibility with HYPERFLEET_ADAPTERS_CLUSTER /
HYPERFLEET_ADAPTERS_NODEPOOL overrides) so adapter validation uses the correct
names.
    Eventually(func(g Gomega) {
        statuses, err := h.Client.GetClusterStatuses(ctx, clusterID)
        g.Expect(err).NotTo(HaveOccurred(), "failed to get cluster statuses")
        g.Expect(statuses.Items).NotTo(BeEmpty(), "at least one adapter should have executed")

        // Build a map of adapter statuses for easy lookup
        adapterMap := make(map[string]openapi.AdapterStatus)
        for _, adapter := range statuses.Items {
            adapterMap[adapter.Adapter] = adapter
        }

        // Validate each required adapter from config
        for _, requiredAdapter := range h.Cfg.Adapters.Cluster {
            adapter, exists := adapterMap[requiredAdapter]
            g.Expect(exists).To(BeTrue(),
                "required adapter %s should be present in adapter statuses", requiredAdapter)

            // Validate adapter-level metadata
            g.Expect(adapter.CreatedTime).NotTo(BeZero(),
                "adapter %s should have valid created_time", adapter.Adapter)
            g.Expect(adapter.LastReportTime).NotTo(BeZero(),
                "adapter %s should have valid last_report_time", adapter.Adapter)
            g.Expect(adapter.ObservedGeneration).To(Equal(int32(1)),
                "adapter %s should have observed_generation=1 for new creation request", adapter.Adapter)

            hasApplied := h.HasAdapterCondition(
                adapter.Conditions,
                client.ConditionTypeApplied,
                openapi.AdapterConditionStatusTrue,
            )
            g.Expect(hasApplied).To(BeTrue(),
                "adapter %s should have Applied=True", adapter.Adapter)

            hasAvailable := h.HasAdapterCondition(
                adapter.Conditions,
                client.ConditionTypeAvailable,
                openapi.AdapterConditionStatusTrue,
            )
            g.Expect(hasAvailable).To(BeTrue(),
                "adapter %s should have Available=True", adapter.Adapter)

            hasHealth := h.HasAdapterCondition(
                adapter.Conditions,
                client.ConditionTypeHealth,
                openapi.AdapterConditionStatusTrue,
            )
            g.Expect(hasHealth).To(BeTrue(),
                "adapter %s should have Health=True", adapter.Adapter)

            // Validate condition metadata for each condition
            for _, condition := range adapter.Conditions {
                g.Expect(condition.Reason).NotTo(BeNil(),
                    "adapter %s condition %s should have non-nil reason", adapter.Adapter, condition.Type)
                g.Expect(*condition.Reason).NotTo(BeEmpty(),
                    "adapter %s condition %s should have non-empty reason", adapter.Adapter, condition.Type)

                g.Expect(condition.Message).NotTo(BeNil(),
                    "adapter %s condition %s should have non-nil message", adapter.Adapter, condition.Type)
                g.Expect(*condition.Message).NotTo(BeEmpty(),
                    "adapter %s condition %s should have non-empty message", adapter.Adapter, condition.Type)

                g.Expect(condition.LastTransitionTime).NotTo(BeZero(),
                    "adapter %s condition %s should have valid last_transition_time", adapter.Adapter, condition.Type)
            }
🧩 Analysis chain
🏁 Script executed:
cat e2e/cluster/creation.go | head -150
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 8687
🌐 Web query:
Gomega Eventually function behavior nil dereference panic
💡 Result:
In Gomega (onsi/gomega), when you pass a function to Eventually, Gomega polls that function synchronously until the matcher passes or the timeout/context ends. (onsi.github.io)
A regular Go runtime panic inside that polled function—like panic: runtime error: invalid memory address or nil pointer dereference—is not an “Eventually failure”; it’s a bug in the polled code. In practice, it will escape the polling loop and abort the test/spec unless you handle it yourself (e.g., avoid nil dereferences or wrap the body with your own defer/recover).
The one important exception is that Gomega intentionally uses panics as control signals for async polling in specific cases (e.g. StopTrying(...).Now() and TryAgainAfter(...).Now()), and Eventually/Consistently are designed to interpret those signals (stop or delay polling) rather than crash the test. (onsi.github.io)
So, if you’re seeing nil-deref panics in an Eventually(func() ...), the fix is to make the polled function nil-safe (return a value/error you can assert on) rather than letting it dereference nil.
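A small self-contained sketch of that behaviour (the condition type and fetchCondition helper below are made up for illustration, not taken from the PR):

```go
package example

import (
	"time"

	. "github.com/onsi/gomega"
)

type condition struct{ Reason *string }

// fetchCondition stands in for an API poll; early attempts may return nil fields.
func fetchCondition() condition { return condition{} }

func pollNilSafe() {
	Eventually(func(g Gomega) {
		cond := fetchCondition()
		// A failed g.Expect only ends this attempt and lets Eventually retry...
		g.Expect(cond.Reason).NotTo(BeNil(), "reason should eventually be set")
		// ...but dereferencing a nil pointer would panic out of the polling loop,
		// so guard before dereferencing.
		if cond.Reason != nil {
			g.Expect(*cond.Reason).NotTo(BeEmpty())
		}
	}).WithTimeout(10 * time.Second).WithPolling(time.Second).Should(Succeed())
}
```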
Guard Reason/Message before dereference inside Eventually.
In the Eventually block, g.Expect failures do not abort execution, so dereferencing nil pointers with *condition.Reason / *condition.Message at lines 118 and 123 will panic and escape the polling loop rather than allowing the retry cycle to continue.
🛠️ Suggested fix

      g.Expect(condition.Reason).NotTo(BeNil(),
          "adapter %s condition %s should have non-nil reason", adapter.Adapter, condition.Type)
    + if condition.Reason != nil {
      g.Expect(*condition.Reason).NotTo(BeEmpty(),
          "adapter %s condition %s should have non-empty reason", adapter.Adapter, condition.Type)
    + }

      g.Expect(condition.Message).NotTo(BeNil(),
          "adapter %s condition %s should have non-nil message", adapter.Adapter, condition.Type)
    + if condition.Message != nil {
      g.Expect(*condition.Message).NotTo(BeEmpty(),
          "adapter %s condition %s should have non-empty message", adapter.Adapter, condition.Type)
    + }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    Eventually(func(g Gomega) {
        statuses, err := h.Client.GetClusterStatuses(ctx, clusterID)
        g.Expect(err).NotTo(HaveOccurred(), "failed to get cluster statuses")
        g.Expect(statuses.Items).NotTo(BeEmpty(), "at least one adapter should have executed")

        // Build a map of adapter statuses for easy lookup
        adapterMap := make(map[string]openapi.AdapterStatus)
        for _, adapter := range statuses.Items {
            adapterMap[adapter.Adapter] = adapter
        }

        // Validate each required adapter from config
        for _, requiredAdapter := range h.Cfg.Adapters.Cluster {
            adapter, exists := adapterMap[requiredAdapter]
            g.Expect(exists).To(BeTrue(),
                "required adapter %s should be present in adapter statuses", requiredAdapter)

            // Validate adapter-level metadata
            g.Expect(adapter.CreatedTime).NotTo(BeZero(),
                "adapter %s should have valid created_time", adapter.Adapter)
            g.Expect(adapter.LastReportTime).NotTo(BeZero(),
                "adapter %s should have valid last_report_time", adapter.Adapter)
            g.Expect(adapter.ObservedGeneration).To(Equal(int32(1)),
                "adapter %s should have observed_generation=1 for new creation request", adapter.Adapter)

            hasApplied := h.HasAdapterCondition(
                adapter.Conditions,
                client.ConditionTypeApplied,
                openapi.AdapterConditionStatusTrue,
            )
            g.Expect(hasApplied).To(BeTrue(),
                "adapter %s should have Applied=True", adapter.Adapter)

            hasAvailable := h.HasAdapterCondition(
                adapter.Conditions,
                client.ConditionTypeAvailable,
                openapi.AdapterConditionStatusTrue,
            )
            g.Expect(hasAvailable).To(BeTrue(),
                "adapter %s should have Available=True", adapter.Adapter)

            hasHealth := h.HasAdapterCondition(
                adapter.Conditions,
                client.ConditionTypeHealth,
                openapi.AdapterConditionStatusTrue,
            )
            g.Expect(hasHealth).To(BeTrue(),
                "adapter %s should have Health=True", adapter.Adapter)

            // Validate condition metadata for each condition
            for _, condition := range adapter.Conditions {
                g.Expect(condition.Reason).NotTo(BeNil(),
                    "adapter %s condition %s should have non-nil reason", adapter.Adapter, condition.Type)
                if condition.Reason != nil {
                    g.Expect(*condition.Reason).NotTo(BeEmpty(),
                        "adapter %s condition %s should have non-empty reason", adapter.Adapter, condition.Type)
                }

                g.Expect(condition.Message).NotTo(BeNil(),
                    "adapter %s condition %s should have non-nil message", adapter.Adapter, condition.Type)
                if condition.Message != nil {
                    g.Expect(*condition.Message).NotTo(BeEmpty(),
                        "adapter %s condition %s should have non-empty message", adapter.Adapter, condition.Type)
                }

                g.Expect(condition.LastTransitionTime).NotTo(BeZero(),
                    "adapter %s condition %s should have valid last_transition_time", adapter.Adapter, condition.Type)
            }
🤖 Prompt for AI Agents
In `@e2e/cluster/creation.go` around lines 57 - 120, The Eventually block loops
over adapter.Conditions and currently dereferences condition.Reason and
condition.Message after non-fatal g.Expect checks, which can panic during
retries; keep the existing g.Expect(condition.Reason).NotTo(BeNil(), ...) and
g.Expect(condition.Message).NotTo(BeNil(), ...) but only dereference inside a
guarded branch (e.g., if condition.Reason != nil {
g.Expect(*condition.Reason).NotTo(BeEmpty(), ...) } and if condition.Message !=
nil { g.Expect(*condition.Message).NotTo(BeEmpty(), ...) }) within the same loop
so nil pointers are never dereferenced during the polling in the Eventually
block (refer to the Eventually block, the loop over adapter.Conditions, and the
fields condition.Reason / condition.Message).
| ginkgo.By("Verify final cluster state") | ||
| // Wait for cluster Ready condition and verify both Ready and Available conditions are True | ||
| // This confirms the cluster has reached the desired end state | ||
| err = h.WaitForClusterCondition( | ||
| ctx, | ||
| clusterID, | ||
| client.ConditionTypeReady, | ||
| openapi.ResourceConditionStatusTrue, | ||
| h.Cfg.Timeouts.Cluster.Ready, | ||
| ) | ||
| Expect(err).NotTo(HaveOccurred(), "cluster Ready condition should transition to True") | ||
|
|
||
| finalCluster, err := h.Client.GetCluster(ctx, clusterID) | ||
| Expect(err).NotTo(HaveOccurred(), "failed to get final cluster state") | ||
| Expect(finalCluster.Status).NotTo(BeNil(), "cluster status should be present") | ||
|
|
||
| hasReady := h.HasResourceCondition(finalCluster.Status.Conditions, | ||
| client.ConditionTypeReady, openapi.ResourceConditionStatusTrue) | ||
| Expect(hasReady).To(BeTrue(), "cluster should have Ready=True condition") | ||
|
|
||
| hasAvailable := h.HasResourceCondition(finalCluster.Status.Conditions, | ||
| client.ConditionTypeAvailable, openapi.ResourceConditionStatusTrue) | ||
| Expect(hasAvailable).To(BeTrue(), "cluster should have Available=True condition") |
🧩 Analysis chain
🏁 Script executed:
# First, let's verify the file exists and check the lines in question
head -150 e2e/cluster/creation.go | tail -50
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 3274
🏁 Script executed:
# Let's find the implementation of WaitForClusterCondition
rg -n "func.*WaitForClusterCondition" -A 20Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 1556
🏁 Script executed:
# Let's find the implementation of HasResourceCondition
rg -n "func.*HasResourceCondition" -A 15Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 1080
🏁 Script executed:
# Let's check if there are other tests that wait for Available condition
rg -n "WaitForClusterCondition.*Available" -B 2 -A 2Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 61
🏁 Script executed:
# Let's check if Available is being waited for anywhere in tests
rg -n "Available" e2e/ | head -30Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 2048
🏁 Script executed:
# Let's look at the adapter checking pattern around line 85-100
sed -n '70,105p' e2e/cluster/creation.go
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 2097
🏁 Script executed:
# Let's see the full Eventually() block that wraps the adapter checks
sed -n '50,105p' e2e/cluster/creation.go
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 3319
🏁 Script executed:
# Let's look at the timeout configurations
cat pkg/helper/config.go | grep -A 5 -B 5 "Timeouts"
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 131
🏁 Script executed:
# Let's find the config file
rg -l "Timeouts.*Cluster" --type goRepository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 148
🏁 Script executed:
# Let's examine the config file
rg -n "Timeouts" pkg/config/config.go -A 10 | head -50Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 1975
🏁 Script executed:
# Let's check if there are any other tests that wait for multiple conditions sequentially
rg -n "WaitForClusterCondition.*Ready" -A 5 -B 2Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 853
🏁 Script executed:
# Let's verify the full context around the suggested fix one more time to ensure accuracy
sed -n '120,155p' e2e/cluster/creation.go
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 2175
Wait for Available=True to avoid readiness flake.
The code waits for the cluster Ready condition (line 127-135) but then immediately asserts that Available is also True (line 144-146) without waiting for it. If the Available condition transitions after Ready, this assertion can intermittently fail. Add a wait for Available before fetching and checking the final state.
🛠️ Suggested fix

      err = h.WaitForClusterCondition(
          ctx,
          clusterID,
          client.ConditionTypeReady,
          openapi.ResourceConditionStatusTrue,
          h.Cfg.Timeouts.Cluster.Ready,
      )
      Expect(err).NotTo(HaveOccurred(), "cluster Ready condition should transition to True")
    +
    + err = h.WaitForClusterCondition(
    +     ctx,
    +     clusterID,
    +     client.ConditionTypeAvailable,
    +     openapi.ResourceConditionStatusTrue,
    +     h.Cfg.Timeouts.Cluster.Ready,
    + )
    + Expect(err).NotTo(HaveOccurred(), "cluster Available condition should transition to True")

🤖 Prompt for AI Agents
In `@e2e/cluster/creation.go` around lines 124 - 146, The test waits for Ready but
then immediately asserts Available, which can flake; call
h.WaitForClusterCondition for client.ConditionTypeAvailable (with
openapi.ResourceConditionStatusTrue and an appropriate timeout, e.g.,
h.Cfg.Timeouts.Cluster.Available or the Ready timeout) before calling
h.Client.GetCluster and before using h.HasResourceCondition to check Available,
so the Available condition is waited for and settled prior to the final
assertions.
    // Apply adapter defaults
    if c.Adapters.Cluster == nil {
        c.Adapters.Cluster = DefaultClusterAdapters
    }
    if c.Adapters.NodePool == nil {
        c.Adapters.NodePool = DefaultNodePoolAdapters
🧩 Analysis chain
🏁 Script executed:
# First, let's locate and examine the config.go file
find . -name "config.go" -type f | head -20
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 139
🏁 Script executed:
# Read the specific lines mentioned in the review (276-281)
sed -n '270,290p' pkg/config/config.go
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 682
🏁 Script executed:
# Find where DefaultClusterAdapters and DefaultNodePoolAdapters are defined
rg -n "DefaultClusterAdapters|DefaultNodePoolAdapters" pkg/config/ -B 2 -A 2Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 1236
🏁 Script executed:
# Check the type of these adapter defaults
rg -n "var.*DefaultClusterAdapters|var.*DefaultNodePoolAdapters|const.*DefaultClusterAdapters|const.*DefaultNodePoolAdapters" -B 2 -A 2Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 61
🏁 Script executed:
# Search for usages of Adapters.Cluster and Adapters.NodePool
rg -n "\.Adapters\.(Cluster|NodePool)" --type go -B 2 -A 2Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 1263
🏁 Script executed:
# Search for append operations on adapters to see if mutations are happening
rg -n "append.*Adapters|Adapters.*append" --type go -B 2 -A 2Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 61
🏁 Script executed:
# Check the Adapters struct definition to understand the field types
rg -n "type.*Adapters|Adapters.*struct" -A 10 --type goRepository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 1538
Copy slice defaults to prevent mutations from leaking into global state.
Lines 277-281 directly assign the default slices. Because slices are reference types, the underlying array is shared between c.Adapters.Cluster/NodePool and the global DefaultClusterAdapters/DefaultNodePoolAdapters. Any future mutations (e.g., append with room to grow) would corrupt the defaults.
🛠️ Suggested fix
- if c.Adapters.Cluster == nil {
- c.Adapters.Cluster = DefaultClusterAdapters
- }
- if c.Adapters.NodePool == nil {
- c.Adapters.NodePool = DefaultNodePoolAdapters
- }
+ if c.Adapters.Cluster == nil {
+ c.Adapters.Cluster = append([]string(nil), DefaultClusterAdapters...)
+ }
+ if c.Adapters.NodePool == nil {
+ c.Adapters.NodePool = append([]string(nil), DefaultNodePoolAdapters...)
+ }

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    // Apply adapter defaults
    if c.Adapters.Cluster == nil {
        c.Adapters.Cluster = append([]string(nil), DefaultClusterAdapters...)
    }
    if c.Adapters.NodePool == nil {
        c.Adapters.NodePool = append([]string(nil), DefaultNodePoolAdapters...)
    }
🤖 Prompt for AI Agents
In `@pkg/config/config.go` around lines 276 - 281, The code assigns
DefaultClusterAdapters and DefaultNodePoolAdapters directly to
c.Adapters.Cluster/NodePool which shares the underlying slice memory; change
these assignments to create a new slice copy (e.g., make a new slice and copy
elements or use append([]T(nil), Default...)) so that c.Adapters.Cluster and
c.Adapters.NodePool own their backing arrays and mutations won’t affect
DefaultClusterAdapters/DefaultNodePoolAdapters.
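As a standalone illustration of the aliasing issue described above (the adapter names are just example values, not the project's actual defaults):

```go
package main

import "fmt"

func main() {
	// A package-level default declared with spare capacity, as DefaultClusterAdapters might be.
	defaults := make([]string, 0, 4)
	defaults = append(defaults, "clusters-namespace")

	// Direct assignment shares the backing array with the defaults.
	cfg := defaults
	cfg = append(cfg, "extra-adapter") // fits in spare capacity, so it writes into the shared array

	// A later append through defaults reuses that same slot and clobbers cfg's element.
	defaults = append(defaults, "clusters-job")
	fmt.Println(cfg) // [clusters-namespace clusters-job] — "extra-adapter" was silently overwritten

	// Copying first (as the suggested fix does) keeps the two slices independent.
	safe := append([]string(nil), defaults...)
	safe = append(safe, "extra-adapter")
	fmt.Println(defaults, safe)
}
```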
/lgtm

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: yasun1

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
4e5066c into openshift-hyperfleet:main
Summary
Testing
Summary by CodeRabbit