
Conversation

@86254860 (Contributor) commented Feb 11, 2026

Summary

  • Add clusters job adapter configurations for E2E testing
  • Add clusters deployment adapter configurations for E2E testing
  • Rename clusters namespace adapter configuration file

Testing

  • Verified that the newly added adapter configurations work for the E2E test cases.

Summary by CodeRabbit

  • Tests

    • Added comprehensive adapter test configurations and resource manifests for deployment, job, and namespace scenarios to enable end-to-end validation of provisioning, status reporting, and failure modes.
  • Bug Fixes

    • Corrected a configuration file path mapping to ensure the adapter configuration is referenced correctly.

@openshift-ci openshift-ci bot requested review from tzhou5 and xueli181114 February 11, 2026 11:58
@coderabbitai bot commented Feb 11, 2026

Walkthrough

Adds comprehensive testdata for multiple HyperFleet adapters: new AdapterConfig and AdapterTaskConfig YAMLs, Kubernetes manifest templates (Deployment, Job, ServiceAccount, Role, RoleBinding), and Helm values for two adapter types (clusters-cl-deployment, clusters-cl-job); includes templated CEL-based preconditions, resource discovery, post-processing payloads, and status-reporting POST actions. Also fixes a path mapping in clusters-cl-namespace/values.yaml. Total: 13 new files and 1 modified file.
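The walkthrough describes AdapterTaskConfig files that combine templated CEL preconditions, resource manifests, and a status-reporting POST action. As a rough sketch of that shape (all field names below are hypothetical illustrations, not copied from the PR's testdata):

```yaml
# Hypothetical sketch of an AdapterTaskConfig; field names are illustrative
# and may differ from the actual schema used in this repository.
kind: AdapterTaskConfig
metadata:
  name: clusters-cl-job
spec:
  preconditions:
    # Capture the cluster object, then gate the task on a CEL expression
    - capture: cluster
      request: "GET /clusters/{{ .clusterId }}"
      expression: "cluster.generationSpec != null"
  resources:
    # Templated manifests applied to the target cluster
    - manifest: adapter-task-resource-job.yaml
  postProcessing:
    # CEL used to assemble the payload reported back to the API
    - output: clusterStatusPayload
      expression: "{'phase': job.status.succeeded > 0 ? 'Ready' : 'Failed'}"
  actions:
    - name: reportClusterStatus
      request: "POST /clusters/{{ .clusterId }}/statuses"
      body: "{{ .clusterStatusPayload }}"
```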

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Event as Event
  participant Adapter as Adapter (task runner)
  participant HFAPI as HyperFleet API
  participant K8s as Kubernetes
  participant Broker as Broker

  Event->>Adapter: trigger with event.id (clusterId)
  Adapter->>HFAPI: GET /clusters/{{clusterId}} (precondition capture)
  HFAPI-->>Adapter: cluster data (generationSpec, status)
  Adapter->>HFAPI: GET /clusters/{{clusterId}}/statuses (precondition)
  HFAPI-->>Adapter: adapter status list
  Adapter->>K8s: create/apply templated resources (Deployment/Job/SA/Role/RoleBinding)
  K8s-->>Adapter: resource statuses / discovery results
  Adapter->>Adapter: post-processing (CEL) -> build clusterStatusPayload
  Adapter->>HFAPI: POST /clusters/{{clusterId}}/statuses (reportClusterStatus) with clusterStatusPayload
  HFAPI-->>Adapter: 200 OK
  Adapter->>Broker: (optional) publish events / subscribe as configured
  Broker-->>Adapter: messages/events (if any)
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

lgtm, approved

Suggested reviewers

  • tzhou5
  • xueli181114
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title directly and clearly summarizes the main changes: adding job and deployment adapter configurations for E2E testing, which aligns with the bulk of the changeset and PR objectives. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


No actionable comments were generated in the recent review. 🎉

🧹 Recent nitpick comments
testdata/adapter-configs/clusters-cl-deployment/adapter-task-resource-deployment.yaml (1)

24-27: Pin the nginx image instead of using latest.

Using latest makes E2E runs non-deterministic when upstream changes. Consider pinning to a specific tag or digest (optionally via a values/param entry) for repeatable tests.

🔧 Possible update (wire image from values/params)
```diff
-        - name: test
-          image: nginx:latest
+        - name: test
+          image: "{{ .nginxImage }}" # pin to an org-approved tag/digest in values/params
```
testdata/adapter-configs/clusters-cl-job/adapter-task-resource-job.yaml (1)

175-177: Pin the status‑reporter image instead of latest.

To keep E2E behavior stable, use a fixed tag/digest for the sidecar image (optionally surfaced via values/params).

🔧 Possible update (wire image from values/params)
```diff
-      - name: status-reporter
-        image: registry.ci.openshift.org/ci/status-reporter:latest
+      - name: status-reporter
+        image: "{{ .statusReporterImage }}" # pin to an org-approved tag/digest
```

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@testdata/adapter-configs/clusters-cl-deployment/adapter-task-config.yaml`:
- Around line 53-56: The condition entry under conditions using field
"clusterJobStatus" incorrectly uses the plural key values: "True" — change the
key from values to value so it matches the other condition (which uses value:
"False") and ensure the condition uses value: "True" (i.e., replace values with
value for the clusterJobStatus condition).

In `@testdata/adapter-configs/clusters-cl-job/adapter-task-config.yaml`:
- Around line 65-68: Fix the typo in the condition block under conditions for
the clusterNamespaceStatus check: change the key from plural "values" to the
singular "value" so the condition uses the same schema as the other entries
(e.g., the existing pattern with value: "False"); update the condition where
field: "clusterNamespaceStatus" and operator: "equals" to use value: "Active" to
ensure the condition is evaluated correctly.
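Both fixes above boil down to the same one-line schema correction: renaming the plural `values:` key to the singular `value:` so each entry matches the rest of the conditions list. Reconstructed from the review comments (surrounding keys as described there):

```yaml
# clusters-cl-deployment/adapter-task-config.yaml
conditions:
  - field: "clusterJobStatus"
    operator: "equals"
    value: "True"    # was: values: "True"

# clusters-cl-job/adapter-task-config.yaml
conditions:
  - field: "clusterNamespaceStatus"
    operator: "equals"
    value: "Active"  # was: values: "Active"
```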
🧹 Nitpick comments (7)
testdata/adapter-configs/clusters-cl-deployment/adapter-task-resource-deployment.yaml (1)

23-28: Consider adding security context for better test coverage.

Static analysis flags missing security hardening. While this is test data, adding a security context would make the test deployment more representative of production patterns and could help catch security-related regressions.

🛡️ Suggested security context addition
```diff
     spec:
+      securityContext:
+        runAsNonRoot: true
       containers:
         - name: test
           image: nginx:latest
           ports:
             - containerPort: 80
+          securityContext:
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            capabilities:
+              drop:
+                - ALL
```
testdata/adapter-configs/clusters-cl-deployment/values.yaml (1)

28-31: Minor: Inconsistent indentation under rbac.

The resources list uses 3-space indentation while the rest of the file uses 2-space indentation. This could cause issues with strict YAML parsers or linters.

📝 Suggested fix
```diff
 rbac:
-   resources:
-     - deployments
-     - deployments/status
+  resources:
+    - deployments
+    - deployments/status
```
testdata/adapter-configs/clusters-cl-job/adapter-task-resource-job-serviceaccount.yaml (1)

5-5: Minor: Extra whitespace in template variable.

Line 5 has an extra space before the closing braces ({{ .clusterId }}), which is inconsistent with other template variables in this file and across related manifests.

📝 Suggested fix
```diff
-  namespace: "{{ .clusterId  }}"
+  namespace: "{{ .clusterId }}"
```
testdata/adapter-configs/clusters-cl-job/adapter-task-resource-job-role.yaml (1)

5-5: Minor: Extra whitespace in template variable.

Same whitespace inconsistency as the ServiceAccount manifest - extra space before closing braces.

📝 Suggested fix
```diff
-  namespace: "{{ .clusterId  }}"
+  namespace: "{{ .clusterId }}"
```
testdata/adapter-configs/clusters-cl-job/adapter-task-resource-job-rolebinding.yaml (1)

18-18: Minor: Extra whitespace in template variable.

Same whitespace inconsistency - extra space before closing braces on line 18.

📝 Suggested fix
```diff
-    namespace: "{{ .clusterId  }}"
+    namespace: "{{ .clusterId }}"
```
testdata/adapter-configs/clusters-cl-job/values.yaml (1)

31-39: Minor: Inconsistent indentation in rbac section.

Line 32 uses 3 spaces for indentation while the rest of the file uses 2 spaces. This inconsistency could cause issues with strict YAML parsers or linters.

🔧 Suggested fix
```diff
 rbac:
-   resources:
+  resources:
     - namespaces
     - serviceaccounts
     - roles
     - rolebindings
     - jobs
     - jobs/status
     - pods
```
testdata/adapter-configs/clusters-cl-job/adapter-task-resource-job.yaml (1)

27-39: Add security context to harden containers.

Static analysis (Trivy KSV-0014, KSV-0118) flags that both containers lack security context configuration. While this is test data, adding security context is a good practice that aligns with Kubernetes security best practices and may be required in restricted environments.

🛡️ Suggested security context additions

Add pod-level and container-level security context:

```diff
     spec:
       serviceAccountName: "job-{{ .clusterId }}-sa"
       restartPolicy: Never
+      securityContext:
+        runAsNonRoot: true
+        seccompProfile:
+          type: RuntimeDefault

       # Volumes
       volumes:
       - name: results
         emptyDir: {}

       containers:
       - name: test-job
         image: alpine:3.19
         imagePullPolicy: IfNotPresent
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop:
+              - ALL
+          readOnlyRootFilesystem: true
         command: ["/bin/sh", "-c"]
```

For the status-reporter container:

```diff
       - name: status-reporter
         image: registry.ci.openshift.org/ci/status-reporter:latest
         imagePullPolicy: Always
+        securityContext:
+          allowPrivilegeEscalation: false
+          capabilities:
+            drop:
+              - ALL
+          readOnlyRootFilesystem: true
```

Note: If the containers require writing to paths other than /results, you may need to add additional emptyDir volume mounts for those paths (e.g., /tmp).

Also applies to: 175-177
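If `readOnlyRootFilesystem: true` is adopted as suggested, any path a container writes to needs an explicit writable mount. A minimal sketch (volume names here are illustrative) adds an `emptyDir`-backed `/tmp` alongside the existing `results` volume:

```yaml
      volumes:
      - name: results
        emptyDir: {}
      - name: tmp            # scratch space needed once the root FS is read-only
        emptyDir: {}
      containers:
      - name: test-job
        volumeMounts:
        - name: results
          mountPath: /results
        - name: tmp
          mountPath: /tmp
```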

@86254860 force-pushed the HYPERFLEET-620-add-clusters-job-deployment branch from b14bff9 to 58b86c8 on February 12, 2026 01:24
@yasun1 (Contributor) commented Feb 12, 2026

/lgtm

@openshift-ci bot commented Feb 12, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: yasun1

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-bot openshift-merge-bot bot merged commit 6399f3a into openshift-hyperfleet:main Feb 12, 2026
5 checks passed