HYPERFLEET-620 - test: add clusters job and deployment adapter configurations #26
Conversation
Walkthrough

Adds comprehensive testdata for multiple HyperFleet adapters: new AdapterConfig and AdapterTaskConfig YAMLs, Kubernetes manifest templates (Deployment, Job, ServiceAccount, Role, RoleBinding), and Helm values for two adapter types (clusters-cl-job and clusters-cl-deployment).

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Event as Event
  participant Adapter as Adapter (task runner)
  participant HFAPI as HyperFleet API
  participant K8s as Kubernetes
  participant Broker as Broker
  Event->>Adapter: trigger with event.id (clusterId)
  Adapter->>HFAPI: GET /clusters/{{clusterId}} (precondition capture)
  HFAPI-->>Adapter: cluster data (generationSpec, status)
  Adapter->>HFAPI: GET /clusters/{{clusterId}}/statuses (precondition)
  HFAPI-->>Adapter: adapter status list
  Adapter->>K8s: create/apply templated resources (Deployment/Job/SA/Role/RoleBinding)
  K8s-->>Adapter: resource statuses / discovery results
  Adapter->>Adapter: post-processing (CEL) -> build clusterStatusPayload
  Adapter->>HFAPI: POST /clusters/{{clusterId}}/statuses (reportClusterStatus) with clusterStatusPayload
  HFAPI-->>Adapter: 200 OK
  Adapter->>Broker: (optional) publish events / subscribe as configured
  Broker-->>Adapter: messages/events (if any)
```
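To make the flow above concrete, here is a rough, hypothetical sketch of an adapter task configuration following the steps in the diagram. The overall structure and every key name below are assumptions made only for illustration and are not the PR's actual AdapterTaskConfig schema; only the referenced manifest filenames appear in the PR's testdata.

```yaml
# Hypothetical sketch only -- key names are illustrative, not the real schema
preconditions:
  # Capture cluster data and current adapter statuses from the HyperFleet API
  - name: cluster
    get: "/clusters/{{ .clusterId }}"
  - name: clusterStatuses
    get: "/clusters/{{ .clusterId }}/statuses"
resources:
  # Templated Kubernetes manifests applied by the task runner
  - adapter-task-resource-job.yaml
  - adapter-task-resource-job-serviceaccount.yaml
  - adapter-task-resource-job-role.yaml
  - adapter-task-resource-job-rolebinding.yaml
postProcessing:
  # CEL expression that builds the payload reported back to the API (illustrative)
  clusterStatusPayload: 'has(job.status.succeeded) && job.status.succeeded >= 1 ? "Ready" : "Pending"'
reportClusterStatus:
  # Report the built payload back to the HyperFleet API
  post: "/clusters/{{ .clusterId }}/statuses"
```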
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
No actionable comments were generated in the recent review. 🎉
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@testdata/adapter-configs/clusters-cl-deployment/adapter-task-config.yaml`:
- Around lines 53-56: The condition entry under `conditions` with field "clusterJobStatus" incorrectly uses the plural key `values: "True"`. Change the key from `values` to `value` so it matches the other condition (which uses `value: "False"`), i.e. the clusterJobStatus condition should read `value: "True"` (see the sketch after this item).
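A minimal sketch of how the corrected entry could look, assuming the `field`/`operator`/`value` schema described in the review; the neighbouring entry's field name and the operator on the clusterJobStatus entry are assumptions, not taken from the PR:

```yaml
conditions:
  # Existing pattern in the file (per the review): singular `value` key
  - field: "clusterDeploymentAvailable"   # illustrative neighbouring entry, not from the PR
    operator: "equals"
    value: "False"
  # Fixed entry: plural `values:` replaced with the singular `value:`
  - field: "clusterJobStatus"
    operator: "equals"                    # assumed; the review does not name this operator
    value: "True"
```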
In `@testdata/adapter-configs/clusters-cl-job/adapter-task-config.yaml`:
- Around lines 65-68: Fix the typo in the condition block under `conditions` for the clusterNamespaceStatus check: change the key from the plural `values` to the singular `value` so it follows the same schema as the other entries (e.g., the existing pattern `value: "False"`). The condition with `field: "clusterNamespaceStatus"` and `operator: "equals"` should use `value: "Active"` so it is evaluated correctly (see the sketch after this item).
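A minimal sketch of the corrected condition, using only the `field`/`operator`/`value` keys named in the review:

```yaml
conditions:
  # Fixed entry: plural `values:` replaced with the singular `value:` used by the other entries
  - field: "clusterNamespaceStatus"
    operator: "equals"
    value: "Active"
```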
🧹 Nitpick comments (7)
testdata/adapter-configs/clusters-cl-deployment/adapter-task-resource-deployment.yaml (1)
23-28: Consider adding security context for better test coverage.

Static analysis flags missing security hardening. While this is test data, adding a security context would make the test deployment more representative of production patterns and could help catch security-related regressions.
🛡️ Suggested security context addition
```diff
 spec:
+  securityContext:
+    runAsNonRoot: true
   containers:
   - name: test
     image: nginx:latest
     ports:
     - containerPort: 80
+    securityContext:
+      allowPrivilegeEscalation: false
+      readOnlyRootFilesystem: true
+      capabilities:
+        drop:
+        - ALL
```

testdata/adapter-configs/clusters-cl-deployment/values.yaml (1)
28-31: Minor: Inconsistent indentation under `rbac`.

The `resources` list uses 3-space indentation while the rest of the file uses 2-space indentation. This could cause issues with strict YAML parsers or linters.

📝 Suggested fix
```diff
 rbac:
-   resources:
-   - deployments
-   - deployments/status
+  resources:
+  - deployments
+  - deployments/status
```

testdata/adapter-configs/clusters-cl-job/adapter-task-resource-job-serviceaccount.yaml (1)
5-5: Minor: Extra whitespace in template variable.

Line 5 has an extra space before the closing braces (`{{ .clusterId  }}`), which is inconsistent with other template variables in this file and across related manifests.

📝 Suggested fix
```diff
-  namespace: "{{ .clusterId  }}"
+  namespace: "{{ .clusterId }}"
```

testdata/adapter-configs/clusters-cl-job/adapter-task-resource-job-role.yaml (1)
5-5: Minor: Extra whitespace in template variable.

Same whitespace inconsistency as the ServiceAccount manifest: an extra space before the closing braces.
📝 Suggested fix
```diff
-  namespace: "{{ .clusterId  }}"
+  namespace: "{{ .clusterId }}"
```

testdata/adapter-configs/clusters-cl-job/adapter-task-resource-job-rolebinding.yaml (1)
18-18: Minor: Extra whitespace in template variable.

Same whitespace inconsistency: an extra space before the closing braces on line 18.
📝 Suggested fix
```diff
-  namespace: "{{ .clusterId  }}"
+  namespace: "{{ .clusterId }}"
```

testdata/adapter-configs/clusters-cl-job/values.yaml (1)
31-39: Minor: Inconsistent indentation in `rbac` section.

Line 32 uses 3 spaces for indentation while the rest of the file uses 2 spaces. This inconsistency could cause issues with strict YAML parsers or linters.
🔧 Suggested fix
```diff
 rbac:
-   resources:
+  resources:
   - namespaces
   - serviceaccounts
   - roles
   - rolebindings
   - jobs
   - jobs/status
   - pods
```

testdata/adapter-configs/clusters-cl-job/adapter-task-resource-job.yaml (1)
27-39: Add security context to harden containers.

Static analysis (Trivy KSV-0014, KSV-0118) flags that both containers lack security context configuration. While this is test data, adding security context is a good practice that aligns with Kubernetes security best practices and may be required in restricted environments.
🛡️ Suggested security context additions
Add pod-level and container-level security context:
```diff
 spec:
   serviceAccountName: "job-{{ .clusterId }}-sa"
   restartPolicy: Never
+  securityContext:
+    runAsNonRoot: true
+    seccompProfile:
+      type: RuntimeDefault
   # Volumes
   volumes:
   - name: results
     emptyDir: {}
   containers:
   - name: test-job
     image: alpine:3.19
     imagePullPolicy: IfNotPresent
+    securityContext:
+      allowPrivilegeEscalation: false
+      capabilities:
+        drop:
+        - ALL
+      readOnlyRootFilesystem: true
     command: ["/bin/sh", "-c"]
```

For the status-reporter container:
```diff
   - name: status-reporter
     image: registry.ci.openshift.org/ci/status-reporter:latest
     imagePullPolicy: Always
+    securityContext:
+      allowPrivilegeEscalation: false
+      capabilities:
+        drop:
+        - ALL
+      readOnlyRootFilesystem: true
```

Note: If the containers require writing to paths other than `/results`, you may need to add additional `emptyDir` volume mounts for those paths (e.g., `/tmp`); a sketch follows below.

Also applies to: 175-177
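For that write-path caveat, a minimal sketch of what an extra `emptyDir` volume for `/tmp` could look like; the `results` volume is from the PR's Job manifest, while the `volumeMounts` entries shown here are illustrative assumptions:

```yaml
# Illustrative only: add a tmp emptyDir alongside the existing results volume
volumes:
- name: results
  emptyDir: {}
- name: tmp
  emptyDir: {}
containers:
- name: test-job
  volumeMounts:
  - name: results
    mountPath: /results    # existing results path
  - name: tmp
    mountPath: /tmp        # writable scratch space for a read-only root filesystem
```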
testdata/adapter-configs/clusters-cl-deployment/adapter-task-config.yaml (review thread outdated, resolved)
testdata/adapter-configs/clusters-cl-job/adapter-task-config.yaml (review thread outdated, resolved)
Force-pushed from b14bff9 to 58b86c8.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: yasun1

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Merged commit 6399f3a into openshift-hyperfleet:main.
Summary
Testing
Summary by CodeRabbit
Tests
Bug Fixes