🌱 chore(ci): Add test to check if user changes are preserved #2512
camilamacedo86 wants to merge 1 commit into operator-framework:main
Conversation
✅ Deploy Preview for olmv1 ready!
Pull request overview
This PR adds an end-to-end test to verify that OLMv1, when using Server-Side Apply, does not revert user-initiated changes to deployed resources. The test specifically validates the scenario where a user runs `kubectl rollout restart deployment`, which was problematic in OLMv0. The PR is explicitly marked as Work In Progress (WIP).
Changes:
- Added a new feature file `rollout-restart.feature` with a scenario testing user-initiated deployment changes
- Implemented three new step functions: `UserAddsRestartAnnotation`, `ResourceHasRestartAnnotation`, and `ClusterExtensionAnnotationIsSet`
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| test/e2e/features/rollout-restart.feature | New feature file defining the test scenario for verifying OLMv1 doesn't revert user-initiated changes to deployments |
| test/e2e/steps/steps.go | Added three new step functions to support the rollout restart test scenario and registered them in the step definitions |
💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.
Force-pushed from 6173613 to 6893e06
Force-pushed from 6893e06 to 98a5126
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated no new comments.
Force-pushed from 98a5126 to f626edd
Force-pushed from f626edd to 741afe1
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
Force-pushed from 741afe1 to 91438bf
Force-pushed from 91438bf to ef15cc9
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.
Force-pushed from ef15cc9 to add4751
Force-pushed from add4751 to 4472c38
Force-pushed from 183b08f to ceacc79
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
You can also share your feedback on Copilot code review. Take the survey.
Codecov Report: ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #2512      +/-   ##
==========================================
- Coverage   64.33%   63.42%    -0.92%
==========================================
  Files         131      131
  Lines        9333     9333
==========================================
- Hits         6004     5919       -85
- Misses       2853     2939       +86
+ Partials      476      475        -1
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated no new comments.
Force-pushed from 40dcee0 to 568227e
Force-pushed from 568227e to 81937d3
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
```gherkin
@@ -0,0 +1,37 @@
@BoxcutterRuntime
Feature: Rollout Restart User Changes
```
Maybe another way to express this might be that non-revision spec field values are preserved? Would that make sense?
i.e. it's not just about user changes, but changes from other controllers as well. If a change does not interfere with the revision spec, OLM won't revert it.
If this is the case, is it important to test this feature in this way? Would something simpler, like adding an annotation to the deployment and ensuring that it doesn't get removed over 5 seconds, be sufficient?
> If this is the case, is it important to test this feature in this way? Would something simpler, like adding an annotation to the deployment and ensuring that it doesn't get removed over 5 seconds, be sufficient?

I think that does not cover the scenario that we are looking for: operator-framework/operator-lifecycle-manager#3392. The goal here is to ensure that we cover this scenario.
@perdasilva I improved the tests; wdyt now?
Can we get this one merged?
I need to agree here with @perdasilva - by reading operator-framework/operator-lifecycle-manager#3392 I would also say that the issue here is that we should allow users to add custom annotations and labels to the deployed resources, without trying to revert them. Hence, I would even call this feature differently.
I've addressed the feedback:
- Renamed the feature to "Preserve Custom Annotations and Labels on Managed Resources" -- framing it as the broader capability rather than a specific rollout restart fix.
- Added a simplified first scenario that adds a custom annotation and label via kubectl annotate/kubectl label, triggers reconciliation, and verifies they're preserved. No rollout complexity.
- Kept the rollout restart as a second scenario since it covers the real-world use case from operator-lifecycle-manager#3392.
Note that, as far as I understand, annotations are part of `.metadata` -- changes there don't bump `.metadata.generation`. Since we rely on comparing `observedGeneration` with `generation` to confirm the controller has processed the change (via the existing step "ClusterExtension has been reconciled the latest generation"), patching an annotation wouldn't give us that sync signal.
That is why the current approach -- patching `spec.install.preflight.crdUpgradeSafety.enforcement` -- reliably bumps `generation`, because it's a persisted spec field.
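The generation-based sync signal described above can be sketched with toy types. To be clear, these are not the real OLMv1 API types, just a minimal illustration of why a metadata-only patch gives no signal while a persisted spec patch does:

```go
package main

import "fmt"

// clusterExtension is a simplified stand-in (NOT the real OLMv1 type) modelling
// the two counters that the existing "reconciled the latest generation" step compares.
type clusterExtension struct {
	generation         int64             // bumped by the API server on every persisted spec change
	observedGeneration int64             // written back by the controller after it reconciles
	annotations        map[string]string // part of .metadata; changes here do NOT bump generation
}

// patchSpec simulates a persisted spec change: the API server bumps generation.
func (ce *clusterExtension) patchSpec() { ce.generation++ }

// patchAnnotation simulates a metadata-only change: generation stays the same.
func (ce *clusterExtension) patchAnnotation(k, v string) { ce.annotations[k] = v }

// reconciled reports whether the controller has caught up with the latest spec.
func (ce *clusterExtension) reconciled() bool { return ce.observedGeneration == ce.generation }

func main() {
	ce := &clusterExtension{generation: 1, observedGeneration: 1, annotations: map[string]string{}}

	ce.patchAnnotation("example.com/x", "y")
	fmt.Println("after annotation patch, reconciled:", ce.reconciled()) // true: the wait step would pass immediately (race)

	ce.patchSpec()
	fmt.Println("after spec patch, reconciled:", ce.reconciled()) // false: gives the wait step something to observe

	ce.observedGeneration = ce.generation // the controller reconciles and reports back
	fmt.Println("after controller run, reconciled:", ce.reconciled()) // true
}
```

This is why the annotation-based trigger would pass the wait step before any reconcile actually happened, while the spec patch forces the step to wait for the controller.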
Force-pushed from 81937d3 to 96f94eb
test/e2e/steps/steps.go (outdated)
```go
// Patch the spec with a harmless reconcileRequests entry that includes a
// unique timestamp. Any spec change will bump .metadata.generation, which
// the controller should observe and reconcile.
payload := fmt.Sprintf(`{"spec":{"reconcileRequests":[%d]}}`, time.Now().UnixNano())
```
Would adding the reconcileRequests as an annotation also trigger the reconciliation? If so, could it be a better approach?
An annotation wouldn't work here. Annotation changes don't bump metadata.generation, so the next step (ClusterExtension has been reconciled the latest generation) — which waits for observedGeneration == generation — would pass immediately before the controller actually reconciles. That's a race condition.
The original spec.reconcileRequests approach was also broken: that field doesn't exist in the CRD schema, so the API server silently prunes it. No spec change means no generation bump and no reconciliation is triggered at all.
The fix uses install.preflight.crdUpgradeSafety.enforcement: "None" instead — a real spec field that persists, bumps generation, and gives us a reliable sync signal. It's harmless here since the test doesn't involve CRD upgrades.
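As a side note, the merge-patch payload could also be built from nested maps rather than a raw string literal, which guards against the exact failure mode described above: a typo producing an unknown field that the API server silently prunes. A minimal sketch (not the PR's actual code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// reconcilePatch builds the crdUpgradeSafety merge patch from nested maps,
// so the payload structure is checked by the compiler's eyes rather than
// hand-written JSON. The field path mirrors the ClusterExtension spec field
// used in the PR: spec.install.preflight.crdUpgradeSafety.enforcement.
func reconcilePatch() ([]byte, error) {
	patch := map[string]any{
		"spec": map[string]any{
			"install": map[string]any{
				"preflight": map[string]any{
					"crdUpgradeSafety": map[string]any{
						"enforcement": "None",
					},
				},
			},
		},
	}
	return json.Marshal(patch)
}

func main() {
	b, err := reconcilePatch()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
	// {"spec":{"install":{"preflight":{"crdUpgradeSafety":{"enforcement":"None"}}}}}
}
```

The marshalled bytes can then be passed to `kubectl patch --type=merge -p` exactly like the string literal in the PR.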
@pedjak ^ I added an extra one, following the suggestions, but see above.
Force-pushed from 96f94eb to 5b8024b
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
/approve

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: perdasilva

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
I did the changes suggested by you.
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
```gherkin
Scenario: Bundle-provided and user-added labels and annotations persist after OLM reconciliation
  When ClusterExtension is applied
    """
    apiVersion: olm.operatorframework.io/v1
    kind: ClusterExtension
    metadata:
      name: ${NAME}
    spec:
      namespace: ${TEST_NAMESPACE}
      serviceAccount:
        name: olm-sa
      source:
        sourceType: Catalog
        catalog:
          packageName: test
          selector:
            matchLabels:
              "olm.operatorframework.io/metadata.name": test-catalog
    """
  Then ClusterExtension is rolled out
  And ClusterExtension is available
  And resource "deployment/test-operator" is available
  # Verify bundle-provided labels are present after initial install
  And resource "deployment/test-operator" has label "app.kubernetes.io/name" with value "test-operator"
  # Add user-owned labels and annotations
  When user adds annotation "example.com/custom-annotation=my-value" to "deployment/test-operator"
  And user adds label "example.com/custom-label=my-value" to "deployment/test-operator"
  Then resource "deployment/test-operator" has annotation "example.com/custom-annotation" with value "my-value"
  And resource "deployment/test-operator" has label "example.com/custom-label" with value "my-value"
```
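The "user adds annotation/label" steps above each take a single `key=value` argument, in the same shape `kubectl annotate` and `kubectl label` accept. A minimal sketch of the argument parsing such a step would need (the helper name is hypothetical, not from the PR):

```go
package main

import (
	"fmt"
	"strings"
)

// parseKeyValue splits a "key=value" step argument the way kubectl does:
// only the first "=" separates key from value, so values may contain "=".
// A hypothetical helper for illustration, not code from the PR.
func parseKeyValue(arg string) (key, value string, err error) {
	parts := strings.SplitN(arg, "=", 2)
	if len(parts) != 2 || parts[0] == "" {
		return "", "", fmt.Errorf("expected key=value, got %q", arg)
	}
	return parts[0], parts[1], nil
}

func main() {
	k, v, err := parseKeyValue("example.com/custom-annotation=my-value")
	if err != nil {
		panic(err)
	}
	fmt.Printf("key=%s value=%s\n", k, v)
	// key=example.com/custom-annotation value=my-value
}
```

The resulting key/value pair would then be handed to the cluster client (e.g. a `kubectl annotate`/`kubectl label` call) inside the step implementation.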
I agree with Copilot on this comment.
I see this both ways:
I think it makes sense for us to have two scenarios:
And if we care about two scenarios being overkill, then I'd call this feature something based on (1), but then test it with (2) as the scenario, since that is the specific thing a user cared about AND it would cover the general case.

So that means having the PR as it was before the changes. I will remove the latest commit and keep it as it was.
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
```go
// First, get the deployment to find its selector labels
deploymentOut, err := k8sClient("get", "deployment", deploymentName, "-n", sc.namespace, "-o", "json")
if err != nil {
	logger.V(1).Info("Failed to get deployment", "deployment", deploymentName, "error", err)
	return false
}

var deployment appsv1.Deployment
if err := json.Unmarshal([]byte(deploymentOut), &deployment); err != nil {
	logger.V(1).Info("Failed to parse deployment", "error", err)
	return false
}

if len(rsList) < expectedCount {
	logger.V(1).Info("Not enough ReplicaSets yet", "deployment", deploymentName, "current", len(rsList), "expected", expectedCount)
```
```go
// We set install.preflight.crdUpgradeSafety.enforcement to "None" because it is
// a real spec field that the API server will persist (unlike unknown fields, which
// are pruned by structural schemas). Any persisted spec change bumps
// .metadata.generation, giving us a reliable synchronization signal.
func TriggerClusterExtensionReconciliation(ctx context.Context) error {
	sc := scenarioCtx(ctx)
	payload := `{"spec":{"install":{"preflight":{"crdUpgradeSafety":{"enforcement":"None"}}}}}`
	_, err := k8sClient("patch", "clusterextension", sc.clusterExtensionName,
		"--type=merge",
		"-p", payload)
```

```go
}

// DeploymentHasReplicaSets verifies that a deployment has the expected number of ReplicaSets
// and that the latest one is active with pods running.
```
Add a test to ensure that OLM is not reverting user changes like `kubectl rollout restart`.

Assisted-by: Cursor/Claude