OCPBUGS-84661: Fix wrong early exit during kubelet MCs regeneration #5898

Merged
openshift-merge-bot[bot] merged 1 commit into openshift:main from pablintino:ocpbugs-84661 on May 6, 2026

Conversation

@pablintino
Contributor

@pablintino pablintino commented Apr 29, 2026

Closes: #OCPBUGS-84661

- What I did

This change fixes an erroneous early return that made syncKubeletConfig exit before ensuring the remaining pools in the loop were up to date.

- How to verify it

TBD

- Description for the changelog

This change fixes an erroneous early return that made syncKubeletConfig exit before ensuring the remaining pools in the loop were up to date.

Summary by CodeRabbit

  • Bug Fixes

    • The kubelet configuration controller now continues processing remaining machine configuration pools within the same reconciliation when one pool is already up-to-date, instead of stopping the reconciliation early.
  • Tests

    • Added a deterministic test to ensure the controller skips an up-to-date pool and proceeds to update other pools in the same run.

@openshift-merge-bot
Contributor

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller automatically detects which contexts are required and uses /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci-robot openshift-ci-robot added jira/severity-important Referenced Jira bug's severity is important for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Apr 29, 2026
@openshift-ci-robot
Contributor

@pablintino: This pull request references Jira Issue OCPBUGS-84661, which is invalid:

  • expected the bug to target the "5.0.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this:

Closes: #OCPBUGS-84661

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci Bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 29, 2026
@coderabbitai

coderabbitai Bot commented Apr 29, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 9f14e12a-2f19-4e41-9a7d-df84db872b6f

📥 Commits

Reviewing files that changed from the base of the PR and between 21e9bca and a5bc08d.

📒 Files selected for processing (2)
  • pkg/controller/kubelet-config/kubelet_config_controller.go
  • pkg/controller/kubelet-config/kubelet_config_controller_test.go
✅ Files skipped from review due to trivial changes (1)
  • pkg/controller/kubelet-config/kubelet_config_controller.go

Walkthrough

Changed the per-pool control flow in syncKubeletConfig: when a pool's generated MachineConfig is already at the current controller version, the controller now skips that pool (continue) and proceeds to process remaining pools instead of returning early and ending the reconciliation.
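The control-flow change described above can be sketched as a minimal Go model. The types and helper below are hypothetical simplifications; the real controller iterates MachineConfigPools and compares the generated MachineConfig's generated-by-controller-version annotation against version.Hash:

```go
package main

import "fmt"

// pool is a hypothetical, simplified stand-in for a MachineConfigPool
// together with its generated MachineConfig's controller-version annotation.
type pool struct {
	name              string
	controllerVersion string
}

// currentVersion stands in for version.Hash.
const currentVersion = "f72cd29"

// syncPools mirrors the fixed per-pool control flow: an up-to-date pool is
// skipped with continue, so later pools still get their MachineConfigs
// regenerated within the same reconciliation.
func syncPools(pools []pool) []string {
	var regenerated []string
	for _, p := range pools {
		if p.controllerVersion == currentVersion {
			// Before the fix this was `return nil`, which ended the
			// reconciliation and left every later pool stale.
			continue
		}
		regenerated = append(regenerated, p.name)
	}
	return regenerated
}

func main() {
	pools := []pool{
		{"worker", currentVersion}, // already up to date, skipped
		{"custom-1", "bc74fee"},    // stale, must be regenerated
	}
	fmt.Println(syncPools(pools)) // [custom-1]
}
```

With the old `return nil`, a pools slice whose first element was up to date would produce an empty result and the stale pools would only be retried on a later resync, which is exactly the failure mode the bug describes.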

Changes

Kubelet Config Controller and Tests

Layer / File(s) — Summary

Core Control Flow — pkg/controller/kubelet-config/kubelet_config_controller.go:
Replaced an early return nil with continue when a pool's MachineConfig controller-version equals version.Hash, allowing the loop to process subsequent MachineConfigPools.

Test Fixtures / Deterministic Iteration — pkg/controller/kubelet-config/kubelet_config_controller_test.go:
Imported sort and k8s.io/apimachinery/pkg/labels; added a sortedMCPLister wrapper to ensure deterministic ordering of MachineConfigPool List results by name.

Test Behavior / Assertions — pkg/controller/kubelet-config/kubelet_config_controller_test.go:
Updated TestMachineConfigSkipUpdate expected actions to include expectUpdateKubeletConfig(kc1); added TestSkipUpdateContinuesToNextPool, which verifies that when one pool is up to date the controller continues and regenerates the MachineConfig for the next pool (annotated with version.Hash).
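The deterministic-ordering idea behind the test fixture can be approximated as follows. This is a hypothetical stand-in for the sortedMCPLister wrapper: the real one wraps a lister's MachineConfigPool List results, while here a plain slice is sorted by name so fake-client iteration order cannot flake the expected action sequence:

```go
package main

import (
	"fmt"
	"sort"
)

// mcp is a hypothetical stand-in for a MachineConfigPool; only the
// name matters for ordering.
type mcp struct{ name string }

// sortedList returns a name-sorted copy of the input, mirroring what a
// sorting lister wrapper does so tests see pools in a stable order.
func sortedList(pools []mcp) []mcp {
	out := append([]mcp(nil), pools...)
	sort.Slice(out, func(i, j int) bool { return out[i].name < out[j].name })
	return out
}

func main() {
	// Regardless of input order, the test always observes the same sequence.
	fmt.Println(sortedList([]mcp{{"worker"}, {"custom-10"}, {"custom-1"}}))
}
```

Sorting inside the test fixture (rather than in the controller) keeps production behavior untouched while making the new TestSkipUpdateContinuesToNextPool assertions order-independent.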

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 12
✅ Passed checks (12 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly describes the main fix: preventing wrong early exit during kubelet MachineConfigs regeneration, which directly matches the changeset that replaces an early return with continue to process remaining pools.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.
Stable And Deterministic Test Names ✅ Passed All test names are stable and deterministic. The new test uses static platform constants for subtest naming with no dynamic information.
Test Structure And Quality ✅ Passed Tests use standard Go testing, not Ginkgo. New test has single responsibility, meaningful assertions, proper fixture setup, and follows codebase patterns. No Ginkgo-specific features apply.
Microshift Test Compatibility ✅ Passed The new test is a standard Go unit test, not a Ginkgo e2e test. It uses testing.T pattern, not Ginkgo syntax (It(), Describe(), etc.). The check only applies to Ginkgo e2e tests.
Single Node Openshift (Sno) Test Compatibility ✅ Passed The new test is a standard Go unit test using testing.T, not a Ginkgo e2e test. The custom check applies only to Ginkgo e2e tests. No SNO-incompatible assumptions detected.
Topology-Aware Scheduling Compatibility ✅ Passed PR modifies controller logic only (syncKubeletConfig and tests). No scheduling constraints, pod affinity, topology spreads, nodeSelectors, tolerations, PDBs, or replica configurations are introduced.
Ote Binary Stdout Contract ✅ Passed The PR does not involve OTE binaries or Ginkgo test suites. It only modifies a standard Go unit test file and a controller package. The check is not applicable to standard Go unit tests.
Ipv6 And Disconnected Network Test Compatibility ✅ Passed This PR adds standard Go unit tests (TestSkipUpdateContinuesToNextPool), not Ginkgo e2e tests. The custom check applies only to new Ginkgo e2e tests. Check is not applicable.



Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@yuqi-zhang yuqi-zhang left a comment


/lgtm

Makes sense based on the discussions, thanks!

@openshift-ci openshift-ci Bot added the lgtm Indicates that a PR is ready to be merged. label Apr 30, 2026
@openshift-merge-bot
Contributor

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aws-ovn
/test e2e-aws-ovn-upgrade
/test e2e-gcp-op-part1
/test e2e-gcp-op-part2
/test e2e-gcp-op-single-node
/test e2e-hypershift

@sdodson
Member

sdodson commented May 2, 2026

/retest-required

Just using this as a canary to see if other PRs fail consistently.

@sdodson
Member

sdodson commented May 2, 2026

/test e2e-gcp-op-ocl-part1
(this is the specific job that's failing repeatedly on #5905)

@pablintino
Contributor Author

/retest-required

@sergiordlr
Contributor

sergiordlr commented May 4, 2026

Verified using IPI on AWS

We had problems reproducing the issue. The only way we managed to reproduce it was by killing the MCC when the kubeletconfig resources' controller-version starts updating.

Steps:

  1. Create 10 custom MCP
  2. Create a kubeletconfig matching all the pools, including the worker pool
COMMON_LABEL="reproduce-84661"

###############################################################################
# STEP 1: Create 10 custom MCPs with the common label
###############################################################################
echo "=== Creating 10 custom MCPs ==="
for i in $(seq 1 10); do
oc apply -f - <<EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: custom-$i
  labels:
    $COMMON_LABEL: ""
    pools.operator.machineconfiguration.openshift.io/custom-$i: ""
spec:
  machineConfigSelector:
    matchExpressions:
      - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,custom-$i]}
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/custom-$i: ""
EOF
done

###############################################################################
# STEP 2: Add the common label to the worker pool
###############################################################################
echo "=== Adding common label to the worker pool ==="
oc label mcp worker "$COMMON_LABEL=" --overwrite

###############################################################################
# STEP 3: Create ONE KubeletConfig matching all 11 pools
###############################################################################
echo "=== Creating KubeletConfig targeting all pools ==="
oc apply -f - <<EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: kc-all-pools
spec:
  machineConfigPoolSelector:
    matchLabels:
      $COMMON_LABEL: ""
  kubeletConfig:
    maxPods: 250
EOF

  3. Upgrade the cluster to the new version

  4. While the upgrade is still running, run in the background a script that watches the generated-kubelet MCs, so that when one of them changes its controller-version, the script kills the controller pod to force a restart.

NAMESPACE="openshift-machine-config-operator"

# Get the current controller version hash (pre-upgrade)
OLD_HASH=$(oc get mc 99-worker-generated-kubelet -o jsonpath='{.metadata.annotations.machineconfiguration\.openshift\.io/generated-by-controller-version}')
echo "Current controller version hash: $OLD_HASH"
echo "Waiting for upgrade to start regenerating kubelet MCs..."

while true; do
  # Count how many 99-*-generated-kubelet MCs have a DIFFERENT hash (new version)
  TOTAL=$(oc get mc -o json | jq -r '
    [.items[] | select(.metadata.name | test("^99-.*generated-kubelet$"))] | length')
  NEW=$(oc get mc -o json | jq -r --arg old "$OLD_HASH" '
    [.items[] | select(.metadata.name | test("^99-.*generated-kubelet$")) |
     select(.metadata.annotations["machineconfiguration.openshift.io/generated-by-controller-version"] != $old)] | length')

  echo "$(date +%H:%M:%S) - $NEW/$TOTAL MCs regenerated with new hash"

  # Kill when at least 1 but not all have been regenerated
  if [ "$NEW" -gt 0 ] && [ "$NEW" -lt "$TOTAL" ]; then
    echo ""
    echo "=== KILLING controller pod NOW ==="
    echo "=== $NEW out of $TOTAL MCs regenerated ==="
    oc delete pod -n "$NAMESPACE" -l k8s-app=machine-config-controller --wait=false
    echo ""
    echo "Controller killed. Monitor the state with:"
    echo '  watch "oc get mc -o json | jq -r '\''[.items[] | select(.metadata.name | test(\"^99-.*generated-kubelet$\"))] | map({name: .metadata.name, hash: .metadata.annotations[\"machineconfiguration.openshift.io/generated-by-controller-version\"]}) | sort_by(.name)'\''\"'
    exit 0
  fi

  sleep 1
done

  5. Check that the upgrade completed correctly and that no pool is degraded reporting this error:
- lastTransitionTime: "2026-04-30T15:45:37Z"
    message: 'Failed to render configuration for pool custom-10: could not generate
      rendered MachineConfig: Ignoring MC 99-custom-10-generated-kubelet generated
      by older version bc74feed8e96d179347b8ee0c102280c050d5e24 (my version: f72cd29b7306dcb05effc3bceee1398942d97902)'
    reason: ""
    status: "True"
    type: RenderDegraded
  6. Check that after the upgrade all the generated-kubelet configs have the right controller-version (a double check: if that were not the case, the pools would be degraded).

/verified by @sergiordlr

@openshift-ci-robot openshift-ci-robot added the verified Signifies that the PR passed pre-merge verification criteria label May 4, 2026
@openshift-ci-robot
Contributor

@sergiordlr: This PR has been marked as verified by @sergiordlr.

Details

In response to this:

Verified using IPI on AWS

/verified by @sergiordlr

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

This change fixes an erroneous early return that made syncKubeletConfig
exit before ensuring the remaining pools in the loop were up to date.

Signed-off-by: Pablo Rodriguez Nava <git@amail.pablintino.eu>
@openshift-ci-robot openshift-ci-robot removed the verified Signifies that the PR passed pre-merge verification criteria label May 5, 2026
@openshift-ci openshift-ci Bot removed the lgtm Indicates that a PR is ready to be merged. label May 5, 2026
@openshift-ci-robot
Contributor

@pablintino: This pull request references Jira Issue OCPBUGS-84661, which is invalid:

  • expected the bug to target the "5.0.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

Details

In response to this:

Closes: #OCPBUGS-84661

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@yuqi-zhang
Contributor

/lgtm

@openshift-ci openshift-ci Bot added the lgtm Indicates that a PR is ready to be merged. label May 5, 2026
@openshift-merge-bot
Contributor

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aws-ovn
/test e2e-aws-ovn-upgrade
/test e2e-gcp-op-part1
/test e2e-gcp-op-part2
/test e2e-gcp-op-single-node
/test e2e-hypershift

@openshift-ci
Contributor

openshift-ci Bot commented May 5, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pablintino, yuqi-zhang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: needs approval from an approver in each of these files:
  • OWNERS [pablintino,yuqi-zhang]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@pablintino
Contributor Author

/verified by @sergiordlr
Now also verified by TestSkipUpdateContinuesToNextPool

@openshift-ci-robot openshift-ci-robot added the verified Signifies that the PR passed pre-merge verification criteria label May 5, 2026
@openshift-ci-robot
Contributor

@pablintino: This PR has been marked as verified by @sergiordlr.

Details

In response to this:

/verified by @sergiordlr
Verified now by TestSkipUpdateContinuesToNextPool too

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@pablintino
Contributor Author

/jira refresh

@openshift-ci-robot openshift-ci-robot added jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. and removed jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels May 6, 2026
@openshift-ci-robot
Contributor

@pablintino: This pull request references Jira Issue OCPBUGS-84661, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (5.0.0) matches configured target version for branch (5.0.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)
Details

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci
Contributor

openshift-ci Bot commented May 6, 2026

@pablintino: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-merge-bot openshift-merge-bot Bot merged commit 3951ac7 into openshift:main May 6, 2026
17 checks passed
@openshift-ci-robot
Contributor

@pablintino: Jira Issue Verification Checks: Jira Issue OCPBUGS-84661
✔️ This pull request was pre-merge verified.
✔️ All associated pull requests have merged.
✔️ All associated, merged pull requests were pre-merge verified.

Jira Issue OCPBUGS-84661 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload. 🕓

Details

In response to this:

Closes: #OCPBUGS-84661

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@pablintino
Contributor Author

/cherry-pick release-4.22

@openshift-cherrypick-robot

@pablintino: new pull request created: #6009

Details

In response to this:

/cherry-pick release-4.22

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.


@openshift-merge-robot
Contributor

Fix included in release 5.0.0-0.nightly-2026-05-06-212204

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. jira/severity-important Referenced Jira bug's severity is important for the branch this PR is targeting. jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. lgtm Indicates that a PR is ready to be merged. verified Signifies that the PR passed pre-merge verification criteria
