NO-JIRA: daemon: expand os image presence check #6005
Conversation
Walkthrough

Local vs Remote Rebase Flow

```mermaid
sequenceDiagram
    participant Daemon
    participant Podman
    participant Registry
    participant Status
    Daemon->>Podman: GetPodmanImageInfoByReference(newURL)
    alt Podman lookup error
        Podman-->>Daemon: error
        Daemon->>Status: return early (propagate error)
    else Image present
        Podman-->>Daemon: podmanImageInfo
        Daemon->>Daemon: RebaseLayeredFromContainerStorage(image)
        Daemon->>Status: update status (local rebase)
    else Image absent
        Podman-->>Daemon: nil
        Daemon->>Registry: RebaseLayered(newURL) (pull)
        Registry-->>Daemon: image data / errors (with retries)
        Daemon->>Status: update MachineConfigNodeImagePulledFromRegistry conditions
    end
```
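The branching above can be sketched in Go. Everything here is illustrative: the type and function names are stand-ins for the daemon's real interfaces, not the MCO's actual API.

```go
package main

import (
	"errors"
	"fmt"
)

// imageInfo stands in for the result of a podman image lookup; a nil
// pointer means the image is absent from local container storage.
type imageInfo struct{ id string }

// lookupLocal mimics GetPodmanImageInfoByReference: it can fail outright
// (nil store), find the image locally, or report it absent (nil, nil).
func lookupLocal(store map[string]string, ref string) (*imageInfo, error) {
	if store == nil {
		return nil, errors.New("podman lookup failed")
	}
	if id, ok := store[ref]; ok {
		return &imageInfo{id: id}, nil
	}
	return nil, nil
}

// chooseRebaseSource propagates lookup errors early, rebases from local
// container storage when the image is present, and otherwise falls back
// to pulling from the registry.
func chooseRebaseSource(store map[string]string, newURL string) (string, error) {
	info, err := lookupLocal(store, newURL)
	if err != nil {
		return "", err
	}
	if info != nil {
		return "containers-storage:" + newURL, nil
	}
	return "registry:" + newURL, nil
}

func main() {
	store := map[string]string{"quay.io/example/os:v2": "sha256:abc"}
	for _, url := range []string{"quay.io/example/os:v2", "quay.io/example/os:v3"} {
		src, err := chooseRebaseSource(store, url)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Println(src)
	}
}
```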
/hold I want to look at the logs from the CI run
8acfad7 to a43e96a
/test ci/prow/e2e-aws-ovn

/test e2e-aws-ovn

/test unit
```go
if podmanImageInfo, err = dn.podmanInterface.GetPodmanImageInfoByReference(newURL); err != nil {
	return err
}
podmanImageInfo, err := dn.podmanInterface.GetPodmanImageInfoByReference(newURL)
```
Hmm, so we added the PIS check in https://github.com/openshift/machine-config-operator/pull/5333/changes . Looking at the bug, we did it because the local pull can get cleaned up unless PIS configured crio not to delete the image.
The code around it seems to have changed a bit, but I'm curious if that's still a problem.
Definitely would like more eyes on the code here, but IIUC the existing code only checks if the image is present if PIS was configured, i.e. it assumes the only reason the image would already be present is if PIS was configured.
The change here basically revolves around the fact that the image could be present for multiple reasons, so let's always check if it's present first and proceed accordingly if it is.
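A minimal sketch of the behavioral change being discussed (names are paraphrased from the diff; the PIS flag and the storage map are stand-ins, not the real daemon state):

```go
package main

import "fmt"

// present reports whether the image is already in local container storage.
func present(local map[string]bool, url string) bool { return local[url] }

// oldSource only consulted local storage when a PinnedImageSet was
// configured, so a locally present image was otherwise ignored.
func oldSource(local map[string]bool, url string, pisConfigured bool) string {
	if pisConfigured && present(local, url) {
		return "containers-storage:" + url
	}
	return "pull:" + url
}

// newSource always checks local storage first, since the image can be
// present for reasons other than PIS (e.g. the bootupd update container).
func newSource(local map[string]bool, url string) string {
	if present(local, url) {
		return "containers-storage:" + url
	}
	return "pull:" + url
}

func main() {
	local := map[string]bool{"quay.io/example/os:v2": true}
	fmt.Println(oldSource(local, "quay.io/example/os:v2", false)) // pulled despite local copy
	fmt.Println(newSource(local, "quay.io/example/os:v2"))        // uses the local copy
}
```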
Right, I want to say that we only wanted to rebase from the local image if PIS is configured, and not for any other case explicitly, even if it does incur an extra cost.
@pablintino does that sound right?
@yuqi-zhang @dustymabe I think this change is safe to perform now given we did #5345. The check to see if PIS was enabled was introduced as a really short-term fix for customers that faced the original bug, which basically caused clusters with the image already in local storage to fail if a really tight pull policy was in place. As most users didn't have PIS enabled, this if condition made all clusters go through the pull path instead of the local storage one. Now that we always manipulate the policies to make sure local storage is usable, I don't think we will have issues.
Apart from that, totally agree with the idea of this change to avoid the extra pull.
/pipeline required

Scheduling tests matching the

/test e2e-gcp-op-ocl-part2
```go
	return dn.InplaceUpdateViaNewContainer(newURL)
}

isPisConfigured, err := dn.isPinnedImageSetConfigured()
```
isPinnedImageSetConfigured can now be removed as it's dead code.
a43e96a to b95edf8
Actionable comments posted: 1

In `pkg/daemon/update.go`, around lines 2781-2788: remove the orphaned comment fragment "// the local image" that interrupts the comment block before the call to `dn.podmanInterface.GetPodmanImageInfoByReference(newURL)`, so that the explanation about checking whether the new container image is already present reads as a single coherent sentence.
yuqi-zhang left a comment
/lgtm
Do you want to backport this @dustymabe? I can create a bug for that. This will currently merge into 5.0 only.
I think it should go anywhere the bootupd code landed; it's almost to be considered a fixup for that code. I wish we didn't have to do a bunch of paperwork for that.
In openshift#5868 [1] we started running bootupd from the OS update container, which means the container image gets pulled down and is already present. Let's expand the GetPodmanImageInfoByReference check here to run it unconditionally, not just if isPisConfigured. This should prevent us pulling the OS image into container storage and then having rpm-ostree separately pull it (directly into ostree storage). If podmanImageInfo isn't nil then we'll rebase directly from `containers-storage:` transport. [1] openshift#5868
b95edf8 to 46930d4
I think given the context that:
I'm fine with having this generally only for main (5.0) and onwards, and consider it less of a fix and more of an operational improvement, unless you think there's a specific risk of double-pulls in 4.22 that we should get this in for.
ok, sounds good

Given its minor scope, adding no-Jira. /lgtm from my end.

/unhold

/lgtm

Scheduling tests matching the
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dustymabe, yuqi-zhang
@dustymabe: The following test failed.
- What I did
In #5868 [1] we started running bootupd from the OS update container, which means the container image gets pulled down and is already present. Let's expand the GetPodmanImageInfoByReference check here to run it unconditionally, not just if isPisConfigured.
This should prevent us pulling the OS image into container storage and then having rpm-ostree separately pull it (directly into ostree storage). If podmanImageInfo isn't nil then we'll rebase directly from the `containers-storage:` transport.

[1] #5868

- How to verify it

Look at the logs and verify the rebase is coming from `containers-storage:`.

- Description for the changelog
Tell rpm-ostree rebase to use the image from containers-storage if the image is already present, which will be true if we previously updated the bootloader.