fix: After a Node goes down and takes some time to come back up, the mount points of evicted Pods cannot be cleaned up successfully. #116134
Conversation
Hi @cvvz. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
cc @andyzhangx
/ok-to-test
/triage accepted
@bart0sh: GitHub didn't allow me to request PR reviews from the following users: kubernetes/sig-storage-pr-reviews. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@cvvz can you please squash the commits, so we have a clean history?
```go
if !isExist {
	return nil, fmt.Errorf("volume: %q is not mounted", uniqueVolumeName)
}

reconstructedVolume := &reconstructedVolume{
```
Should we at least check for the existence of the mount or map path? If they don't exist (but the pod directory may still exist), is there a point in doing reconstruction?
Reconstruction is still needed when the mount or map path doesn't exist: in that situation we can rely on `unmountDevice` and `unmountVolume` to clean up both the global mount path and the pod mount path, whereas `cleanupMounts` can only clean up the mount path in the Pod directory.
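To make the division of labor concrete, here is a minimal self-contained Go sketch of the cleanup split described above. The helper names (`cleanupPodMountOnly`, `teardownViaReconstruction`) are hypothetical and only model the behavior of `cleanupMounts` versus `unmountVolume`/`unmountDevice`; they are not the kubelet's actual API.

```go
// Sketch of the cleanup split (hypothetical helper names, not the real
// kubelet API): cleanupMounts can only remove the per-pod mount path, while
// a reconstructed volume lets the reconciler drive unmountVolume (pod mount
// path) and unmountDevice (global mount path).
package main

import "fmt"

type volumeState struct {
	podMountPathExists    bool
	globalMountPathExists bool
}

// cleanupPodMountOnly models cleanupMounts: it can only touch the pod directory.
func cleanupPodMountOnly(v *volumeState) {
	v.podMountPathExists = false
}

// teardownViaReconstruction models the reconciler path: unmountVolume
// followed by unmountDevice, cleaning both paths.
func teardownViaReconstruction(v *volumeState) {
	v.podMountPathExists = false    // unmountVolume
	v.globalMountPathExists = false // unmountDevice
}

func main() {
	v := &volumeState{podMountPathExists: true, globalMountPathExists: true}
	cleanupPodMountOnly(v)
	fmt.Printf("after cleanupMounts: global mount left behind: %v\n", v.globalMountPathExists)

	v = &volumeState{podMountPathExists: true, globalMountPathExists: true}
	teardownViaReconstruction(v)
	fmt.Printf("after reconstruction teardown: global mount left behind: %v\n", v.globalMountPathExists)
}
```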
+1 to @cvvz. At this point, `operationExecutor.ReconstructVolumeOperation` has succeeded, so the volume plugin should have enough information to clean up the volume by itself.
But if the pod is still running and we add the volume to the ASW directly, the volume manager will never remount this volume, and the container will use the local path instead of the remote path.
If the Pod is still running, then the volume should exist in the DSW, so it won't be added to the ASW:
kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct.go
Lines 87 to 97 in e575e60
```go
if volumeInDSW {
	// Some pod needs the volume. And it exists on disk. Some previous
	// kubelet must have created the directory, therefore it must have
	// reported the volume as in use. Mark the volume as in use also in
	// this new kubelet so reconcile() calls SetUp and re-mounts the
	// volume if it's necessary.
	volumeNeedReport = append(volumeNeedReport, reconstructedVolume.volumeName)
	rc.skippedDuringReconstruction[reconstructedVolume.volumeName] = gvl
	klog.V(4).InfoS("Volume exists in desired state, marking as InUse", "podName", volume.podName, "volumeSpecName", volume.volumeSpecName)
	continue
}
```
commit 1b3ae27e7af577372d5aaaf28ea401eb33d1c4df
Author: weizhichen <[email protected]>
Date: Thu Mar 9 08:39:04 2023 +0000

    fix

commit 566e139308e3cec4c9d4765eb4ccc3a735346c2e
Author: weizhichen <[email protected]>
Date: Thu Mar 9 08:36:32 2023 +0000

    fix unit test

commit 13a58ebd25b824dcf854a132e9ac474c8296f0bf
Author: weizhichen <[email protected]>
Date: Thu Mar 2 03:32:39 2023 +0000

    add unit test

commit c984e36e37c41bbef8aec46fe3fe81ab1c6a2521
Author: weizhichen <[email protected]>
Date: Tue Feb 28 15:25:56 2023 +0000

    fix imports

commit 58ec617e0ff1fbd209ca0af3237017679c3c0ad7
Author: weizhichen <[email protected]>
Date: Tue Feb 28 15:24:21 2023 +0000

    delete CheckVolumeExistenceOperation

commit 0d8cf0caa78bdf1f1f84ce011c4cc0e0de0e8707
Author: weizhichen <[email protected]>
Date: Tue Feb 28 14:29:37 2023 +0000

    fix 111933
/retest

1 similar comment

/retest

/lgtm

LGTM label has been added. Git tree hash: 02acc88fcce5d632091a150161a97f8dd1081dce
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: cvvz, jsafrane. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
It should be safe to backport to 1.27, but backporting to older releases needs more investigation.
I checked 1.26 with the old-style volume reconstruction. If `reconstructVolume()` succeeds:
Therefore this PR is not suitable for the old-style volume reconstruction and backports to 1.25-1.26. I don't think it is safe to mark the volumes as uncertain; that's why we reworked the volume reconstruction and put it behind a feature gate.
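For readers unfamiliar with the "uncertain" state being discussed, here is a hedged Go sketch of the behavior at issue, with illustrative names rather than the kubelet's real `operationexecutor` types: a volume marked uncertain in the ASW gets SetUp/MountVolume retried by the reconciler while it is still desired, which is the remount behavior considered unsafe to introduce into the old-style reconstruction.

```go
// Illustrative model (not the real kubelet API) of why marking a
// reconstructed volume "uncertain" changes reconciler behavior: uncertain
// volumes that are still desired get SetUp retried, while volumes that are
// no longer desired are torn down.
package main

import "fmt"

type mountState int

const (
	mounted mountState = iota
	uncertainMount
	notMounted
)

// nextAction models the reconciler's decision for one volume.
func nextAction(state mountState, inDSW bool) string {
	switch {
	case !inDSW:
		return "unmount" // no pod wants it: tear down pod and device mounts
	case state == uncertainMount:
		return "retry SetUp" // re-run MountVolume until the state is certain
	default:
		return "no-op"
	}
}

func main() {
	fmt.Println(nextAction(uncertainMount, true))  // retry SetUp
	fmt.Println(nextAction(uncertainMount, false)) // unmount
}
```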
I think it would be OK. If kubelet gets a new Pod that uses the volume, the following is going to happen:
So, I think we need to get the mount state of the device mount path during reconstruction; if the device mount path is not mounted, then set the state as
WDYT? @jsafrane
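A sketch of the check being proposed, using the real `k8s.io/mount-utils` package; the path value and the surrounding control flow are illustrative assumptions, not the PR's actual reconstruction code:

```go
// Sketch: determine during reconstruction whether the device (global) mount
// path is actually mounted. k8s.io/mount-utils is the real library; the
// deviceMountPath value and what the caller does with the result are
// assumptions for illustration.
package main

import (
	"fmt"

	mount "k8s.io/mount-utils"
)

func deviceIsMounted(deviceMountPath string) (bool, error) {
	mounter := mount.New("") // default system mount utilities
	// IsLikelyNotMountPoint returns an error if the path does not exist.
	notMnt, err := mounter.IsLikelyNotMountPoint(deviceMountPath)
	if err != nil {
		return false, err
	}
	return !notMnt, nil
}

func main() {
	mounted, err := deviceIsMounted("/var/lib/kubelet/plugins/example/globalmount")
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	// A reconstruction path could record a not-mounted device state here so
	// the reconciler handles teardown accordingly.
	fmt.Println("device mount path mounted:", mounted)
}
```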
Not sure whether it's suitable to backport to 1.25-1.26. @cvvz could you backport to 1.27 first? Thanks.
…-origin-release-1.27 Automated cherry pick of #116134: fix: After a Node goes down and takes some time to come back up, the mount points of evicted Pods cannot be cleaned up successfully.
How about backporting to 1.26, or even earlier releases?
What type of PR is this?
/kind bug
What this PR does / why we need it:
After a Node goes down and takes some time to come back up, the mount points of evicted Pods cannot be cleaned up successfully (#111933). Meanwhile, kubelet keeps printing the log

`Orphaned pod "xxx" found, but error not a directory occurred when trying to remove the volumes dir`

every 2 seconds (#105536). Although we have KEP 3756 to refactor the process of reconstructing the ASW and DSW, which aims to fix these problems completely, and some of that work has already been done by introducing the `SELinuxMountReadWriteOncePod` feature gate and adding a new reconcile function, we still need to fix them in the old implementation before the new feature is stable.

Which issue(s) this PR fixes:
Fixes #111933 #105536
Special notes for your reviewer:
There is no need to check volume existence during reconstruction, or to clean up if the mount point does not exist.
After the evicted Pods are reconstructed and added to the ASW or to `skippedDuringReconstruction`, the mount point will finally be cleaned up in the subsequent reconcile loop, since these Pods are not in the DSW (see the sketch below).
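As a rough illustration of the second note, here is a minimal Go sketch (toy maps, not the real ASW/DSW caches) of the reconcile decision that ultimately cleans up the evicted Pods' mount points:

```go
// Toy model of the reconcile loop's cleanup decision: a volume present in
// the actual state of the world (ASW) but absent from the desired state of
// the world (DSW) gets unmounted. The maps stand in for the real kubelet
// caches and exist only for illustration.
package main

import "fmt"

func main() {
	asw := map[string]bool{
		"vol-of-evicted-pod": true, // reconstructed after kubelet restart
		"vol-of-running-pod": true,
	}
	dsw := map[string]bool{
		"vol-of-running-pod": true, // only pods that should run populate the DSW
	}

	for vol := range asw {
		if !dsw[vol] {
			// The reconciler would trigger UnmountVolume/UnmountDevice here.
			fmt.Println("cleaning up mount point of", vol)
		}
	}
}
```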
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: