
fix: After a Node goes down and takes some time to come back up, the mount points of the evicted Pods cannot be cleaned up successfully. #116134

Merged
merged 1 commit into from
Apr 11, 2023

Conversation

cvvz
Member

@cvvz cvvz commented Feb 28, 2023

What type of PR is this?

/kind bug

What this PR does / why we need it:

After a Node goes down and takes some time to come back up, the mount points of the evicted Pods cannot be cleaned up successfully (#111933). Meanwhile, kubelet keeps printing the log `Orphaned pod "xxx" found, but error not a directory occurred when trying to remove the volumes dir` every 2 seconds (#105536).

Although we have KEP 3756 to refactor the process of reconstructing ASW and DSW, which aims to fix these problems entirely (part of that work has already been done by introducing the SELinuxMountReadWriteOncePod feature gate and adding a new reconcile function), we still need to fix them in the old implementation until the new feature is stable.

Which issue(s) this PR fixes:

Fixes #111933 #105536

Special notes for your reviewer:

There is no need to check volume existence during reconstruction and cleanup if the mount point does not exist.

After the evicted Pods are reconstructed and added to ASW or skippedDuringReconstruction, the mount point will finally be cleaned up in a subsequent reconcile loop, since these Pods are not in DSW.
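The cleanup decision described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual kubelet code: the type and function names (`reconstructed`, `volumesToUnmount`) are hypothetical stand-ins for the volume manager's real reconciler logic.

```go
package main

import "fmt"

// reconstructed models a volume found on disk during kubelet startup.
// The field names are illustrative, not the real kubelet types.
type reconstructed struct {
	volumeName string
	podUID     string
}

// volumesToUnmount returns the reconstructed volumes that are absent from
// the desired state of world (DSW) and should therefore be cleaned up by
// the next reconcile loop.
func volumesToUnmount(asw []reconstructed, dsw map[string]bool) []reconstructed {
	var out []reconstructed
	for _, v := range asw {
		if !dsw[v.volumeName] { // no pod desires this volume -> unmount it
			out = append(out, v)
		}
	}
	return out
}

func main() {
	asw := []reconstructed{
		{volumeName: "vol-evicted", podUID: "uid-1"}, // pod evicted while node was down
		{volumeName: "vol-live", podUID: "uid-2"},
	}
	dsw := map[string]bool{"vol-live": true} // only the running pod's volume is desired
	for _, v := range volumesToUnmount(asw, dsw) {
		fmt.Println("unmount:", v.volumeName)
	}
}
```

The point is only that membership in DSW, not an extra volume-existence check, is what drives the eventual unmount.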

Does this PR introduce a user-facing change?

fix: After a Node goes down and takes some time to come back up, the mount points of the evicted Pods cannot be cleaned up successfully. (#111933) Meanwhile, kubelet prints the log `Orphaned pod "xxx" found, but error not a directory occurred when trying to remove the volumes dir` every 2 seconds. (#105536)

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Feb 28, 2023
@k8s-ci-robot
Contributor

Hi @cvvz. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. area/kubelet sig/node Categorizes an issue or PR as relevant to SIG Node. sig/storage Categorizes an issue or PR as relevant to SIG Storage. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Feb 28, 2023
@cvvz
Member Author

cvvz commented Feb 28, 2023

@dobsonj @gnufied @msau42 @jingxu97 @jsafrane @xing-yang
Could you please take a look? Thanks.

@cvvz
Member Author

cvvz commented Mar 1, 2023

cc @andyzhangx

Member

@andyzhangx andyzhangx left a comment


/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Mar 1, 2023
@bart0sh bart0sh added this to Triage in SIG Node PR Triage Mar 1, 2023
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Mar 2, 2023
@bart0sh
Contributor

bart0sh commented Mar 2, 2023

/triage accepted
/priority important-longterm
/cc @kubernetes/sig-storage-pr-reviews

@k8s-ci-robot
Contributor

@bart0sh: GitHub didn't allow me to request PR reviews from the following users: kubernetes/sig-storage-pr-reviews.

Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

/triage accepted
/priority important-longterm
/cc @kubernetes/sig-storage-pr-reviews

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Mar 2, 2023
@bart0sh bart0sh moved this from Triage to Needs Reviewer in SIG Node PR Triage Mar 2, 2023
@jsafrane
Member

jsafrane commented Mar 7, 2023

@cvvz can you please squash the commits, so we have a clean history?

    if !isExist {
        return nil, fmt.Errorf("volume: %q is not mounted", uniqueVolumeName)
    }

    reconstructedVolume := &reconstructedVolume{
Member


Should we at least check for the existence of the mount or map path? If they don't exist (but maybe the pod directory exists), is there a point in doing reconstruction?

Member Author

@cvvz cvvz Mar 8, 2023


Reconstruction is still needed even when the mount or map path doesn't exist: in this situation, we can rely on unmountDevice and unmountVolume to clean up both the global mount path and the pod mount path, whereas cleanupMounts can only clean up the mount path in the Pod directory.
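The trade-off described in this comment can be sketched as a small decision function. This is an illustrative model only, assuming hypothetical names (`cleanupPlan`, `planCleanup`); the real logic lives in the kubelet volume manager's reconstruction and reconciler code.

```go
package main

import "fmt"

// cleanupPlan records which teardown paths apply to one orphaned volume.
type cleanupPlan struct {
	podDir      bool // remove the mount point under the pod directory
	globalMount bool // let the plugin tear down the global (device) mount path
}

// planCleanup: if reconstruction succeeded, the plugin has enough information
// to clean up both the global mount path and the pod mount path (via
// unmountDevice/unmountVolume). If reconstruction is skipped, only the
// pod-directory mount point can be removed directly (cleanupMounts).
func planCleanup(reconstructionSucceeded bool) cleanupPlan {
	if reconstructionSucceeded {
		return cleanupPlan{podDir: true, globalMount: true}
	}
	return cleanupPlan{podDir: true, globalMount: false}
}

func main() {
	fmt.Printf("reconstructed:    %+v\n", planCleanup(true))
	fmt.Printf("not reconstructed: %+v\n", planCleanup(false))
}
```

This is why skipping reconstruction just because the mount path is gone would leave the global mount path with no cleanup path at all.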

Member


+1 to @cvvz. At this point, operationExecutor.ReconstructVolumeOperation has succeeded, so the volume plugin should have enough information to clean up the volume by itself.

Member


But if the pod is still running, we add it to ASW directly; the volume manager will never remount this volume, and the container will use the local path instead of the remote path.

Member Author


If the Pod is still running, then the volume exists in DSW, so it won't be added to ASW:

if volumeInDSW {
    // Some pod needs the volume. And it exists on disk. Some previous
    // kubelet must have created the directory, therefore it must have
    // reported the volume as in use. Mark the volume as in use also in
    // this new kubelet so reconcile() calls SetUp and re-mounts the
    // volume if it's necessary.
    volumeNeedReport = append(volumeNeedReport, reconstructedVolume.volumeName)
    rc.skippedDuringReconstruction[reconstructedVolume.volumeName] = gvl
    klog.V(4).InfoS("Volume exists in desired state, marking as InUse", "podName", volume.podName, "volumeSpecName", volume.volumeSpecName)
    continue
}

commit 1b3ae27e7af577372d5aaaf28ea401eb33d1c4df
Author: weizhichen <[email protected]>
Date:   Thu Mar 9 08:39:04 2023 +0000

    fix

commit 566e139308e3cec4c9d4765eb4ccc3a735346c2e
Author: weizhichen <[email protected]>
Date:   Thu Mar 9 08:36:32 2023 +0000

    fix unit test

commit 13a58ebd25b824dcf854a132e9ac474c8296f0bf
Author: weizhichen <[email protected]>
Date:   Thu Mar 2 03:32:39 2023 +0000

    add unit test

commit c984e36e37c41bbef8aec46fe3fe81ab1c6a2521
Author: weizhichen <[email protected]>
Date:   Tue Feb 28 15:25:56 2023 +0000

    fix imports

commit 58ec617e0ff1fbd209ca0af3237017679c3c0ad7
Author: weizhichen <[email protected]>
Date:   Tue Feb 28 15:24:21 2023 +0000

    delete CheckVolumeExistenceOperation

commit 0d8cf0caa78bdf1f1f84ce011c4cc0e0de0e8707
Author: weizhichen <[email protected]>
Date:   Tue Feb 28 14:29:37 2023 +0000

    fix 111933
@cvvz
Member Author

cvvz commented Mar 9, 2023

/retest

@cvvz
Member Author

cvvz commented Mar 10, 2023

/retest

@cvvz
Member Author

cvvz commented Mar 31, 2023

Hi @gnufied @jsafrane, shall we move this PR forward?

@jsafrane
Member

jsafrane commented Apr 5, 2023

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Apr 5, 2023
@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: 02acc88fcce5d632091a150161a97f8dd1081dce

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cvvz, jsafrane

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 5, 2023
@xing-yang
Contributor

It should be safe to backport to 1.27, but backporting to older releases needs more investigation.

@jsafrane
Member

jsafrane commented Apr 6, 2023

I checked 1.26 with the old-style volume reconstruction.

If reconstructVolume() succeeds:

  • if the volume is already in DSW, continue here skips adding it to ASW (that's good):
    if volumeInDSW {
    // Some pod needs the volume. And it exists on disk. Some previous
    // kubelet must have created the directory, therefore it must have
    // reported the volume as in use. Mark the volume as in use also in
    // this new kubelet so reconcile() calls SetUp and re-mounts the
    // volume if it's necessary.
    volumeNeedReport = append(volumeNeedReport, reconstructedVolume.volumeName)
    rc.skippedDuringReconstruction[reconstructedVolume.volumeName] = gvl
    klog.V(4).InfoS("Volume exists in desired state, marking as InUse", "podName", volume.podName, "volumeSpecName", volume.volumeSpecName)
    continue
    }
  • If the volume is not in DSW, it's added to ASW as Mounted (that's bad):
    err = rc.markVolumeState(volume, operationexecutor.VolumeMounted)
    So if kubelet gets a new Pod that uses the volume, then it will see it as mounted, while it could have been already unmounted.

Therefore this PR is not suitable for the old-style volume reconstruction + backports to 1.25-1.26. I don't think it is safe to mark the volumes as uncertain, that's why we reworked the volume reconstruction and put it behind a feature gate.

@bart0sh bart0sh moved this from Needs Reviewer to Needs Approver in SIG Node PR Triage Apr 7, 2023
@cvvz
Member Author

cvvz commented Apr 8, 2023

If the volume is not in DSW, it's added to ASW as Mounted (that's bad)
So if kubelet gets a new Pod that uses the volume, then it will see it as mounted, while it could have been already unmounted.

I think it would be OK. If kubelet gets a new Pod that uses the volume, the following will happen:

  1. Since the new Pod's UID differs from the old Pod's, rc.unmountVolumes() for the old Pod's volumes should succeed.
  2. rc.unmountDetachDevices() will be skipped, since DSW contains the same volumeName that is in ASW.
  3. rc.mountOrAttachVolumes() for the new Pod will try to mount the device if the plugin is DeviceMountable and NOT DeviceGloballyMounted: it will first create the mount dir and save metadata to vol_data.json (I think these operations won't fail even if the mount dir and vol_data.json already exist), then call NodeStageVolume to mount the device, and after that mount the new Pod's volumes.

So, I think we need to get the mount state of the device mount path during reconstruction; if the device mount path is not mounted, set the state to DeviceMountUncertain.
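The proposal above amounts to a small state decision at reconstruction time. A minimal sketch, with assumed names (`deviceMountState`, `stateForReconstructedDevice`) that loosely mirror, but are not, the kubelet's operationexecutor constants:

```go
package main

import "fmt"

// deviceMountState loosely models the kubelet's device mount states;
// the names here are illustrative, not the exact constants.
type deviceMountState string

const (
	deviceGloballyMounted deviceMountState = "DeviceGloballyMounted"
	deviceMountUncertain  deviceMountState = "DeviceMountUncertain"
)

// stateForReconstructedDevice returns the state to record in ASW for a
// reconstructed volume, based on whether the device mount path is still
// an active mount point on the node.
func stateForReconstructedDevice(isMountPoint bool) deviceMountState {
	if isMountPoint {
		return deviceGloballyMounted
	}
	// The path is known from metadata but no longer mounted: mark it
	// uncertain so a later MountDevice/NodeStageVolume is not skipped.
	return deviceMountUncertain
}

func main() {
	fmt.Println(stateForReconstructedDevice(false))
}
```

Marking the state uncertain rather than mounted is what would let the reconciler re-stage the device instead of wrongly treating it as already globally mounted.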

WDYT? @jsafrane

@k8s-ci-robot k8s-ci-robot merged commit 4893c66 into kubernetes:master Apr 11, 2023
SIG Node PR Triage automation moved this from Needs Approver to Done Apr 11, 2023
@k8s-ci-robot k8s-ci-robot added this to the v1.28 milestone Apr 11, 2023
@andyzhangx
Member

Not sure whether it's suitable to backport to 1.25-1.26. @cvvz could you backport to 1.27 first? Thanks.

@migs35323

How about backporting to 1.26, or even earlier releases?

Successfully merging this pull request may close these issues.

CSI volumes left overs are not cleaned up if CSI plugin is attachable