order sandbox by attempt or create time #130551
base: master
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Hi @yylt. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label.

I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: yylt The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/cc @HirazawaUi @hshiina
@pacoxu: GitHub didn't allow me to request PR reviews from the following users: hshiina. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
```go
if p[i].Metadata == nil || p[j].Metadata == nil {
	return p[i].CreatedAt > p[j].CreatedAt
}
return p[i].Metadata.Attempt > p[j].Metadata.Attempt
```
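For context, here is a minimal, self-contained sketch of the ordering this diff proposes, using simplified stand-in types rather than the actual `k8s.io/cri-api` structs:

```go
package main

import (
	"fmt"
	"sort"
)

// Simplified stand-ins for the CRI PodSandbox/PodSandboxMetadata types.
type podSandboxMetadata struct {
	Attempt uint32 // logical attempt counter, incremented per sandbox creation
}

type podSandbox struct {
	CreatedAt int64 // wall-clock timestamp; unreliable if the clock rolls back
	Metadata  *podSandboxMetadata
}

func main() {
	// Attempt 1 was created after attempt 0, but a clock rollback gave it
	// an earlier CreatedAt, so sorting by CreatedAt alone picks the wrong one.
	sandboxes := []podSandbox{
		{CreatedAt: 2000, Metadata: &podSandboxMetadata{Attempt: 0}},
		{CreatedAt: 1000, Metadata: &podSandboxMetadata{Attempt: 1}},
	}

	// Newest-first ordering as proposed: prefer the logical Attempt counter,
	// falling back to CreatedAt only when metadata is missing.
	sort.Slice(sandboxes, func(i, j int) bool {
		if sandboxes[i].Metadata == nil || sandboxes[j].Metadata == nil {
			return sandboxes[i].CreatedAt > sandboxes[j].CreatedAt
		}
		return sandboxes[i].Metadata.Attempt > sandboxes[j].Metadata.Attempt
	})

	fmt.Println(sandboxes[0].Metadata.Attempt) // prints 1: latest attempt wins
}
```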
This issue appears to be caused by containerd, and this PR aims to implement some defensive programming in kubelet to address the bug in containerd.
At first glance, this seems like a positive change: when the physical clock is unreliable, it replaces the absolute time order with a logical incremental relationship.
I would like to raise a question here:
Both #126514 and containerd/containerd#9459 seem to be atypical issues.
That is, by the time this code path reports `error="failed to reserve sandbox name"`, something has already gone wrong elsewhere.
Could this fix potentially mask the real underlying problems? If it does mask the true errors, will users still have other means to troubleshoot this exception after we've obscured it?
I also have another question: can this issue be reproduced on CRI-O?
Yes, this also happens on CRI-O.
This is a system time issue: if NTP synchronizes to the wrong time and rolls the clock back, this problem occurs.
As I mentioned earlier, the repeated creation of sandbox containers by kubelet is atypical behavior. This could be caused by:
1. Race conditions in kubelet and rapid pod lifecycle changes leading to duplicate sandbox creation (this would be a bug).
2. Containerd failing to create sandbox containers due to resource constraints, or sandbox containers entering erroneous states for other reasons.

For 2, we have already implemented some defensive programming in kubelet: we sort the list of retrieved sandbox containers by creation time and select the first one.
Currently, kubelet strictly increments the Attempt value each time it tries to create a new sandbox for the same Pod (starting at 0, incrementing to 1 on retry after failure, and so on). This Attempt value is persisted and retained even across node reboots or containerd crashes, and it continues to increment. Its value does not depend on system time; it is tied solely to the actual number of sandbox creation attempts.
From a defensive programming perspective, there appears to be no reason to reject this PR. In distributed systems, logical increment-based ordering is more reliable than absolute time order. Using the Attempt value as the sorting criterion is therefore justified.
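To illustrate why this holds, here is a hedged sketch (hypothetical helper and type names, not kubelet's actual implementation) of deriving a strictly incrementing attempt number purely from the sandboxes that already exist, without ever consulting the system clock:

```go
package main

import "fmt"

// Hypothetical stand-in for the CRI sandbox metadata.
type sandboxMeta struct {
	Attempt uint32
}

// nextAttempt returns the attempt number for a new sandbox of the same pod:
// one greater than the highest attempt seen so far, or 0 if none exist.
// It never reads the wall clock, so clock rollbacks cannot affect it.
func nextAttempt(existing []*sandboxMeta) uint32 {
	next := uint32(0)
	for _, m := range existing {
		if m != nil && m.Attempt+1 > next {
			next = m.Attempt + 1
		}
	}
	return next
}

func main() {
	history := []*sandboxMeta{{Attempt: 0}, {Attempt: 1}}
	fmt.Println(nextAttempt(history)) // prints 2
}
```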
cc @yujuhong @SergeyKanzhelev @tallclair sig-node maintainers for more accurate and comprehensive insights.
A minor suggestion: You could include an explanation in the PR description about why this PR is necessary. Otherwise, reviewers will need to start from scratch to understand the purpose of this PR and trace back its context. You can add this information under the What this PR does / why we need it: section. Thank you!
Thanks for the suggestion, I've added more info.
/release-note-none
As I mentioned in the comment, this is a defensive programming measure to prevent atypical issues, so it shouldn't be considered a bug.
/remove-kind bug
@yylt please address review comments or close the PR, thanks!
It seems difficult to categorize, but if the time-sync (NTP) service is not ready when kubelet starts the pod, this type of error is triggered. The root cause is the use of physical-time sorting, so I'd consider it a bug. Also, I don't have permission to open #126514 here.
I think the
There is a binding relationship between
cc @hshiina Is there anything I can do at this point to help? Or should I wait for other maintainers?
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed

You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
What type of PR is this?
/kind feature
What this PR does / why we need it:
This issue is caused by system clock rollback, which is prone to occur during system startup, especially when NTP synchronization happens around the same time static pods are starting.
Detailed reproduction steps:
1. Roll the system clock back, e.g. `date -s '0:0:0'`
2. Stop the busybox pod sandbox with crictl:
   `crictl stopp $(crictl pods | grep -w busybox | awk '{print $1}')`
3. Once kubelet recreates the sandbox, stop it again:
   `crictl stopp $(crictl pods | grep -w busybox | awk '{print $1}')`
Making kubelet dependent on time services like NTP during system startup is unnecessary.
It would be a more convenient and robust approach if kubelet used logical time (the Attempt counter) to order sandbox creations.
This could mitigate issues arising from clock rollback, especially in scenarios where NTP synchronization is delayed or fails during early system boot.
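As a toy illustration of the rollback scenario (hypothetical timestamps, not taken from the actual reproduction):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A sandbox is created, then NTP rolls the clock back one hour, and a
	// second sandbox is created five seconds of real time later.
	first := time.Date(2024, 1, 1, 10, 0, 0, 0, time.UTC).UnixNano()
	second := time.Date(2024, 1, 1, 9, 0, 5, 0, time.UTC).UnixNano()

	// The physically newer sandbox now carries the smaller timestamp, so any
	// CreatedAt-based "newest first" ordering is inverted.
	fmt.Println(second > first) // false
}
```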
Which issue(s) this PR fixes:
Fixes #126514
Special notes for your reviewer:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: