Fix race in scheduler integration tests #132451
base: master
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: macsko

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/cc @dom4ha @sanposhiho
/cc @googs1025 can you take a look for a first review pass?
```diff
@@ -1017,7 +1019,6 @@ func (p *PriorityQueue) Update(logger klog.Logger, oldPod, newPod *v1.Pod) {
 			queue := p.requeuePodWithQueueingStrategy(logger, pInfo, hint, evt.Label())
 			if queue != unschedulablePods {
 				logger.V(5).Info("Pod moved to an internal scheduling queue because the Pod is updated", "pod", klog.KObj(newPod), "event", evt.Label(), "queue", queue)
-				p.unschedulablePods.delete(pInfo.Pod, gated)
```
(Not sure if this is correct.) According to the description this should no longer be needed, but when I looked at the source code 🤔 I found that the Update method still seems to call the moveToBackoffQ method. Will this have any impact?
This PR moves the deletion from unschedulablePods to right after the pod is moved to the backoffQ (similarly to the activeQ). This makes it possible to remove the deletion from unschedulablePods from the Update method, eliminating the data race.
kubernetes/pkg/scheduler/backend/queue/scheduling_queue.go, lines 1029 to 1041 at 29ed1fb:

```go
	if isPodUpdated(oldPod, newPod) {
		// Pod might have completed its backoff time while being in unschedulablePods,
		// so we should check isPodBackingoff before moving the pod to backoffQ.
		if p.backoffQ.isPodBackingoff(pInfo) {
			if added := p.moveToBackoffQ(logger, pInfo, framework.EventUnscheduledPodUpdate.Label()); added {
				p.unschedulablePods.delete(pInfo.Pod, gated)
				if p.isPopFromBackoffQEnabled {
					p.activeQ.broadcast()
				}
			}
			return
		}
```
Update calls moveToBackoffQ through requeuePodWithQueueingStrategy as well. However, moving the deletion from unschedulablePods to right after the pod is added to the backoffQ mitigates the race risk.

I also noticed that I forgot to remove line 1034; I updated the PR.
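To make the pattern concrete, here is a simplified, hypothetical sketch (the type and field names below are stand-ins, not the real PriorityQueue code): the pod is deleted from unschedulablePods immediately after the successful move into the backoffQ, so Update no longer needs its own delete call.

```go
// Hypothetical sketch only: podQueue and its fields are illustrative
// stand-ins for the real PriorityQueue internals.
package sketch

type pod struct{ name string }

type podQueue struct {
	backoffQ          map[string]*pod
	unschedulablePods map[string]*pod
}

// moveToBackoffQ adds the pod to backoffQ and reports whether it was added.
func (q *podQueue) moveToBackoffQ(p *pod) bool {
	if _, exists := q.backoffQ[p.name]; exists {
		return false
	}
	q.backoffQ[p.name] = p
	return true
}

// requeueUpdatedPod shows the pattern: delete from unschedulablePods right
// after the successful move to backoffQ, instead of leaving the deletion to
// the caller (Update).
func (q *podQueue) requeueUpdatedPod(p *pod) {
	if added := q.moveToBackoffQ(p); added {
		delete(q.unschedulablePods, p.name)
	}
}
```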
Got it, thanks 😄
Force-pushed from 14c646f to 018c7f7.
/hold for merging it after my (or dom4ha's) approval
Force-pushed from 018c7f7 to 1024b21.
The change itself looks good.
I agree that we need to take a look at the update part of the scheduling queue, likely in a new issue (I'll create it).
Or just take activeQ's and backoffQ's locks for a part of the Update method.
Right, that might be equivalent.
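For illustration, a minimal sketch of that alternative, assuming hypothetical lock fields (the real sub-queues encapsulate their own locking, so the names here are made up):

```go
package sketch

import "sync"

// Illustrative only: the lock fields and the helper below are hypothetical,
// not actual PriorityQueue members.
type queueSketch struct {
	activeQLock  sync.Mutex
	backoffQLock sync.Mutex
}

// withBothLocks runs the cross-queue part of an update while holding both
// locks, so no other goroutine can observe the pod in between queues.
func (q *queueSketch) withBothLocks(move func()) {
	q.activeQLock.Lock()
	q.backoffQLock.Lock()
	defer q.backoffQLock.Unlock()
	defer q.activeQLock.Unlock()

	move()
}
```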
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
There is an almost-impossible-to-hit data race in kube-scheduler that can be detected using Go's built-in race detector. If a pod gets updated (the Update method of PriorityQueue is called - here) and the pod is then popped from the queue, processed by the scheduling cycle, and handled in the failure handler (here), the race occurs. See a more detailed description here: #132043 (comment).
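For context, the race detector flags unsynchronized concurrent access like the one described above; a minimal, self-contained illustration (not scheduler code) that `go run -race` would typically report looks like this:

```go
package main

import "time"

func main() {
	// Stand-in for unschedulablePods: a plain map touched from two goroutines
	// without any synchronization.
	pods := map[string]struct{}{"pod-a": {}}

	go func() {
		// Writer: analogous to Update() deleting the pod.
		delete(pods, "pod-a")
	}()

	// Reader: analogous to the failure handler inspecting the map.
	_, stillThere := pods["pod-a"]
	_ = stillThere

	// Give the goroutine time to run so both accesses actually happen.
	time.Sleep(10 * time.Millisecond)
}
```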
This PR moves the deletion from unschedulablePods to right after the pod is moved to the backoffQ (similarly to the activeQ). This makes it possible to remove the deletion from unschedulablePods from the Update method, eliminating the data race.
Moreover, access to preemptionDoneChannels in TestAsyncPreemption is now covered by a lock to prevent a race in the test code itself.
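As a rough sketch of what "covered by the lock" can look like in test code (assuming a plain mutex plus map; the actual helpers in TestAsyncPreemption may be shaped differently):

```go
package sketch

import "sync"

// Illustrative only: a mutex guarding a map of per-pod "preemption done"
// channels so test goroutines can register and signal them without racing.
var (
	mu                     sync.Mutex
	preemptionDoneChannels = map[string]chan struct{}{}
)

func registerDoneChannel(podName string) chan struct{} {
	mu.Lock()
	defer mu.Unlock()
	ch := make(chan struct{})
	preemptionDoneChannels[podName] = ch
	return ch
}

func signalDone(podName string) {
	mu.Lock()
	defer mu.Unlock()
	if ch, ok := preemptionDoneChannels[podName]; ok {
		close(ch)
		delete(preemptionDoneChannels, podName)
	}
}
```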
Which issue(s) this PR is related to:
Fixes #132025
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: