[FG:InPlacePodVerticalScaling] Prioritize resize requests by priorityClass and qos class #132342
base: master
Conversation
Skipping CI for Draft Pull Request.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: natasha41575. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Force-pushed from 44f82ba to 05a324b.
/triage accepted
@natasha41575: The following test failed, say `/retest` to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Ran out of time, will take another pass tomorrow.
@@ -47,6 +53,12 @@ const (
	actuatedPodsStateFile = "actuated_pods_state"
)

var (
	// ticker is used to periodically retry pending resizes.
	ticker = time.NewTicker(retryPeriod)
nit: ticker should be a member of the manager struct, not a global
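A minimal sketch of what that could look like, assuming a retryPeriod constant like the one in the diff above; the package name, constructor, and field layout are illustrative, not the PR's actual code:

package allocation // illustrative package name

import "time"

const retryPeriod = 1 * time.Minute // placeholder value for the sketch

type manager struct {
	// ticker periodically triggers retries of pending resizes. Keeping it on
	// the struct scopes its lifetime to the manager and avoids shared global
	// state across tests.
	ticker *time.Ticker
}

func newManager() *manager {
	return &manager{
		ticker: time.NewTicker(retryPeriod),
	}
}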
}

oldResizeStatus := m.statusManager.GetPodResizeConditions(uid)
defer func() {
Does this need to be a defer? I can't quite remember... was this to avoid a deadlock? If so, leave a comment to that effect.
kl.statusManager.SetPodResizeInProgressCondition(pod.UID, v1.PodReasonError, r.Message, false)
if utilfeature.DefaultFeatureGate.Enabled(features.InPlacePodVerticalScaling) {
	for _, r := range result.SyncResults {
		if r.Action == kubecontainer.ResizePodInPlace {
If the resize restart policy is RestartContainer, then the sync action won't be ResizePodInPlace, but it could still result in the resize being actuated.
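A hedged sketch of one way to account for that case, building on the loop above; containerRestartedForResize is a hypothetical helper, not something the PR or the kubelet defines:

if utilfeature.DefaultFeatureGate.Enabled(features.InPlacePodVerticalScaling) {
	for _, r := range result.SyncResults {
		// Direct in-place resize attempt.
		resized := r.Action == kubecontainer.ResizePodInPlace
		// With ResizePolicy: RestartContainer, the resize is actuated by a
		// container (re)start rather than a ResizePodInPlace action.
		restarted := r.Action == kubecontainer.StartContainer &&
			containerRestartedForResize(pod, r) // hypothetical helper
		if (resized || restarted) && r.Error != nil {
			kl.statusManager.SetPodResizeInProgressCondition(pod.UID, v1.PodReasonError, r.Message, false)
		}
	}
}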
	return
}
var podStatus *kubecontainer.PodStatus
podStatus, err = m.podcache.Get(pod.UID)
The pod status is used here to determine if containers are running, which in turn determines whether to evaluate the container for deciding whether the resize is in-progress. There is the potential for a race condition here, where the allocation manager sets the condition one way, but by the time it's synced the container status has changed.
Since the status is only ever written from within SyncPod, can we just move all handling of the InProgress condition into SyncPod?
	return false
}

if isResizeIncreasingAnyRequestsForContainer(allocatedPod.Spec.Resources, pod.Spec.Resources) {
check the pod-level resources feature gate here
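A sketch of the requested guard, assuming the gate in question is features.PodLevelResources and that the pod-level Spec.Resources field may be nil; the placement and nil checks are illustrative:

if utilfeature.DefaultFeatureGate.Enabled(features.PodLevelResources) &&
	allocatedPod.Spec.Resources != nil && pod.Spec.Resources != nil {
	if isResizeIncreasingAnyRequestsForContainer(allocatedPod.Spec.Resources, pod.Spec.Resources) {
		return true
	}
}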
	return true
}

for i, c := range pod.Spec.Containers {
Also need to check sidecar containers.
Suggested change:
-	for i, c := range pod.Spec.Containers {
+	for c, cType := range podutil.ContainerIter(pod.Spec, podutil.InitContainers|podutil.Containers) {
+		if !isResizableContainer(c, cType) {
+			continue
+		}
var oldCPURequests, newCPURequests, oldMemRequests, newMemRequests *apiresource.Quantity

if old != nil && old.Requests != nil {
I think you can simplify this method a lot. I think this should work, even if requests are null or CPU is missing:
if old.Requests.Cpu().Cmp(new.Requests.Cpu()) > 0 {
return true
}
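Spelled out a bit more, the simplification could look like the sketch below. It assumes the function's "increasing" semantics compare the desired (new) requests against the allocated (old) ones, and relies on ResourceList.Cpu()/Memory() returning a zero quantity when the map is nil or the entry is missing; the function name and the nil-pointer handling are illustrative (v1 is k8s.io/api/core/v1):

// isIncreasingAnyRequests reports whether new requests more CPU or memory
// than old. Missing entries and nil Requests maps compare as zero.
func isIncreasingAnyRequests(old, new *v1.ResourceRequirements) bool {
	if old == nil || new == nil {
		return false // illustrative choice; the real code may treat nil differently
	}
	if new.Requests.Cpu().Cmp(*old.Requests.Cpu()) > 0 {
		return true
	}
	return new.Requests.Memory().Cmp(*old.Requests.Memory()) > 0
}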
// - Second, based on the pod's PriorityClass.
// - Third, based on the pod's QoS class.
// - Last, prioritizing resizes that have been in the deferred state the longest.
func (m *manager) sortPendingPodsByPriority() {
nit: this isn't just sorting by the priority value on the pod. Maybe sortPendingResizes instead?
Suggested change:
-	func (m *manager) sortPendingPodsByPriority() {
+	func (m *manager) sortPendingResizes() {
if !firstPodIncreasing && secondPodIncreasing {
	return true
}
if !secondPodIncreasing && firstPodIncreasing {
	return false
}
nit: If neither is increasing, the order doesn't really matter.
Suggested change:
-	if !firstPodIncreasing && secondPodIncreasing {
-		return true
-	}
-	if !secondPodIncreasing && firstPodIncreasing {
-		return false
-	}
+	if !firstPodIncreasing {
+		return true
+	} else if !secondPodIncreasing {
+		return false
+	}
What type of PR is this?
/kind feature
What this PR does / why we need it:
Prioritize resize requests by PriorityClass and QoS class when there is not enough room on the node to accept all of the pending resize requests.
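Roughly, the intended ordering can be pictured as a comparator over pending resizes, as in the sketch below; the pendingResize type, its fields, and the QoS ranking are illustrative stand-ins for the PR's actual bookkeeping, not its real implementation:

package main

import "time"

type pendingResize struct {
	increasing    bool      // whether the resize increases any requests
	priority      int32     // pod priority derived from its PriorityClass
	qosRank       int       // e.g. Guaranteed=2, Burstable=1, BestEffort=0
	deferredSince time.Time // when the resize entered the Deferred state
}

// less orders pending resizes: resizes that don't increase any requests come
// first (they can always be admitted), then higher pod priority, then higher
// QoS class, and finally whichever resize has been deferred the longest.
func less(a, b pendingResize) bool {
	if a.increasing != b.increasing {
		return !a.increasing
	}
	if a.priority != b.priority {
		return a.priority > b.priority
	}
	if a.qosRank != b.qosRank {
		return a.qosRank > b.qosRank
	}
	return a.deferredSince.Before(b.deferredSince)
}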
Link to design discussion: kubernetes/enhancements#5266
Which issue(s) this PR is related to:
Fixes #116971
Special notes for your reviewer:
This PR builds on #131612 (the first commit here is all the changes in #131612).
Does this PR introduce a user-facing change?