DRA scheduler: implement filter timeout #132033
base: master
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@@ -682,6 +685,10 @@ func lookupAttribute(device *draapi.BasicDevice, deviceID DeviceID, attributeNam
// This allows the logic for subrequests to call allocateOne with the same
// device index without causing infinite recursion.
func (alloc *allocator) allocateOne(r deviceIndices, allocateSubRequest bool) (bool, error) {
	if alloc.ctx.Err() != nil {
		return false, fmt.Errorf("filter operation aborted: %w", alloc.ctx.Err())
TODO:
- benchmark this additional if check
- decide whether we should add a separate feature gate for it (KEP 4381: DRA structured parameters: updates, promotion to GA, enhancements#5333 (comment))
I found no relevant performance impact of this additional if check.
What I did find was that our instructions for running scheduler_perf didn't show how to use benchstat 😁 I had to re-discover how to do that. I've included one commit with updated instructions.
@sanposhiho: I also included one commit with the enhancements for Filter cancellation (use context.CancelCause, documentation in the interface).
This PR is now ready for merging.
/assign @sanposhiho @macsko
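For illustration, here is a minimal, self-contained sketch of the cancellation pattern mentioned above (context.WithCancelCause on the caller side, context.Cause on the plugin side); the cause message is made up, not the scheduler's actual wording:

package main

import (
	"context"
	"errors"
	"fmt"
)

func main() {
	// The caller (scheduler) creates the filter context with a cancel cause.
	ctx, cancel := context.WithCancelCause(context.Background())

	// Later it stops outstanding Filter calls, e.g. once enough
	// feasible nodes have been found.
	cancel(errors.New("enough feasible nodes found"))

	// A plugin that checks the context can surface the cause instead
	// of a bare "context canceled".
	if ctx.Err() != nil {
		fmt.Printf("filter aborted: %v (cause: %v)\n", ctx.Err(), context.Cause(ctx))
	}
}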
Force-pushed from 1d4d178 to f1aec04.
/retest
Some known flakes, timeouts.
Force-pushed from f1aec04 to dc1bb36.
// usage of DRA. However, slow filtering can slow down Pod scheduling
// also for Pods not using DRA. Administrators can reduce the timeout
// after checking the
// `scheduler_framework_extension_point_duration_seconds` metrics.
Relying on this metric would give the e2e filtering duration (all plugins for all the nodes). You could add that the value could be adjusted based on the `scheduler_plugin_execution_duration_seconds` metric (specific plugin for one node).
Good point. Then admins should look at `scheduler_plugin_execution_duration_seconds` first because that is what the timeout is applied to, with `scheduler_framework_extension_point_duration_seconds` providing some additional insights. Updated accordingly.
// after checking the
// `scheduler_plugin_execution_duration_seconds` metrics.
// That tracks the time spent in each Filter operation.
// There's also `scheduler_framework_extension_point_duration_seconds`
// which tracks the duration of filtering overall.
pkg/scheduler/framework/interface.go (outdated)
@@ -559,6 +559,14 @@ type FilterPlugin interface {
// For example, during preemption, we may pass a copy of the original
// nodeInfo object that has some pods removed from it to evaluate the
// possibility of preempting them to schedule the target pod.
//
// Plugins are encourage to check the context for cancellation.
// Plugins are encourage to check the context for cancellation.
// Plugins are encouraged to check the context for cancellation.
Fixed.
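To make the new interface comment concrete, here is a hedged sketch of how an out-of-tree Filter plugin might check the context inside an expensive loop. The plugin type and helpers (examplePlugin, candidateDevices, fits) are hypothetical; the framework types are the usual ones from k8s.io/kubernetes/pkg/scheduler/framework and k8s.io/api/core/v1, with context and fmt imported:

// Filter bails out early when the caller cancels filtering.
func (pl *examplePlugin) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
	for _, device := range pl.candidateDevices(nodeInfo) {
		// Stop when the scheduler cancels filtering, e.g. because enough
		// feasible nodes were found or a timeout fired. Reporting
		// context.Cause(ctx) makes the status more informative than a
		// bare "context canceled".
		if ctx.Err() != nil {
			return framework.AsStatus(fmt.Errorf("filtering aborted: %w", context.Cause(ctx)))
		}
		if pl.fits(device, pod) {
			return nil // node is feasible
		}
	}
	return framework.NewStatus(framework.Unschedulable, "no suitable device")
}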
case errors.Is(err, context.DeadlineExceeded):
	return statusUnschedulable(logger, "timed out trying to allocate devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()))
case ctx.Err() != nil:
	return statusUnschedulable(logger, "asked by caller to stop allocate devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()), "cause", context.Cause(ctx))
return statusUnschedulable(logger, "asked by caller to stop allocate devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()), "cause", context.Cause(ctx))
return statusUnschedulable(logger, "asked by caller to stop allocating devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()), "cause", context.Cause(ctx))
Fixed.
PluginConfig: append(append(slices.Clone(pluginConfigs[0:1]), configv1.PluginConfig{
	Name: "DynamicResources",
	Args: runtime.RawExtension{Object: &configv1.DynamicResourcesArgs{
		TypeMeta: metav1.TypeMeta{
			Kind:       "DynamicResourcesArgs",
			APIVersion: "kubescheduler.config.k8s.io/v1",
		},
		FilterTimeout: &metav1.Duration{Duration: 10 * time.Second},
	}},
}), pluginConfigs[1:]...),
Maybe this way will be simpler:
PluginConfig: append(append(slices.Clone(pluginConfigs[0:1]), configv1.PluginConfig{
	Name: "DynamicResources",
	Args: runtime.RawExtension{Object: &configv1.DynamicResourcesArgs{
		TypeMeta: metav1.TypeMeta{
			Kind:       "DynamicResourcesArgs",
			APIVersion: "kubescheduler.config.k8s.io/v1",
		},
		FilterTimeout: &metav1.Duration{Duration: 10 * time.Second},
	}},
}), pluginConfigs[1:]...),

PluginConfig: append([]configv1.PluginConfig{pluginConfigs[0], configv1.PluginConfig{
	Name: "DynamicResources",
	Args: runtime.RawExtension{Object: &configv1.DynamicResourcesArgs{
		TypeMeta: metav1.TypeMeta{
			Kind:       "DynamicResourcesArgs",
			APIVersion: "kubescheduler.config.k8s.io/v1",
		},
		FilterTimeout: &metav1.Duration{Duration: 10 * time.Second},
	}},
}}, pluginConfigs[1:]...),
Yes. Thanks!
// This variant uses the normal test objects to avoid excessive runtime.
// It could theoretically pass even though the 1 ns limit is enforced,
// but that's unlikely.
Move this comment to the "timeout" test case?
"timeout" uses unusual ("large") test data to increase the likelihood that processing them runs into the timeout.
It's really this case here which falls back to the normal test data (input + output) again.
Perhaps the "theoretically pass" part is misleading. I changed it to:
// It could theoretically pass even though the 1 ns limit is enforced
// although it shouldn't be (which then would be a false positive),
// but that's unlikely.
@@ -260,6 +266,10 @@ func (c CompilationResult) DeviceMatches(ctx context.Context, input Device) (boo
result, details, err := c.Program.ContextEval(ctx, variables)
if err != nil {
	// CEL does not wrap the context error. We have to deduce why it failed.
	if strings.Contains(err.Error(), "operation interrupted") && ctx.Err() != nil {
		return false, details, fmt.Errorf("%w: %w", err, ctx.Err())
Shouldn't it use `context.Cause` similarly to the allocator?
Yes, that's indeed better. I try to train myself to use it, but sometimes forget 😅
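For illustration, a sketch of the snippet with that change applied; result, details, and err come from cel-go's Program.ContextEval as in the diff, and the trailing error return is an assumption about the pre-existing error path, not shown above:

result, details, err := c.Program.ContextEval(ctx, variables)
if err != nil {
	// CEL does not wrap the context error, so deduce why evaluation failed
	// and attach the cancellation cause instead of the bare ctx.Err().
	if strings.Contains(err.Error(), "operation interrupted") && ctx.Err() != nil {
		return false, details, fmt.Errorf("%w: %w", err, context.Cause(ctx))
	}
	return false, details, err
}
// ... result is evaluated further down, as in the original function.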
case "cancel": | ||
c, cancel := context.WithCancel(ctx) | ||
defer cancel() | ||
ctx = c |
Does this test case make sense if we defer the cancel?
No - good catch! This was meant to start the evaluation with an already canceled context. Damn muscle memory... or cut-and-paste.
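A sketch of the intended test setup, assuming the same switch structure as above: cancel immediately instead of deferring, so evaluation starts with an already canceled context.

case "cancel":
	c, cancel := context.WithCancel(ctx)
	cancel() // cancel right away, no defer
	ctx = c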
Force-pushed from aa2c617 to fd7197f.
I know the linter is failing, but leaving
/approve
from me anyway because I'm going to be unavailable next week.
I'll leave /lgtm to @macsko, though I believe it needs api-review anyway to get merged.
Force-pushed from fd7197f to 66b9406.
	ctx = c
case "cancel":
	c, cancel := context.WithCancel(ctx)
	defer cancel()
Remove defer from here as well.
Fixed.
case errors.Is(err, context.DeadlineExceeded):
	return statusUnschedulable(logger, "timed out trying to allocate devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()))
case ctx.Err() != nil:
	return statusUnschedulable(logger, "asked by caller to stop allocating devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()), "cause", context.Cause(ctx))
This behavior is a bit inconsistent with the Filter's interface comment, where you mentioned that `context.Cause(ctx)` is passed to the status message. Here, you would only log the cause.
True. Let's do as suggested and include the cause. I also added a test case. I need to rebase, so here's the incremental diff before the rebase for your review: https://github.com/kubernetes/kubernetes/compare/66b9406a7dd4de8a82f1c11769e3379f5c1b51cf..62bc3360c20b1dd54206d4e0a6414a8fb5618ec8
Force-pushed from 66b9406 to 62bc336.
With benchstat it's easy to do before/after comparisons, but the section on running benchmarks didn't mention it at all and didn't work as shown there:
- benchmark results must be printed (FULL_LOG)
- timeout might have been too short (KUBE_TIMEOUT)
- only "short" benchmarks ran (SHORT)
- klog log output must be redirected (ARTIFACTS)
When using context.CancelCause in the scheduler and context.Cause in plugins, the status returned by plugins is more informative than just "context canceled". Context cancellation itself is not new, but many plugin authors probably weren't aware of it because it wasn't documented.
The only option is the filter timeout. The implementation of it follows in a separate commit.
The intent is to catch abnormal runtimes with the generously large default timeout of 10 seconds. We have to set up a context with the configured timeout (optional!), then ensure that both CEL evaluation and the allocation logic itself properly returns the context error. The scheduler plugin then can convert that into "unschedulable". The allocator and thus Filter now also check for context cancellation by the scheduler. This happens when enough nodes have been found.
It's unclear why k8s.io/kubernetes/pkg/apis/resource/install needs to be imported explicitly. Having the apiserver and scheduler ready to be started ensures that all APIs are available.
This covers disabling the feature via the configuration, failing to schedule because of timeouts for all nodes, and retrying after ResourceSlice changes with partial success (timeout for one node, success for the other). While at it, some helper code gets improved.
The DRASchedulerFilterTimeout feature gate simplifies disabling the timeout because setting a feature gate is often easier than modifying the scheduler configuration with a zero timeout value. The timeout and feature gate are new. The gate starts as beta and enabled by default, which is consistent with the "smaller changes with low enough risk that still may need to be disabled..." guideline.
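For reference, a hedged sketch of the flow the commit messages describe, as a fragment inside the plugin's Filter. statusUnschedulable, klog.KObj, and state.allocator appear in the diffs above; filterTimeout, Allocate, and statusError are placeholders, so this is illustrative wiring rather than the merged code:

filterCtx := ctx
if filterTimeout > 0 { // the timeout is optional; zero disables it
	var cancel context.CancelFunc
	filterCtx, cancel = context.WithTimeout(ctx, filterTimeout)
	defer cancel()
}
result, err := state.allocator.Allocate(filterCtx, node)
switch {
case errors.Is(err, context.DeadlineExceeded):
	// The configured timeout fired: treat the node as unschedulable
	// instead of failing the whole scheduling cycle with an error.
	return statusUnschedulable(logger, "timed out trying to allocate devices", "pod", klog.KObj(pod), "node", klog.KObj(node))
case filterCtx.Err() != nil:
	// The scheduler canceled filtering, e.g. because enough nodes were found.
	return statusUnschedulable(logger, "asked by caller to stop allocating devices", "cause", context.Cause(filterCtx))
case err != nil:
	return statusError(logger, err)
}
// On success, result would be stored in the cycle state for Reserve (elided here).
_ = result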
Force-pushed from 62bc336 to 2436ee7.
/lgtm
LGTM label has been added. Git tree hash: 37b3ada6bf1fc58c23e3ac7b3a62790a656c58c5
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: macsko, pohly, sanposhiho
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/retest
@pohly: The label(s) In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/label api-review
The output can be used for `benchstat` to summarize results or to do before/after
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Could we add a link?
The output can be used for `benchstat` to summarize results or to do before/after
The output can be used for [`benchstat`](https://pkg.go.dev/golang.org/x/perf/cmd/benchstat) to summarize results or to do before/after
What type of PR is this?
/kind feature
What this PR does / why we need it:
The intent is to catch abnormal runtimes with the generously large default timeout of 10 seconds, as discussed here:
Which issue(s) this PR fixes:
Related-to: #131730 (comment), kubernetes/enhancements#4381
Special notes for your reviewer:
We have to set up a context with the configured timeout (optional!), then ensure that both CEL evaluation and the allocation logic itself properly returns the context error. The scheduler plugin then can convert that into "unschedulable".
Does this PR introduce a user-facing change?