
DRA scheduler: implement filter timeout #132033

Open

wants to merge 7 commits into master from dra-scheduler-filter-timeout

Conversation

@pohly (Contributor) commented May 30, 2025

What type of PR is this?

/kind feature

What this PR does / why we need it:

The intent is to catch abnormal runtimes with the generously large default timeout of 10 seconds, as discussed in the issues linked below.

Which issue(s) this PR fixes:

Related-to: #131730 (comment), kubernetes/enhancements#4381

Special notes for your reviewer:

We have to set up a context with the configured timeout (optional!), then ensure that both CEL evaluation and the allocation logic itself properly return the context error. The scheduler plugin can then convert that into "unschedulable".
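
For illustration, here is a minimal, self-contained sketch of that flow. It is not the PR's code: `filter`, `allocate`, and `errUnschedulable` are placeholder names standing in for the plugin entry point, the allocator, and the framework's unschedulable status.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errUnschedulable = errors.New("unschedulable")

// filter wraps the allocation attempt in an optional timeout and translates
// a deadline error into an "unschedulable" result instead of a hard failure.
func filter(parent context.Context, filterTimeout time.Duration, allocate func(context.Context) error) error {
	ctx := parent
	if filterTimeout > 0 { // zero disables the timeout
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(parent, filterTimeout)
		defer cancel()
	}
	switch err := allocate(ctx); {
	case err == nil:
		return nil
	case errors.Is(err, context.DeadlineExceeded):
		return fmt.Errorf("%w: timed out trying to allocate devices", errUnschedulable)
	default:
		return err
	}
}

func main() {
	slow := func(ctx context.Context) error {
		select {
		case <-time.After(50 * time.Millisecond):
			return nil
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	fmt.Println(filter(context.Background(), 10*time.Millisecond, slow)) // unschedulable: timed out trying to allocate devices
	fmt.Println(filter(context.Background(), 0, slow))                   // <nil>
}
```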

Does this PR introduce a user-facing change?

DRA: the scheduler plugin now prevents abnormal filter runtimes by timing out after 10 seconds. This is configurable via the plugin configuration's `FilterTimeout`. Setting it to zero disables the timeout and restores the behavior of Kubernetes <= 1.33.

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. release-note Denotes a PR that will be considered when it comes time to generate release notes. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. kind/feature Categorizes issue or PR as related to a new feature. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels May 30, 2025
@k8s-ci-robot (Contributor):

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels May 30, 2025
@k8s-ci-robot k8s-ci-robot added kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API sig/node Categorizes an issue or PR as relevant to SIG Node. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. wg/device-management Categorizes an issue or PR as relevant to WG Device Management. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels May 30, 2025
@@ -682,6 +685,10 @@ func lookupAttribute(device *draapi.BasicDevice, deviceID DeviceID, attributeNam
// This allows the logic for subrequests to call allocateOne with the same
// device index without causing infinite recursion.
func (alloc *allocator) allocateOne(r deviceIndices, allocateSubRequest bool) (bool, error) {
if alloc.ctx.Err() != nil {
return false, fmt.Errorf("filter operation aborted: %w", alloc.ctx.Err())
@pohly (Contributor Author) commented May 30, 2025

TODO:

@pohly (Contributor Author):

I found no relevant performance impact of this additional if check.

What I did find was that our instructions for running scheduler_perf didn't show how to use benchstat 😁 I had to re-discover how to do that. I've included one commit with updated instructions.

@sanposhiho: I also included one commit with the enhancements for Filter cancellation (use context.CancelCause, documentation in the interface).

This PR is now ready for merging.

/assign @sanposhiho @macsko

@pohly pohly force-pushed the dra-scheduler-filter-timeout branch from 1d4d178 to f1aec04 Compare May 30, 2025 16:05
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 30, 2025
@pohly (Contributor Author) commented May 30, 2025

/retest

Some known flakes, timeouts.

@pohly pohly force-pushed the dra-scheduler-filter-timeout branch from f1aec04 to dc1bb36 Compare June 2, 2025 08:48
@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. area/code-generation and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jun 2, 2025
// usage of DRA. However, slow filtering can slow down Pod scheduling
// also for Pods not using DRA. Administators can reduce the timeout
// after checking the
// `scheduler_framework_extension_point_duration_seconds` metrics.
Member:

Relying on this metric would give the end-to-end filtering duration (all plugins for all the nodes). You could add that the value could be adjusted based on the `scheduler_plugin_execution_duration_seconds` metric (specific plugin for one node).

@pohly (Contributor Author) commented Jun 26, 2025

Good point. Then admins should look at `scheduler_plugin_execution_duration_seconds` first because that is what the timeout is applied to, with `scheduler_framework_extension_point_duration_seconds` providing some additional insights. Updated accordingly.

	// after checking the
	// `scheduler_plugin_execution_duration_seconds` metrics.
	// That tracks the time spent in each Filter operation.
	// There's also `scheduler_framework_extension_point_duration_seconds`
	// which tracks the duration of filtering overall.

@@ -559,6 +559,14 @@ type FilterPlugin interface {
// For example, during preemption, we may pass a copy of the original
// nodeInfo object that has some pods removed from it to evaluate the
// possibility of preempting them to schedule the target pod.
//
// Plugins are encourage to check the context for cancellation.
Member:

Suggested change
// Plugins are encourage to check the context for cancellation.
// Plugins are encouraged to check the context for cancellation.

@pohly (Contributor Author):

Fixed.
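
Since the new interface documentation encourages plugins to check the context, here is a minimal stdlib-only sketch of what that can look like inside a long-running Filter. `findDevice` is a made-up stand-in for a plugin's per-node work, not scheduler code.

```go
package main

import (
	"context"
	"fmt"
)

// findDevice stands in for a long-running per-node filter loop. It aborts as
// soon as the context is done and reports why via context.Cause.
func findDevice(ctx context.Context, devices []string, want string) (string, error) {
	for _, d := range devices {
		if ctx.Err() != nil {
			return "", fmt.Errorf("filter operation aborted: %w", context.Cause(ctx))
		}
		if d == want {
			return d, nil
		}
	}
	return "", fmt.Errorf("device %q not found", want)
}

func main() {
	ctx, cancel := context.WithCancelCause(context.Background())
	cancel(fmt.Errorf("enough nodes found"))
	_, err := findDevice(ctx, []string{"gpu-0", "gpu-1"}, "gpu-1")
	fmt.Println(err) // filter operation aborted: enough nodes found
}
```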

case errors.Is(err, context.DeadlineExceeded):
return statusUnschedulable(logger, "timed out trying to allocate devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()))
case ctx.Err() != nil:
return statusUnschedulable(logger, "asked by caller to stop allocate devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()), "cause", context.Cause(ctx))
Member:

Suggested change
return statusUnschedulable(logger, "asked by caller to stop allocate devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()), "cause", context.Cause(ctx))
return statusUnschedulable(logger, "asked by caller to stop allocating devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()), "cause", context.Cause(ctx))

@pohly (Contributor Author):

Fixed.

Comment on lines 680 to 689
PluginConfig: append(append(slices.Clone(pluginConfigs[0:1]), configv1.PluginConfig{
Name: "DynamicResources",
Args: runtime.RawExtension{Object: &configv1.DynamicResourcesArgs{
TypeMeta: metav1.TypeMeta{
Kind: "DynamicResourcesArgs",
APIVersion: "kubescheduler.config.k8s.io/v1",
},
FilterTimeout: &metav1.Duration{Duration: 10 * time.Second},
}},
}), pluginConfigs[1:]...),
Member:

Maybe this way will be simpler:

Suggested change
PluginConfig: append(append(slices.Clone(pluginConfigs[0:1]), configv1.PluginConfig{
Name: "DynamicResources",
Args: runtime.RawExtension{Object: &configv1.DynamicResourcesArgs{
TypeMeta: metav1.TypeMeta{
Kind: "DynamicResourcesArgs",
APIVersion: "kubescheduler.config.k8s.io/v1",
},
FilterTimeout: &metav1.Duration{Duration: 10 * time.Second},
}},
}), pluginConfigs[1:]...),
PluginConfig: append([]configv1.PluginConfig{pluginConfigs[0], configv1.PluginConfig{
Name: "DynamicResources",
Args: runtime.RawExtension{Object: &configv1.DynamicResourcesArgs{
TypeMeta: metav1.TypeMeta{
Kind: "DynamicResourcesArgs",
APIVersion: "kubescheduler.config.k8s.io/v1",
},
FilterTimeout: &metav1.Duration{Duration: 10 * time.Second},
}},
}}, pluginConfigs[1:]...),

@pohly (Contributor Author):

Yes. Thanks!

Comment on lines 1021 to 1047
// This variant uses the normal test objects to avoid excessive runtime.
// It could theoretically pass even though the 1 ns limit is enforced,
// but that's unlikely.
Member:

Move this comment to the "timeout" test case?

@pohly (Contributor Author):

"timeout" uses unusual ("large") test data to increase the likelihood that processing them runs into the timeout.

It's really this case here which falls back to the normal test data (input + output) again.

@pohly (Contributor Author):

Perhaps the "theoretically pass" part is misleading. I changed it to:

			// It could theoretically pass even though the 1 ns limit is enforced
			// although it shouldn't be (which then would be a false positive),
			// but that's unlikely.

@@ -260,6 +266,10 @@ func (c CompilationResult) DeviceMatches(ctx context.Context, input Device) (boo

result, details, err := c.Program.ContextEval(ctx, variables)
if err != nil {
// CEL does not wrap the context error. We have to deduce why it failed.
if strings.Contains(err.Error(), "operation interrupted") && ctx.Err() != nil {
return false, details, fmt.Errorf("%w: %w", err, ctx.Err())
Member:

Shouldn't it use context.Cause similarly to allocator?

@pohly (Contributor Author):

Yes, that's indeed better. I try to train myself to use it, but sometimes forget 😅
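
To make the resulting change concrete, here is a self-contained sketch of the error wrapping discussed in this thread. The CEL call is stubbed out (`evalStub`) so this illustrates the pattern only; it is not the actual `DeviceMatches` implementation.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"strings"
)

// deviceMatches shows the wrapping discussed above; evalStub stands in for
// the CEL program evaluation so the example has no external dependencies.
func deviceMatches(ctx context.Context, evalStub func(context.Context) (bool, error)) (bool, error) {
	ok, err := evalStub(ctx)
	if err != nil {
		// CEL does not wrap the context error, so deduce why evaluation
		// stopped and attach the cause for the caller.
		if strings.Contains(err.Error(), "operation interrupted") && ctx.Err() != nil {
			return false, fmt.Errorf("%w: %w", err, context.Cause(ctx))
		}
		return false, err
	}
	return ok, nil
}

func main() {
	ctx, cancel := context.WithCancelCause(context.Background())
	cancel(errors.New("filter timeout reached"))
	_, err := deviceMatches(ctx, func(context.Context) (bool, error) {
		return false, errors.New("operation interrupted")
	})
	fmt.Println(err) // operation interrupted: filter timeout reached
}
```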

case "cancel":
c, cancel := context.WithCancel(ctx)
defer cancel()
ctx = c
Member:

Does this test case make sense if we defer the cancel?

@pohly (Contributor Author):

No - good catch! This was meant to start the evaluation with an already canceled context. Damn muscle memory... or cut-and-paste.
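
For reference, a tiny stdlib-only sketch of the intended setup (not the test code itself): cancel immediately instead of deferring, so evaluation starts with an already canceled context.

```go
package main

import (
	"context"
	"fmt"
)

func main() {
	// No defer: cancel right away so that whatever runs next already sees
	// a canceled context, which is what the "cancel" test case is meant to
	// exercise.
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	fmt.Println(ctx.Err()) // context canceled
}
```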

@pohly pohly force-pushed the dra-scheduler-filter-timeout branch from aa2c617 to fd7197f Compare June 26, 2025 15:15
@sanposhiho (Member) left a comment

I know the linter is failing, but leaving
/approve
from me anyway because I'm going to be unavailable next week.

I'll leave /lgtm to @macsko, though I believe it needs api-review anyway to get merged.

@pohly pohly force-pushed the dra-scheduler-filter-timeout branch from fd7197f to 66b9406 Compare June 27, 2025 07:16
ctx = c
case "cancel":
c, cancel := context.WithCancel(ctx)
defer cancel()
Member:

Remove defer from here as well.

@pohly (Contributor Author):

Fixed.

case errors.Is(err, context.DeadlineExceeded):
return statusUnschedulable(logger, "timed out trying to allocate devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()))
case ctx.Err() != nil:
return statusUnschedulable(logger, "asked by caller to stop allocating devices", "pod", klog.KObj(pod), "node", klog.KObj(node), "resourceclaims", klog.KObjSlice(state.allocator.ClaimsToAllocate()), "cause", context.Cause(ctx))
Member:

This behavior is a bit inconsistent with the Filter's interface comment, where you mentioned that context.Cause(ctx) is passed to the status message. Here, you would only log the cause.

@pohly (Contributor Author):

True. Let's do as suggested and include the cause. I also added a test case. I need to rebase, so here's the incremental diff before the rebase for your review: https://github.com/kubernetes/kubernetes/compare/66b9406a7dd4de8a82f1c11769e3379f5c1b51cf..62bc3360c20b1dd54206d4e0a6414a8fb5618ec8
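
A short sketch of that suggestion, with `statusUnschedulable` replaced by a plain string so it stays self-contained: the cause set via context.CancelCause ends up in the message itself rather than only in the log. This is an illustration, not the plugin's code.

```go
package main

import (
	"context"
	"fmt"
)

// filterStatus mimics how the status message could fold in the cause; it is a
// plain-string stand-in for statusUnschedulable.
func filterStatus(ctx context.Context, err error) string {
	switch {
	case err == nil:
		return "Success"
	case ctx.Err() != nil:
		return fmt.Sprintf("asked by caller to stop allocating devices: %v", context.Cause(ctx))
	default:
		return fmt.Sprintf("Error: %v", err)
	}
}

func main() {
	ctx, cancel := context.WithCancelCause(context.Background())
	cancel(fmt.Errorf("enough nodes found"))
	fmt.Println(filterStatus(ctx, ctx.Err()))
	// asked by caller to stop allocating devices: enough nodes found
}
```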

pohly added a commit to pohly/kubernetes that referenced this pull request Jun 27, 2025
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 28, 2025
@pohly pohly force-pushed the dra-scheduler-filter-timeout branch from 66b9406 to 62bc336 Compare June 28, 2025 15:35
pohly added 7 commits June 28, 2025 17:37
With benchstat it's easy to do before/after comparisons, but the section for
running benchmarks didn't mention it at all and didn't work as shown there:
- benchmark results must be printed (FULL_LOG)
- timeout might have been too short (KUBE_TIMEOUT)
- only "short" benchmarks ran (SHORT)
- klog log output must be redirected (ARTIFACTS)

When using context.CancelCause in the scheduler and context.Cause in plugins,
the status returned by plugins is more informative than just "context
canceled".

Context cancellation itself is not new, but many plugin authors probably
weren't aware of it because it wasn't documented.

The only option is the filter timeout.
The implementation of it follows in a separate commit.

The intent is to catch abnormal runtimes with the generously large default
timeout of 10 seconds.

We have to set up a context with the configured timeout (optional!), then
ensure that both CEL evaluation and the allocation logic itself properly
return the context error. The scheduler plugin can then convert that into
"unschedulable".

The allocator and thus Filter now also check for context cancellation by the
scheduler. This happens when enough nodes have been found.

It's unclear why k8s.io/kubernetes/pkg/apis/resource/install needs
to be imported explicitly. Having the apiserver and scheduler ready
to be started ensures that all APIs are available.

This covers disabling the feature via the configuration, failing to schedule
because of timeouts for all nodes, and retrying after ResourceSlice changes with
partial success (timeout for one node, success for the other).

While at it, some helper code gets improved.

The DRASchedulerFilterTimeout feature gate simplifies disabling the timeout
because setting a feature gate is often easier than modifying the scheduler
configuration with a zero timeout value.

The timeout and feature gate are new. The gate starts as beta and enabled by
default, which is consistent with the "smaller changes with low enough risk
that still may need to be disabled..." guideline.
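
How the gate and the config field could interact, as a hedged sketch: the feature gate is the emergency off switch, the config field the fine-grained knob. The type and function names below are placeholders, not the actual kube-scheduler API.

```go
package main

import (
	"fmt"
	"time"
)

// dynamicResourcesArgs and effectiveFilterTimeout are illustrative only; the
// real plugin reads its args from the scheduler configuration and checks the
// DRASchedulerFilterTimeout feature gate.
type dynamicResourcesArgs struct {
	FilterTimeout time.Duration // 0 disables the timeout
}

func effectiveFilterTimeout(gateEnabled bool, args dynamicResourcesArgs) time.Duration {
	if !gateEnabled {
		// Feature gate off: behave like Kubernetes <= 1.33, i.e. no timeout.
		return 0
	}
	return args.FilterTimeout
}

func main() {
	args := dynamicResourcesArgs{FilterTimeout: 10 * time.Second}
	fmt.Println(effectiveFilterTimeout(true, args))  // 10s
	fmt.Println(effectiveFilterTimeout(false, args)) // 0s
}
```
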
@pohly pohly force-pushed the dra-scheduler-filter-timeout branch from 62bc336 to 2436ee7 Compare June 28, 2025 15:40
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 28, 2025
@macsko (Member) commented Jun 30, 2025

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 30, 2025
@k8s-ci-robot (Contributor):

LGTM label has been added.

Git tree hash: 37b3ada6bf1fc58c23e3ac7b3a62790a656c58c5

@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: macsko, pohly, sanposhiho
Once this PR has been reviewed and has the lgtm label, please assign ahg-g, msau42 for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@pohly (Contributor Author) commented Jun 30, 2025

/retest

@pohly (Contributor Author) commented Jun 30, 2025

/label api/review
/assign @ahg-g @msau42

This changes the scheduler config (implies API review) and adds a feature gate which acts as emergency hatch if the new functionality causes problems (unlikely, so goes straight to beta with on-by-default).

@k8s-ci-robot (Contributor):

@pohly: The label(s) /label api/review cannot be applied. These labels are supported: api-review, tide/merge-method-merge, tide/merge-method-rebase, tide/merge-method-squash, team/katacoda, refactor, ci-short, ci-extended, ci-full, official-cve-feed. Is this label configured under labels -> additional_labels or labels -> restricted_labels in plugin.yaml?

In response to this:

/label api/review
/assign @ahg-g @msau42

This changes the scheduler config (implies API review) and adds a feature gate which acts as emergency hatch if the new functionality causes problems (unlikely, so goes straight to beta with on-by-default).

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@pohly (Contributor Author) commented Jun 30, 2025

/label api-review

@k8s-ci-robot k8s-ci-robot added the api-review Categorizes an issue or PR as actively needing an API review. label Jun 30, 2025
```

The output can used for `benchstat` to summarize results or to do before/after
Contributor:

Could we add a link?

Suggested change
The output can used for `benchstat` to summarize results or to do before/after
The output can used for [`benchstat`](https://pkg.go.dev/golang.org/x/perf/cmd/benchstat) to summarize results or to do before/after

Labels
api-review Categorizes an issue or PR as actively needing an API review. area/code-generation area/test cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API kind/feature Categorizes issue or PR as related to a new feature. lgtm "Looks good to me", indicates that a PR is ready to be merged. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. release-note Denotes a PR that will be considered when it comes time to generate release notes. sig/node Categorizes an issue or PR as relevant to SIG Node. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. sig/testing Categorizes an issue or PR as relevant to SIG Testing. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. wg/device-management Categorizes an issue or PR as relevant to WG Device Management.
9 participants