DRA scheduler: implement filter timeout #132033
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@@ -682,6 +685,10 @@ func lookupAttribute(device *draapi.BasicDevice, deviceID DeviceID, attributeNam
// This allows the logic for subrequests to call allocateOne with the same
// device index without causing infinite recursion.
func (alloc *allocator) allocateOne(r deviceIndices, allocateSubRequest bool) (bool, error) {
	if alloc.ctx.Err() != nil {
		return false, fmt.Errorf("filter operation aborted: %w", alloc.ctx.Err())
TODO:
- benchmark this additional if check
- decide whether we should add a separate feature gate for it (KEP 4381: DRA structured parameters: updates, promotion to GA enhancements#5333 (comment))
I found no relevant performance impact of this additional if check.
What I did find was that our instructions for running scheduler_perf didn't show how to use benchstat 😁 I had to re-discover how to do that. I've included one commit with updated instructions.
@sanposhiho: I also included one commit with the enhancements for Filter cancellation (use context.CancelCause, documentation in the interface).
This PR is now ready for merging.
/assign @sanposhiho @macsko
Force-pushed from 1d4d178 to f1aec04
/retest
Some known flakes, timeouts.
Force-pushed from f1aec04 to dc1bb36
@pohly: The label(s) … In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/label api-review
The output can be used for `benchstat` to summarize results or to do before/after
Could we add a link?
The output can be used for `benchstat` to summarize results or to do before/after
The output can be used for [`benchstat`](https://pkg.go.dev/golang.org/x/perf/cmd/benchstat) to summarize results or to do before/after
Added and rebased to address conflicts with other Filter interface changes - please re-add LGTM?
With benchstat it's easy to do before/after comparisons, but the section for running benchmarks didn't mention it at all and didn't work as shown there:
- benchmark results must be printed (FULL_LOG)
- timeout might have been too short (KUBE_TIMEOUT)
- only "short" benchmarks ran (SHORT)
- klog log output must be redirected (ARTIFACTS)
When using context.CancelCause in the scheduler and context.Cause in plugins, the status returned by plugins is more informative than just "context canceled". Context cancellation itself is not new, but many plugin authors probably weren't aware of it because it wasn't documented.
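For reference, a minimal standalone sketch (not code from this PR) of the pairing described above: the scheduler cancels with an explicit cause, and a plugin reads it back via `context.Cause`. The cause text here is invented for illustration.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

func main() {
	// Scheduler side: cancel with a cause instead of a bare cancel.
	ctx, cancel := context.WithCancelCause(context.Background())
	cancel(errors.New("enough nodes have been found")) // hypothetical cause text

	// Plugin side: ctx.Err() only reports "context canceled",
	// while context.Cause(ctx) surfaces the more informative reason.
	fmt.Println(ctx.Err())          // context canceled
	fmt.Println(context.Cause(ctx)) // enough nodes have been found
}
```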
The only configuration option is the filter timeout; the implementation of it follows in a separate commit.
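As a rough illustration only (the struct and field names below are invented here, not taken from the PR), a single-option plugin configuration could look roughly like this:

```go
package config

import "time"

// DynamicResourcesArgs sketches what a single-option plugin configuration
// could look like; the PR's actual field names and types may differ.
type DynamicResourcesArgs struct {
	// FilterTimeout limits how long one Filter call may spend searching for
	// a device allocation on a single node. Zero disables the timeout.
	FilterTimeout time.Duration
}
```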
Force-pushed from 2436ee7 to 796b64e
New changes are detected. LGTM label has been removed.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: macsko, pohly, sanposhiho
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
want: want{
	filter: perNodeResult{
		workerNode.Name: {
			status: framework.NewStatus(framework.UnschedulableAndUnresolvable, `asked by caller to stop allocating devices: test canceling Filter`),
In case you're not already on top of it, it looks like #132087 also came with some more subtle conflicts when these were moved:
status: framework.NewStatus(framework.UnschedulableAndUnresolvable, `asked by caller to stop allocating devices: test canceling Filter`),
status: fwk.NewStatus(fwk.UnschedulableAndUnresolvable, `asked by caller to stop allocating devices: test canceling Filter`),
I had fixed the merge conflicts, but hadn't noticed that I also needed to update some code that had no conflicts. Should be fixed now.
The intent is to catch abnormal runtimes with the generously large default timeout of 10 seconds. We have to set up a context with the configured timeout (optional!), then ensure that both CEL evaluation and the allocation logic itself properly return the context error. The scheduler plugin can then convert that into "unschedulable". The allocator, and thus Filter, now also checks for context cancellation by the scheduler, which happens when enough nodes have been found.
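A self-contained sketch of that flow, with invented names (`runAllocation`, `filterWithTimeout`) standing in for the real CEL/allocator and plugin code:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// runAllocation stands in for CEL evaluation plus the allocator; it returns
// the context error if the deadline expires before the work finishes.
func runAllocation(ctx context.Context) error {
	select {
	case <-time.After(50 * time.Millisecond): // pretend the search takes a while
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

// filterWithTimeout wraps the allocation in an optional timeout and maps a
// context error to an "unschedulable"-style result instead of an internal error.
func filterWithTimeout(parent context.Context, filterTimeout time.Duration) error {
	ctx := parent
	if filterTimeout > 0 { // zero disables the timeout in this sketch
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(parent, filterTimeout)
		defer cancel()
	}
	if err := runAllocation(ctx); err != nil {
		if ctx.Err() != nil {
			return fmt.Errorf("unschedulable: %w", context.Cause(ctx))
		}
		return err
	}
	return nil
}

func main() {
	fmt.Println(filterWithTimeout(context.Background(), 10*time.Millisecond))
	// unschedulable: context deadline exceeded
}
```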
It's unclear why k8s.io/kubernetes/pkg/apis/resource/install needs to be imported explicitly. Having the apiserver and scheduler ready to be started ensures that all APIs are available.
This covers disabling the feature via the configuration, failing to schedule because of timeouts for all nodes, and retrying after ResourceSlice changes with partial success (timeout for one node, success for the other). While at it, some helper code gets improved.
The DRASchedulerFilterTimeout feature gate simplifies disabling the timeout because setting a feature gate is often easier than modifying the scheduler configuration with a zero timeout value. The timeout and feature gate are new. The gate starts as beta and enabled by default, which is consistent with the "smaller changes with low enough risk that still may need to be disabled..." guideline.
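A toy sketch of how the gate and the configured value could combine; the function and parameters below are illustrative, not the PR's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// effectiveFilterTimeout returns the timeout that should be applied:
// disabled (zero) when the feature gate is off or no timeout is configured.
func effectiveFilterTimeout(gateEnabled bool, configured time.Duration) time.Duration {
	if !gateEnabled || configured <= 0 {
		return 0 // filtering runs without a deadline
	}
	return configured
}

func main() {
	fmt.Println(effectiveFilterTimeout(true, 10*time.Second))  // 10s (the default)
	fmt.Println(effectiveFilterTimeout(false, 10*time.Second)) // 0s, gate disables it
	fmt.Println(effectiveFilterTimeout(true, 0))               // 0s, config disables it
}
```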
Force-pushed from 796b64e to 8c70ff3
What type of PR is this?
/kind feature
What this PR does / why we need it:
The intent is to catch abnormal runtimes with the generously large default timeout of 10 seconds, as discussed here:
Which issue(s) this PR fixes:
Related-to: #131730 (comment), kubernetes/enhancements#4381
Special notes for your reviewer:
We have to set up a context with the configured timeout (optional!), then ensure that both CEL evaluation and the allocation logic itself properly return the context error. The scheduler plugin can then convert that into "unschedulable".
Does this PR introduce a user-facing change?