Implement DRA Device Binding Conditions (KEP-5007) #130160


Open: KobayashiD27 wants to merge 5 commits into master from the dra-device-binding-conditions branch

Conversation

KobayashiD27
Contributor

What type of PR is this?

/kind feature

What this PR does / why we need it:

This PR implements the KEP-5007 DRA device binding conditions. This feature ensures that the scheduler waits in the PreBind phase until any DRA devices that need preparation are ready.
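
For illustration, a minimal sketch of the readiness check that such a PreBind wait could perform, assuming the binding conditions are surfaced as standard metav1.Conditions on the claim's allocated device status. The deviceReady helper and the condition name are hypothetical, not this PR's actual code:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// deviceReady reports whether every binding condition for an allocated
// device is True, or whether any binding failure condition has fired.
// A PreBind implementation could poll a check like this until the device
// is ready, has failed, or a timeout expires.
func deviceReady(status []metav1.Condition, bindingConditions, failureConditions []string) (ready, failed bool) {
	for _, cond := range failureConditions {
		if meta.IsStatusConditionTrue(status, cond) {
			return false, true // preparation failed: unreserve and reschedule
		}
	}
	for _, cond := range bindingConditions {
		if !meta.IsStatusConditionTrue(status, cond) {
			return false, false // still preparing: keep waiting in PreBind
		}
	}
	return true, false // all binding conditions met: proceed to Bind
}

func main() {
	status := []metav1.Condition{
		{Type: "dra.example.com/is-prepared", Status: metav1.ConditionTrue},
	}
	ready, failed := deviceReady(status, []string{"dra.example.com/is-prepared"}, nil)
	fmt.Printf("ready=%v failed=%v\n", ready, failed) // ready=true failed=false
}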

Which issue(s) this PR fixes:

Related to kubernetes/enhancements#5007

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Add new optional APIs in ResourceSlice.Basic and ResourceClaim.Status.AllocatedDeviceStatus.
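
As a rough sketch of where those fields would live (names follow the KEP discussion and are assumptions pending API review, not the final upstream API):

// Illustrative only: the optional fields on a device published in a
// ResourceSlice. Exact names and placement are decided during API review.
type BasicDevice struct {
	// ... existing fields ...

	// BindingConditions lists condition types that must all be True on the
	// allocated device's status before the scheduler proceeds to Bind.
	BindingConditions []string

	// BindingFailureConditions lists condition types that, when True, mean
	// preparation failed and the allocation should be released so the pod
	// can be rescheduled.
	BindingFailureConditions []string
}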

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

- [KEP]: https://github.com/kubernetes/enhancements/issues/5007

@k8s-ci-robot added labels on Feb 14, 2025: release-note, do-not-merge/work-in-progress, kind/feature, size/XL, cncf-cla: yes, do-not-merge/needs-sig, needs-triage, needs-ok-to-test, needs-priority
@k8s-ci-robot
Contributor

Hi @KobayashiD27. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added labels on Feb 14, 2025: area/code-generation, area/kubelet, kind/api-change, sig/api-machinery, sig/node, sig/scheduling, wg/device-management; removed: do-not-merge/needs-sig
@KobayashiD27
Contributor Author

We are pleased to share the initial implementation of the KEP-5007 DRA device binding conditions. While this PR aligns with the approach outlined in the KEP, we recognize that there may be areas for improvement. We invite the community to review the implementation and provide feedback and insights to help refine and enhance this feature.

@pohly @johnbelamaric
First, from the perspective of DRA, I would like to confirm that the overall approach aligns with the intended direction. Your insight would be invaluable in ensuring that we are on the right track.

@pohly (Contributor) left a comment:

Some quick comments. I have looked at the API and allocator, but not the scheduler plugin.

The API lacks validation, but that's of course okay when the main goal for now is to try out the functionality.

const (
	// IsPrepared indicates the device ready state.
	// If NeedToPreparing is True and IsPrepared is True, the scheduler proceeds to Bind.
	IsPrepared = "dra.example.com/is-prepared"
@pohly (Contributor):

This was just an example in the KEP. It doesn't belong in the upstream API. Same for PreparingFailed.

@pohly (Contributor):

Let's add the device status only when needed by a device.

You also have to add feature gate checking: I don't remember whether it was spelled out explicitly in the KEP (if not, please add it in a follow-up), but what makes sense to me is to ignore devices that have binding conditions when the feature is turned off. In other words, don't select them, because the code that waits during binding wouldn't be active.
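
A minimal sketch of that gating logic, assuming a boolean for the feature gate is plumbed into the allocator (names are illustrative, not the PR's actual code):

// deviceSelectable reports whether the allocator may consider a device.
// With the BindingConditions feature gate off, devices that declare
// binding conditions are skipped entirely, because the scheduler code
// that waits during binding would not be active.
func deviceSelectable(bindingConditionsEnabled bool, bindingConditions []string) bool {
	if !bindingConditionsEnabled && len(bindingConditions) > 0 {
		return false
	}
	return true
}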

@KobayashiD27 (Contributor Author):

I now add allocatedDeviceStatus only when BindingConditions exist, and I pass the feature gate status of BindingConditions to the allocator and check it there.

@KobayashiD27
Contributor Author

@pohly
Thank you for the initial feedback. To ensure we are on the right track, I would like to highlight a few key areas where community consensus is crucial. It would be helpful if you could prioritize reviewing the following aspects:

  1. Transferring AllocatedDeviceStatus from the allocator to the scheduler plugin
  2. Clearing the allocation state in the UnReserve phase

Early feedback on these sections would be very helpful.

Additionally, regarding the comment about the lack of API validation, are you referring to pkg/apis/resources/validation/validation.go?

@pohly
Contributor

pohly commented Feb 19, 2025

/milestone v1.33

@k8s-ci-robot added this to the v1.33 milestone on Feb 19, 2025
@KobayashiD27 force-pushed the dra-device-binding-conditions branch from 1c36f41 to f07e273 on February 25, 2025
@wojtek-t
Member

My concern is that the KEP still refers to the anti-pattern of failing binding on successful device attachment. I worry that if we proceed, that pattern in fact becomes enabled and we will end up with device implementations relying on it. This pattern may be problematic whenever we have workloads that are planned ahead. Failing binding should be a signal for workload rescheduling, not a retry, so it would be hard to distinguish the two.

@wojtek-t do you have any thoughts on that?

Jumping late as much was written since then.

I fully agree with you, John, and others that it seems to be an anti-pattern. And there seems to be agreement that we don't want to proceed with that for beta.

My mental model for whether we want to proceed with something for Alpha or not [in general, not just in this particular case] is:

  • if we roughly know what the end state looks like, we may take some shortcuts for Alpha to allow initial experimentation with it. We may make a conscious tradeoff to not implement some things, or to implement them differently than we eventually want, in order to enable that. But we shouldn't do something that we know needs a path out of Alpha when we don't know what that path is.

I think that "we roughly know what the end state looks like" is the crucial bit in this particular context. It's just not true here. There are ideas (like the proxy driver), but we haven't fully explored them and haven't made any decisions. So we don't even know how much we would need to rework to achieve that state.
So until we do that homework and are able to consciously say "we do X eventually, but we still do Y now despite a bunch of rework needed later", we shouldn't proceed with it.

I didn't have time to really think about it, but the proxy driver idea seems the most promising to me and worth exploring further.

The "fail then reschedule" model is similar to how auto scaler has worked in the past. There, a pod fails to schedule, auto scaler scales the cluster, and we hope the pod gets the new node. That pattern has been less than ideal. It leads to extra latency, it leads to pods not getting scheduled as expected, it makes pre-planning of multi-node workloads very difficult, and is generally a poor user experience.

I don't think we should model this new functionality on that.

+100 to this, especially given that we're actively thinking about how to change that, exactly because of the pains John described above, and a few others (e.g. worse efficiency, workload disruptions, and more).

@dom4ha
Member

dom4ha commented May 22, 2025

I think that "we roughly know what the end state looks like" is the crucial bit in this particular context.

I agree with that, and I think we can still try to come up with a sensible solution for beta now. We probably all agree that there is no straightforward solution given the current scheduling model, but we are working on addressing this and similar problems under the umbrella of workload-aware scheduling. Even though we don't have clarity on what things should look like, we can still think about which approach would be best here.

One possibility under consideration is extending the scheduling process with a phase in which a workload is planned but not yet bound. In fact, this is how many other schedulers already work, so we can safely assume we would need such a phase. My question is whether it would be possible to somehow "reserve" the attachable resources before the attachment is eventually "requested", as part of such a planning phase.

I truly hope it is, because otherwise we wouldn't be able to schedule workloads ahead of time or perform any scheduling optimizations toward finding the best pod placement. So the second question is whether we can construct a ResourceSlice offering for a not-yet-attached device that represents the potentially attached device with all the attributes needed for scheduling. In other words, would the ResourceSlices before and after an attachment be similar to each other?

Finally, do we really need to reconstruct the ResourceSlice after attachment, and why? We already see that the two ResourceSlices should ideally be almost identical, so I'm not sure we can avoid building a proxy device plugin around this. I hope it is possible at all to mimic a not-yet-attached device, and that the proxy could be generic rather than specific to the device plugin.

Even if we find answers to those questions, it's still not clear whether attachment should be part of the binding phase, as there are several alternatives, although a mechanism similar to binding conditions may be needed anyway.

@KobayashiD27
Contributor Author

Thanks again for the thoughtful discussion. I’d like to clarify my current thinking and how I see the path forward.

While much of the conversation has (rightfully) focused on the modeling of fabric-attached devices, I believe the core mechanism proposed — BindingConditions — has broader utility and should be evaluated on its own merits. It provides a way to defer binding until readiness is confirmed, which can improve scheduling reliability in a variety of scenarios, not just for fabric devices.

I fully agree that the “fail then reschedule” pattern is problematic and should not be encouraged. I’ll revise the KEP to make that clear, and to better separate the concerns of the mechanism itself from the specific device models it might support.

At the same time, I recognize that the architectural questions around proxy drivers and planning phases are important and worth exploring. My hope is that we can continue those discussions in parallel, while also reviewing the current implementation of BindingConditions as a self-contained feature.

The current KEP still reflects the earlier fail-then-reschedule model, so I’ll be updating it to remove that framing and align it with the direction we’re now discussing. I’ll share the revised version shortly.

@dom4ha
Member

dom4ha commented May 23, 2025

I fully agree that the “fail then reschedule” pattern is problematic and should not be encouraged. I’ll revise the KEP to make that clear, and to better separate the concerns of the mechanism itself from the specific device models it might support.

I think we should focus on updating the KEP first, especially reformulating the purpose and defining which problem it solves. Doing things in the right order should help us to review the implementation and ask the right questions.

At the same time, I recognize that the architectural questions around proxy drivers and planning phases are important and worth exploring.

How important is solving the problem of attachable devices? Even if it's not a priority now, I think it's very important to explore in the context of changes we plan to make in scheduling.

@KobayashiD27
Contributor Author

Thanks for the helpful feedback, @dom4ha!
I've updated the KEP to clarify the motivation and goals, with a stronger emphasis on general applicability.
Let me know if anything still feels unclear; I'm happy to iterate further.

@k8s-ci-robot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Jun 14, 2025
@KobayashiD27
Contributor Author

@pohly @dom4ha @thockin
@johnbelamaric @macsko
The KEP update is now complete. We would appreciate it if you could continue to review the implementation.
Thank you.

@KobayashiD27 force-pushed the dra-device-binding-conditions branch from 594de1f to 9073eb2 on June 25, 2025
@k8s-ci-robot removed the needs-rebase label on Jun 25, 2025
@KobayashiD27
Contributor Author

/test pull-kubernetes-e2e-gce

@KobayashiD27
Contributor Author

The integration tests will not pass until the scheduler_perf bug is fixed...

@k8s-ci-robot added the needs-rebase label on Jun 28, 2025
@KobayashiD27 force-pushed the dra-device-binding-conditions branch from 9073eb2 to 0383ac3 on June 30, 2025
@k8s-ci-robot removed the needs-rebase label on Jun 30, 2025
@KobayashiD27 force-pushed the dra-device-binding-conditions branch 3 times, most recently from 8bd3cc2 to 12320f9 on July 1, 2025
@KobayashiD27
Contributor Author

/retest

@KobayashiD27
Contributor Author

/assign @liggitt

For API review. Per @johnbelamaric's suggestion, I'm adding an API reviewer.

@KobayashiD27 force-pushed the dra-device-binding-conditions branch from 12320f9 to e7067cb on July 2, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: johnbelamaric, KobayashiD27
Once this PR has been reviewed and has the lgtm label, please ask for approval from liggitt. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@KobayashiD27 force-pushed the dra-device-binding-conditions branch from e7067cb to 9cb01c9 on July 2, 2025