fix: allocate reusableCPUs first before allocate other available CPUs to avoid CPU leaking when cpu-manager-policy=static #131966

Open
wants to merge 1 commit into base: master

Conversation

Chunxia202410

@Chunxia202410 Chunxia202410 commented May 26, 2025

What type of PR is this?

/kind bug

What this PR does / why we need it:

With cpu-manager-policy=static, when allocating CPUs to a container of a pod, the CPU manager supports reusing the CPUs that were already allocated to the pod's non-restartable init containers.

However, in some cases the app container cannot reuse the init containers' CPUs, which leads to CPU leakage.

For example:
defaultCpuSet: {{0,10},{1,11},{2,12},{3,13},{4,14},{5,15},{6,16},{7,17},{8,18},{9,19}}
system-reserved CPU: {0}
For a pod whose init container requests 1 CPU and whose app container requests 2 CPUs,
the result in cpu_manager_state is:

{
  "policyName":"static",
  "defaultCpuSet":"0,2-9,12-19",
  "entries":{
    "bc2c2aa2-ec4e-43eb-b656-a0928d92f19e":{
      "test-c-1":"1,11",
      "test-init-c-1":"10"   <-- leaked CPU
    }
  },
  "checksum":3859770841
}

CPU leakage means that the CPU is neither used by the app container nor usable by any other pod, because it is never released back to the defaultCpuSet.
CPU leakage can cause an UnexpectedAdmissionError when creating a new pod, because the number of CPUs the scheduler believes to be available no longer matches the number of CPUs actually available in the kubelet. For details, please refer to #129556 (comment).

There are two possible solutions (#112228 (comment)):

Solution 1: Allocate reusable CPUs first, before allocating other available CPUs.
Solution 2: Release the reusable CPUs that are not used by the app container after the init container exits.

Assuming my implementation of Solution 2 in #131764 is correct, it has shown that Solution 2 might not be a good approach, because it cannot cover the pod-restart case.

This PR implements Solution 1: allocate reusable CPUs first, before allocating other available CPUs.

Which issue(s) this PR fixes:

Fixes #112228

@k8s-ci-robot
Contributor

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 26, 2025
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.


@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label May 26, 2025
@k8s-ci-robot
Contributor

Hi @Chunxia202410. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


@k8s-ci-robot k8s-ci-robot added needs-priority Indicates a PR lacks a `priority/foo` label and requires one. area/kubelet sig/node Categorizes an issue or PR as relevant to SIG Node. labels May 26, 2025
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 26, 2025
@Chunxia202410
Author

/sig node

@ffromani
Contributor

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels May 26, 2025
@Chunxia202410 Chunxia202410 force-pushed the cpu_manager_reusableCPUs branch 2 times, most recently from 4815c9a to d687e4e Compare May 27, 2025 02:18
@Chunxia202410
Author

/test pull-kubernetes-e2e-kind-ipv6

@Chunxia202410 Chunxia202410 force-pushed the cpu_manager_reusableCPUs branch from d687e4e to 2cc77e6 Compare May 27, 2025 08:17
@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels May 27, 2025
@ffromani
Contributor

/assign

this is a quite delicate fix in a key area of CPU management. We will need:

  1. a very solid rationale in the commit message/PR presentation. It's fine to defer the full details to the issue/past discussion, but this PR alone should be self-sufficient to grasp the change, the rationale, and the implications
  2. extensive test coverage (unit tests are not sufficient, I think, but I can possibly be proven wrong)
  3. it is not excluded that we will need a flag or a feature gate to change behavior we have had for years and that people (implicitly?) depend on, much like we had to do for the quota issue (fix: pods meeting qualifications for static placement when cpu-manager-policy=static should not have cfs quota enforcement #127525)

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Chunxia202410
Once this PR has been reviewed and has the lgtm label, please assign derekwaynecarr for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@Chunxia202410 Chunxia202410 force-pushed the cpu_manager_reusableCPUs branch from ca0cb48 to 2ad6a19 Compare June 4, 2025 09:33
@Chunxia202410
Author

/test pull-kubernetes-unit
/test pull-kubernetes-unit-windows-master

@Chunxia202410 Chunxia202410 force-pushed the cpu_manager_reusableCPUs branch from 2ad6a19 to 0906e4a Compare June 4, 2025 10:39
@Chunxia202410
Author

  1. a very solid rationale in the commit message/PR presentation. It's fine to defer the full details to the issue/past discussion, but this PR alone should be self-sufficient to grasp the change, the rationale, and the implications

Updated the PR description.

  1. extensive test coverage (unit tests are not sufficient, I think, but I can possibly be proven wrong)

Added 5 e2e test cases in cpumanager_test.go.

  1. it is not excluded that we will need a flag or a feature gate to change behavior we have had for years and that people (implicitly?) depend on, much like we had to do for the quota issue (fix: pods meeting qualifications for static placement when cpu-manager-policy=static should not have cfs quota enforcement #127525)

Added the feature gate InheritReusableCPUsFirst.

@ffromani, I have modified the PR description and code based on your suggestions; please take a look. Thank you very much.

@SergeyKanzhelev SergeyKanzhelev moved this from Triage to Archive-it in SIG Node CI/Test Board Jun 4, 2025
@Chunxia202410 Chunxia202410 force-pushed the cpu_manager_reusableCPUs branch 2 times, most recently from 361a188 to 926bf3c Compare June 6, 2025 03:02
@Chunxia202410
Author

/test pull-kubernetes-unit-windows-master

@Chunxia202410 Chunxia202410 force-pushed the cpu_manager_reusableCPUs branch 2 times, most recently from b8f1961 to 157e38a Compare June 10, 2025 06:00
@bart0sh bart0sh moved this from Triage to Needs Reviewer in SIG Node: code and documentation PRs Jun 10, 2025
@Chunxia202410 Chunxia202410 force-pushed the cpu_manager_reusableCPUs branch from 157e38a to e238ae6 Compare June 11, 2025 01:54
@Chunxia202410
Author

/test pull-kubernetes-e2e-gce

@@ -976,6 +976,18 @@ const (
// operation when scheduling a Pod by setting the `metadata.labels` field on the submitted Binding,
// similar to how `metadata.annotations` behaves.
PodTopologyLabelsAdmission featuregate.Feature = "PodTopologyLabelsAdmission"

Contributor

I think with the introduction of a new feature gate, versioned_kube_features.go and versioned_feature_list.yaml need to be updated as well.

Author

Thank you for your comments; I have already updated the versioned_feature_list.yaml file. Could you clarify whether I missed any required modifications in this file?
Also, the code in versioned_kube_features.go has been moved into kube_features.go in this commit, and I have updated the feature version mappings in kube_features.go accordingly.

Contributor

Thanks, I missed that.

@Chunxia202410 Chunxia202410 force-pushed the cpu_manager_reusableCPUs branch 3 times, most recently from 85b45af to 6603c25 Compare June 16, 2025 02:43
@esotsal
Contributor

esotsal commented Jun 16, 2025

Is the introduction of a new feature gate, as proposed in this commit, mandatory? If yes, can you please explain why?

Update: I see the addition of a feature gate was proposed by @ffromani here, but I don't understand why it is needed.

it is not excluded we will need a flag or a feature gate to change the behavior we had for years and ppl (implicitly?) depend on, much like we had to do for the quota issue (#127525)

Adding new feature gates for the static policy means that future commit(s) for the forthcoming KEP handling Guaranteed pod resizes will need to take them into account, resulting in more "if" statements in the code.

@Chunxia202410
Author

Chunxia202410 commented Jun 16, 2025

Is the introduction of a new feature gate, as proposed in this commit, mandatory? If yes, can you please explain why?

@esotsal, thank you for raising the question. I'm uncertain whether adding a feature gate is necessary, since this modification doesn't introduce significant additional logic. However, with the current fix in this PR, the CPU allocation behavior for containers changes in certain cases, which might not align with user expectations.

For example:

Scenario:
The init container requires 1 CPU; the app container requires 2 CPUs.

Previous behavior:

Init container allocated CPU 10
App container allocated CPUs 1 and 11

→ The app container acquires sibling CPUs (two hyperthreads of the same physical core).

This PR's behavior:

Init container allocated CPU 10
App container allocated CPUs 1 and 10

→ The app container gets CPUs from two different cores.

Sibling cores may be the result the user expects.
Of course, this can be addressed by enabling FullPhysicalCPUsOnly.

Additionally, there may be unforeseen impacts requiring further testing. Following the approach in PR #127525 mentioned by @ffromani, I am adding a feature gate to this PR. This would:

Provide a transition period for the fix.
Allow time to collect user feedback.

@ffromani, what are your thoughts on this feature gate?

@Chunxia202410 Chunxia202410 force-pushed the cpu_manager_reusableCPUs branch from 6603c25 to 5665c58 Compare June 23, 2025 03:03
@k8s-ci-robot
Contributor

@Chunxia202410: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-kubernetes-unit-windows-master 5665c58 link false /test pull-kubernetes-unit-windows-master

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.


@ffromani
Contributor

Is the introduction of a new featureGate, as proposed in this commit, mandatory ? If yes can you please explain why ?

Update: I see it was proposed the addition of featureGate from @ffromani , here but i don't understand why it is needed.

it is not excluded we will need a flag or a feature gate to change the behavior we had for years and ppl (implicitly?) depend on, much like we had to do for the quota issue (github.com/#127525)

I apologize for causing confusion. I mentioned the feature gate as an option to consider, with the goal of preserving compatibility and containing breakages. It is certainly not a hard requirement for generic changes. We can very much gate this code with existing feature gates, which we need anyway because of KEP work.

I will read the thread and comment in detail ASAP.

@Chunxia202410
Author

I apologize for causing confusion. I mentioned the feature gate as option to consider with the goal of preserving compatibility and contain breakages. It is certainly not a hard requirement for generic changes. We can very much gate this code with existing feature gates which we need anyway because KEP work.

I will read the thread and comment in detail ASAP.

Thank you, @ffromani. I will also further check whether the feature gate is necessary.

@Chunxia202410
Author

Chunxia202410 commented Jun 26, 2025

I checked the interaction between this PR's logic and all of the CPU manager options.
There is no impact on these options: FullPCPUsOnlyOption, AlignBySocketOption, DistributeCPUsAcrossCoresOption, StrictCPUReservationOption.

However, there may be some impact on the two options below in certain cases:
DistributeCPUsAcrossNUMAOption: when this option is enabled, different numbers of requested CPUs lead to different results.

For example:
With the CPU architecture below, the available CPUs are {2-15}:
[image: CPU topology diagram]

Case 1: For a pod, the init container requests 6 CPUs and the app container requests 8 CPUs.

Previous behavior:
Init container allocated CPUs {2-7} ==> CPUs 2-7 leak
App container allocated CPUs {8-15} ==> 8 CPUs on 1 NUMA node

PR behavior:
Init container allocated CPUs {2-7}
App container allocated CPUs {2-9} ==> 8 CPUs across 2 NUMA nodes

Case 2: For a pod, the init container requests 9 CPUs and the app container requests 6 CPUs.

Previous behavior:
Init container allocated CPUs {2-5, 8-12} ==> CPUs 8-12 leak
App container allocated CPUs {2-7} ==> 6 CPUs on 1 NUMA node

PR behavior:
Init container allocated CPUs {2-5, 8-12}
App container allocated CPUs {2-4, 8-10} ==> 6 CPUs across 2 NUMA nodes

PreferAlignByUnCoreCacheOption: when this option is enabled, the app container may not be able to get a whole uncore cache.

For example:
With the CPU architecture below, the available CPUs are {4-15}:
[image: CPU topology diagram]

Case 3: For a pod, the init container requests 4 CPUs and the app container requests 8 CPUs.

Previous behavior:
Init container allocated CPUs {4-7} ==> CPUs 4-7 leak
App container allocated CPUs {8-15} ==> a whole uncore cache

PR behavior:
Init container allocated CPUs {4-7}
App container allocated CPUs {4-11} ==> not a whole uncore cache

Apart from these two options, even with all options disabled, the default CPU allocation logic may not yield optimal results in some cases, like the example in this comment.
In my view, we cannot guarantee the expected CPU allocation result when creating a pod; we can only try our best to allocate the expected CPUs.
However, CPU leaking is a critical bug that must be resolved. Therefore, I think the impact on the specific cases mentioned above is acceptable, and a feature gate may not be necessary.

@ffromani and @esotsal, what are your opinions on this?

[update the cases]

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jun 28, 2025
@k8s-ci-robot
Contributor

PR needs rebase.


Labels
area/kubelet area/test cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. kind/bug Categorizes issue or PR as related to a bug. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. sig/node Categorizes an issue or PR as relevant to SIG Node. sig/testing Categorizes an issue or PR as relevant to SIG Testing. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Projects
Status: Archive-it
Development

Successfully merging this pull request may close these issues.

Static CPU Manager can fail with UnexpectedAdmissionError with init-containers requesting integer CPUs
5 participants