[KEP-2400]: Swap-aware memory eviction (with no additional APIs) #129578


Open

iholder101 wants to merge 8 commits into master from feature/swap-evictions-no-api

Conversation

iholder101
Contributor

@iholder101 iholder101 commented Jan 12, 2025

What type of PR is this?

/kind feature
/sig node

Problem and background

Prior to this PR, the kubelet's eviction manager completely overlooked swap memory, leading to several issues:

  • Inaccessible swap: Because the memory eviction threshold ignores swap, eviction occurs before the node ever starts swapping, so swap is effectively never used during node-level pressure.
  • Unfairness and instability: The eviction manager may evict the "wrong", innocent pods, failing to address the actual memory pressure.
  • Unexpected behavior: Pods that exceed their memory limits (counting both regular and swap memory) are not evicted first, even though they would be killed immediately if swap were not used.

In addition, this PR serves as an alternative to #128137. In that PR, swap is presented as a standalone signal alongside memory. However, I believe this approach is problematic, as it attempts to completely separate memory and swap. In reality, the two are inherently connected and should be addressed as a single concern. As a small example of how inherently connected they are: swap is not used at all until memory is full.

What this PR does / why we need it:

This PR addresses the above issues by making the eviction manager swap-aware. The proposed logic is fully backward compatible and requires no additional configuration, making it completely transparent to the user.

The main idea

Let accessible swap be the amount of swap that is accessible to pods according to the LimitedSwap swap behavior [1]. Note that the amount of accessible swap changes over time according to the pods running on the node.
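To make "accessible swap" concrete, here is a minimal, self-contained sketch of the LimitedSwap proportional rule as I read it from KEP-2400: a container's swap limit is its share of node memory applied to the node's total swap. The function and variable names are illustrative only, not the kubelet's actual code.

```go
package main

import "fmt"

// containerSwapLimit sketches the LimitedSwap apportioning rule:
//   swapLimit = (containerMemoryRequest / nodeMemoryCapacity) * nodeSwapCapacity
// Summing this over all running containers yields the node's "accessible
// swap", which this PR treats as extra memory capacity.
func containerSwapLimit(memoryRequest, nodeMemory, nodeSwap int64) int64 {
	return int64(float64(memoryRequest) / float64(nodeMemory) * float64(nodeSwap))
}

func main() {
	const (
		nodeMemory = int64(16 << 30) // 16 GiB of RAM
		nodeSwap   = int64(8 << 30)  // 8 GiB of swap
	)
	// A container requesting 4 GiB (a quarter of node memory) is granted
	// a quarter of the node's swap, i.e. 2 GiB.
	fmt.Println(containerSwapLimit(4<<30, nodeMemory, nodeSwap))
}
```

Since, per the KEP, only Burstable QoS pods are granted swap under LimitedSwap, the node's accessible-swap total shifts as pods come and go, which is why the definition above is time-dependent.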

Triggering evictions: The eviction manager considers accessible swap as additional memory capacity.
Eviction ordering: The eviction order is defined as follows (for more info, see discussion here):

The kubelet uses the following parameters to determine the pod eviction order (a sketch of this ordering follows the list):
1. Whether the pod's resource usage (memory usage + swap usage) exceeds requests (memory requests + swap requests)
2. [Pod Priority](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/)
3. The pod's resource usage (memory usage + swap usage) relative to requests (memory requests + swap requests)
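As referenced above, here is a minimal runnable sketch of this three-step ordering. The types and helpers (podStats, rankForEviction) are hypothetical stand-ins, not the eviction manager's real code.

```go
package main

import (
	"fmt"
	"sort"
)

// podStats is a hypothetical stand-in for the eviction manager's per-pod view.
type podStats struct {
	name           string
	memUsage       int64 // bytes of memory in use
	swapUsage      int64 // bytes of swap in use
	memRequest     int64 // declared memory request
	accessibleSwap int64 // swap granted under LimitedSwap, acting as the swap "request"
	priority       int32
}

// overRequests is combined usage minus combined requests; a positive value
// means the pod exceeds its (memory + swap) requests.
func overRequests(p podStats) int64 {
	return (p.memUsage + p.swapUsage) - (p.memRequest + p.accessibleSwap)
}

// rankForEviction orders pods most-evictable first, mirroring the list above.
func rankForEviction(pods []podStats) {
	sort.SliceStable(pods, func(i, j int) bool {
		a, b := pods[i], pods[j]
		// 1. Pods whose combined usage exceeds combined requests go first.
		if ea, eb := overRequests(a) > 0, overRequests(b) > 0; ea != eb {
			return ea
		}
		// 2. Then lower-priority pods.
		if a.priority != b.priority {
			return a.priority < b.priority
		}
		// 3. Then by how far combined usage sits above combined requests.
		return overRequests(a) > overRequests(b)
	})
}

func main() {
	pods := []podStats{
		{name: "within-requests", memUsage: 1 << 30, memRequest: 2 << 30},
		{name: "over-requests", memUsage: 3 << 30, swapUsage: 1 << 30, memRequest: 2 << 30, accessibleSwap: 1 << 30},
	}
	rankForEviction(pods)
	fmt.Println(pods[0].name) // "over-requests" is ranked first for eviction
}
```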

Which issue(s) this PR fixes:

Fixes #120800

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Make eviction mechanism swap-aware

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

- [KEP]: https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2400-node-swap/README.md

@k8s-ci-robot
Contributor

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/feature Categorizes issue or PR as related to a new feature. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Jan 12, 2025
@k8s-ci-robot k8s-ci-robot added area/kubelet area/test sig/node Categorizes an issue or PR as relevant to SIG Node. sig/testing Categorizes an issue or PR as relevant to SIG Testing. release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Jan 12, 2025
@iholder101 iholder101 force-pushed the feature/swap-evictions-no-api branch 2 times, most recently from 6f9cd4d to 12438f5 on January 12, 2025 at 16:37
@iholder101 iholder101 marked this pull request as ready for review January 12, 2025 22:17
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jan 12, 2025
@iholder101 iholder101 force-pushed the feature/swap-evictions-no-api branch 4 times, most recently from b07084f to de3ab3b on January 13, 2025 at 12:08
@iholder101
Contributor Author

/test pull-kubernetes-node-swap-conformance-fedora-serial
/test pull-kubernetes-node-swap-conformance-ubuntu-serial

@pacoxu
Member

pacoxu commented Jan 14, 2025

/cc @harche @kannon92

Turn swap on and off and sleep for a few seconds.
This helps stabilize the tests, which will then
start with fresh, unused swap space.

Signed-off-by: Itamar Holder <[email protected]>
Accessible swap is the amount of swap that is
accessible to Kubernetes pods, according to the
LimitedSwap swap behavior.

This PR treats accessible swap as additional node
memory capacity in the context of the eviction manager
making memory signal observations.

Signed-off-by: Itamar Holder <[email protected]>
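In signal terms, the commit above can be read roughly as follows. This is a hedged, self-contained sketch with hypothetical function and parameter names, not the kubelet's actual computation:

```go
package main

import "fmt"

// memoryAvailable illustrates the adjusted observation: the memory.available
// signal is measured against node memory capacity plus accessible swap, so
// on swap-enabled nodes the eviction threshold is crossed later.
func memoryAvailable(nodeMemoryCapacity, accessibleSwap, usage int64) int64 {
	return (nodeMemoryCapacity + accessibleSwap) - usage
}

func main() {
	// 16 GiB of RAM, 4 GiB of accessible swap, 15 GiB in use:
	// 5 GiB is still considered available.
	fmt.Println(memoryAvailable(16<<30, 4<<30, 15<<30) >> 30) // prints 5
}
```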
This PR treats used swap memory as additional
memory usage in the context of the eviction manager
ranking pods for eviction.

In addition, a pod's accessible swap is counted
as an additional memory request.

Signed-off-by: Itamar Holder <[email protected]>
@iholder101 iholder101 force-pushed the feature/swap-evictions-no-api branch from a55c7d3 to eedc64c on March 16, 2025 at 14:48
@iholder101
Contributor Author

/test pull-kubernetes-node-swap-conformance-fedora-serial
/test pull-kubernetes-node-swap-conformance-ubuntu-serial

@iholder101
Contributor Author

> If you bring in the memory evictions only, it should be fine. Adding them to the serial jobs should be fine.

> PR is ready to review: kubernetes/test-infra#34507. Let me know what you think.

@kannon92 the PR is now merged, this PR is rebased on top of it, and it seems the swap-conformance lanes are both affected and passing.

Is anything missing from your side for an LGTM?

@kannon92
Contributor

I think the code and concept look good to me.

But I will leave lgtm/approvals to the people who were concerned about this feature, i.e., the Google GKE folks.

@iholder101
Contributor Author

> I think the code and concept look good to me.
>
> But I will leave lgtm/approvals to the people who were concerned about this feature, i.e., the Google GKE folks.

Thanks

@dchen1107 @yujuhong PTAL

@ffromani
Contributor

/test pull-kubernetes-unit

@kannon92
Contributor

/retest

@kannon92
Contributor

btw, the https://testgrid.k8s.io/sig-node-kubelet#kubelet-swap-conformance-fedora-serial jobs are flaking.

These jobs do not contain this code, but it is still worth investigating why evictions on a swap-enabled node behave differently.

These jobs are not flaking at all in the main lane (https://testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv2-node-e2e-eviction), so swap seems to impact this.

@k8s-ci-robot
Contributor

@iholder101: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-kubernetes-e2e-capz-windows-master | eedc64c | link | false | /test pull-kubernetes-e2e-capz-windows-master |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

// Compute each pod's accessible swap under the LimitedSwap behavior.
p1AccessibleSwap, p1Err := swapLimitCalculator.CalcPodSwapLimit(*p1)
p2AccessibleSwap, p2Err := swapLimitCalculator.CalcPodSwapLimit(*p2)

// Fold p1's accessible swap into its effective memory request.
p1MemoryRequest.Add(*resource.NewQuantity(p1AccessibleSwap, resource.BinarySI))
Contributor


This approach of treating two unequal resources as equivalent seems wrong to me for addressing memory pressure.

  1. Swap is a much cheaper resource, so it is easy to imagine swap spaces being provisioned far larger than memory; disk-backed swap availability cannot be compared with memory usage. For example, a user could set up 40GB of swap space on a node with 4GB of physical memory. Even when 50% of the swap is implicitly allocated to a workload, the eviction condition will likely never be satisfied, because plenty of swap capacity will still be available (a worked example appears after this comment).
  2. Swap is intended as an overflow mechanism, not primary memory; eviction could trigger before swap utilization even begins, especially in large swap configurations.
  3. Swap is significantly slower than memory, and treating both equally for handling memory pressure doesn't sound right. The kubelet's intent in evicting is to relieve memory pressure, and if it evicted a pod that freed up more swap space than memory, that would not help reduce memory pressure.

The other two criteria (pod priority and relative usage) are reasonable, but they are still affected by this fundamental equivalence problem.
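To put rough numbers on point 1 above (an editorial illustration using the reviewer's hypothetical node, not figures from the PR): with 4GB of physical memory and 40GB of swap counted byte-for-byte as extra capacity, the node's effective memory capacity becomes 44GB. A workload could exhaust nearly all 4GB of physical memory while using only, say, 6GB of swap; the combined usage of 10GB is still far below 44GB, so a threshold expressed against the combined capacity would not fire even though the node is under severe physical-memory pressure.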

@ajaysundark
Contributor

I think the existing eviction behavior, OrderedBy(exceededMemoryRequests, priority, memoryUsage), is actually more sensible than the proposed approach.

The formula "memory usage + swap usage > memory requests + swap requests" creates a false equivalence. If swap is to be considered for eviction, a better approach would be some form of weighted formula (as suggested by @pacoxu).

Or, if we want to simplify without the weights, consider OrderedBy(exceededMemoryRequests, priority, memoryUsage + swapUsage), keeping exceededMemoryRequests purely memory-based, so that actual memory pressure is addressed first (see the sketch below).
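Here is a short sketch of the two alternatives above. The helper names are hypothetical, and the 0.5 swap weight is an arbitrary illustration, not a value anyone proposed:

```go
package main

// exceededMemoryRequests stays purely memory-based, so the first ordering
// criterion keeps tracking actual memory pressure.
func exceededMemoryRequests(memUsage, memRequest int64) bool {
	return memUsage > memRequest
}

// combinedUsage is the simplified third criterion: memoryUsage + swapUsage.
func combinedUsage(memUsage, swapUsage int64) int64 {
	return memUsage + swapUsage
}

// weightedUsage discounts swap relative to memory, in the spirit of the
// weighted formula floated by @pacoxu.
func weightedUsage(memUsage, swapUsage int64) int64 {
	const swapWeight = 0.5 // illustrative only
	return memUsage + int64(float64(swapUsage)*swapWeight)
}

func main() {
	_ = exceededMemoryRequests(3<<30, 2<<30) // true: 3 GiB used > 2 GiB requested
	_ = combinedUsage(3<<30, 1<<30)          // 4 GiB
	_ = weightedUsage(3<<30, 1<<30)          // 3.5 GiB equivalent
}
```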

@ajaysundark
Contributor

ajaysundark commented Mar 20, 2025

Thinking about how this eviction strategy aligns with a future API: will the API be at the pod or container level? We ration swap today at the container level; if the API is at the pod level, the meaning of "swap accessibility" for the resources would also change. Also, the KEP suggests considering swap as a separate resource constraint in the future. If swap becomes a first-class resource in Kubernetes with its own requests and limits, it would make sense to consider swap pressure independently from memory pressure.

@ajaysundark
Contributor

As called out in earlier discussions, Kubernetes should consider adding I/O pressure as a factor in eviction decisions; this is more relevant when considering swap usage's impact on node stability.

@iholder101
Contributor Author

> btw, the testgrid.k8s.io/sig-node-kubelet#kubelet-swap-conformance-fedora-serial jobs are flaking.
>
> These jobs do not contain this code, but it is still worth investigating why evictions on a swap-enabled node behave differently.
>
> These jobs are not flaking at all in the main lane (testgrid.k8s.io/sig-node-cri-o#ci-crio-cgroupv2-node-e2e-eviction), so swap seems to impact this.

This actually makes total sense to me, and it was one of the reasons I was concerned about running eviction tests on the swap-conformance lane. The failures are due to the node not getting into memory pressure, which is expected.

I think we need to revert these changes. I've opened a PR here: kubernetes/test-infra#34558.

As explained in the PR:

> The reason is that it causes the lane to become flaky.
> Eviction tests are very delicate and fragile. They stress the node heavily and depend on many sensitive factors, like the amount of total/free memory on the node. Even if swap is disabled for Kubernetes workloads, the fact that it's enabled on the node can easily make these tests flaky. The swap-conformance stress tests are designed to run on this lane, and they are very stable.

@iholder101
Contributor Author

> Thinking about how this eviction strategy aligns with a future API: will the API be at the pod or container level? We ration swap today at the container level; if the API is at the pod level, the meaning of "swap accessibility" for the resources would also change. Also, the KEP suggests considering swap as a separate resource constraint in the future.

An API for swap is a complicated discussion for many reasons: who controls the swap limits (it would be dangerous to give the pod owner direct control), how to set concrete swap limits, and so on. In fact, the KEP explicitly says:

> This KEP aims to introduce basic swap enablement and leave further extensions to follow-up KEPs.
> ...
> For example, to achieve this goal, this KEP does not introduce any APIs that allow customizing how the feature behaves, but instead only determines whether the feature is enabled or disabled.

I find it very problematic that, after SIG Node agreed on this with a large consensus, the question is being reopened as if nothing was agreed. It makes sense to me that the bar for swap is very high and that the burden of proof is on me. However, I think it's essential that once we decide on a feature's scope, we respect those agreements.

In any case, I'm okay with re-discussing the API, as long as we look at it from the perspective of the most minimal API needed as a first step for the current KEP to GA, while deferring everything that's not strictly necessary to follow-up KEPs, which I'll gladly work on.

> If swap becomes a first-class resource in Kubernetes with its own requests and limits, it would make sense to consider swap pressure independently from memory pressure.

I don't see swap ever becoming a first-class resource, for many reasons. This was already discussed and ruled out in the past.

@iholder101
Contributor Author

iholder101 commented Mar 20, 2025

> As called out in earlier discussions, Kubernetes should consider adding I/O pressure as a factor in eviction decisions; this is more relevant when considering swap usage's impact on node stability.

I agree. This sounds like a great topic for a dedicated KEP, as the kubelet currently doesn't have any protection mechanism against I/O pressure. In addition, we can partially address this concern with documentation, since provisioning swap on a dedicated disk would greatly reduce I/O pressure on the system.

I'd go a step further and say that the eviction manager's code is very old, and we may want to frame this follow-up KEP around more general eviction improvements.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 22, 2025
@k8s-ci-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 24, 2025