
HPA support for pod-level resource specifications #132430


Open

laoj2 wants to merge 1 commit into master from fix-hpa-pod-request

Conversation

@laoj2 commented Jun 20, 2025

What type of PR is this?

/kind feature

What this PR does / why we need it:

Adds HPA support for pod-level resource specifications. See #132237 (comment) and KEP-2837 for motivation, background, and the proposal.

Which issue(s) this PR is related to:

Fixes #132237

KEP: kubernetes/enhancements#2837

Does this PR introduce a user-facing change?

Yes.

Adds HPA support for pod-level resource specifications. When the pod-level resources feature is enabled, HPAs configured with `Resource`-type metrics will calculate pod resources from the `pod.Spec.Resources` field, if it is specified.
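
For illustration, here is a minimal Go sketch of the intended semantics, not the PR's actual diff: when pod-level resources are enabled and `pod.Spec.Resources` specifies a request for the metric's resource, that value is used; otherwise the HPA falls back to summing container requests as before. The helper `podRequestMilli` and the `main` wiring are hypothetical, and the sketch assumes an API version that includes the alpha `PodSpec.Resources` field (Kubernetes 1.32+).

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// podRequestMilli is a hypothetical helper illustrating the release note:
// prefer the pod-level request when the feature is enabled and the field is
// set; otherwise fall back to summing per-container requests.
func podRequestMilli(pod *v1.Pod, res v1.ResourceName, podLevelEnabled bool) (int64, bool) {
	if podLevelEnabled && pod.Spec.Resources != nil {
		if req, ok := pod.Spec.Resources.Requests[res]; ok {
			return req.MilliValue(), true
		}
	}
	sum := int64(0)
	for _, c := range pod.Spec.Containers {
		req, ok := c.Resources.Requests[res]
		if !ok {
			// Pre-existing HPA behavior: a missing container request makes
			// the pod's request unusable for utilization math.
			return 0, false
		}
		sum += req.MilliValue()
	}
	return sum, true
}

func main() {
	pod := &v1.Pod{Spec: v1.PodSpec{
		Resources: &v1.ResourceRequirements{
			Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("500m")},
		},
		Containers: []v1.Container{{
			Name: "app",
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("200m")},
			},
		}},
	}}
	fmt.Println(podRequestMilli(pod, v1.ResourceCPU, true))  // 500 true: pod-level request wins
	fmt.Println(podRequestMilli(pod, v1.ResourceCPU, false)) // 200 true: container sum, as before
}
```

The sketch only covers `Resource`-type metrics; it does not imply anything about how `ContainerResource` metrics interact with pod-level requests.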

@k8s-ci-robot added the release-note, kind/feature, size/L, cncf-cla: yes, do-not-merge/needs-sig, and needs-triage labels on Jun 20, 2025.
@k8s-ci-robot (Contributor)

Welcome @laoj2!

It looks like this is your first PR to kubernetes/kubernetes 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kubernetes has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot (Contributor)

Hi @laoj2. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-ok-to-test, needs-priority, sig/apps, and sig/autoscaling labels and removed the do-not-merge/needs-sig label on Jun 20, 2025.
@github-project-automation bot moved this to Needs Triage in SIG Apps on Jun 20, 2025.
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: laoj2
Once this PR has been reviewed and has the lgtm label, please assign mwielgus for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@laoj2 (Author) commented Jun 20, 2025

/assign @adrianmoisey @raywainman @omerap12

/cc @ndixita

@omerap12 (Member)

/triage accepted

@k8s-ci-robot added the triage/accepted label and removed the needs-triage label on Jun 20, 2025.
@omerap12 (Member)

/ok-to-test

@k8s-ci-robot added the ok-to-test label and removed the needs-ok-to-test label on Jun 20, 2025.
@laoj2 force-pushed the fix-hpa-pod-request branch from 5449212 to 22eeb34 on June 20, 2025 at 16:08.
@laoj2 (Author) commented Jun 20, 2025

/retest

@laoj2 force-pushed the fix-hpa-pod-request branch from 22eeb34 to 0ae73dd on June 20, 2025 at 16:32.
@laoj2 (Author) commented Jun 20, 2025

/retest

@adrianmoisey (Member)

If there's a failed test, the tests should re-run after pushing new code

@laoj2 (Author) commented Jun 24, 2025

/retest

A test case failed with:

I0620 18:52:47.822400 68527 autoscaling_utils.go:359] ConsumeCPU URL: {https   34.133.64.48 /api/v1/namespaces/horizontal-pod-autoscaling-1519/services/rc-ctrl/proxy/ConsumeCPU  false false durationSec=30&millicores=325&requestSizeMillicores=100  }
I0620 18:53:17.050520 68527 horizontal_pod_autoscaling.go:210] Failed inside E2E framework:
    k8s.io/kubernetes/test/e2e/framework/autoscaling.(*ResourceConsumer).WaitForReplicas(0xc000ac3900, {0x7945f4599328, 0xc004a5c990}, 0x3, 0xd18c2e2800)
    	k8s.io/kubernetes/test/e2e/framework/autoscaling/autoscaling_utils.go:502 +0x4bf
    k8s.io/kubernetes/test/e2e/autoscaling.(*HPAScaleTest).run(0xc000ca9e38, {0x7945f4599328, 0xc004a5c990}, {0x5053314?, 0x0?}, {{0x0, 0x0}, {0x50532b0, 0x2}, {0x508db3e, ...}}, ...)
    	k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:210 +0x31e
    k8s.io/kubernetes/test/e2e/autoscaling.scaleDown({0x7945f4599328?, 0xc004a5c990?}, {0x5053314?, 0x4b8dda0?}, {{0x0, 0x0}, {0x50532b0, 0x2}, {0x508db3e, 0x15}}, ...)
    	k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:280 +0x1cf
    k8s.io/kubernetes/test/e2e/autoscaling.init.func4.4.2({0x7945f4599328?, 0xc004a5c990?})
    	k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:85 +0x85
[FAILED] timeout waiting 15m0s for 3 replicas: Timed out after 900.002s.
Expected
    <int>: 4
to equal
    <int>: 3
In [It] at: k8s.io/kubernetes/test/e2e/autoscaling/horizontal_pod_autoscaling.go:210 @ 06/20/25 18:53:17.05

But it looks like HPA had just downscaled this workload:

I0620 18:53:03.604257      10 horizontal.go:844] "Proposing desired replicas" logger="horizontal-pod-autoscaler-controller" desiredReplicas=2 metric="cpu resource utilization (percentage of request)" tolerances="[down:10.0%, up:10.0%]" timestamp="2025-06-20 18:52:56 +0000 UTC" scaleTarget="ReplicationController/horizontal-pod-autoscaling-1519/rc"
I0620 18:53:03.612039      10 horizontal.go:911] "Successfully rescaled" logger="horizontal-pod-autoscaler-controller" HPA="horizontal-pod-autoscaling-1519/rc" currentReplicas=4 desiredReplicas=3 reason="All metrics below target"
I0620 18:53:03.612457      10 replica_set.go:375] "replicaSet updated. Desired pod count change." logger="replicationcontroller-controller" replicaSet="horizontal-pod-autoscaling-1519/rc" oldReplicas=4 newReplicas=3
I0620 18:53:03.612583      10 controller_utils.go:199] "Controller expectations fulfilled" logger="replicationcontroller-controller" expectations={}
I0620 18:53:03.612661      10 replica_set.go:650] "Too many replicas" logger="replicationcontroller-controller" replicaSet="horizontal-pod-autoscaling-1519/rc" need=3 deleting=1
I0620 18:53:03.612753      10 event.go:389] "Event occurred" logger="horizontal-pod-autoscaler-controller" object="horizontal-pod-autoscaling-1519/rc" fieldPath="" kind="HorizontalPodAutoscaler" apiVersion="autoscaling/v2" type="Normal" reason="SuccessfulRescale" message="New size: 3; reason: All metrics below target"

@omerap12 (Member) left a comment

Thanks for this!
I think we should add e2e tests to cover this change.


// Determine if we should use pod-level requests: see KEP-2837
// https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/2837-pod-level-resource-spec/README.md
usePodLevelRequests := feature.DefaultFeatureGate.Enabled(features.PodLevelResources) &&

@omerap12 (Member) commented on this diff:

The feature gate check happens inside the pod loop. We should check it once outside the loop, since it won't change between pods.
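
As a sketch of that suggestion (a hypothetical shape loosely modeled on the HPA's request-calculation loop, not copied from this PR), the gate can be read once before iterating over pods. Only the `feature.DefaultFeatureGate.Enabled(features.PodLevelResources)` call is taken from the diff above; `calculatePodRequests`, the targeted-container condition, and everything else here are illustrative.

```go
// Illustrative only: hoist the feature-gate check out of the per-pod loop.
// Assumes the packages referenced by the diff above plus "fmt":
// v1 "k8s.io/api/core/v1", "k8s.io/apiserver/pkg/util/feature",
// "k8s.io/kubernetes/pkg/features".
func calculatePodRequests(pods []*v1.Pod, container string, res v1.ResourceName) (map[string]int64, error) {
	// Evaluated once: the gate cannot change between pods within a single reconcile.
	usePodLevelRequests := feature.DefaultFeatureGate.Enabled(features.PodLevelResources)

	requests := make(map[string]int64, len(pods))
	for _, pod := range pods {
		// Assumption: pod-level requests apply only when no specific
		// container is targeted (i.e. plain Resource metrics).
		if usePodLevelRequests && container == "" && pod.Spec.Resources != nil {
			if req, ok := pod.Spec.Resources.Requests[res]; ok {
				requests[pod.Name] = req.MilliValue()
				continue
			}
		}
		sum := int64(0)
		for _, c := range pod.Spec.Containers {
			if container == "" || container == c.Name {
				req, ok := c.Resources.Requests[res]
				if !ok {
					return nil, fmt.Errorf("missing request for %s in container %s of Pod %s", res, c.Name, pod.Name)
				}
				sum += req.MilliValue()
			}
		}
		requests[pod.Name] = sum
	}
	return requests, nil
}
```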

Labels

- `cncf-cla: yes`: Indicates the PR's author has signed the CNCF CLA.
- `kind/feature`: Categorizes issue or PR as related to a new feature.
- `needs-priority`: Indicates a PR lacks a `priority/foo` label and requires one.
- `ok-to-test`: Indicates a non-member PR verified by an org member that is safe to test.
- `release-note`: Denotes a PR that will be considered when it comes time to generate release notes.
- `sig/apps`: Categorizes an issue or PR as relevant to SIG Apps.
- `sig/autoscaling`: Categorizes an issue or PR as relevant to SIG Autoscaling.
- `size/L`: Denotes a PR that changes 100-499 lines, ignoring generated files.
- `triage/accepted`: Indicates an issue or PR is ready to be actively worked on.
Projects

Status: Needs Triage
Development

Successfully merging this pull request may close these issues:

- HPA support for pod-level resource specifications (#132237)
5 participants