Default to closing watch requests during graceful shutdown #130991


Open — wants to merge 1 commit into master

Conversation


@enj enj commented Mar 21, 2025

The change gives --shutdown-watch-termination-grace-period a default value of 67 seconds so that the API server will wait up to that duration for watches to drain before signaling that requests have been drained.

Among other things, this prevents the API server from terminating gRPC connections to KMS plugins until after draining has been attempted.

xref #130898

/kind bug
/kind api-change

The `--shutdown-watch-termination-grace-period` flag now defaults to 67 seconds.

Signed-off-by: Monis Khan <[email protected]>
@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Mar 21, 2025
@k8s-ci-robot k8s-ci-robot added area/apiserver sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Mar 21, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: enj
Once this PR has been reviewed and has the lgtm label, please assign liggitt for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@enj
Member Author

enj commented Mar 21, 2025

@liggitt @deads2k is putting this behind a feature gate the correct way for this to support version emulation?

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Mar 21, 2025
@k8s-triage-robot

This PR may require API review.

If so, when the changes are ready, complete the pre-review checklist and request an API review.

Status of requested reviews is tracked in the API Review project.

@k8s-ci-robot
Contributor

@enj: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-kubernetes-unit | caa16e6 | link | true | `/test pull-kubernetes-unit` |
| pull-kubernetes-e2e-gce | caa16e6 | link | true | `/test pull-kubernetes-e2e-gce` |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@sftim
Contributor

sftim commented Mar 25, 2025

Changelog suggestion

-The `--shutdown-watch-termination-grace-period` flag now defaults to 67 seconds.
+Added a grace period for watch closure during API server graceful shutdown. The `--shutdown-watch-termination-grace-period` option now defaults to 67 seconds.

@@ -319,9 +319,6 @@ type Config struct {
// number of active watch request(s) in flight and during shutdown
// it will wait, at most, for the specified duration and allow these
// active watch requests to drain with some rate limiting in effect.
// The default is zero, which implies the apiserver will not keep
Member

is the cost of keeping track of pending watch requests meaningful? I want to make sure we're not defaulting configs into something noticeably more expensive for kube-apiserver to run

Member

maybe @wojtek-t knows

Member

If you ask about resource consumption (cpu/mem), it's fairly cheap. We basically only have a rate-limited waiting group:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/util/waitgroup/ratelimited_waitgroup.go

and closing watches is rate-limited while the apiserver is shutting down:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/util/waitgroup/ratelimited_waitgroup.go#L70

Corresponding PR adding this logic: https://github.com/kubernetes/kubernetes/pull/114925/files

So I wouldn't worry about the resource overhead.

What changes, though, is that apiserver graceful shutdown can now be visibly longer by default (by up to 67s) with this change. I think this is fine, but we need to weigh that in.
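The bookkeeping idea behind the rate-limited wait group linked above can be sketched as follows (an illustrative simplification; `countingWaitGroup` is hypothetical and omits the rate limiting of the real implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// countingWaitGroup is a hypothetical simplification: it shows why tracking
// in-flight watch requests is cheap (a counter behind a mutex). The real
// implementation additionally rate-limits how quickly watches are closed.
type countingWaitGroup struct {
	mu    sync.Mutex
	count int
	done  chan struct{}
}

// Add records n new in-flight requests.
func (wg *countingWaitGroup) Add(n int) {
	wg.mu.Lock()
	defer wg.mu.Unlock()
	wg.count += n
}

// Done records one finished request and signals once everything drained.
func (wg *countingWaitGroup) Done() {
	wg.mu.Lock()
	defer wg.mu.Unlock()
	wg.count--
	if wg.count == 0 && wg.done != nil {
		close(wg.done)
		wg.done = nil // guard against double close
	}
}

func main() {
	wg := &countingWaitGroup{done: make(chan struct{})}
	done := wg.done // keep a reference; Done() nils the field after closing
	wg.Add(2)
	wg.Done()
	wg.Done()
	<-done
	fmt.Println("drained") // prints "drained"
}
```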

Member

This comment also needs to be updated:

// The default is zero, which implies the apiserver will not keep
// track of active watch request(s) in flight and will not wait
// for them to drain, this maintains backward compatibility.
// This grace period is orthogonal to other grace periods, and
// it is not overridden by any other grace period.
ShutdownWatchTerminationGracePeriod time.Duration

StorageObjectCountTracker: flowcontrolrequest.NewStorageObjectCountTracker(),
// By default, attempt to drain existing watch connections during graceful shutdown.
// Use a value close to a minute that is unique enough to jump out in logs.
ShutdownWatchTerminationGracePeriod: 67 * time.Second,
Member

does this only make sense to be >= other tunable timeouts defaulted above, like this one:

		RequestTimeout:                 time.Duration(60) * time.Second,

If someone is setting --request-timeout longer or shorter than the default, would we expect this to be adjusted as well? Will it break something if this stays at the new default of 67s and that is much longer or much shorter than an overridden --request-timeout?

Member

This setting (shutdown watch termination grace period) is affecting only watches.

OTOH, RequestTimeout is not used for watches (it's only used for non-streaming requests). So I don't think we need additional logic to cross-configure those.

Member

// The default is zero, which implies the apiserver will not keep
// track of active watch request(s) in flight and will not wait
// for them to drain, this maintains backward compatibility.
// This grace period is orthogonal to other grace periods, and
// it is not overridden by any other grace period.
ShutdownWatchTerminationGracePeriod time.Duration

@aojea
Member

aojea commented Mar 26, 2025

The unit test failures are legit. This change in behavior will have an impact on designs that depend on it for control plane upgrades. cc @tkashem, as he was involved in the implementation.

@aojea
Member

aojea commented Mar 26, 2025

It feels odd to change a default for a specific setup as reported in the bug, or do we envision this causing more issues?
Why is it better to change the default instead of setting the desired value?

@liggitt
Member

liggitt commented Mar 26, 2025

it feels odd changing a default for a specific setup

If the default being 0 meant "terminate watch connections instantly when shutting down," that would actually probably be better.

Instead it means "leave watch connections open, but tell the rest of the server they can shut down" which is a really problematic combination.

@aojea
Member

aojea commented Mar 26, 2025

Instead it means "leave watch connections open, but tell the rest of the server they can shut down" which is a really problematic combination.

that was the backward compatibility choice ... but yes, maybe that was the wrong behavior to maintain

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 25, 2025