
Support specifying custom LB retry period from cloud provider #94021

Conversation

@timoreimann (Contributor) commented Aug 15, 2020

What type of PR is this?
/kind feature

What this PR does / why we need it:
This change allows cloud providers to specify a custom retry period by returning a RetryError. The purpose is to bypass the work queue-driven exponential backoff algorithm when there is no need to back off.

Specifically, this can be the case when a cloud load balancer operation such as a create or delete is still pending and the cloud API should be polled for completion at a constant interval. A backoff algorithm would not always be reasonable to apply here since there is no API or performance degradation warranting an increasing wait time between API requests.
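For illustration, here is a minimal sketch of how a cloud provider could request such a constant-interval retry. The import path, the NewRetryError constructor, and the loadBalancers/checkProvisioning helpers are assumptions made for the sake of the example; only the RetryError concept itself comes from this PR.

package example

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	cloudproviderapi "k8s.io/cloud-provider/api" // assumed home of the RetryError type
)

// loadBalancers is a hypothetical cloudprovider.LoadBalancer implementation.
type loadBalancers struct{}

// checkProvisioning stands in for a provider-specific call that reports whether
// the cloud LB for the service is still being created (hypothetical helper).
func (lb *loadBalancers) checkProvisioning(ctx context.Context, service *v1.Service) (*v1.LoadBalancerStatus, bool, error) {
	return &v1.LoadBalancerStatus{}, true, nil
}

func (lb *loadBalancers) EnsureLoadBalancer(ctx context.Context, clusterName string, service *v1.Service, nodes []*v1.Node) (*v1.LoadBalancerStatus, error) {
	status, pending, err := lb.checkProvisioning(ctx, service)
	if err != nil {
		return nil, err
	}
	if pending {
		// The cloud API is still provisioning the LB: ask the service controller to
		// poll again after a constant interval instead of backing off exponentially.
		return nil, cloudproviderapi.NewRetryError("load balancer is still being provisioned", 15*time.Second)
	}
	return status, nil
}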

Which issue(s) this PR fixes:
Fixes #88902

Special notes for your reviewer:
For now, the PR is meant to provide a starting point for discussion. Hence, I have not invested in adding tests yet. Once/if we reach consensus on the general direction of this PR, I will complete the missing pieces. (Tests have since been added.)

Does this PR introduce a user-facing change?:

Support specifying a custom retry period for cloud load-balancer operations

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
Not sure if needed. None so far.

/sig cloud-provider
/cc @andrewsykim

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/feature Categorizes issue or PR as related to a new feature. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Aug 15, 2020
@k8s-ci-robot (Contributor):

Hi @timoreimann. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added area/cloudprovider sig/network Categorizes an issue or PR as relevant to SIG Network. labels Aug 15, 2020
@timoreimann timoreimann force-pushed the support-specifying-custom-lb-retry-period-from-cloud-provider branch from e802d2d to adceb65 Compare September 25, 2020 19:01
@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Sep 25, 2020
@andrewsykim (Member) left a comment:

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 29, 2020
@timoreimann (Contributor, Author):

@andrewsykim @MrHohn does this proposal warrant a KEP, or would it suffice to move forward with the PR? In the latter case, I'd like to invest additional time to update the tests and bring the code into a mergeable state.

@andrewsykim (Member):

> @andrewsykim @MrHohn does this proposal warrant a KEP, or would it suffice to move forward with the PR? In the latter case, I'd like to invest additional time to update the tests and bring the code into a mergeable state.

I don't think we need a KEP for this; let's get the tests added and try to merge this for v1.20.

@fejta-bot:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 9, 2021
@timoreimann (Contributor, Author):

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 9, 2021
@fejta-bot:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 10, 2021
@timoreimann (Contributor, Author):

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 10, 2021
@k8s-triage-robot:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Apr 24, 2023
@timoreimann timoreimann force-pushed the support-specifying-custom-lb-retry-period-from-cloud-provider branch from 3aec5f5 to 0fcf42f Compare May 1, 2023 18:19
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 1, 2023
@timoreimann (Contributor, Author):

/retest

// fixed duration (as opposed to backing off exponentially).
type RetryError struct {
	msg        string
	retryAfter time.Duration
Reviewer (Member):

what should be the interpretation of 0?

@timoreimann (Contributor, Author):

There'd be no special interpretation. Instead, the retry would be immediate (see also where the value is used).

The need to retry right away may be uncommon or even rare, but I personally wouldn't want to disallow it. Maybe a user's network is very slow, or there are already some natural / drive-by delays that don't warrant an additional wait on the client side?

I think a zero delay can be legitimate, but let me know if you think differently.

Reviewer (Member):

> I think a zero delay can be legitimate, but let me know if you think differently.

I think it's OK; I just wanted to double-check that we all have the same interpretation.
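As a side note for readers: here is a minimal, illustrative sketch of the requeue decision (the retryError type and the requeue helper are stand-ins, not the exact controller code in this PR). client-go's delaying work queue adds an item immediately when the delay is zero or negative, which is why a zero retryAfter amounts to an immediate retry.

package example

import (
	"errors"
	"time"

	"k8s.io/client-go/util/workqueue"
)

// retryError mirrors the shape of the type added in this PR (illustrative copy).
type retryError struct {
	msg        string
	retryAfter time.Duration
}

func (e *retryError) Error() string             { return e.msg }
func (e *retryError) RetryAfter() time.Duration { return e.retryAfter }

// requeue decides how a failed service key goes back onto the work queue.
func requeue(queue workqueue.RateLimitingInterface, key string, err error) {
	var re *retryError
	if errors.As(err, &re) {
		// Constant-interval retry requested by the cloud provider; a delay of 0
		// makes the delaying queue re-add the key right away.
		queue.AddAfter(key, re.RetryAfter())
		return
	}
	// Default path: exponential backoff via the queue's rate limiter.
	queue.AddRateLimited(key)
}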

@@ -2281,3 +2362,66 @@ func (l *fakeNodeLister) Get(name string) (*v1.Node, error) {
	}
	return nil, nil
}

type fakeServiceLister struct {
Reviewer (Member):

You don't need to create these mocks; you can use a cache:

		serviceCache := cache.NewIndexer(cache.MetaNamespaceKeyFunc, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc})
		serviceLister := v1listers.NewServiceLister(serviceCache)
		for i := range test.services {
			if err := serviceCache.Add(test.services[i]); err != nil {
				t.Fatalf("%s unexpected service add error: %v", test.name, err)
			}
		}
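(The snippet presumably relies on the following client-go imports:)

import (
	v1listers "k8s.io/client-go/listers/core/v1"
	"k8s.io/client-go/tools/cache"
)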

@timoreimann (Contributor, Author):

That's much better -- adjusted, thanks!

@timoreimann (Contributor, Author):

@aojea PTAL.

@timoreimann timoreimann force-pushed the support-specifying-custom-lb-retry-period-from-cloud-provider branch from b0ebdc0 to 2ad2c15 Compare May 2, 2023 06:00
@timoreimann (Contributor, Author):

(Updated the copyright year to 2023 real quick)

@k8s-ci-robot (Contributor) commented May 2, 2023

@timoreimann: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                   | Commit                                   | Rerun command
pull-kubernetes-bazel-build | c8aa7ecd98a42b4d5d6b89b182f5c89a04ec3542 | /test pull-kubernetes-bazel-build
pull-kubernetes-bazel-test  | c8aa7ecd98a42b4d5d6b89b182f5c89a04ec3542 | /test pull-kubernetes-bazel-test

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@timoreimann (Contributor, Author):

/retest

@aojea (Member) commented May 2, 2023

/lgtm
/approve

Thanks

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 2, 2023
@k8s-ci-robot (Contributor):

LGTM label has been added.

Git tree hash: c97695cb43c46f39dd3542af7bbf6e9f964c05d9

@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andrewsykim, aojea, timoreimann

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot merged commit f51dad5 into kubernetes:master May 2, 2023
12 checks passed
@k8s-ci-robot k8s-ci-robot added this to the v1.28 milestone May 2, 2023
@timoreimann timoreimann deleted the support-specifying-custom-lb-retry-period-from-cloud-provider branch May 2, 2023 09:16
Labels
  • approved - Indicates a PR has been approved by an approver from all required OWNERS files.
  • area/cloudprovider
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • kind/feature - Categorizes issue or PR as related to a new feature.
  • lgtm - "Looks good to me", indicates that a PR is ready to be merged.
  • ok-to-test - Indicates a non-member PR verified by an org member that is safe to test.
  • priority/important-soon - Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
  • release-note - Denotes a PR that will be considered when it comes time to generate release notes.
  • sig/api-machinery - Categorizes an issue or PR as relevant to SIG API Machinery.
  • sig/cloud-provider - Categorizes an issue or PR as relevant to SIG Cloud Provider.
  • sig/network - Categorizes an issue or PR as relevant to SIG Network.
  • size/L - Denotes a PR that changes 100-499 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

Allow for more flexible retry logic with Cloud Controller Manager