kubectl prune v2: switch to contains-group-kinds annotation #118942
Conversation
/wip
Force-pushed from f5735ba to fe80a1a
```go
if !ok {
	if annotations[DeprecatedApplySetGRsAnnotation] != "" {
```
Given this functionality is in alpha / feature-flagged, I don't know if we want to be compatible with the existing applysets. Three obvious options:
- Do this and drop it after a few releases
- Implement `fsck` that can map from the old annotation to the new one (we proposed this tool in our KEP, it will be handy anyway)
- Just switch to the new annotation; it's alpha after all
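The second option (an `fsck`-style migration) amounts to translating the deprecated group-*resources* annotation value into the group-*kinds* format the KEP specifies. A minimal sketch of that mapping, with a hardcoded resource-to-kind table standing in for a real RESTMapper lookup (the table entries and function names here are illustrative, not the PR's actual implementation):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// resourceToKind is a stand-in for a real RESTMapper lookup; the
// resource->kind pairs here are illustrative, not exhaustive.
var resourceToKind = map[string]string{
	"deployments.apps": "Deployment.apps",
	"configmaps":       "ConfigMap",
	"services":         "Service",
}

// migrateGRsToGKs converts a deprecated contains-group-resources
// annotation value into the contains-group-kinds format from the KEP:
// comma-separated, sorted Kind[.group] entries.
func migrateGRsToGKs(grs string) (string, error) {
	var gks []string
	for _, gr := range strings.Split(grs, ",") {
		gk, ok := resourceToKind[strings.TrimSpace(gr)]
		if !ok {
			return "", fmt.Errorf("unknown resource %q", gr)
		}
		gks = append(gks, gk)
	}
	sort.Strings(gks)
	return strings.Join(gks, ","), nil
}

func main() {
	gks, err := migrateGRsToGKs("deployments.apps,configmaps")
	if err != nil {
		panic(err)
	}
	fmt.Println(gks) // ConfigMap,Deployment.apps
}
```

In practice the resource-to-kind resolution would go through discovery/RESTMapper rather than a static table, which is exactly why a dedicated `fsck` tool is a natural home for it.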
I have no preference for which one you choose, since there are not many apply sets out there now. This one seems fine to me.
/retest

Looks like a timeout and an fsnotify issue, unrelated (?)
/kind bugfix
@justinsb: The label(s) In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind bug
/retest

"Services [It] should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false" is unrelated (hopefully)
@soltysh can we get this fix in before code freeze? I don't think too many people are using the feature yet, and I'm also a little reluctant to start evangelizing it when we know we have to change one of the core annotations...
This seems good to me. You mention:

> a regression in that we can no longer apply manifests which contain both a (new) CRD and an instance of that CRD in one `kubectl apply`, when using prune-v2.

Should there be a test (possibly integration or e2e) to validate this regression is fixed, and to keep it from possibly returning?
Good call, and I tried implementing the test. I was wrong though: kubectl "normal apply" isn't really able to reliably support this scenario either. Sometimes it works, but the deciding factor seems to be whether we have a prior version of the GV in the discovery cache, rather than it being a race as I had previously thought. So it's not well-defined behaviour, probably (?) not a regression, and it's very difficult to write a test because the behaviour isn't straightforward.

I'm happy to try to fix the behaviour (in a separate PR), but I suspect it would be difficult to do as a non-breaking change (at least for some edge scenarios). This does support the idea you (@seans3) brought up in sig-cli last week, though: that we should investigate doing an apply that is able to fix all these gotchas without risking compatibility, using a plugin or a new command.

I think this PR is still the correct behaviour for prune-v2, in that (1) it is what we agreed in the KEP and (2) it will be an issue if we fix the general apply undefined behaviour here. But I don't think it is a regression. Is it OK if I update the commit message etc. to just reference the KEP and reword from "regression" to "unblock applying a CRD and a CR in the same manifest" or something like that?

Simple test in case anyone wants to try it out:
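The original test snippet was not preserved in this scrape. A hypothetical manifest of the shape being discussed (a CRD and an instance of that CRD applied together; the `widgets.example.com` names are made up for illustration) might look like:

```yaml
# Applied in one shot, e.g. with something like:
#   kubectl apply --prune --applyset=configmap/myset -f manifest.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    listKind: WidgetList
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
---
apiVersion: example.com/v1
kind: Widget
metadata:
  name: example-widget
```

Whether the second document applies cleanly depends on whether the new GV is visible yet, which is the discovery-cache behaviour described above.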
Force-pushed from fe80a1a to 9d57700
I reworded the release note and commit messages to de-emphasize the regression, because it seems that kubectl apply can't actually reliably apply a CRD and an instance of the CRD in the same manifest, so it's not really a regression. I also think we should recognize the existing annotation, just so we don't have to worry about the edge cases here; but I marked the old annotation as deprecated and suggested it could be removed as part of beta or GA of applyset, so we don't have to support it forever.
/test pull-kubernetes-e2e-gce
The contains-group-resources annotation was an implementation error; we specified contains-group-kinds in the KEP. Because the feature is in alpha, we simply switch to the new annotation. We will recognize the old annotation and migrate existing alpha applysets, but support for this migration can be removed in beta/GA of applyset.
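The new annotation's value format, per the KEP, is a comma-separated, sorted list of Kind (for the core group) or Kind.group entries. A self-contained sketch of building that value (the `GroupKind` type is redefined here rather than imported, and `formatGKs` is an illustrative helper, not the PR's actual code):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// GroupKind mirrors the shape of apimachinery's schema.GroupKind,
// redefined so the sketch is self-contained.
type GroupKind struct {
	Group string
	Kind  string
}

const containsGKAnnotation = "applyset.kubernetes.io/contains-group-kinds"

// formatGKs renders the annotation value: comma-separated, sorted
// "Kind" (core group) or "Kind.group" entries.
func formatGKs(gks []GroupKind) string {
	parts := make([]string, 0, len(gks))
	for _, gk := range gks {
		if gk.Group == "" {
			parts = append(parts, gk.Kind)
		} else {
			parts = append(parts, gk.Kind+"."+gk.Group)
		}
	}
	sort.Strings(parts)
	return strings.Join(parts, ",")
}

func main() {
	v := formatGKs([]GroupKind{
		{Group: "apps", Kind: "Deployment"},
		{Group: "", Kind: "ConfigMap"},
	})
	fmt.Printf("%s: %s\n", containsGKAnnotation, v)
	// applyset.kubernetes.io/contains-group-kinds: ConfigMap,Deployment.apps
}
```

Sorting keeps the annotation value deterministic, so repeated applies of the same set don't churn the parent object.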
Force-pushed from 9d57700 to 10caecb
Looks like rebasing cleaned up the pull-kubernetes-e2e-gce test failures. @seans3 are we OK to get this in?
/approve
LGTM label has been added. Git tree hash: f63b98e7b413b12f5742c669f17fb63728117846
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: justinsb, seans3

The full list of commands accepted by this bot can be found here. The pull request process is described here.