
delete one key per delete directive in strategicPatch #92437


Closed
wants to merge 2 commits into from

Conversation

sweeneyb

Attempts to address #58477 by deleting only one element per delete directive.

What type of PR is this?
/kind bug

What this PR does / why we need it:
A patch intended to delete an accidentally duplicated key in a resource definition ends up deleting every entry that shares that key. The current workaround is for a person to remove all of the entries and then add one of them back. Instead, the merged resource definition should reflect what was submitted to the system.
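The failure mode can be sketched with a hypothetical manifest (the container name and the `DEBUG` env entry below are invented for illustration; for `env` lists, the strategic-merge-patch mergeKey is `name`):

```yaml
# Persisted object (hypothetical): the env list accidentally contains
# two entries with the same name, i.e. the same mergeKey value.
spec:
  containers:
  - name: app
    env:
    - name: DEBUG      # duplicate entry 1
      value: "true"
    - name: DEBUG      # duplicate entry 2
      value: "false"
---
# Strategic merge patch intended to remove just one of the duplicates.
# Before this PR, applying it removes BOTH entries, because the delete
# directive matches every entry with name=DEBUG.
spec:
  containers:
  - name: app
    env:
    - name: DEBUG
      $patch: delete
```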

Which issue(s) this PR fixes:
Fixes #58477

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

Fixed a bug where `kubectl apply` removed all entries when attempting to remove a single duplicated entry in a persisted object.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


attempts to address kubernetes#58477 by only deleting one element per directive
@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Jun 23, 2020
@k8s-ci-robot
Contributor

Welcome @sweeneyb!

It looks like this is your first PR to kubernetes/kubernetes 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kubernetes has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @sweeneyb. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jun 23, 2020
@k8s-ci-robot k8s-ci-robot requested review from apelisse and mengqiy June 23, 2020 17:12
@k8s-ci-robot k8s-ci-robot added the sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. label Jun 23, 2020
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: sweeneyb
To complete the pull request process, please assign mengqiy
You can assign the PR to them by writing /assign @mengqiy in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 23, 2020
@sweeneyb
Author

/assign @mengqiy

@cblecker
Member

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jun 23, 2020
@sweeneyb
Author

/retest

1 similar comment
@sweeneyb
Author

/retest

@cblecker
Member

@sweeneyb Looks like some of the failures are unrelated to this PR. I'm having a look at what is going on.

@sweeneyb
Author

@cblecker thanks. I'll defer retests until tomorrow or until I hear something. I could see one test failure being spurious, but it has failed a couple of times now, and I don't want to put unnecessary load on the CI.

@fedebongio
Contributor

/cc @apelisse @jennybuckley

@cblecker
Member

/retest

@cblecker
Member

The above failure is legitimate. You need to run `hack/update-gofmt.sh`.

@sweeneyb
Author

/retest

@sweeneyb
Author

/retest

1 similar comment
@sweeneyb
Author

/retest

@k8s-ci-robot
Contributor

@sweeneyb: The following test failed, say /retest to rerun all failed tests:

Test name: pull-kubernetes-e2e-kind-ipv6 · Commit: 727c47d · Rerun command: /test pull-kubernetes-e2e-kind-ipv6

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@BenTheElder
Member

All required tests are passing.

The IPv6 test is soft-blocking, and seems to be having issues unrelated to this PR.

@BenTheElder
Member

tide Pending — Not mergeable. Needs approved, lgtm labels.

This status is also the relevant one for knowing what is required overall (it could be more obvious...). This PR won't merge until it is approved and lgtm-ed, but it's already good on the tests.

tide is the kubernetes merge robot.

@sweeneyb
Author

@BenTheElder Thanks for the context. It is pretty obvious in retrospect. There's a lot that's new here, so I'm still learning where to look. Thanks again.

@BenTheElder
Member

FWIW I think it's a lot and not anywhere near as obvious as it should be.

@mengqiy
Member

mengqiy commented Jun 26, 2020

This PR changes the behavior of the delete directive `$patch: delete`.
Before: it deletes all entries that match the mergeKey.
After: it deletes only the first entry that matches the mergeKey.

The latter approach doesn't solve all of the problems.
Suppose a user has a list like this:

- name: foo
  anotherField: abc
- name: foo
  anotherField: xyz

If the user wants to remove

- name: foo
  anotherField: xyz

there is no great way to express that.
If using SMP, the user needs to delete the 1st entry, delete the 2nd entry and then add the 1st entry back.

I'm not sure if the new behavior is better than the existing behavior.
@lavalamp @liggitt @apelisse WDYT?

Some background:
@lavalamp suggested we should remove all matching entries: kubernetes/community#140 (comment)
It is implemented in #38342
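The two semantics under discussion can be sketched with a small simulation (illustrative only; `apply_delete_directive` is an invented helper for this sketch, not the actual Go implementation in apimachinery):

```python
def apply_delete_directive(entries, merge_key, key_value, delete_all=True):
    """Simulate $patch: delete on a list merged by merge_key.

    delete_all=True  -> existing behavior: remove every matching entry.
    delete_all=False -> this PR's behavior: remove only the first match.
    """
    result = []
    deleted_one = False
    for entry in entries:
        if entry.get(merge_key) == key_value and (delete_all or not deleted_one):
            deleted_one = True
            continue  # drop this entry
        result.append(entry)
    return result

env = [
    {"name": "foo", "anotherField": "abc"},
    {"name": "foo", "anotherField": "xyz"},
]

# Existing behavior: both entries matching name=foo are removed.
print(apply_delete_directive(env, "name", "foo", delete_all=True))   # []

# This PR's behavior: only the first match is removed; the second survives.
print(apply_delete_directive(env, "name", "foo", delete_all=False))
# [{'name': 'foo', 'anotherField': 'xyz'}]
```

Note that neither variant lets the user target the second entry specifically, which is the limitation raised above.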

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 24, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 24, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@issssu

issssu commented May 7, 2024

This PR changes the behavior of the delete directive `$patch: delete`. Before: it deletes all entries that match the mergeKey. After: it deletes only the first entry that matches the mergeKey.

The latter approach doesn't solve all of the problems. Suppose a user has a list like this:

- name: foo
  anotherField: abc
- name: foo
  anotherField: xyz

If the user wants to remove

- name: foo
  anotherField: xyz

there is no great way to express that. If using SMP, the user needs to delete the 1st entry, delete the 2nd entry and then add the 1st entry back.

I'm not sure if the new behavior is better than the existing behavior. @lavalamp @liggitt @apelisse WDYT?

Some background: @lavalamp suggested we should remove all matching entries: kubernetes/community#140 (comment) It is implemented in #38342

Maybe we can delete the last one rather than the first one, because when duplicate envs exist, the first one is active.
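The delete-last alternative suggested here can be sketched the same way (`delete_last_match` is an invented helper for illustration, not actual strategic-merge-patch code):

```python
def delete_last_match(entries, merge_key, key_value):
    """Remove only the LAST entry whose merge_key matches key_value.

    Illustrative sketch of the delete-last alternative; if nothing
    matches, the list is returned unchanged.
    """
    last = None
    for i, entry in enumerate(entries):
        if entry.get(merge_key) == key_value:
            last = i
    return [e for i, e in enumerate(entries) if i != last]

env = [
    {"name": "foo", "anotherField": "abc"},
    {"name": "foo", "anotherField": "xyz"},
]
print(delete_last_match(env, "name", "foo"))
# [{'name': 'foo', 'anotherField': 'abc'}]
```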

Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. release-note Denotes a PR that will be considered when it comes time to generate release notes. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

kubectl apply (client-side) removes all entries when attempting to remove a single duplicated entry in a persisted object
8 participants