keep existing PDB conditions when updating status #122056
Conversation
Please note that we're already in Test Freeze for the release branch. Fast forwards are scheduled to happen every 6 hours; the most recent run was Mon Nov 27 03:59:11 UTC 2023.
Welcome @dhenkel92!
Hi @dhenkel92. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
Force-pushed from 1d45796 to 11c378c
/retest
The concern seems valid, as in general 3rd party controllers should be able to assign their own conditions to objects.
@dhenkel92 out of curiosity, can I ask what your use case is and what kind of information you are tracking in the condition?
@@ -994,6 +994,7 @@ func (dc *DisruptionController) updatePdbStatus(ctx context.Context, pdb *policy
 	DisruptionsAllowed: disruptionsAllowed,
 	DisruptedPods:      disruptedPods,
 	ObservedGeneration: pdb.Generation,
+	Conditions:         append([]metav1.Condition{}, pdb.Status.Conditions...),
We can just assign the conditions from the copy, so we do not mutate the original PDB.
That's actually a pretty good idea, thank you! I copied the slice with the append.
ps.VerifyPdbStatus(t, pdbName, 0, 0, 3, 0, map[string]metav1.Time{})

actualPDB := ps.Get(pdbName)
condition := apimeta.FindStatusCondition(actualPDB.Status.Conditions, "ExistingTestCondition")
can we also test the length of the conditions?
/triage accepted
Sure! We began developing a system to identify PDBs that are blocked for long periods. We did this because we found that many issues need user action. However, in our case, factors other than failing pods can also block a PDB, which complicates troubleshooting for our users. Therefore, we introduced an additional condition similar to SufficientPods. This condition comes with a message that gives hints to help users with debugging.
Interesting, thanks for sharing. /lgtm
LGTM label has been added. Git tree hash: 3a6097129f071439a69ed2067ac31a9b6c3d6904
/assign @kow3ns @krmayankk
Hello @kow3ns @krmayankk, 👋
/assign @soltysh |
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: dhenkel92, soltysh. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@dhenkel92 would it help to backport this fix to older releases? Would you be interested in opening cherry-pick PRs?
Thank you for your help @atiratree |
@dhenkel92 thanks for opening these!
@atiratree The tests for PRs on versions 1.26 and 1.27 were failing. Based on this document, I fixed them and force-pushed to the branch. I hope this was the right procedure.
…22056-upstream-release-1.28 Automated cherry pick of #122056: keep existing PDB conditions when updating status
…22056-upstream-release-1.29 Automated cherry pick of #122056: keep existing PDB conditions when updating status
…22056-upstream-release-1.27 Automated cherry pick of #122056: keep existing PDB conditions when updating status
What type of PR is this?
/kind bug
What this PR does / why we need it:
When the disruption controller updates the PDB status, it removes all conditions from the new status object and then re-adds the sufficient pods condition. Unfortunately, this behavior removes conditions set by other controllers, leading to multiple consecutive updates. Therefore, this commit ensures that conditions are preserved during updates.
Which issue(s) this PR fixes:
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: