add warning for dup ports in containers[*].ports #113245
Conversation
@pacoxu: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed 9c5964e to cff7010
pkg/api/service/util_test.go
Outdated
		},
	},
	expected: []string{
		`spec.ports[2].port: service has duplicate ports: 443`,
same port different protocol is legit
I think we have validation on this
This is a warning, not an error. (For legal configuration, a warning is also not needed most times.)
#47249 (comment)
I think this will cover the patch problem.
In this case, a user may face problems because the merge key cannot be port + protocol, so we need a warning.
I am still not sure when we should add this warning. Only if this is an update on ports that have duplicate ports and the update is related?
It seems we should add it only on patches, per @liggitt's comments in #105610 (comment).
I may update the PR then.
I may have added the warning in the wrong place.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
          name: a
          protocol: TCP
          hostPort: 10080
        - containerPort: 80
          name: a1
          protocol: UDP
          hostPort: 10080
        - containerPort: 443
          name: b
          protocol: UDP
          hostPort: 20080
      - name: nginx-1
        image: nginx:1.14.2
        ports:
        - containerPort: 443
          protocol: UDP
          name: b1
          hostPort: 40080
        - containerPort: 443
          protocol: UDP
          name: b2
          hostPort: 30080
Create the Deployment from the YAML above.
- containerPort: 443
  protocol: TCP
  name: b3
  hostPort: 30080
Append the snippet above to the end of the YAML and apply again. (As Jordan pointed out, it should be a client-side apply.)
[root@paco ~]# kubectl get deployment.apps/nginx -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"nginx:1.14.2","name":"nginx","ports":[{"containerPort":80,"hostPort":10080,"name":"a","protocol":"TCP"},{"containerPort":80,"hostPort":10080,"name":"a1","protocol":"UDP"},{"containerPort":443,"hostPort":20080,"name":"b","protocol":"UDP"}]},{"image":"nginx:1.14.2","name":"nginx-1","ports":[{"containerPort":443,"hostPort":40080,"name":"b1","protocol":"UDP"},{"containerPort":443,"hostPort":30080,"name":"b2","protocol":"UDP"},{"containerPort":443,"hostPort":30080,"name":"b3","protocol":"TCP"}]}]}}}}
  creationTimestamp: "2022-10-24T04:51:16Z"
  generation: 2
  labels:
    app: nginx
  name: nginx
  namespace: default
  resourceVersion: "1596289"
  uid: 18dfffd6-8370-42d1-8560-bcb9182dab34
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          hostPort: 10080
          name: a
          protocol: TCP
        - containerPort: 80
          hostPort: 10080
          name: a1
          protocol: UDP
        - containerPort: 443
          hostPort: 20080
          name: b
          protocol: UDP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - image: nginx:1.14.2
        imagePullPolicy: IfNotPresent
        name: nginx-1
        ports:
        - containerPort: 443
          hostPort: 40080
          name: b1
          protocol: UDP
        - containerPort: 443
          hostPort: 30080
          name: b2
          protocol: UDP
You can see the b3 port in the last-applied-configuration annotation, but it is de-duped during patching.
This is an example of a scenario where we need to add a warning.
I1024 11:33:34.512706 9813 request.go:1073] Request Body: {"spec":{"$setElementOrder/containers":
[{"name":"nginx"},{"name":"nginx-1"}],"containers":[{"$setElementOrder/ports":[{"containerPort":443},
{"containerPort":443},{"containerPort":443}],"name":"nginx-1","ports":
[{"containerPort":443,"name":"b3","protocol":"TCP"},{"containerPort":443,"hostPort":30080,"name":"b2"},
{"containerPort":443,"hostPort":40080,"name":"b1","protocol":"UDP"}]}]}}
I1024 11:33:34.512775 9813 round_trippers.go:463] PATCH https://10.6.177.41:6443/api/v1/namespaces/default/pods/nginx?fieldManager=kubectl-client-side-apply&fieldValidation=Strict
I1024 11:33:34.512793 9813 round_trippers.go:469] Request Headers:
I1024 11:33:34.512810 9813 round_trippers.go:473] Accept: application/json
I1024 11:33:34.512824 9813 round_trippers.go:473] Content-Type: application/strategic-merge-patch+json
I1024 11:33:34.512839 9813 round_trippers.go:473] User-Agent: kubectl/v1.25.0 (linux/amd64) kubernetes/a866cbe
I1024 11:33:34.517354 9813 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I1024 11:33:34.517386 9813 round_trippers.go:577] Response Headers:
I1024 11:33:34.517403 9813 round_trippers.go:580] Cache-Control: no-cache, private
I1024 11:33:34.517416 9813 round_trippers.go:580] Content-Type: application/json
I1024 11:33:34.517429 9813 round_trippers.go:580] X-Kubernetes-Pf-Flowschema-Uid: 941450d9-e05c-43ec-a527-92b985e02847
I1024 11:33:34.517442 9813 round_trippers.go:580] X-Kubernetes-Pf-Prioritylevel-Uid: 22d95732-37d0-429c-8e14-6c27f0fbb8c7
I1024 11:33:34.517454 9813 round_trippers.go:580] Content-Length: 3412
I1024 11:33:34.517465 9813 round_trippers.go:580] Date: Mon, 24 Oct 2022 03:33:34 GMT
I1024 11:33:34.517478 9813 round_trippers.go:580] Audit-Id: 0bcdcb6b-6ac4-402c-a3f9-ff171119b68b
[root@paco ~]# cat patch.yaml
spec:
  template:
    spec:
      containers:
      - name: nginx-1
        ports:
        - containerPort: 443
          protocol: UDP
          name: b1
          hostPort: 40080
        - containerPort: 443
          protocol: UDP
          name: b2
          hostPort: 30080
        - containerPort: 443
          protocol: TCP
          name: b3
          hostPort: 30080
[root@paco ~]# kubectl --v=8 patch deployments.apps --type=strategic nginx --patch-file patch.yaml
I1024 13:01:40.864347 9338 request.go:1073] Request Body: {"spec":{"template":{"spec":{"containers":
[{"name":"nginx-1","ports":[{"containerPort":443,"hostPort":40080,"name":"b1","protocol":"UDP"},
{"containerPort":443,"hostPort":30080,"name":"b2","protocol":"UDP"},
{"containerPort":443,"hostPort":30080,"name":"b3","protocol":"TCP"}]}]}}}}
### Deployment YAML after applying the patch
- image: nginx:1.14.2
  imagePullPolicy: IfNotPresent
  name: nginx-1
  ports:
  - containerPort: 443
    hostPort: 30080
    name: b3
    protocol: TCP
  - containerPort: 443
    hostPort: 30080
    name: b2
    protocol: UDP
  resources: {}
I think the warning is: if you are using client-side apply to create (can we detect via the annotation?), or strategic merge patch to update (not sure we have visibility into that here), or maybe are putting duplicate ports in a list in an object that is managed by client-side apply (maybe we can tell if the annotation is present), you should stop using client-side apply.
@liggitt warning according to the applied annotation is doable.
Or can we add the warning to the kubectl client? The logic would be too special, though.
Or we can do something similar to #92437: just add a warning when deleteMatchingEntries removes an entry with merge key ('port' or 'containerPort').
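The deleteMatchingEntries idea could look roughly like the following. This is a hypothetical sketch, not the real apimachinery function (the entry type, signature, and warning text are all assumptions): when the merge key matches more than one entry, all of them are affected, so a warning would flag the ambiguity the caller probably did not intend.

```go
package main

import "fmt"

// entry is a simplified port entry; Key plays the role of the strategic
// merge patch mergeKey value (e.g. containerPort).
type entry struct {
	Key  int
	Name string
}

// deleteByMergeKey removes every entry whose merge key matches and returns
// a warning when the key was ambiguous (matched more than one entry).
func deleteByMergeKey(list []entry, key int) (kept []entry, warning string) {
	removed := 0
	for _, e := range list {
		if e.Key == key {
			removed++
			continue
		}
		kept = append(kept, e)
	}
	if removed > 1 {
		warning = fmt.Sprintf("merge key %d matched %d entries; all were removed", key, removed)
	}
	return kept, warning
}

func main() {
	list := []entry{{443, "b1"}, {443, "b2"}, {80, "a"}}
	kept, warn := deleteByMergeKey(list, 443)
	fmt.Println(len(kept), warn) // prints: 1 merge key 443 matched 2 entries; all were removed
}
```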
Updated.
Warning only if it is an update and there are duplicated ports.
- service update
- podTemplate update for ds/deploy/job
- pod update intentionally produces no warning
+1 to adding good tests, but I think much of this is "allowed"
pkg/api/pod/warnings.go
Outdated
@@ -243,6 +243,17 @@ func warningsForPodSpecAndMeta(fieldPath *field.Path, podSpec *api.PodSpec, meta
			}
		}
	}
	// duplicate containers[*].ports
3 points:
- Shouldn't this be ACROSS containers in a pod?
- You need to compare with protocol. It's legit to have the same port number on both TCP and UDP.
- I think there are people who rely on this, specifically multiple `hostPort` definitions that point to the same `containerPort`.
I don't think this is actually a warning unless the ENTIRE port block (containerPort, hostPort, protocol, ...) is identical, which I think is already an error?
- Shouldn't this be ACROSS containers in a pod?
For the client-side apply / strategic merge patch, across containers is not needed?
- You need to compare with protocol. It's legit to have the same port number on both TCP and UDP.
The code checks protocol + hostIP + hostPort:
kubernetes/pkg/apis/core/validation/validation.go
Lines 2786 to 2807 in 36dd5f2
func AccumulateUniqueHostPorts(containers []core.Container, accumulator *sets.String, fldPath *field.Path) field.ErrorList {
	allErrs := field.ErrorList{}
	for ci, ctr := range containers {
		idxPath := fldPath.Index(ci)
		portsPath := idxPath.Child("ports")
		for pi := range ctr.Ports {
			idxPath := portsPath.Index(pi)
			port := ctr.Ports[pi].HostPort
			if port == 0 {
				continue
			}
			str := fmt.Sprintf("%s/%s/%d", ctr.Ports[pi].Protocol, ctr.Ports[pi].HostIP, port)
			if accumulator.Has(str) {
				allErrs = append(allErrs, field.Duplicate(idxPath.Child("hostPort"), str))
			} else {
				accumulator.Insert(str)
			}
		}
	}
	return allErrs
}
- I think there are people who rely on this, specifically multiple `hostPort` definitions that point to the same `containerPort`.
Multiple `hostPort` definitions for the same port are valid.
For the 3rd point:
[root@paco ~]# cat diffHostPort.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-1
        image: nginx:1.14.2
        ports:
        - containerPort: 443
          protocol: UDP
          name: b1
          hostPort: 40080
        - containerPort: 443
          protocol: UDP
          name: b2
          hostPort: 30080
[root@paco ~]# kubectl apply --server-side=true -f diffHostPort.yaml
Error from server: failed to create manager for existing fields: failed to convert new object
(default/nginx; apps/v1, Kind=Deployment) to smd typed:
.spec.template.spec.containers[name="nginx-1"].ports: duplicate entries for key [containerPort=443,protocol="UDP"]
[root@paco ~]# kubectl apply --server-side=false -f diffHostPort.yaml
deployment.apps/nginx unchanged
This may be weird to users.
Error from server: failed to create manager for existing fields: failed to convert new object
(default/nginx; apps/v1, Kind=Deployment) to smd typed:
.spec.template.spec.containers[name="nginx-1"].ports: duplicate entries for key [containerPort=443,protocol="UDP"]
@apelisse this is fun... what's the expected/recommended recovery path for a user trying to fix an error like that using server-side apply? Do they have to use update or client-side apply to fix the error before they can use server-side apply?
If the existing object is "invalid" (at least from SSAs perspective), there's no way server-side apply is going to be able to process it, so yeah, I think one would have to fix it with edit/csa/some sort of update/patch before being able to SSA.
Uggh. Yeah, it sure seems like the validation lets things through that are not covered by the set of `+listMapKey`s defined. Is it safe to edit the listMapKeys? We never could do that with `patchMergeKey` because we didn't know what clients are out there.
#113482 to track
I think the listMapKeys are correct, what's incorrect is the validation, no?
No, people use multiple duplicate blocks with different `hostPort` pointing at the same `containerPort`. It's weird, but I have seen it in the wild.
Yeah, I think I've seen that too now that you mention it. We could possibly use [name,port,protocol] as the key then? I need to think of the implications of doing that.
pkg/api/service/util.go
Outdated
if len(ports) > 1 {
	servicePorts := sets.NewInt64()
	for i, port := range ports {
		if servicePorts.Has(int64(port.Port)) {
Same comments as above - this is not a warning, it's a feature.
pkg/api/service/util_test.go
Outdated
		},
	},
	expected: []string{
		`spec.ports[2].port: service has duplicate ports: 443`,
If we can warn about ambiguity BEFORE client-side-apply ruins it all, then yes. But I think same-port, same-protocol, different targetPort is legit?
Can we go back to basics - what problem are we solving?
I was thinking [port, protocol, hostPort]. I hope that it's more feasible to fix this sort of problem with server-side than client-side...
Force-pushed eebf0b7 to f974aaa
This logic has become very twisty. I tried to simplify it in https://go.dev/play/p/wJXJGdZXJ_M (and I caught another related case :)
Can you start from that logic and see if I missed anything? Feel free to steal it for this PR, if it works. I think I got all the cases and it's simpler to comprehend (to me, anyway) than this. I beefed it up with some more comments.
Force-pushed d00e62a to fd0731a
Thanks for the demo code. The logic is much clearer. I updated it according to your sample.
/test pull-kubernetes-e2e-gce
Overall LGTM. Just a few small points. Thanks!!
pkg/api/pod/warnings.go
Outdated
for _, other := range others {
	if port.HostIP == other.port.HostIP && port.HostPort == other.port.HostPort {
		// Exactly-equal is obvious. Validation should already filter for this except when these are unspecified.
		warnings = append(warnings, fmt.Sprintf("duplicate ports: %s and %s: %+v", fldPath.Child("ports").Index(i), other.field, port))
two things:
- please make the error like `"%s: duplicate port definition with %s", fldPath.Child("ports").Index(i), other.field`
- Don't use the `%+v` dumps - that was just for running in playground :)
if port.HostIP == other.port.HostIP && port.HostPort == other.port.HostPort {
	// Exactly-equal is obvious. Validation should already filter for this except when these are unspecified.
	warnings = append(warnings, fmt.Sprintf("%s: duplicate port definition with %s", fldPath.Child("ports").Index(i), other.field))
} else if port.HostPort == 0 || other.port.HostPort == 0 {
	// HostPort = 0 is redundant with any other value, which is odd but not really dangerous. HostIP doesn't matter here.
	warnings = append(warnings, fmt.Sprintf("%s: overlapping port definition with %s", fldPath.Child("ports").Index(i), other.field))
} else if a, b := port.HostIP == "", other.port.HostIP == ""; port.HostPort == other.port.HostPort && ((a || b) && !(a && b)) {
	// If the HostPorts are the same and one HostIP is specified while the other is not, the behavior is undefined.
	warnings = append(warnings, fmt.Sprintf("%s: dangerously ambiguous port definition with %s", fldPath.Child("ports").Index(i), other.field))
}
Updated. Thanks.
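The three warning categories can be isolated into a small self-contained sketch for testing. The `port` struct and `classify` helper below are simplified stand-ins for illustration (the real code works on `core.ContainerPort` and builds field-path strings), but the branch conditions mirror the logic in the snippet above:

```go
package main

import "fmt"

// port holds only the fields the duplicate-port warning consults.
type port struct {
	HostIP   string
	HostPort int32
}

// classify mirrors the warning branches discussed above: exact duplicates,
// redundant overlaps with hostPort=0, and ambiguous definitions where the
// hostPorts match but only one side specifies a hostIP.
func classify(a, b port) string {
	switch {
	case a.HostIP == b.HostIP && a.HostPort == b.HostPort:
		return "duplicate"
	case a.HostPort == 0 || b.HostPort == 0:
		return "overlapping"
	case a.HostPort == b.HostPort && (a.HostIP == "") != (b.HostIP == ""):
		return "dangerously ambiguous"
	}
	return ""
}

func main() {
	fmt.Println(classify(port{"", 30080}, port{"", 30080}))        // prints: duplicate
	fmt.Println(classify(port{"", 0}, port{"", 30080}))            // prints: overlapping
	fmt.Println(classify(port{"", 30080}, port{"1.2.3.4", 30080})) // prints: dangerously ambiguous
}
```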
Force-pushed fd0731a to 9182644
So close! Just one issue.
/approve
/hold for one change
Force-pushed 9182644 to df0d51d
Updated. @thockin Thanks for continuing to review this.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: pacoxu, thockin. The full list of commands accepted by this bot can be found here. The pull request process is described here.
/lgtm
LGTM label has been added. Git tree hash: a38743bba83eaf794429ddd6ef8c7f90550f9e57
/retest
/unhold
/skip
This sends a warning in case there is an initContainer and a regular container with the same port. That should not be a problem, because init containers do not keep running once their job is done and the regular containers start, right?
sidecars are long-running init containers, so it CAN happen
Yeah, fair point. However, we can detect sidecars by looking at the restartPolicy: if none is set, we know it is not a long-running init container, so it can safely use the same port as a regular container? I'm asking because prometheus-operator currently creates pods with a (short-lived) initContainer that uses the same port as a regular container, and I'm wondering whether this must be changed in the operator or the warning relaxed here.
What type of PR is this?
/kind bug
What this PR does / why we need it:
Which issue(s) this PR fixes:
fix part of #94626
Special notes for your reviewer:
Does this PR introduce a user-facing change?