Description
What happened: No error or warning message when a service modification succeeds only partially.
What you expected to happen: An error or warning message when the service is not fully configured as specified in the yaml file.
How to reproduce it (as minimally and precisely as possible):
Creating a service with this yaml works fine:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: exampledeployment
  name: example-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: exampledeployment
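For completeness, the initial service can be created like this (the filename tcp_port.yaml is illustrative, not the exact name used):

$ kubectl apply -f tcp_port.yaml
service/example-svc created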
Modifying the service with this yaml reports a successful configuration, but that is not true: the dnsudp port is missing afterwards.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: exampledeployment
  name: example-svc
spec:
  ports:
  - name: dnstcp
    port: 53
    protocol: TCP
    targetPort: 53
  - name: dnsudp
    port: 53
    protocol: UDP
    targetPort: 53
  selector:
    app: exampledeployment
$ kubectl apply -f tcp_udp_ports.yaml
service/example-svc configured
$ kubectl get svc example-svc -o yaml
apiVersion: v1
kind: Service
...
  ports:
  - name: dnstcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    app: exampledeployment
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
As the output shows, the dnsudp udp/53 port was not configured in the service, and no error or warning message was reported about it.
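A quick way to confirm that only the TCP port survived is a jsonpath query (the expression below is just one illustrative way to check):

$ kubectl get svc example-svc -o jsonpath='{range .spec.ports[*]}{.name} {.port}/{.protocol}{"\n"}{end}'
dnstcp 53/TCP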
Anything else we need to know?:
The service can be created from scratch with kubectl create, and both ports are configured correctly:
$ kubectl create -f tcp_udp_ports.yaml
service/example-svc created
$ kubectl get svc example-svc -o yaml
apiVersion: v1
kind: Service
...
  ports:
  - name: dnstcp
    port: 53
    protocol: TCP
    targetPort: 53
  - name: dnsudp
    port: 53
    protocol: UDP
    targetPort: 53
  selector:
    app: exampledeployment
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
If the service type is changed from ClusterIP to LoadBalancer, creating the service fails with a validation error, while modifying an existing service silently drops the UDP port just as above:
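For reference, the only change tested was the type field in the spec (a sketch; the rest of the manifest is identical to tcp_udp_ports.yaml above):

spec:
  type: LoadBalancer
  # ports and selector unchanged from tcp_udp_ports.yaml above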
$ kubectl apply -f tcp_udp_ports.yaml
The Service "example-svc" is invalid: spec.ports: Invalid value: []core.ServicePort{core.ServicePort{Name:"dnstcp", Protocol:"TCP", AppProtocol:(*string)(nil), Port:53, TargetPort:intstr.IntOrString{Type:0, IntVal:53, StrVal:""}, NodePort:0}, core.ServicePort{Name:"dnsudp", Protocol:"UDP", AppProtocol:(*string)(nil), Port:53, TargetPort:intstr.IntOrString{Type:0, IntVal:53, StrVal:""}, NodePort:0}}: cannot create an external load balancer with mix protocols
Environments:
- Kubernetes version (use kubectl version): v1.19.3, v1.20.1
- Cloud provider or hardware configuration: (no cloud provider) VMs on VMware and QEMU/KVM
- OS (e.g. cat /etc/os-release): Ubuntu 20.04 LTS (Focal Fossa), Debian GNU/Linux 10 (buster)
- Kernel (e.g. uname -a): 4.19.0-12-amd64, 5.4.0-33-generic
- Install tools: kubeadm