**What happened**:

Removing a duplicate entry from the `imagePullSecrets` field causes the entire field to become `null`.

**What you expected to happen**:

The `imagePullSecrets` field should contain the updated list instead of `null`.
**How to reproduce it (as minimally and precisely as possible)**:

- Push a container image to a private registry (in this case, we'll push the pause image to `example.azurecr.io/kubernetes/pause:3.0`)
- Create a Docker credentials Secret with permission to pull the image (in this case, the Secret is named `example-secret`)
- Create a Deployment manifest with the `imagePullSecrets` entry defined twice:
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pause
  namespace: kube-system
  labels:
    app: pause
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pause
  template:
    metadata:
      labels:
        app: pause
    spec:
      imagePullSecrets:
        - name: example-secret
        - name: example-secret
      containers:
        - name: pause
          image: example.azurecr.io/kubernetes/pause:3.0
```
- Apply the manifest with `kubectl apply`, and check the current value of `imagePullSecrets`:
```
[kube]$ kubectl -n kube-system get deployment pause -o json | jq '.spec.template.spec.imagePullSecrets'
[
  {
    "name": "example-secret"
  },
  {
    "name": "example-secret"
  }
]
```
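As an aside, duplicates like this can be caught before the manifest ever reaches the cluster. A minimal sketch of such a pre-apply check (hypothetical helper, not part of kubectl; the function name and the plain-dict input are illustrative):

```python
from collections import Counter

def duplicate_pull_secrets(pod_spec):
    """Return Secret names listed more than once in imagePullSecrets."""
    names = [s["name"] for s in pod_spec.get("imagePullSecrets", [])]
    return sorted(n for n, count in Counter(names).items() if count > 1)

# Pod spec mirroring the manifest above, with the duplicated entry.
pod_spec = {
    "imagePullSecrets": [
        {"name": "example-secret"},
        {"name": "example-secret"},
    ],
}
print(duplicate_pull_secrets(pod_spec))  # -> ['example-secret']
```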
- Realize our mistake and remove the duplicate `imagePullSecrets` element from the manifest:
```yaml
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pause
  namespace: kube-system
  labels:
    app: pause
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pause
  template:
    metadata:
      labels:
        app: pause
    spec:
      imagePullSecrets:
        - name: example-secret
      containers:
        - name: pause
          image: example.azurecr.io/kubernetes/pause:3.0
```
- Apply the manifest with `kubectl apply`, and check the current value of `imagePullSecrets`:
```
[kube]$ kubectl -n kube-system get deployment pause -o json | jq '.spec.template.spec.imagePullSecrets'
null
```
The `imagePullSecrets` field is now entirely missing! Pods will fail with `ImagePullBackOff` on any Node where the pause image is not cached.
Note that applying the manifest again will fix the problem:
```
[kube]$ kubectl apply -f pause.yaml
deployment.apps/pause configured
[kube]$ kubectl -n kube-system get deployment pause -o json | jq '.spec.template.spec.imagePullSecrets'
[
  {
    "name": "example-secret"
  }
]
```
In other words, `kubectl apply` is not idempotent in this case.
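One plausible mechanism: strategic merge patch merges `imagePullSecrets` on the `name` key, and a `$patch: delete` directive removes *every* element with a matching key. Diffing the old manifest (two entries) against the new one (one entry) would produce a patch that deletes one occurrence by key, while the surviving entry, being unchanged, does not appear in the patch at all. A simplified simulation of that list handling (illustrative only, not the actual apiserver code):

```python
def apply_list_patch(live, patch, merge_key="name"):
    """Apply a strategic-merge-style patch to a list keyed by merge_key."""
    result = list(live)
    for entry in patch:
        if entry.get("$patch") == "delete":
            # A delete directive removes EVERY element whose merge key
            # matches, so both duplicate entries vanish at once.
            result = [e for e in result if e.get(merge_key) != entry[merge_key]]
        elif not any(e.get(merge_key) == entry[merge_key] for e in result):
            result.append(entry)
    return result or None  # an empty list comes back as null

live = [{"name": "example-secret"}, {"name": "example-secret"}]
# The unchanged entry is absent from the patch; only the delete remains.
patch = [{"name": "example-secret", "$patch": "delete"}]
print(apply_list_patch(live, patch))  # -> None
```

This would explain why the second `kubectl apply` repairs the field: by then the live list is `null`, so the computed patch simply adds the single entry back.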
**Anything else we need to know?**:

Tested on Azure, but the problem probably exists on other providers too.

**Environment**:
- Kubernetes version (use `kubectl version`):

  ```
  Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-21T14:51:23Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}
  Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.9", GitCommit:"a17149e1a189050796ced469dbd78d380f2ed5ef", GitTreeState:"clean", BuildDate:"2020-04-16T11:36:15Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
  ```
- Cloud provider or hardware configuration: Azure
- OS (e.g: `cat /etc/os-release`): Flatcar Linux 2345.3.0
- Kernel (e.g. `uname -a`):
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others: