
v1beta1 BackendConfig securityPolicy.name will be removed after apply #1508

Closed
yokomotod opened this issue Jul 14, 2021 · 16 comments
Labels: lifecycle/stale


yokomotod commented Jul 14, 2021

Issue

If I create a BackendConfig with apiVersion: cloud.google.com/v1beta1, the spec.securityPolicy.name field is dropped.

As a result, the associated load balancer is never registered as a target of the Cloud Armor security policy.
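
One way to confirm this on the GCP side is to check whether the load balancer's backend service has a policy attached. A minimal sketch, where BACKEND_SERVICE_NAME is a placeholder for the backend service GKE created for the Service; an empty result means no Cloud Armor policy is attached:

$ gcloud compute backend-services describe BACKEND_SERVICE_NAME \
    --global --format="value(securityPolicy)"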

Reproduce

apply

$ kubectl apply -f - <<END
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: test
spec:
  securityPolicy:
    name: test
END

then get

$ kubectl get backendconfig test -o yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cloud.google.com/v1beta1","kind":"BackendConfig","metadata":{"annotations":{},"name":"test","namespace":"default"},"spec":{"securityPolicy":{"name":"test"}}}
  creationTimestamp: "2021-07-14T17:12:53Z"
  generation: 1
  managedFields:
  - apiVersion: cloud.google.com/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        .: {}
        f:securityPolicy: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-07-14T17:12:53Z"
  name: test
  namespace: default
  resourceVersion: "206047728"
  selfLink: /apis/cloud.google.com/v1/namespaces/default/backendconfigs/test
  uid: 5e8e15fb-316d-418b-a9e5-32e96c9be173
spec:
  securityPolicy: {}

Note that securityPolicy is now {}; the name field has been dropped.

Expected

The applied resource should have:

spec:
  securityPolicy:
    name: test

Environment

GKE v1.19.10-gke.1000 (not autopilot)

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.11", GitCommit:"c6a2f08fc4378c5381dd948d9ad9d1080e3e6b33", GitTreeState:"clean", BuildDate:"2021-05-12T12:27:07Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.10-gke.1000", GitCommit:"fb668c07d234a3f2c6b9f7a57e030715a6074115", GitTreeState:"clean", BuildDate:"2021-04-29T09:17:21Z", GoVersion:"go1.15.10b5", Compiler:"gc", Platform:"linux/amd64"}
yokomotod (Author) commented

Applying with v1 works fine:

$ kubectl apply -f - <<END
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: test
spec:
  securityPolicy:
    name: test
END

$ kubectl get backendconfig test -o yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cloud.google.com/v1","kind":"BackendConfig","metadata":{"annotations":{},"name":"test","namespace":"default"},"spec":{"securityPolicy":{"name":"test"}}}
  creationTimestamp: "2021-07-14T17:18:02Z"
  generation: 1
  managedFields:
  - apiVersion: cloud.google.com/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        .: {}
        f:securityPolicy:
          .: {}
          f:name: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-07-14T17:18:02Z"
  name: test
  namespace: default
  resourceVersion: "206050178"
  selfLink: /apis/cloud.google.com/v1/namespaces/default/backendconfigs/test
  uid: 78138b26-39bf-4d5a-ae9f-4a14b090e362
spec:
  securityPolicy:
    name: test

Using server-side apply also works fine, even with v1beta1:

$ kubectl apply --server-side=true -f - <<END
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: test
spec:
  securityPolicy:
    name: test
END

$ kubectl get backendconfig test -o yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  creationTimestamp: "2021-07-14T17:40:49Z"
  generation: 1
  managedFields:
  - apiVersion: cloud.google.com/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:securityPolicy:
          f:name: {}
    manager: kubectl
    operation: Apply
    time: "2021-07-14T17:40:49Z"
  name: test
  namespace: default
  resourceVersion: "206061014"
  selfLink: /apis/cloud.google.com/v1/namespaces/default/backendconfigs/test
  uid: b9217a79-31c8-4f09-9784-dc96dc98ca2c
spec:
  securityPolicy:
    name: test


sekinet commented Jul 16, 2021

We've got the same issue with v1.19.10-gke.1600:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.10-gke.1600", GitCommit:"7b8e568a7fb4c9d199c2ba29a5f7d76f6b4341c2", GitTreeState:"clean", BuildDate:"2021-05-07T09:18:53Z", GoVersion:"go1.15.10b5", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.21) and server (1.19) exceeds the supported minor version skew of +/-1

yokomotod (Author) commented

Reproduced with GKE 1.18.20-gke.900. 1.17.17-gke.9100 seems fine.

I think this may be a security risk.
If a user upgrades a cluster from 1.17 to 1.18+ and reapplies an existing v1beta1 BackendConfig that has a securityPolicy, the load balancer will lose its Cloud Armor protection.
So, for example, an IP-restricted resource could be exposed to the public.
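
A quick way to audit a cluster for BackendConfigs that already lost the field (a sketch using kubectl's built-in custom-columns output; an empty POLICY column means the name was pruned):

$ kubectl get backendconfig --all-namespaces \
    -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,POLICY:.spec.securityPolicy.name'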

yokomotod (Author) commented

I found that 1.21.2-gke.600 has already solved this problem.

I don't know which change fixed the issue, but I hope it will be backported to 1.18-1.20.

rramkumar1 (Contributor) commented

@skmatti

skmatti (Contributor) commented Aug 3, 2021

The issue resulted from missing validation for the securityPolicy field in the v1beta1 CRD.

The fix (#1512) was backported to GKE v1.20.9-gke.900+ and will be backported to 1.18 and 1.19 as well.
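
For illustration, this failure mode matches the API server's structural-schema pruning: when a CRD version declares securityPolicy as a bare object with no properties, undeclared sub-fields such as name are silently dropped on write. A hypothetical sketch of the schema shape involved (not the actual #1512 diff):

# Hypothetical v1beta1 schema excerpt before the fix:
# with no declared properties, the API server prunes sub-fields on write.
securityPolicy:
  type: object

# After adding validation, the declared property survives an apply:
securityPolicy:
  type: object
  properties:
    name:
      type: string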

skmatti (Contributor) commented Aug 3, 2021

/assign

yokomotod (Author) commented

@skmatti Thanks!!


jbuck commented Aug 10, 2021

> Reproduced with GKE 1.18.20-gke.900. 1.17.17-gke.9100 seems fine.
>
> I think this may be a security risk. If a user upgrades a cluster from 1.17 to 1.18+ and reapplies an existing v1beta1 BackendConfig that has a securityPolicy, the load balancer will lose its Cloud Armor protection. So, for example, an IP-restricted resource could be exposed to the public.

This happened to us. We use Cloud Armor to restrict admin services to VPN-only. Could y'all post a notice on the GKE release notes with this bug and the workaround?

Thanks, Jon

bowei (Member) commented Aug 12, 2021

The release notes should be updated soon, @skmatti.

skmatti (Contributor) commented Aug 13, 2021

Our docs have been updated with the issue details: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#armor_fields_removed

A release note about the fix in GKE v1.20.9-gke.900 will be sent early next week.

skmatti (Contributor) commented Aug 17, 2021

Release notes for GKE v1.20: https://cloud.google.com/kubernetes-engine/docs/release-notes#August_17_2021

I'll keep this thread posted on the release notes for 1.19 and 1.18 in the next couple of weeks.

rrondeau commented

I just found this; it's huge for us.
All our platforms were exposed without policies 😨
We are on v1.18.20-gke.900 and v1.20.8-gke.900.


tacumai commented Aug 25, 2021

I've faced the same problem on 1.20.8-gke.2100.
Thanks for the issue; it pointed me to the solution.

mark-church commented

GKE 1.19.x is now patched as of GKE 1.19.14-gke.301 and later. These versions are no longer impacted by this issue and can use v1beta1 BackendConfig resources without any Cloud Armor issues.

GKE 1.18.x is targeted to be patched within the next 2 weeks. GKE 1.18.x clusters should continue using v1 BackendConfig resources as a workaround until GKE 1.18.x is patched.
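
Concretely, that workaround is the v1 apply yokomotod showed earlier in this thread (the names here are the thread's example values):

$ kubectl apply -f - <<END
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: test
spec:
  securityPolicy:
    name: test
END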

k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Dec 12, 2021.
freehan closed this as completed on Dec 21, 2021.