
Windows node PLEG not healthy during load test with 1pod/s rate #88153

Closed
YangLu1031 opened this issue Feb 14, 2020 · 20 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/windows Categorizes an issue or PR as relevant to SIG Windows. triage/accepted Indicates an issue or PR is ready to be actively worked on.
Comments

@YangLu1031
Contributor

YangLu1031 commented Feb 14, 2020

What happened:
When running a Windows pod startup latency load test with a pod creation rate of 1 pod/s, the kubelet on the node becomes NotReady with the error message PLEG is not healthy: pleg was last seen active 3m8.068354s ago; threshold is 3m0s

Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 04 Feb 2020 19:11:39 -0800   Tue, 04 Feb 2020 19:11:39 -0800   RouteCreated                 NodeController create implicit route
  MemoryPressure       False   Fri, 07 Feb 2020 08:04:21 -0800   Wed, 05 Feb 2020 06:03:47 -0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Fri, 07 Feb 2020 08:04:21 -0800   Wed, 05 Feb 2020 06:03:47 -0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Fri, 07 Feb 2020 08:04:21 -0800   Wed, 05 Feb 2020 06:03:47 -0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                False   Fri, 07 Feb 2020 08:04:21 -0800   Fri, 07 Feb 2020 08:04:21 -0800   KubeletNotReady              PLEG is not healthy: pleg was last seen active 3m8.068354s ago; threshold is 3m0s
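(For reference, the conditions table above is in the format printed by kubectl describe node; on the affected node it can be checked with the command below, where the node name is a placeholder.)

kubectl describe node <windows-node-name>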

What you expected to happen:
For comparison, the Linux node load test still works fine at 5 pods/s.

Is there anything we can do to improve the performance on Windows nodes?

How to reproduce it (as minimally and precisely as possible):
For simplicity, I created a script to reproduce it:
https://gist.github.com/YangLu1031/a318ad5e92ae1e61102801fdb9109788
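The gist itself is not reproduced here; as a rough, hypothetical sketch (the file name pod-template.yaml, the {NUM} placeholder, and the pod count are illustrative, not the actual gist contents), a 1 pod/s creation loop looks like:

# create N pods at a rate of one per second from a fixed pod template
N=100
for i in $(seq 1 "$N"); do
  sed "s/{NUM}/$i/" pod-template.yaml | kubectl create -f -
  sleep 1
done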

Anything else we need to know?:
#45419

Scenarios in which this failure happens:
There appear to be situations in our current GKE Windows clusters where this issue can occur and then cause cascading / continuous node failures:

  1. A user slowly brings up 100 pods on Windows Node A in their cluster.
  2. Node A restarts for some reason: it crashes and reboots, the user manually restarts it, whatever.
  3. Node A comes back up in a few minutes and rejoins the cluster.
  4. Kubernetes tries to restart all 100 pods on Node A, all at the same time. Because the pods are not started in a rate-limited manner this time, this leads to the PLEG not healthy issue, and Node A becomes Unhealthy.
  5. Kubernetes notices that Node A is now unhealthy, stops trying to execute the 100 pods on Node A and now tries to run them all on Windows Node B. Now Node B hits the PLEG not healthy issue, ...

Steps to reproduce these cascading node failures through a Deployment & ReplicationController.

  1. Created a cluster with 2 Windows nodes.
  2. Slowly brought up 30 pods on Windows node A through a Deployment by gradually updating replicas 10 -> 20 -> 30.
  3. Killed the kubelet on node A to simulate a crash; node A became NotReady.
  4. After 5 minutes, all 30 pods were scheduled onto node B at the same time, and node B then became unhealthy.
  5. Also tried setting a RollingUpdate strategy, but it did not seem to help in this scenario (full Deployment sketch below the list):
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxSurge: 2         # how many pods we can add at a time
        maxUnavailable: 0   # how many pods can be unavailable during the rolling update
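For context, here is a minimal sketch of where that strategy block sits in a full Deployment manifest (the name, replica count, and command are illustrative, not the exact spec used in this test; the image matches the one used elsewhere in this issue):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-test            # hypothetical name
spec:
  replicas: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  selector:
    matchLabels:
      app: win-test
  template:
    metadata:
      labels:
        app: win-test
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: test
        image: mcr.microsoft.com/windows/servercore:ltsc2019
        command:
        - powershell.exe
        - -command
        - 'Write-Host "I started....."; while ($true) { Start-Sleep -Seconds 2 }'

Note that the RollingUpdate strategy only throttles rollouts of new Deployment revisions; it does not rate-limit pods being recreated after a node failure, which would explain why it does not help in this scenario.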

/sig windows
/cc @PatrickLang @dineshgovindasamy @pjh @yliaog @ddebroy

@YangLu1031 YangLu1031 added the kind/bug Categorizes issue or PR as related to a bug. label Feb 14, 2020
@k8s-ci-robot k8s-ci-robot added the sig/windows Categorizes an issue or PR as relevant to SIG Windows. label Feb 14, 2020
@AlexeyKasatkin

AlexeyKasatkin commented Mar 19, 2020

We saw the same issue in different pod scaling scenarios. The probability of "PLEG not healthy" was correlated with the scaling parameters ([number of pods]/[sec] or [number of pods]/[scaling step]) and with the Windows instance flavor (more vCPUs == better node "durability"). Cloud providers in use: AWS and Azure.
The environment is based on Docker.
@YangLu1031 Do you use Docker?

@YangLu1031
Contributor Author

Yes, we use Docker. I'm wondering whether there is anything we can do to improve it. Currently we use a 1 pod/33 s scaling speed for Windows, compared to 5 pods/s for Linux.

@khatrig

khatrig commented Jul 13, 2020

I'm seeing this on Windows nodes frequently. RDP/SSH gives permission denied; a node restart seems to be the only workaround.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 11, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 10, 2020
@immuzz immuzz moved this from Backlog (v1.20) to Open backport PRs in SIG-Windows Dec 3, 2020
@immuzz immuzz moved this from Open backport PRs to Done (v1.21) in SIG-Windows Dec 3, 2020
@immuzz immuzz moved this from Done (v1.21) to Backlog (v1.21) in SIG-Windows Dec 3, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

SIG-Windows automation moved this from Backlog (v1.21) to Done (v1.21) Dec 10, 2020
@ibabou
Contributor

ibabou commented Aug 31, 2021

I've been testing recently with containerd to see whether the issue still reproduces. Based on that testing, the errors are different, but we still face the same underlying problem: high pod-creation throughput is not possible on Windows nodes.

Here is a recent test on the latest 1.23 alpha and containerd 1.5.4 (GCE provider):

$ export NUM_NODES=1
export NUM_WINDOWS_NODES=2
export KUBE_GCE_ENABLE_IP_ALIASES=true
export KUBERNETES_NODE_PLATFORM=windows
export LOGGING_STACKDRIVER_RESOURCE_TYPES=new
export KUBE_WINDOWS_CONTAINER_RUNTIME=containerd
export NODE_SIZE=n2-standard-8 
export WINDOWS_NODE_IMAGE=windows-server-2019-dc-core-v20210713
$ PROJECT=<project> ./cluster/kube-up.sh

Kernel Version: 10.0 17763 (17763.1.amd64fre.rs5_release.180914-1434)

$ kubectl get nodes
NAME                                 STATUS                     ROLES    AGE     VERSION
kubernetes-master                    Ready,SchedulingDisabled   <none>   5m35s   v1.23.0-alpha.0.544+cde45fb161c5a4
kubernetes-minion-group-wmtl         Ready                      <none>   5m21s   v1.23.0-alpha.0.544+cde45fb161c5a4
kubernetes-windows-node-group-n872   Ready                      <none>   34s     v1.23.0-alpha.0.544+cde45fb161c5a4
kubernetes-windows-node-group-x79f   Ready                      <none>   36s     v1.23.0-alpha.0.544+cde45fb161c5a4

The win1809 taint is removed from the Windows nodes:
  taints:
  - effect: NoSchedule
    key: node.kubernetes.io/os
    value: win1809
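(If done manually, removing that taint would look roughly like the command below, where the node name is a placeholder.)

kubectl taint nodes <windows-node-name> node.kubernetes.io/os=win1809:NoSchedule-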

I created 40 pods against each of the 2 Windows nodes (the tests ran serially), with a 10-second delay between creations in the first test and a 5-second delay in the second test. Also, an initial deployment with the same pod spec was done beforehand to get the image cached on the node.
Here is the script & spec used:

# $1 -> number of pods
# $2 -> node name
# $3 -> delay in seconds
cat test-many-pods.sh
for i in $(eval echo {1..$1}); do
  sed -e "s/{NUM}/$2-$i/" -e "s/{NODE_NAME}/$2/" test-one-pod.yaml > "test-one-pod-$2-$i.yaml"
  echo "executing: kubectl create -f test-one-pod-$2-$i.yaml"
  kubectl create -f "test-one-pod-$2-$i.yaml"
  sleep "$3"
done
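Assuming the script above is saved as test-many-pods.sh and made executable, the two runs described would have been invoked roughly as follows (node names taken from the kubectl get nodes output above):

# first test: 40 pods, 10-second delay between creations
./test-many-pods.sh 40 kubernetes-windows-node-group-n872 10
# second test: 40 pods, 5-second delay between creations
./test-many-pods.sh 40 kubernetes-windows-node-group-x79f 5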

cat test-one-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-seq-{NUM}
  labels:
    app: test-seq-{NUM}
spec:
  nodeName: {NODE_NAME}
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: test
    resources:
      requests:
        cpu: "120m"
        memory: "250Mi"
      limits:
        cpu: "190m"
        memory: "350Mi"
    command:
      - powershell.exe
      - -command
      - 'Write-Host "I started....."; while ($true) { nslookup google.com; Start-Sleep -Seconds 2; }'
    image: mcr.microsoft.com/windows/servercore:ltsc2019
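(The pod status transitions quoted below look like output from a wide watch, i.e. something like:)

kubectl get pods -o wide --watch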

In the first test, all 40 pods were able to get up and running normally, though there was a noticeable delay towards the last set of pods:

test-seq-kubernetes-windows-node-group-n872-1   0/1     Pending   0          0s    <none>   kubernetes-windows-node-group-n872   <none>           <none>
test-seq-kubernetes-windows-node-group-n872-1   0/1     ContainerCreating   0          0s    <none>   kubernetes-windows-node-group-n872   <none>           <none>
test-seq-kubernetes-windows-node-group-n872-1   1/1     Running             0          6m33s   10.64.1.62   kubernetes-windows-node-group-n872   <none>           <none>
test-seq-kubernetes-windows-node-group-n872-2   0/1     Pending             0          0s      <none>       kubernetes-windows-node-group-n872   <none>           <none>
test-seq-kubernetes-windows-node-group-n872-2   0/1     ContainerCreating   0          0s      <none>       kubernetes-windows-node-group-n872   <none>           <none>
test-seq-kubernetes-windows-node-group-n872-2   1/1     Running             0          6s      10.64.1.228   kubernetes-windows-node-group-n872   <none>           <none>
....
test-seq-kubernetes-windows-node-group-n872-35   1/1     Running             0          3m44s   10.64.1.116   kubernetes-windows-node-group-n872   <none>           <none>
test-seq-kubernetes-windows-node-group-n872-36   1/1     Running             0          3m34s   10.64.1.94    kubernetes-windows-node-group-n872   <none>           <none>
test-seq-kubernetes-windows-node-group-n872-34   1/1     Running             0          3m56s   10.64.1.101   kubernetes-windows-node-group-n872   <none>           <none>
test-seq-kubernetes-windows-node-group-n872-38   1/1     Running             0          3m59s   10.64.1.110   kubernetes-windows-node-group-n872   <none>           <none>
test-seq-kubernetes-windows-node-group-n872-40   1/1     Running             0          3m39s   10.64.1.142   kubernetes-windows-node-group-n872   <none>           <none>
test-seq-kubernetes-windows-node-group-n872-39   1/1     Running             0          3m55s   10.64.1.6     kubernetes-windows-node-group-n872   <none>           <none>
test-seq-kubernetes-windows-node-group-n872-37   1/1     Running             0          4m15s   10.64.1.86    kubernetes-windows-node-group-n872   <none>           <none>

In the second test, the first ~25-30 pods/containers were created and started with no issues, but the remaining ones went into a loop of CreateContainerError/RunContainerError until finally succeeding after 2-3 retries. It took ~12 minutes for the last pod to start up successfully (measured from its initial creation). The failure in the last state & events is as follows:

    State:          Waiting
      Reason:       CreateContainerError
    Last State:     Terminated
      Exit Code:    0
      Started:      Mon, 01 Jan 0001 00:00:00 +0000
      Finished:     Mon, 01 Jan 0001 00:00:00 +0000
    Ready:          False

Events:
  Type     Reason   Age                    From     Message
  ----     ------   ----                   ----     -------
  Warning  Failed   5m55s                  kubelet  Error: failed to reserve container name "test_test-seq-kubernetes-windows-node-group-x79f-26_default_b6252871-8191-4f24-a333-bd46e6b80b57_1": name "test_test-seq-kubernetes-windows-node-group-x79f-26_default_b6252871-8191-4f24-a333-bd46e6b80b57_1" is reserved for "1764669bb39acf4560ab1736fc5bc1f7a62749569ffe246357e9df949585778f"
  Warning  Failed   3m13s (x3 over 7m57s)  kubelet  Error: context deadline exceeded

I can see how startup becomes significantly delayed once ~20 pods are reached; the delay then builds up, and pods start failing:

test-seq-kubernetes-windows-node-group-x79f-10   1/1     Running             0          116s    10.64.2.47    kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-32   0/1     Pending             0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-32   0/1     ContainerCreating   0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-11   1/1     Running             0          117s    10.64.2.131   kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-33   0/1     Pending             0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-33   0/1     ContainerCreating   0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-34   0/1     Pending             0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-34   0/1     ContainerCreating   0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-35   0/1     Pending             0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-35   0/1     ContainerCreating   0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-36   0/1     Pending             0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-36   0/1     ContainerCreating   0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-37   0/1     Pending             0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-37   0/1     ContainerCreating   0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-38   0/1     Pending             0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-38   0/1     ContainerCreating   0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-39   0/1     Pending             0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-39   0/1     ContainerCreating   0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-40   0/1     Pending             0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-40   0/1     ContainerCreating   0          0s      <none>        kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-13   1/1     Running             0          2m27s   10.64.2.61    kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-12   1/1     Running             0          2m45s   10.64.2.231   kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-15   1/1     Running             0          2m56s   10.64.2.31    kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-14   1/1     Running             0          3m5s    10.64.2.161   kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-19   0/1     CreateContainerError   0          3m20s   10.64.2.98    kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-16   1/1     Running                0          3m43s   10.64.2.97    kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-19   0/1     CreateContainerError   0          3m35s   10.64.2.98    kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-20   0/1     CreateContainerError   0          3m32s   10.64.2.18    kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-21   0/1     CreateContainerError   0          3m27s   10.64.2.105   kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-21   0/1     CreateContainerError   0          3m37s   10.64.2.105   kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-20   0/1     CreateContainerError   0          3m48s   10.64.2.18    kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-18   1/1     Running                0          4m      10.64.2.5     kubernetes-windows-node-group-x79f   <none>           <none>
.....
test-seq-kubernetes-windows-node-group-x79f-26   0/1     RunContainerError      2          8m41s   10.64.2.80    kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-26   1/1     Running                3          10m     10.64.2.80    kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-29   1/1     Running                2          10m     10.64.2.69    kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-25   1/1     Running                2          11m     10.64.2.243   kubernetes-windows-node-group-x79f   <none>           <none>
test-seq-kubernetes-windows-node-group-x79f-28   1/1     Running                2          11m     10.64.2.176   kubernetes-windows-node-group-x79f   <none>           <none>

The CPU on this second node stayed under 50%.
(CPU utilization screenshot omitted)

In the kubelet logs, I see these repeated errors:

...start failed in pod test-seq-kubernetes-windows-node-group-x79f-29_default(347ac790-aab7-4b25-80c3-b32dc6b3bf38): RunContainerError: context deadline exceeded
E0831 02:13:36.388255    1764 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"test\" with RunContainerError: \"context deadline exceeded\"" pod="default/test-seq-kubernetes-windows-node-group-x79f-29" podUID=347ac790-aab7-4b25-80c3-b32dc6b3bf38
E0831 02:13:42.017659    1764 remote_runtime.go:253] "StartContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="0f34f7488b1d74adefb4904352cd4bce465944480f0f7aff37ecfdedbf891f79"

So what throttling is recommended on Windows nodes to avoid big delays and failures - should we have guidelines? And is it possible to get rid of this limitation or improve it, or maybe modify the scheduler in the future?
At least with the latest containerd it seems we don't see the node going into a bad state, but the test above is not the extreme case (it gets worse if the image is not cached, or if creation is 1 pod per second).

Earlier tests were done on GKE clusters (v1.21 + containerd 1.5.2).

@ibabou
Contributor

ibabou commented Aug 31, 2021

/reopen

@k8s-ci-robot
Contributor

@ibabou: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Aug 31, 2021
SIG-Windows automation moved this from Done (v1.21) to Backlog Aug 31, 2021
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Aug 31, 2021
@ibabou
Contributor

ibabou commented Aug 31, 2021

Hey @jsturtevant @immuzz, I've reopened the issue as we discussed. I also included a repro on the latest k8s + containerd 1.5.4 - the results were slightly better, but I still saw similar context-deadline errors.

@jayunit100
Member

Let's build an e2e test for the pseudocode in #88153 (comment).

@marosset
Contributor

/remove-lifecycle rotten
/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Sep 21, 2021
@k8s-ci-robot k8s-ci-robot removed the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Sep 21, 2021
@claudiubelu
Contributor

/cc @claudiubelu

@NandoTheessen

Is there currently a way to limit the new pods per minute/second?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 17, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 19, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

SIG-Windows automation moved this from Backlog (issues) to Done (v1.23) Apr 18, 2022
@silenceper

silenceper commented Apr 25, 2022

Any update on this, @YangLu1031? Did you solve the problem?
