KO does not follow k8s version skew policy. Cannot upgrade CP when kubelet has an older minor release. #3010


Description

@PoudNL

Discussed in #3009

Originally posted by PoudNL January 12, 2024
Currently we have a KubeOne-provisioned cluster running Kubernetes 1.23.17. The cluster has 3 control-plane nodes and approx. 35 worker nodes. Normally we upgrade the control-plane nodes to the next minor release and then roll over the worker nodes to the same version.
That last step takes a couple of days in our environment, and unfortunately speeding up that process is currently not possible.

Because we are running such an old version of Kubernetes, we would like to upgrade the control planes to 1.24.x and then immediately to 1.25.x, without upgrading the worker nodes in between. This is in line with the Kubernetes version skew policy: the kubelet may be up to two minor versions older than the control-plane components, so 1.23.17 kubelets are still supported by a 1.25.x control plane.

The KubeOne documentation states that it follows the Kubernetes version skew policy, but unfortunately the upgrade from 1.24.x -> 1.25.x with KubeOne 1.6.x fails due to the version skew. kubeadm reports an error (bypassable with --force) to let the administrator know the upgrade is not recommended, and that error causes KubeOne to fail the process.

[192.168.199.141] [upgrade] Running cluster health checks
[192.168.199.141] [upgrade/version] You have chosen to change the cluster version to "v1.25.16"
[192.168.199.141] [upgrade/versions] Cluster version: v1.24.17
[192.168.199.141] [upgrade/versions] kubeadm version: v1.25.16
[192.168.199.141] [upgrade/version] FATAL: the --version argument is invalid due to these errors:
[192.168.199.141]
[192.168.199.141]       - There are kubelets in this cluster that are too old that have these versions [v1.23.17]
[192.168.199.141]
[192.168.199.141] Can be bypassed if you pass the --force flag
[192.168.199.141] To see the stack trace of this error execute with --v=5 or higher
WARN[21:59:12 CET] Task failed, error was: runtime: running task on "192.168.199.141"
ssh: running kubeadm upgrade on control plane leader
ssh: popen
Process exited with status 1

How can I make KubeOne force the upgrade? I found some documentation about using kubeone upgrade instead of kubeone apply, but that command is deprecated. I also don't see anything in the code that would add the --force flag to the kubeadm command during an upgrade.
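As the kubeadm output above notes, the skew check "Can be bypassed if you pass the --force flag". Until KubeOne passes that flag through, one possible (unsupported) workaround is to run kubeadm manually on the control-plane leader. This is only a sketch; the host and version are taken from the log above, and whether a subsequent kubeone apply reconciles cleanly afterwards is untested:

```shell
# Unsupported workaround sketch: bypass the version-skew preflight check
# manually on the control-plane leader (host/version from the log above).
ssh 192.168.199.141

# Dry-run first to see what kubeadm would change.
sudo kubeadm upgrade plan v1.25.16

# --force skips the "kubelets in this cluster that are too old" error
# that KubeOne currently treats as fatal.
sudo kubeadm upgrade apply v1.25.16 --force
```

After the leader is upgraded this way, the remaining control-plane nodes would still need kubeadm upgrade node (or a KubeOne run) to follow.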

So, does anyone know how to work around this and make KubeOne actually follow the Kubernetes version skew policy?

How to reproduce

  • Use KubeOne 1.5 to create a cluster with 2 (or more) control-plane nodes and at least one worker node on version 1.23.x
  • With KubeOne 1.5: upgrade the control-plane nodes to 1.24.x (leave the worker(s) on 1.23.x)
  • Upgrade to KubeOne 1.6
  • With KubeOne 1.6: try upgrading the control-plane nodes from 1.24.x to 1.25.x
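The reproduction steps above can be sketched as a sequence of kubeone apply runs, bumping only the Kubernetes version in the manifest between runs. The manifest name and exact patch versions here are placeholders:

```shell
# Sketch of the reproduction steps. kubeone.yaml is a placeholder manifest
# whose versions.kubernetes field is edited between runs.

# 1. With the KubeOne 1.5 binary, provision the cluster at 1.23.x.
kubeone apply --manifest kubeone.yaml   # versions.kubernetes: "1.23.17"

# 2. Still on KubeOne 1.5, upgrade the control plane to 1.24.x,
#    leaving the worker MachineDeployments on 1.23.x.
kubeone apply --manifest kubeone.yaml   # versions.kubernetes: "1.24.17"

# 3. Switch to the KubeOne 1.6 binary and attempt 1.25.x; this run
#    fails with the kubeadm version-skew error shown earlier.
kubeone apply --manifest kubeone.yaml   # versions.kubernetes: "1.25.16"
```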
