
Upgrade is not updating KubeadmConfigSpec.ClusterConfiguration.KubernetesVersion #11344

Description

@gandhisagar

What steps did you take and what happened?

We are upgrading a Kubernetes cluster deployed using Cluster API (CAPV, on vSphere infrastructure).

As part of the upgrade, we are applying the following changes:

1. Applying the clusterctl upgrade plan
2. Changing the pre/post kubeadm commands
3. Changing spec.version (e.g. from v1.29.3 to v1.30.4; see the sketch below)

The cluster upgrades successfully and all nodes are at v1.30.4, but KubeadmConfigSpec.ClusterConfiguration.KubernetesVersion is not updated automatically.
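For reference, the version bump in step 3 was an edit to spec.version on the KubeadmControlPlane only; a minimal sketch of that kind of change (object and namespace names taken from this cluster) is:

    kubectl patch kubeadmcontrolplane ssp-cluster -n ssp-cluster \
      --type merge -p '{"spec":{"version":"v1.30.4"}}'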

KubeadmControlPlane instance:

spec:
    kubeadmConfigSpec:
      clusterConfiguration:
        apiServer:
          extraArgs:
            cloud-provider: external
        controllerManager:
          extraArgs:
            cloud-provider: external
        dns: {}
        etcd:
          local:
            extraArgs:
              election-timeout: "2500"
              heartbeat-interval: "500"
        kubernetesVersion: v1.29.3   # <-- still the old version
        networking: {}
        scheduler: {}
      files:
      - content: |
          apiVersion: v1
          kind: Pod
          metadata:
            creationTimestamp: null
            name: kube-vip
            namespace: kube-system
          spec:
            containers:
              - args:
                  - manager
                env:
                  - name: vip_arp
                    value: "true"
                  - name: port
                    value: "6443"
                  - name: vip_interface
                    value: ""
                  - name: vip_cidr
                    value: "32"
                  - name: cp_enable
                    value: "true"
                  - name: cp_namespace
                    value: kube-system
                  - name: vip_ddns
                    value: "false"
                  - name: svc_enable
                    value: "false"
                  - name: svc_leasename
                    value: plndr-svcs-lock
                  - name: svc_election
                    value: "true"
                  - name: vip_leaderelection
                    value: "true"
                  - name: vip_leasename
                    value: plndr-cp-lock
                  - name: vip_leaseduration
                    value: "15"
                  - name: vip_renewdeadline
                    value: "10"
                  - name: vip_retryperiod
                    value: "2"
                  - name: address
                    value: 192.168.1.3
                  - name: prometheus_server
                    value: :2112
                image: sspi-test.broadcom.com/registry/kube-vip/kube-vip:v0.6.4
                imagePullPolicy: IfNotPresent
                name: kube-vip
                resources: {}
                securityContext:
                  capabilities:
                    add:
                      - NET_ADMIN
                      - NET_RAW
                volumeMounts:
                  - mountPath: /etc/kubernetes/admin.conf
                    name: kubeconfig
                  - mountPath: /etc/hosts
                    name: etchosts
            hostNetwork: true
            volumes:
              - hostPath:
                  path: /etc/kubernetes/admin.conf
                name: kubeconfig
              - hostPath:
                  path: /etc/kube-vip.hosts
                  type: File
                name: etchosts
          status: {}
        owner: root:root
        path: /etc/kubernetes/manifests/kube-vip.yaml
        permissions: "0644"
      - content: 127.0.0.1 localhost kubernetes
        owner: root:root
        path: /etc/kube-vip.hosts
        permissions: "0644"
      - content: |
         <removed>
        owner: root:root
        path: /etc/pre-kubeadm-commands/50-kube-vip-prepare.sh
        permissions: "0700"
      format: cloud-config
      initConfiguration:
        localAPIEndpoint: {}
        nodeRegistration:
          criSocket: /var/run/crio/crio.sock
          imagePullPolicy: IfNotPresent
          kubeletExtraArgs:
            cloud-provider: external
          name: '{{ local_hostname }}'
      joinConfiguration:
        discovery: {}
        nodeRegistration:
          criSocket: /var/run/crio/crio.sock
          imagePullPolicy: IfNotPresent
          kubeletExtraArgs:
            cloud-provider: external
          name: '{{ local_hostname }}'
      postKubeadmCommands:
      - removed
      preKubeadmCommands:
      - removed
      users:
      - name: capv
        sshAuthorizedKeys:
        -  removed
    machineTemplate:
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VSphereMachineTemplate
        name: ssp-cluster
        namespace: ssp-cluster
      metadata: {}
    replicas: 1
    rolloutStrategy:
      rollingUpdate:
        maxSurge: 1
      type: RollingUpdate
    version: v1.30.4   # <-- updated

The Machine object's spec.version is also v1.30.4:

spec:
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
      kind: KubeadmConfig
      name: ssp-cluster-rbjxg
      namespace: ssp-cluster
      uid: 2f9e1f34-c625-4b3d-a12d-1f2aa44ac084
    dataSecretName: ssp-cluster-rbjxg
  clusterName: ssp-cluster
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereMachine
    name: ssp-cluster-rbjxg
    namespace: ssp-cluster
    uid: fdf5bbf9-4fe9-4d58-9bad-d5efbb61326f
  nodeDeletionTimeout: 10s
  providerID: vsphere://42263451-3edc-5138-04d7-a7ea59b9946d
  version: v1.30.4
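All Machines report the new version; one way to confirm this from the management cluster (a sketch using standard kubectl custom-columns output, nothing assumed beyond the namespace above) is:

    kubectl get machines -n ssp-cluster \
      -o custom-columns=NAME:.metadata.name,VERSION:.spec.version,PHASE:.status.phase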

We are following this guide: https://cluster-api.sigs.k8s.io/tasks/upgrading-clusters#how-to-upgrade-the-kubernetes-control-plane-version

When we tried to update the field manually, the update was rejected because the field is forbidden to modify.

Any suggestions, or is there a specific upgrade step we are missing?

So far we have tried:

  1. Manual update: FAILED with error: spec.kubeadmConfigSpec.clusterConfiguration.kubernetesVersion: Forbidden: cannot be modified (see the sketch after this list)
  2. Force-reconcile: adding the annotation cluster.x-k8s.io/force-reconcile: "true" to the KubeadmControlPlane, with no luck
  3. Restarting all pods on the management cluster
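For clarity, attempts 1 and 2 were roughly the following commands (a sketch of what we ran, not an exact transcript):

    # Attempt 1: patch the field directly; this is what returns the Forbidden error above.
    kubectl patch kubeadmcontrolplane ssp-cluster -n ssp-cluster \
      --type merge -p '{"spec":{"kubeadmConfigSpec":{"clusterConfiguration":{"kubernetesVersion":"v1.30.4"}}}}'

    # Attempt 2: add the annotation mentioned above and wait for a reconcile.
    kubectl annotate kubeadmcontrolplane ssp-cluster -n ssp-cluster \
      cluster.x-k8s.io/force-reconcile="true" --overwrite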

What did you expect to happen?

We expected that, if KubeadmConfigSpec.ClusterConfiguration.KubernetesVersion cannot be modified directly, it would be updated to v1.30.4 automatically after the upgrade.

Cluster API version

clusterctl version:
clusterctl version: &version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.3", GitCommit:"965ffa1d94230b8127245df750a99f09eab9dd97", GitTreeState:"clean", BuildDate:"2024-03-12T17:15:08Z", GoVersion:"go1.21.8", Compiler:"gc", Platform:"linux/amd64"}

bootstrap-kubeadm: v1.7.1
cert-manager: v1.14.2
cluster-api: v1.7.1
control-plane-kubeadm: v1.7.1
infrastructure-vsphere: v1.10.0
ipam-incluster: v0.1.0

Kubernetes version

Upgrade from v1.29.3 to v1.30.4

Anything else you would like to add?

root@sspi-test:/image/VMware-SSP-Installer-5.0.0.0.0.80589143/phoenix# kubectl get kubeadmcontrolplane ssp-cluster -n ssp-cluster
NAME          CLUSTER       INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
ssp-cluster   ssp-cluster   true          true                   1          1       1         0             63m   v1.30.4

root@sspi-test:/image/VMware-SSP-Installer-5.0.0.0.0.80589143/phoenix# kubectl get cluster -A
NAMESPACE     NAME          CLUSTERCLASS   PHASE         AGE   VERSION
ssp-cluster   ssp-cluster                  Provisioned   63m
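For completeness, the stale field can also be read back directly from the KubeadmControlPlane; a one-line check along these lines (same object and namespace as above) still returns v1.29.3 for us:

    kubectl get kubeadmcontrolplane ssp-cluster -n ssp-cluster \
      -o jsonpath='{.spec.kubeadmConfigSpec.clusterConfiguration.kubernetesVersion}'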

Label(s) to be applied

/kind bug


Labels

kind/support, needs-triage, priority/backlog
