Description
What steps did you take and what happened?
Create an AKS cluster with MachinePools using CAPZ, then upgrade spec.version in one of the MachinePools.
What did you expect to happen?
We rely on status.observedGeneration and various status fields and conditions to determine whether a MachinePool is still being upgraded. This works well with MachineDeployments across many different infra providers, but for MachinePools there is no clear signal that an upgrade has started.
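For context, our wait check looks roughly like the following. This is a minimal sketch with simplified stand-in types (the real types live in Cluster API's API packages); it is not our actual code, just the shape of the predicate:

```go
package main

import "fmt"

// Condition and MachinePool are simplified stand-ins for the relevant
// Cluster API fields, defined here only to keep the sketch self-contained.
type Condition struct {
	Type   string
	Status string
}

type MachinePool struct {
	Generation         int64
	ObservedGeneration int64
	Conditions         []Condition
}

// upgradeComplete is the kind of check we rely on today: the controller has
// observed the latest spec AND the Ready condition is True. As the captures
// below show, this can pass before the infra provider reports the upgrade
// as in progress.
func upgradeComplete(mp MachinePool) bool {
	if mp.ObservedGeneration != mp.Generation {
		return false
	}
	for _, c := range mp.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	// State from the second capture below: generation already observed,
	// Ready still True, even though the upgrade hasn't actually started.
	mp := MachinePool{
		Generation:         3,
		ObservedGeneration: 3,
		Conditions:         []Condition{{Type: "Ready", Status: "True"}},
	}
	fmt.Println(upgradeComplete(mp)) // prints: true
}
```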
I've captured the MachinePool objects during an upgrade:
spec.version was upgraded to v1.28.3 but observedGeneration is still 2; this is expected, since the controllers haven't acted yet.
[2024-01-16 15:54:09] ---
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  creationTimestamp: "2024-01-16T23:45:43Z"
  finalizers:
  - machinepool.cluster.x-k8s.io
  generation: 3
  name: dkoshkin-az-upgrade-8
  namespace: default
  resourceVersion: "4509"
  uid: 612e7d87-b0d2-4da9-8174-9ca890642246
spec:
  template:
    spec:
      version: v1.28.3
status:
  availableReplicas: 3
  bootstrapReady: true
  conditions:
  - lastTransitionTime: "2024-01-16T23:51:06Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: BootstrapReady
  - lastTransitionTime: "2024-01-16T23:51:06Z"
    status: "True"
    type: InfrastructureReady
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: ReplicasReady
  infrastructureReady: true
  observedGeneration: 2
  phase: Running
  readyReplicas: 3
  replicas: 3
```
The controller picks up the spec change and reconciles it, updating observedGeneration to 3, which matches generation.
This is where I would expect to see some status change indicating that the spec is outdated and will be upgraded.
[2024-01-16 15:54:09] ---
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  creationTimestamp: "2024-01-16T23:45:43Z"
  finalizers:
  - machinepool.cluster.x-k8s.io
  generation: 3
  name: dkoshkin-az-upgrade-8
  namespace: default
  resourceVersion: "4510"
  uid: 612e7d87-b0d2-4da9-8174-9ca890642246
spec:
  template:
    spec:
      version: v1.28.3
status:
  availableReplicas: 3
  bootstrapReady: true
  conditions:
  - lastTransitionTime: "2024-01-16T23:51:06Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: BootstrapReady
  - lastTransitionTime: "2024-01-16T23:51:06Z"
    status: "True"
    type: InfrastructureReady
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: ReplicasReady
  infrastructureReady: true
  observedGeneration: 3
  phase: Running
  readyReplicas: 3
  replicas: 3
```
Then, about 10 seconds later, we get a status change with the Ready and InfrastructureReady conditions flipping to False. By this point our wait code has already exited, since it only checks for observedGeneration == generation and a True Ready condition.
[2024-01-16 15:54:21] ---
```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachinePool
metadata:
  creationTimestamp: "2024-01-16T23:45:43Z"
  finalizers:
  - machinepool.cluster.x-k8s.io
  generation: 3
  name: dkoshkin-az-upgrade-8
  namespace: default
  resourceVersion: "4574"
  uid: 612e7d87-b0d2-4da9-8174-9ca890642246
spec:
  template:
    spec:
      version: v1.28.3
status:
  availableReplicas: 3
  bootstrapReady: true
  conditions:
  - lastTransitionTime: "2024-01-16T23:54:21Z"
    message: agentpools creating or updating
    reason: Creating
    severity: Info
    status: "False"
    type: Ready
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: BootstrapReady
  - lastTransitionTime: "2024-01-16T23:54:21Z"
    message: agentpools creating or updating
    reason: Creating
    severity: Info
    status: "False"
    type: InfrastructureReady
  - lastTransitionTime: "2024-01-16T23:45:43Z"
    status: "True"
    type: ReplicasReady
  infrastructureReady: true
  observedGeneration: 3
  phase: Running
  readyReplicas: 3
  replicas: 3
```
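In other words, any polling of the form "observedGeneration has caught up and Ready is True" has a window of roughly 10 seconds where it returns a false positive. One possible workaround we've considered (my own assumption, not an upstream recommendation) is to additionally require that the Ready condition's lastTransitionTime is newer than the moment we applied the spec change. A minimal sketch, again with simplified stand-in types:

```go
package main

import (
	"fmt"
	"time"
)

// Condition mirrors just the fields we need from a Cluster API condition;
// it is a simplified stand-in to keep the sketch self-contained.
type Condition struct {
	Type               string
	Status             string
	LastTransitionTime time.Time
}

// readyAfter reports whether the Ready condition is True AND last
// transitioned after the moment we applied the spec change. This narrows
// (but does not fully close) the false-positive window described above.
func readyAfter(conds []Condition, specChangedAt time.Time) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True" && c.LastTransitionTime.After(specChangedAt)
		}
	}
	return false
}

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Hypothetical moment the spec change was applied, shortly before the
	// third capture above.
	specChangedAt := mustParse("2024-01-16T23:54:00Z")

	// Ready condition from the first two captures: it predates the spec
	// change, so it is treated as stale and rejected.
	stale := []Condition{{
		Type:               "Ready",
		Status:             "True",
		LastTransitionTime: mustParse("2024-01-16T23:51:06Z"),
	}}
	fmt.Println(readyAfter(stale, specChangedAt)) // prints: false
}
```

This still relies on the client remembering when it applied the change, which is exactly the kind of bookkeeping we'd prefer the status to make unnecessary.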
Cluster API version
v1.5.3
Kubernetes version
No response
Anything else you would like to add?
We would like to avoid solving this with "sleeps" that wait for changes to happen (or not happen), and instead rely on the status.
I'm looking for some guidance on:
- Confirming that this is a bug
- Any pointers on whether this is already handled in the topology reconcilers, as I believe they have a similar need to know when a MachinePool is outdated and about to be upgraded
Label(s) to be applied
/kind bug