Labels
area/testing, good first issue, help wanted, kind/bug
Description
Currently we use a mixture of CoreDNS tags with and without the v prefix in our Kubernetes upgrade test.
We are also not verifying that CoreDNS actually comes up; we only wait until the Deployment has the new image tag.
Some context:
- It's about the CoreDNS tag we're upgrading to, which is configured via `COREDNS_VERSION_UPGRADE_TO`.
- There was a migration from `k8s.gcr.io/coredns` to `k8s.gcr.io/coredns/coredns`, and the following images are available in GCR:
  - `k8s.gcr.io/coredns`: 1.6.5 / 1.6.6 / 1.6.7 / 1.7.0
  - `k8s.gcr.io/coredns/coredns`: v1.6.6 / v1.6.7 / v1.6.9 / v1.7.0 / v1.7.1 / v1.8.0 / v1.8.3 / v1.8.4 / v1.8.5 / v1.8.6
  - (see `gcloud container images list-tags k8s.gcr.io/coredns` and `gcloud container images list-tags k8s.gcr.io/coredns/coredns`)
- In KCP we are automatically switching to the new image repository with v1.8.0
So I would suggest that we use the v prefix for CoreDNS >= v1.8.0
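As a rough sketch of that convention (a hypothetical helper, not KCP's actual switching logic), the image repository and tag prefix could be derived from the CoreDNS version like this:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMajorMinor extracts major/minor from a tag like "1.8.6" or "v1.8.6".
func parseMajorMinor(tag string) (major, minor int) {
	parts := strings.Split(strings.TrimPrefix(tag, "v"), ".")
	major, _ = strconv.Atoi(parts[0])
	minor, _ = strconv.Atoi(parts[1])
	return major, minor
}

// coreDNSImage picks repository and tag format: >= v1.8.0 lives in the
// new coredns/coredns repository with a "v" prefix, older versions in
// the legacy repository without it. This mirrors the rule proposed
// above; it is not the code KCP uses.
func coreDNSImage(version string) string {
	v := strings.TrimPrefix(version, "v")
	major, minor := parseMajorMinor(v)
	if major > 1 || (major == 1 && minor >= 8) {
		return "k8s.gcr.io/coredns/coredns:v" + v
	}
	return "k8s.gcr.io/coredns:" + v
}

func main() {
	fmt.Println(coreDNSImage("1.8.6")) // k8s.gcr.io/coredns/coredns:v1.8.6
	fmt.Println(coreDNSImage("1.7.0")) // k8s.gcr.io/coredns:1.7.0
}
```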
Tasks:
- Change `COREDNS_VERSION_UPGRADE_TO` in `test/e2e/config/docker.yaml`: 1.8.4 => v1.8.6 (we should use the default CoreDNS version of Kubernetes 1.23)
  - Check periodic and presubmit Kubernetes upgrade jobs in test-infra
- Improve Kubernetes upgrade test to verify that the Deployment is rolled out correctly
  - I think we should additionally check in `WaitForDNSUpgrade` that `Deployment.Status.ObservedGeneration` >= `Deployment.Generation` and that `Deployment.Spec.Replicas` and `Deployment.Status.{AvailableReplicas,UpdatedReplicas}` are equal. `kubectl rollout status` checks a bit more, but I think a simpler version is good enough for us: https://github.com/kubernetes/kubernetes/blob/bfa4188123ed334d4f5dda3a79994cadf663d8f2/staging/src/k8s.io/kubectl/pkg/polymorphichelpers/rollout_status.go#L59-L92
  - I'm aware that this will make the test a bit slower, but I would prefer that over not being able to tell if our CoreDNS upgrade works.
- I think we should do something similar for the kube-proxy DaemonSet in `WaitForKubeProxyUpgrade`
  - Let's discuss this separately and try it in a separate PR.
- I think this could break upgrade tests which are using CI images. IIRC when we are injecting a Kubernetes CI version in CAPA/CAPO we download a specific kube-proxy version from GCS which is not available in any registry. I.e. a new kube-proxy version won't come up on old nodes as the new image wasn't downloaded there.
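The rollout check proposed above can be sketched as follows. This is a simplified, self-contained version of kubectl's rollout-status logic; the struct fields are stand-ins for the corresponding `appsv1.Deployment` fields, and the function name `rolloutComplete` is hypothetical, not part of `WaitForDNSUpgrade`:

```go
package main

import "fmt"

// Minimal stand-ins for the appsv1.Deployment fields the check needs;
// in Cluster API code these would come from k8s.io/api/apps/v1.
type deploymentStatus struct {
	ObservedGeneration int64
	UpdatedReplicas    int32
	AvailableReplicas  int32
}

type deployment struct {
	Generation   int64
	SpecReplicas int32 // Deployment.Spec.Replicas (dereferenced)
	Status       deploymentStatus
}

// rolloutComplete is a simplified version of the kubectl rollout-status
// check: the controller has observed the latest spec, and all desired
// replicas are both updated to the new template and available.
func rolloutComplete(d deployment) bool {
	if d.Status.ObservedGeneration < d.Generation {
		return false // controller hasn't seen the new spec yet
	}
	return d.Status.UpdatedReplicas == d.SpecReplicas &&
		d.Status.AvailableReplicas == d.SpecReplicas
}

func main() {
	d := deployment{
		Generation:   2,
		SpecReplicas: 2,
		Status: deploymentStatus{
			ObservedGeneration: 2,
			UpdatedReplicas:    2,
			AvailableReplicas:  2,
		},
	}
	fmt.Println(rolloutComplete(d)) // true

	d.Status.AvailableReplicas = 1 // one replica not yet available
	fmt.Println(rolloutComplete(d)) // false
}
```

A DaemonSet analog for `WaitForKubeProxyUpgrade` would compare `DesiredNumberScheduled` against `UpdatedNumberScheduled` and `NumberAvailable` in the same way.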
/kind bug