.. procedure::
   :style: normal

   .. step:: Deploy a new |k8s| cluster.

      With this new cluster deployed, you now have two |k8s| clusters. The
      following steps require that you run ``kubectl`` commands against each
      of these |k8s| clusters. To simplify this, you can configure a context
      for each |k8s| cluster with the following commands:

      .. code-block:: sh

         kubectl config set-cluster old --server=https://<OLD_CLUSTER_URL>
         kubectl config set-context old --cluster=old

         kubectl config set-cluster new --server=https://<NEW_CLUSTER_URL>
         kubectl config set-context new --cluster=new
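      To confirm that both contexts are configured, you can list them:

      .. code-block:: sh

         kubectl config get-contexts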
   .. step:: Deploy |ak8so| ``v2.x`` to your new |k8s| cluster.

      Replace the ``<version>`` placeholder in the following command with
      your desired |ak8so| version, then run the command to deploy |ak8so|
      to your newly provisioned |k8s| cluster:

      .. code-block:: sh

         kubectl config use-context new
         kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-atlas-kubernetes/<version>/deploy/all-in-one.yaml
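      The ``all-in-one.yaml`` manifest deploys the operator into the
      ``mongodb-atlas-system`` namespace. Before you continue, you can verify
      that the new operator is running:

      .. code-block:: sh

         kubectl get pods -n mongodb-atlas-system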
   .. step:: Scale the replica count of your ``v1.x`` |ak8so| to zero.

      Run the following commands to scale the replica count of your ``v1.x``
      |ak8so| deployment to ``0`` in your old |k8s| cluster, so that it no
      longer monitors and updates the associated MongoDB |service|
      deployments:

      .. code-block:: sh

         kubectl config use-context old
         kubectl scale --replicas=0 deployment.apps/mongodb-atlas-operator -n mongodb-atlas-system
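      To confirm that the ``v1.x`` operator has stopped, you can verify that
      its deployment reports zero ready replicas:

      .. code-block:: sh

         kubectl get deployment mongodb-atlas-operator -n mongodb-atlas-system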
   .. step:: Update existing ``AtlasProject`` CR definitions.

      Update your existing YAML definitions so that they reference API
      secrets and credentials by using the ``v2.x`` field names listed in
      the following table:
      .. list-table::
         :header-rows: 1
         :widths: 25 25 25 25

         * - CR Section
           - Cloud Provider
           - v1.x
           - v2.x

         * - ``.spec.alertConfiguration.notifications``
           -
           - ``APIToken``
           - ``APITokenRef``

         * -
           -
           - ``DatadogAPIKey``
           - ``DatadogAPIKeyRef``

         * -
           -
           - ``FlowdockTokenAPI``
           - ``FlowdockTokenAPIRef``

         * -
           -
           - ``OpsGenieAPIKey``
           - ``OpsGenieAPIKeyRef``

         * -
           -
           - ``VictorOpsAPIKey``
           - ``VictorOpsSecretRef``

         * -
           -
           - ``VictorOpsRoutingKey``
           - ``VictorOpsSecretRef`` (expected to have both ``VictorOps`` values)

         * - ``.spec.encryptionAtRest``
           - AWS
           - ``AccessKeyID``, ``SecretAccessKey``, ``CustomerMasterKeyID``, ``RoleID``
           - ``CloudProviderAccessRoles``

         * -
           - Azure
           - ``SubscriptionID``, ``KeyVaultName``, ``KeyIdentifier``, ``Secret``
           - ``secretRef``

         * -
           - GCP
           - ``ServiceAccountKey``, ``KeyVersionResourceID``
           - ``secretRef``
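      For example, where a ``v1.x`` notification embedded a Slack ``APIToken``
      inline, a ``v2.x`` notification references a |k8s| secret instead. The
      following is a minimal sketch, assuming the camelCase manifest fields
      ``alertConfigurations`` and ``apiTokenRef`` and a pre-created secret
      with the hypothetical name ``slack-token``:

      .. code-block:: yaml

         spec:
           alertConfigurations:
             - eventTypeName: "REPLICATION_OPLOG_WINDOW_RUNNING_OUT"
               enabled: true
               notifications:
                 - typeName: "SLACK"
                   channelName: "#alerts"
                   apiTokenRef:
                     # hypothetical secret holding the Slack API token
                     name: slack-token
                     namespace: mongodb-atlas-system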
      As a result of the updates you made in the previous steps, your
      resulting custom resource might look similar to the following example:

      .. code-block:: yaml

         apiVersion: atlas.mongodb.com/v1
         kind: AtlasProject
         metadata:
           name: my-project
           labels:
             app.kubernetes.io/version: 1.6.0
         spec:
           name: Test Atlas Operator Project
           projectIpAccessList:
             - ipAddress: "<Public-IP-of-K8s-Cluster>"
               comment: "This IP is added to your Atlas Project's Access List."
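      Similarly, if your project enables encryption at rest, the ``v2.x``
      fields in the preceding table replace the inline credentials with a
      reference to a |k8s| secret. The following is a minimal sketch for GCP,
      assuming a ``secretRef`` field and a pre-created secret with the
      hypothetical name ``gcp-kms-credentials`` that holds the
      ``ServiceAccountKey`` and ``KeyVersionResourceID`` values:

      .. code-block:: yaml

         spec:
           encryptionAtRest:
             googleCloudKms:
               enabled: true
               secretRef:
                 # hypothetical secret holding ServiceAccountKey and
                 # KeyVersionResourceID
                 name: gcp-kms-credentials
                 namespace: mongodb-atlas-system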
   .. step:: Update existing ``AtlasDeployment`` CR definitions.

      - If your existing YAML definition includes an ``advancedDeploymentSpec``
        section, rename that section to ``deploymentSpec``.

      - If your existing YAML definition includes a ``deploymentSpec`` section,
        update that section to align with the following ``deploymentSpec``
        example.

      - If your existing YAML definition includes a ``serverlessSpec`` section,
        no changes are required.

      As a result of the updates you made in the previous steps, your
      resulting custom resource might look similar to the following example:
      .. code-block:: yaml

         deploymentSpec:
           clusterType: REPLICASET
           name: advanced-deployment-2
           mongoDBMajorVersion: "5.0"
           replicationSpecs:
             - regionConfigs:
                 - regionName: EASTERN_US
                   electableSpecs:
                     nodeCount: 4
                     instanceSize: M10
                   autoScaling:
                     compute:
                       scaleDownEnabled: true
                       enabled: true
                       minInstanceSize: M10
                       maxInstanceSize: M20
                   providerName: GCP
                   backingProviderName: GCP
                   priority: 7
                 - regionName: US_EAST_2
                   electableSpecs:
                     nodeCount: 1
                     instanceSize: M10
                   autoScaling:
                     compute:
                       scaleDownEnabled: true
                       enabled: true
                       minInstanceSize: M10
                       maxInstanceSize: M20
                   providerName: AWS
                   backingProviderName: AWS
                   priority: 6
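      Before you apply the updated manifests in the next step, you can
      optionally validate them against the new cluster with a server-side
      dry run. This sketch assumes that your updated CRs are saved in a file
      named ``resource.yaml``, as in the following step:

      .. code-block:: sh

         kubectl config use-context new
         kubectl apply --dry-run=server -f resource.yaml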
   .. step:: Apply the updated |ak8so| 2.x-compatible resources in your new |k8s| cluster.

      Run the following commands to deploy your updated |ak8so| resources:

      .. code-block:: sh

         kubectl config use-context new
         kubectl apply -f resource.yaml
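      You can confirm that the resources were created in the new cluster:

      .. code-block:: sh

         kubectl get atlasprojects,atlasdeployments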
   .. step:: Scale the replica count of your ``v2.x`` |ak8so| back up to one.

      Set the replica count in your ``v2.x`` |ak8so| deployment to ``1``, so
      that the new |ak8so| picks up the migrated resources. Because these
      resources are semantically equivalent to your existing ``v1.x`` custom
      resources, your MongoDB |service| resources themselves won't change.
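      As a minimal sketch, assuming the default deployment name and namespace
      from the ``all-in-one.yaml`` manifest, you can scale the operator back
      up by running:

      .. code-block:: sh

         kubectl config use-context new
         kubectl scale --replicas=1 deployment.apps/mongodb-atlas-operator -n mongodb-atlas-system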
   .. step:: Verify new resource statuses.

      Run the following commands to verify the statuses of your newly
      deployed custom resources:

      .. code-block:: sh

         kubectl config use-context new
         kubectl describe atlasprojects <your-project-name>
         kubectl describe atlasdeployments <your-deployment-name>