
Commit c34c1ac

feat(container): update the api
#### container:v1

The following keys were added:
- schemas.AutoscaledRolloutPolicy (Total Keys: 4)
- schemas.BlueGreenSettings.properties.autoscaledRolloutPolicy.$ref (Total Keys: 1)
- schemas.Cluster.properties.enterpriseConfig.deprecated (Total Keys: 1)
- schemas.ClusterUpdate.properties.desiredEnterpriseConfig.deprecated (Total Keys: 1)
- schemas.DNSEndpointConfig.properties.enableK8sCertsViaDns.type (Total Keys: 1)
- schemas.DNSEndpointConfig.properties.enableK8sTokensViaDns.type (Total Keys: 1)
- schemas.DesiredEnterpriseConfig.deprecated (Total Keys: 1)
- schemas.EnterpriseConfig.deprecated (Total Keys: 1)

#### container:v1beta1

The following keys were added:
- schemas.AutoscaledRolloutPolicy.properties.waitForDrainDuration (Total Keys: 2)
1 parent fe11498 commit c34c1ac

10 files changed: +210 −72 lines changed

docs/dyn/container_v1.projects.locations.clusters.html

Lines changed: 56 additions & 27 deletions
Large diffs are not rendered by default.

docs/dyn/container_v1.projects.locations.clusters.nodePools.html

Lines changed: 13 additions & 1 deletion
@@ -470,6 +470,9 @@ <h3>Method Details</h3>
       },
       "upgradeSettings": { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. These upgrade settings configure the upgrade strategy for the node pool. Use strategy to switch between the strategies applied to the node pool. If the strategy is ROLLING, use max_surge and max_unavailable to control the level of parallelism and the level of disruption caused by upgrade. 1. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. 2. maxUnavailable controls the number of nodes that can be simultaneously unavailable. 3. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). If the strategy is BLUE_GREEN, use blue_green_settings to configure the blue-green upgrade related settings. 1. standard_rollout_policy is the default policy. The policy is used to control the way blue pool gets drained. The draining is executed in the batch mode. The batch size could be specified as either percentage of the node pool size or the number of nodes. batch_soak_duration is the soak time after each batch gets drained. 2. node_pool_soak_duration is the soak time after all blue nodes are drained. After this period, the blue pool nodes will be deleted. # Upgrade settings control disruption and speed of the upgrade.
         "blueGreenSettings": { # Settings for blue-green upgrade. # Settings for blue-green upgrade strategy.
+          "autoscaledRolloutPolicy": { # Autoscaled rollout policy utilizes the cluster autoscaler during blue-green upgrade to scale both the blue and green pools. # Autoscaled policy for cluster autoscaler enabled blue-green upgrade.
+            "waitForDrainDuration": "A String", # Optional. Time to wait after cordoning the blue pool before draining the nodes. Defaults to 3 days. The value can be set between 0 and 7 days, inclusive.
+          },
           "nodePoolSoakDuration": "A String", # Time needed after draining entire blue pool. After this period, blue pool will be cleaned up.
           "standardRolloutPolicy": { # Standard rollout policy is the default policy for blue-green. # Standard policy for the blue-green upgrade.
             "batchNodeCount": 42, # Number of blue nodes to drain in a batch.
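The added `autoscaledRolloutPolicy` block above can be exercised with a plain request-body fragment. This is a minimal sketch, not from the commit itself: the `strategy` field and all concrete values are assumptions for illustration, and durations use the "Ns" seconds string encoding that Duration fields take in these JSON bodies.

```python
def blue_green_upgrade_settings(wait_for_drain_days: int = 3) -> dict:
    """Build an upgradeSettings dict using the autoscaled rollout policy."""
    # Per the field docs, waitForDrainDuration must be 0-7 days, inclusive.
    if not 0 <= wait_for_drain_days <= 7:
        raise ValueError("waitForDrainDuration must be between 0 and 7 days")
    return {
        "strategy": "BLUE_GREEN",
        "blueGreenSettings": {
            "autoscaledRolloutPolicy": {
                # New in this revision; defaults to 3 days server-side.
                "waitForDrainDuration": f"{wait_for_drain_days * 86400}s",
            },
        },
    }

print(blue_green_upgrade_settings(2)["blueGreenSettings"]
      ["autoscaledRolloutPolicy"]["waitForDrainDuration"])  # prints 172800s
```

Such a dict would be passed as (part of) the `body` of a node pool mutation call on the generated client; the validation here only mirrors the documented 0–7 day bound.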
@@ -996,6 +999,9 @@ <h3>Method Details</h3>
       },
       "upgradeSettings": { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. These upgrade settings configure the upgrade strategy for the node pool. Use strategy to switch between the strategies applied to the node pool. If the strategy is ROLLING, use max_surge and max_unavailable to control the level of parallelism and the level of disruption caused by upgrade. 1. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. 2. maxUnavailable controls the number of nodes that can be simultaneously unavailable. 3. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). If the strategy is BLUE_GREEN, use blue_green_settings to configure the blue-green upgrade related settings. 1. standard_rollout_policy is the default policy. The policy is used to control the way blue pool gets drained. The draining is executed in the batch mode. The batch size could be specified as either percentage of the node pool size or the number of nodes. batch_soak_duration is the soak time after each batch gets drained. 2. node_pool_soak_duration is the soak time after all blue nodes are drained. After this period, the blue pool nodes will be deleted. # Upgrade settings control disruption and speed of the upgrade.
         "blueGreenSettings": { # Settings for blue-green upgrade. # Settings for blue-green upgrade strategy.
+          "autoscaledRolloutPolicy": { # Autoscaled rollout policy utilizes the cluster autoscaler during blue-green upgrade to scale both the blue and green pools. # Autoscaled policy for cluster autoscaler enabled blue-green upgrade.
+            "waitForDrainDuration": "A String", # Optional. Time to wait after cordoning the blue pool before draining the nodes. Defaults to 3 days. The value can be set between 0 and 7 days, inclusive.
+          },
           "nodePoolSoakDuration": "A String", # Time needed after draining entire blue pool. After this period, blue pool will be cleaned up.
           "standardRolloutPolicy": { # Standard rollout policy is the default policy for blue-green. # Standard policy for the blue-green upgrade.
             "batchNodeCount": 42, # Number of blue nodes to drain in a batch.
@@ -1349,6 +1355,9 @@ <h3>Method Details</h3>
       },
       "upgradeSettings": { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. These upgrade settings configure the upgrade strategy for the node pool. Use strategy to switch between the strategies applied to the node pool. If the strategy is ROLLING, use max_surge and max_unavailable to control the level of parallelism and the level of disruption caused by upgrade. 1. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. 2. maxUnavailable controls the number of nodes that can be simultaneously unavailable. 3. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). If the strategy is BLUE_GREEN, use blue_green_settings to configure the blue-green upgrade related settings. 1. standard_rollout_policy is the default policy. The policy is used to control the way blue pool gets drained. The draining is executed in the batch mode. The batch size could be specified as either percentage of the node pool size or the number of nodes. batch_soak_duration is the soak time after each batch gets drained. 2. node_pool_soak_duration is the soak time after all blue nodes are drained. After this period, the blue pool nodes will be deleted. # Upgrade settings control disruption and speed of the upgrade.
         "blueGreenSettings": { # Settings for blue-green upgrade. # Settings for blue-green upgrade strategy.
+          "autoscaledRolloutPolicy": { # Autoscaled rollout policy utilizes the cluster autoscaler during blue-green upgrade to scale both the blue and green pools. # Autoscaled policy for cluster autoscaler enabled blue-green upgrade.
+            "waitForDrainDuration": "A String", # Optional. Time to wait after cordoning the blue pool before draining the nodes. Defaults to 3 days. The value can be set between 0 and 7 days, inclusive.
+          },
           "nodePoolSoakDuration": "A String", # Time needed after draining entire blue pool. After this period, blue pool will be cleaned up.
           "standardRolloutPolicy": { # Standard rollout policy is the default policy for blue-green. # Standard policy for the blue-green upgrade.
             "batchNodeCount": 42, # Number of blue nodes to drain in a batch.
@@ -1878,7 +1887,7 @@ <h3>Method Details</h3>
       "queuedProvisioning": { # QueuedProvisioning defines the queued provisioning used by the node pool. # Specifies the configuration of queued provisioning.
         "enabled": True or False, # Denotes that this nodepool is QRM specific, meaning nodes can be only obtained through queuing via the Cluster Autoscaler ProvisioningRequest API.
       },
-      "resourceLabels": { # Collection of [GCP labels](https://{$universe.dns_names.final_documentation_domain}/resource-manager/docs/creating-managing-labels). # The resource labels for the node pool to use to annotate any related Google Compute Engine resources.
+      "resourceLabels": { # Collection of [Resource Manager labels](https://{$universe.dns_names.final_documentation_domain}/resource-manager/docs/creating-managing-labels). # The resource labels for the node pool to use to annotate any related Google Compute Engine resources.
         "labels": { # Map of node label keys and node label values.
           "a_key": "A String",
         },
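The `resourceLabels` hunk above only renames the link text, but it also shows the field's shape: a wrapper object whose `labels` map carries Resource Manager label key/value pairs. A hypothetical fragment in that shape, with made-up keys and values:

```python
# Illustrative resourceLabels payload matching the shape in the diff above.
# The label keys and values here are invented for the example.
node_pool_fragment = {
    "resourceLabels": {
        "labels": {
            "env": "staging",
            "cost-center": "cc-1234",
        },
    },
}
print(sorted(node_pool_fragment["resourceLabels"]["labels"]))
```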
@@ -1907,6 +1916,9 @@ <h3>Method Details</h3>
       },
       "upgradeSettings": { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. These upgrade settings configure the upgrade strategy for the node pool. Use strategy to switch between the strategies applied to the node pool. If the strategy is ROLLING, use max_surge and max_unavailable to control the level of parallelism and the level of disruption caused by upgrade. 1. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. 2. maxUnavailable controls the number of nodes that can be simultaneously unavailable. 3. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). If the strategy is BLUE_GREEN, use blue_green_settings to configure the blue-green upgrade related settings. 1. standard_rollout_policy is the default policy. The policy is used to control the way blue pool gets drained. The draining is executed in the batch mode. The batch size could be specified as either percentage of the node pool size or the number of nodes. batch_soak_duration is the soak time after each batch gets drained. 2. node_pool_soak_duration is the soak time after all blue nodes are drained. After this period, the blue pool nodes will be deleted. # Upgrade settings control disruption and speed of the upgrade.
         "blueGreenSettings": { # Settings for blue-green upgrade. # Settings for blue-green upgrade strategy.
+          "autoscaledRolloutPolicy": { # Autoscaled rollout policy utilizes the cluster autoscaler during blue-green upgrade to scale both the blue and green pools. # Autoscaled policy for cluster autoscaler enabled blue-green upgrade.
+            "waitForDrainDuration": "A String", # Optional. Time to wait after cordoning the blue pool before draining the nodes. Defaults to 3 days. The value can be set between 0 and 7 days, inclusive.
+          },
           "nodePoolSoakDuration": "A String", # Time needed after draining entire blue pool. After this period, blue pool will be cleaned up.
           "standardRolloutPolicy": { # Standard rollout policy is the default policy for blue-green. # Standard policy for the blue-green upgrade.
             "batchNodeCount": 42, # Number of blue nodes to drain in a batch.
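The hunks above repeat the same `waitForDrainDuration` addition across several request and response bodies. A small helper for reading such values back, assuming the "Ns" (seconds) Duration string encoding these JSON bodies use:

```python
def duration_to_days(duration: str) -> float:
    """Convert an API Duration string like "259200s" to days."""
    if not duration.endswith("s"):
        raise ValueError(f"unexpected duration encoding: {duration!r}")
    return float(duration[:-1]) / 86400

print(duration_to_days("259200s"))  # the documented 3-day default: prints 3.0
```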
