
Conversation

@tjungblu
Contributor

@tjungblu tjungblu commented Sep 16, 2025

This PR configures the upgrade parameters according to the size of the control plane. It also ensures that, in the event of an unhealthy node, the rolling upgrade can still proceed.

@openshift-ci-robot openshift-ci-robot added the jira/severity-critical (Referenced Jira bug's severity is critical for the branch this PR is targeting) and jira/valid-reference (Indicates that this PR references a valid Jira ticket of any type) labels Sep 16, 2025
@openshift-ci-robot
Contributor

@tjungblu: This pull request references Jira Issue OCPBUGS-57444, which is invalid:

  • expected the bug to target the "4.21.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot openshift-ci-robot added the jira/invalid-bug (Indicates that a referenced Jira bug is invalid for the branch this PR is targeting) label Sep 16, 2025
@coderabbitai

coderabbitai bot commented Sep 16, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Updates OAuth OpenShift deployment manifest and controller: replicas now use control plane count; rolling update parameters (MaxUnavailable, MaxSurge) are computed from control plane size. Adds unit test verifying rolling update calculations.

Changes

Cohort / File(s) Summary
OpenShift deployment manifest
bindata/oauth-openshift/deployment.yaml
spec.strategy.rollingUpdate.maxUnavailable changed from 1 to 0; a comment was added noting the deployment controller adjusts this value. maxSurge remains 0.
Deployment controller logic
pkg/controllers/deployment/deployment_controller.go
Switched the replica calculation to use the control plane node count (renamed variable, updated error messages, added a nil check). Added import k8s.io/apimachinery/pkg/util/intstr. Introduced setRollingUpdateParameters(controlPlaneCount int32, deployment *appsv1.Deployment), which sets MaxUnavailable = max(controlPlaneCount-1, 1) and MaxSurge = controlPlaneCount via intstr.FromInt32, and applied it to the expected Deployment (a small sketch of the arithmetic follows this table). The reconciliation flow is otherwise unchanged.
Unit tests
pkg/controllers/deployment/deployment_controller_test.go
Added TestSetRollingUpdateParameters (table-driven) covering control plane counts 1–5; asserts MaxUnavailable and MaxSurge are non-nil, of type Int, and hold expected IntVal values.
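
For reference, here is a minimal, self-contained sketch of the arithmetic described above. It is not the controller code itself; the function name and plain int32 return values are illustrative, standing in for the intstr values the controller actually sets.

package main

import "fmt"

// rollingUpdateParams mirrors the described calculation:
// MaxUnavailable = max(controlPlaneCount-1, 1) and MaxSurge = controlPlaneCount,
// so at least one replica stays available on multi-node control planes.
func rollingUpdateParams(controlPlaneCount int32) (maxUnavailable, maxSurge int32) {
	maxUnavailable = controlPlaneCount - 1
	if maxUnavailable < 1 {
		maxUnavailable = 1
	}
	return maxUnavailable, controlPlaneCount
}

func main() {
	// Control plane counts 1 through 5, matching the range covered by the new test.
	for count := int32(1); count <= 5; count++ {
		mu, ms := rollingUpdateParams(count)
		fmt.Printf("controlPlaneCount=%d -> maxUnavailable=%d, maxSurge=%d\n", count, mu, ms)
	}
}

For a three-node control plane this yields maxUnavailable=2 and maxSurge=3.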

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Pre-merge checks

❌ Failed checks (1 warning)
Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 50.00%, which is below the required threshold of 80.00%. | You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Check name | Status | Explanation
Title Check | ✅ Passed | The title "OCPBUGS-57444: set appropriate rolling update settings" succinctly and accurately summarizes the primary change (adjusting rolling update parameters) and directly relates to the deployment/rollingUpdate modifications in the changeset.
Description Check | ✅ Passed | The PR description correctly states that upgrade parameters are configured based on control plane size and that this lets rolling upgrades proceed even when a node is unhealthy, which matches the code changes that compute and set RollingUpdate MaxUnavailable/MaxSurge and the added unit test.


Comment @coderabbitai help to get the list of available commands and usage tips.

cursor[bot]

This comment was marked as outdated.

@openshift-ci openshift-ci bot requested review from ibihim and liouk September 16, 2025 13:47
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
bindata/oauth-openshift/deployment.yaml (1)

10-13: Note: Static rolling update values will be overridden by controller.

These hardcoded values (maxUnavailable: 1, maxSurge: 0) will be dynamically overridden by the controller based on control plane node count. Consider adding a comment to indicate these are placeholder values.

pkg/controllers/deployment/deployment_controller.go (1)

268-269: Consider extracting rolling update calculation for testability.

The inline calculation logic could benefit from being extracted into a separate function for easier unit testing and potential reuse.

+// calculateRollingUpdateStrategy determines optimal rolling update parameters based on control plane count
+func calculateRollingUpdateStrategy(controlPlaneCount int32) (maxUnavailable, maxSurge intstr.IntOrString) {
+    maxUnavailable = intstr.FromInt32(max(controlPlaneCount-1, 1))
+    maxSurge = intstr.FromInt32(controlPlaneCount)
+    return maxUnavailable, maxSurge
+}
+
 // Given the control plane sizes, we adjust the max unavailable and max surge values to mimic "MinAvailable".
 // We always ensure it is controlPlaneCount - 1, as this allows us to keep have at least a single replica running.
 // We also set MaxSurge to always be exactly the control plane count, as this allows us to more quickly replace failing
 // deployments with a new replica set. This does not clash with the pod anti affinity set above.
-maxUnavailable := intstr.FromInt32(max(*controlPlaneCount-1, 1))
-maxSurge := intstr.FromInt32(*controlPlaneCount)
+maxUnavailable, maxSurge := calculateRollingUpdateStrategy(*controlPlaneCount)
 expectedDeployment.Spec.Strategy.RollingUpdate.MaxUnavailable = &maxUnavailable
 expectedDeployment.Spec.Strategy.RollingUpdate.MaxSurge = &maxSurge
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between 8e30dc7 and 7b65173.

📒 Files selected for processing (2)
  • bindata/oauth-openshift/deployment.yaml (2 hunks)
  • pkg/controllers/deployment/deployment_controller.go (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (6)
bindata/oauth-openshift/deployment.yaml (2)

30-30: LGTM! Consistent terminology update to control-plane.

The node selector change from master to control-plane aligns with Kubernetes' standardized terminology and matches the controller changes.


42-44: LGTM! Toleration properly updated for control-plane nodes.

The toleration key change ensures OAuth pods can be scheduled on control-plane nodes with the NoSchedule taint.

pkg/controllers/deployment/deployment_controller.go (4)

14-14: LGTM! Required import for IntOrString type.

The intstr package import is correctly added to support the new dynamic MaxUnavailable and MaxSurge configuration.


257-263: LGTM! Clear transition from master to control-plane terminology.

The variable naming and error message properly reflect the shift to control-plane nodes, maintaining consistency with the deployment manifest changes.


264-272: Solid rolling update strategy implementation with appropriate safeguards.

The implementation correctly:

  • Ensures at least one replica remains available (max(controlPlaneCount-1, 1))
  • Allows faster rollouts with maxSurge = controlPlaneCount
  • Won't conflict with pod anti-affinity rules

The comments clearly explain the "MinAvailable" mimicking behavior.


268-268: Confirmed — Go 1.24 declared; builtin max is supported.

go.mod declares "go 1.24.0", so the use of max at pkg/controllers/deployment/deployment_controller.go:268 is valid; no change required.
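
For context, the generic built-in max has been part of the language since Go 1.21, so no local helper is needed; a standalone sketch (the variable is illustrative):

package main

import "fmt"

func main() {
	var controlPlaneCount int32 = 3
	// The built-in generic max (Go 1.21+) accepts the int32 operand and the
	// untyped constant 1, and returns an int32.
	fmt.Println(max(controlPlaneCount-1, 1)) // prints 2
}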

@tjungblu
Contributor Author

/jira refresh

@openshift-ci-robot openshift-ci-robot added the jira/valid-bug (Indicates that a referenced Jira bug is valid for the branch this PR is targeting) label and removed the jira/invalid-bug (Indicates that a referenced Jira bug is invalid for the branch this PR is targeting) label Sep 16, 2025
@openshift-ci-robot
Contributor

@tjungblu: This pull request references Jira Issue OCPBUGS-57444, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.21.0) matches configured target version for branch (4.21.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @xingxingxia

In response to this:

/jira refresh

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci bot requested a review from xingxingxia September 16, 2025 16:32
@openshift-ci-robot
Contributor

@tjungblu: This pull request references Jira Issue OCPBUGS-57444, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.21.0) matches configured target version for branch (4.21.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @xingxingxia

In response to this:

This PR configures the upgrade parameters according to the size of the control plane.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@tjungblu
Contributor Author

/retest-required

  topologyKey: kubernetes.io/hostname
  tolerations:
-   - key: node-role.kubernetes.io/master
+   - key: node-role.kubernetes.io/control-plane
Contributor Author

seems the taints are not yet there:

    message: '0/6 nodes are available: 3 node(s) didn''t match Pod''s node affinity/selector,
      3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption:
      0/6 nodes are available: 6 Preemption is not helpful for scheduling.'

@tjungblu tjungblu force-pushed the OCPBUGS-57444 branch 2 times, most recently from c1e805a to aeebe6a on September 17, 2025 06:20
@liouk
Member

liouk commented Sep 17, 2025

/lgtm

@openshift-ci openshift-ci bot added the lgtm (Indicates that a PR is ready to be merged) and approved (Indicates a PR has been approved by an approver from all required OWNERS files) labels Sep 17, 2025
@liouk
Member

liouk commented Sep 17, 2025

Can you add a unit test checking this behavior?

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold (Indicates that a PR should not merge because someone has issued a /hold command) label Sep 17, 2025
@openshift-ci openshift-ci bot removed the lgtm (Indicates that a PR is ready to be merged) label Sep 17, 2025
@liouk
Member

liouk commented Sep 17, 2025

Thanks @tjungblu!

/lgtm
/hold cancel

@openshift-ci openshift-ci bot removed the do-not-merge/hold (Indicates that a PR should not merge because someone has issued a /hold command) label Sep 17, 2025
@openshift-ci
Contributor

openshift-ci bot commented Sep 17, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: liouk, tjungblu

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the lgtm (Indicates that a PR is ready to be merged) label Sep 17, 2025
@tjungblu
Contributor Author

/cherry-pick release-4.19 release-4.18 release-4.17 release-4.16

@openshift-cherrypick-robot

@tjungblu: once the present PR merges, I will cherry-pick it on top of release-4.19 in a new PR and assign it to you.

In response to this:

/cherry-pick release-4.19 release-4.18 release-4.17 release-4.16

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@tjungblu
Contributor Author

shoot, we've already branched

/cherry-pick release-4.20 release-4.19 release-4.18 release-4.17 release-4.16

@openshift-cherrypick-robot

@tjungblu: once the present PR merges, I will cherry-pick it on top of release-4.20 in a new PR and assign it to you.

In response to this:

shoot, we've already branched

/cherry-pick release-4.20 release-4.19 release-4.18 release-4.17 release-4.16

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
pkg/controllers/deployment/deployment_controller.go (1)

262-264: Nil-deref guard added — good.

This addresses the prior review about potential panic on nil controlPlaneCount.

🧹 Nitpick comments (4)
pkg/controllers/deployment/deployment_controller_test.go (2)

11-101: Good table-driven coverage; add 0-node edge case.

Please include a test for controlPlaneCount = 0 to prevent accidental negative math and to document the desired behavior when no CP nodes are selectable.

Apply this diff to extend the table:

 		{
 			name:                   "five control plane nodes",
 			controlPlaneCount:      5,
 			expectedMaxUnavailable: 4, // max(5-1, 1) = max(4, 1) = 4
 			expectedMaxSurge:       5,
 		},
+		{
+			name:                   "zero control plane nodes",
+			controlPlaneCount:      0,
+			expectedMaxUnavailable: 0,
+			expectedMaxSurge:       0,
+		},

52-67: Also cover nil RollingUpdate to guard against panics.

Add a test that passes a Deployment with Strategy.RollingUpdate == nil; the helper should initialize it.

Additional test (outside the shown hunk):

func TestSetRollingUpdateParameters_AllowsNilRollingUpdate(t *testing.T) {
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test", Namespace: "ns"},
		Spec: appsv1.DeploymentSpec{
			Strategy: appsv1.DeploymentStrategy{
				Type:          appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: nil,
			},
		},
	}
	setRollingUpdateParameters(3, deployment)
	if deployment.Spec.Strategy.RollingUpdate == nil {
		t.Fatalf("RollingUpdate should be initialized")
	}
	if got := deployment.Spec.Strategy.RollingUpdate.MaxUnavailable.IntVal; got != 2 {
		t.Fatalf("MaxUnavailable = %d, want 2", got)
	}
	if got := deployment.Spec.Strategy.RollingUpdate.MaxSurge.IntVal; got != 3 {
		t.Fatalf("MaxSurge = %d, want 3", got)
	}
}
pkg/controllers/deployment/deployment_controller.go (2)

262-264: Tighten error wording.

Minor grammar fix for clarity.

-		return nil, false, append(errs, fmt.Errorf("found nil control plane nodes count"))
+		return nil, false, append(errs, fmt.Errorf("control plane node count is nil"))

321-325: Clarify comment to reflect single-node caveat.

Current comment claims “always keep at least a single replica running,” which isn’t true for a single-node control plane under required anti-affinity.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to Reviews -> Disable Knowledge Base setting

📥 Commits

Reviewing files that changed from the base of the PR and between aeebe6a and 0f3451c.

📒 Files selected for processing (3)
  • bindata/oauth-openshift/deployment.yaml (1 hunks)
  • pkg/controllers/deployment/deployment_controller.go (3 hunks)
  • pkg/controllers/deployment/deployment_controller_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • bindata/oauth-openshift/deployment.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cursor Bugbot
🔇 Additional comments (2)
pkg/controllers/deployment/deployment_controller_test.go (1)

11-49: LGTM overall.

Solid, readable assertions; types validated explicitly.

pkg/controllers/deployment/deployment_controller.go (1)

14-14: Import looks correct.

Comment on lines +257 to 268
  // Set the replica count to the number of control plane nodes.
  controlPlaneCount, err := c.countNodes(expectedDeployment.Spec.Template.Spec.NodeSelector)
  if err != nil {
-   return nil, false, append(errs, fmt.Errorf("failed to determine number of master nodes: %v", err))
+   return nil, false, append(errs, fmt.Errorf("failed to determine number of control plane nodes: %v", err))
  }
- expectedDeployment.Spec.Replicas = masterNodeCount
+ if controlPlaneCount == nil {
+   return nil, false, append(errs, fmt.Errorf("found nil control plane nodes count"))
+ }
+
+ expectedDeployment.Spec.Replicas = controlPlaneCount
+ setRollingUpdateParameters(*controlPlaneCount, expectedDeployment)


⚠️ Potential issue

🧩 Analysis chain

Behavioral check: replicas tied to CP count — OK.

Replica assignment from countNodes and the nil-guard look correct. After fixing the helper, this should roll safely on N>=2 and degrade gracefully on N==1.

Run this to catch the undefined symbols before CI:


🏁 Script executed:

#!/bin/bash
# Verify no usage of intstr.FromInt32 and bare max(…) in Go sources
rg -nP --type=go '\bintstr\.FromInt32\s*\(' -C1
rg -nP --type=go '(?<![A-Za-z0-9_])max\s*\(' -C1

Length of output: 569


🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "---- intstr.FromInt32 usages ----"
rg -n --type=go 'intstr\.FromInt32' -C1 || true

echo "---- occurrences of max(...) in Go files ----"
rg -n --type=go '\bmax\s*\(' -C3 || true

echo "---- definitions of func max(...) ----"
rg -n --type=go 'func\s+max\s*\(' -C3 || true

echo "---- show setRollingUpdateParameters context (lines ~310-340) ----"
sed -n '310,340p' pkg/controllers/deployment/deployment_controller.go || true

echo "---- show file imports (top 1-140 lines) ----"
sed -n '1,140p' pkg/controllers/deployment/deployment_controller.go || true

Length of output: 11363


Fix undefined 'max' in setRollingUpdateParameters

setRollingUpdateParameters calls max(controlPlaneCount-1, 1) but no max function is defined; add an int32 max helper (e.g. func max(a, b int32) int32 { if a > b { return a } return b }) or replace with an inline conditional.
Location: pkg/controllers/deployment/deployment_controller.go:325-329

🤖 Prompt for AI Agents
In pkg/controllers/deployment/deployment_controller.go around lines 257 to 268,
setRollingUpdateParameters uses an undefined max(...) when computing
max(controlPlaneCount-1, 1); add a small int32 max helper (for example func
max(a, b int32) int32 { if a > b { return a } return b }) near the top of the
file or replace the call with an inline conditional (if controlPlaneCount-1 > 1
{ ... } else { ... }) so the expression compiles and correctly returns the
larger int32 value.

Comment on lines +321 to +330
// Given the control plane sizes, we adjust the max unavailable and max surge values to mimic "MinAvailable".
// We always ensure it is controlPlaneCount - 1, as this allows us to keep have at least a single replica running.
// We also set MaxSurge to always be exactly the control plane count, as this allows us to more quickly replace failing
// deployments with a new replica set. This does not clash with the pod anti affinity set above.
func setRollingUpdateParameters(controlPlaneCount int32, deployment *appsv1.Deployment) {
maxUnavailable := intstr.FromInt32(max(controlPlaneCount-1, 1))
maxSurge := intstr.FromInt32(controlPlaneCount)
deployment.Spec.Strategy.RollingUpdate.MaxUnavailable = &maxUnavailable
deployment.Spec.Strategy.RollingUpdate.MaxSurge = &maxSurge
}

⚠️ Potential issue

Fix compile-time errors and add defensive init.

max is undefined in this file, and intstr.FromInt32 does not exist (use FromInt). Also guard against a nil RollingUpdate to avoid panics. Clamp the 0-node case to 0 unavailability.

Apply this diff:

-// Given the control plane sizes, we adjust the max unavailable and max surge values to mimic "MinAvailable".
-// We always ensure it is controlPlaneCount - 1, as this allows us to keep have at least a single replica running.
-// We also set MaxSurge to always be exactly the control plane count, as this allows us to more quickly replace failing
-// deployments with a new replica set. This does not clash with the pod anti affinity set above.
+// Given the control plane size, adjust MaxUnavailable and MaxSurge to approximate a "MinAvailable"-like policy.
+// When controlPlaneCount > 1, we keep at least one replica running (MaxUnavailable = controlPlaneCount - 1).
+// On single-node control planes, anti-affinity may cause a brief gap during updates.
+// MaxSurge is set to controlPlaneCount to speed up replacement; anti-affinity prevents co-location.
 func setRollingUpdateParameters(controlPlaneCount int32, deployment *appsv1.Deployment) {
-	maxUnavailable := intstr.FromInt32(max(controlPlaneCount-1, 1))
-	maxSurge := intstr.FromInt32(controlPlaneCount)
-	deployment.Spec.Strategy.RollingUpdate.MaxUnavailable = &maxUnavailable
-	deployment.Spec.Strategy.RollingUpdate.MaxSurge = &maxSurge
+	if deployment.Spec.Strategy.RollingUpdate == nil {
+		deployment.Spec.Strategy.Type = appsv1.RollingUpdateDeploymentStrategyType
+		deployment.Spec.Strategy.RollingUpdate = &appsv1.RollingUpdateDeployment{}
+	}
+	var mu int32
+	switch {
+	case controlPlaneCount <= 0:
+		mu = 0
+	case controlPlaneCount == 1:
+		mu = 1
+	default:
+		mu = controlPlaneCount - 1
+	}
+	maxUnavailable := intstr.FromInt(int(mu))
+	maxSurge := intstr.FromInt(int(controlPlaneCount))
+	deployment.Spec.Strategy.RollingUpdate.MaxUnavailable = &maxUnavailable
+	deployment.Spec.Strategy.RollingUpdate.MaxSurge = &maxSurge
 }
🤖 Prompt for AI Agents
In pkg/controllers/deployment/deployment_controller.go around lines 321-330,
replace the use of the undefined max and non-existent intstr.FromInt32 and add a
defensive init: ensure deployment.Spec.Strategy.RollingUpdate is non-nil
(allocate a new RollingUpdate if nil), compute maxUnavailableInt :=
int(controlPlaneCount-1) and if maxUnavailableInt < 0 set it to 0 (clamping the
0-node case), set maxSurgeInt := int(controlPlaneCount), then use
intstr.FromInt(maxUnavailableInt) and intstr.FromInt(maxSurgeInt) to create
values and assign them to deployment.Spec.Strategy.RollingUpdate.MaxUnavailable
and .MaxSurge respectively.

@openshift-ci
Contributor

openshift-ci bot commented Sep 17, 2025

@tjungblu: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
ci/prow/e2e-agnostic-ipv6 | 0f3451c | link | false | /test e2e-agnostic-ipv6

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@tjungblu
Contributor Author

@xingxingxia are you able to verify this please?

@xingxingxia
Contributor

Sure, I'll do a post-merge verification; other tasks currently keep me from pre-merge testing several PRs.
/verified later @xingxingxia

@openshift-ci-robot openshift-ci-robot added the verified-later and verified (Signifies that the PR passed pre-merge verification criteria) labels Sep 18, 2025
@openshift-ci-robot
Contributor

@xingxingxia: This PR has been marked to be verified later by @xingxingxia.

In response to this:

Sure, I'll do a post-merge verification; other tasks currently keep me from pre-merge testing several PRs.
/verified later @xingxingxia

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-merge-bot openshift-merge-bot bot merged commit ed0d09e into openshift:master Sep 18, 2025
16 of 17 checks passed
@openshift-ci-robot
Contributor

@tjungblu: Jira Issue OCPBUGS-57444: All pull requests linked via external trackers have merged.

This pull request has the verified-later tag and will need to be manually moved to VERIFIED after testing. Jira Issue OCPBUGS-57444 has been moved to the MODIFIED state.

In response to this:

This PR configures the upgrade parameters according to the size of the control plane. This also ensures that during the event of an unhealthy node, the rolling upgrade can proceed.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-cherrypick-robot

@tjungblu: new pull request created: #791

In response to this:

/cherry-pick release-4.19 release-4.18 release-4.17 release-4.16

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-cherrypick-robot

@tjungblu: new pull request created: #792

In response to this:

shoot, we've already branched

/cherry-pick release-4.20 release-4.19 release-4.18 release-4.17 release-4.16

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-merge-robot
Contributor

Fix included in accepted release 4.21.0-0.nightly-2025-09-18-193221
