Description
Hi everyone, hope you're doing well.

I have a question about the intended way to configure autoscaling for the NGINX data plane; it may be more of a usage question than a bug report.

I would like to use a HorizontalPodAutoscaler (HPA) to automatically scale the `nginx` data plane pods. I'm running on a GKE Autopilot cluster, where the number of running pods directly impacts cost, so autoscaling is a critical feature for us.

For context, I'm already using a custom image because we were hitting the default `worker_connections` limit of 1024, which was causing instability in our high-traffic environment. My custom image raises this limit to 16k.
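To be concrete, the change baked into the custom image is the standard nginx `worker_connections` directive, so the generated config effectively ends up with something like this (illustrative fragment only, not the full NGF-templated config):

```nginx
# events block of the generated nginx.conf (illustrative)
events {
    worker_connections 16384;  # raised from the 1024 default we were hitting
}
```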
I tried creating a standard HPA resource targeting the `nginx` deployment. However, it appears to conflict with another controller: the HPA scales the deployment up based on CPU, but something almost immediately scales it back down to the replica count defined in the Helm `values.yaml` (or to 1 if none is specified). The result is a loop in which the HPA and another controller "fight" over the replica count.
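For reference, the HPA I created was along these lines (a minimal sketch using the standard `autoscaling/v2` API; the deployment name, replica bounds, and CPU threshold reflect my setup and may need adjusting):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
  namespace: nginx-gateway
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx          # the NGF-managed data plane deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```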

My Setup
Here is the configuration I am using:
1. CRD Installation
I installed the CRDs using the following command:

```shell
kubectl kustomize "https://github.com/nginx/nginx-gateway-fabric/config/crd/gateway-api/standard?ref=v2.0.2" | kubectl apply -f -
```
2. Helm Values (`ngf-values.yaml`)
I installed the chart using these values:
```yaml
nginxGateway:
  kind: deployment
  gatewayClassName: nginx
  config:
    logging:
      level: debug
  replicas: 1
  resources:
    requests:
      cpu: "100m"
      memory: "128Mi"
    limits:
      cpu: "200m"
      memory: "256Mi"
nginx:
  image:
    repository: my-custom-repo/ngf-custom # Placeholder for my custom image
    tag: 2.0.2-16k
  kind: deployment
  replicas: 3 # I have also tried removing this line
  container:
    resources:
      requests:
        cpu: 2
        memory: "2048Mi"
  service:
    type: LoadBalancer
    loadBalancerIP: "STATIC_IP_ADDRESS"
    externalTrafficPolicy: Local
```
3. Gateway Manifest
I created the following `Gateway` resource:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: nginx
  namespace: nginx-gateway
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prd
spec:
  gatewayClassName: nginx
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: zedia-cert-tls
            kind: Secret
      allowedRoutes:
        namespaces:
          from: All
```
My Question
Is autoscaling the data plane pods a supported feature? If so, what is the correct way to configure it so that it doesn't conflict with the controller? Am I missing a setting in the `values.yaml` or in a CRD that enables HPA support and disables static replica management?

Thank you for your time!