Description
I've noticed that when the podTerminationGracePeriod is reached, NTH appears to crash; I'm unsure whether it's my settings or an actual problem with NTH. We unfortunately have some pods that sometimes hang around indefinitely, but I had figured that once the grace period was reached it would simply force-kill those pods.
aws-node-termination-handler arguments:
dry-run: false,
node-name: <snip>,
metadata-url: http://169.254.169.254,
kubernetes-service-host: 172.20.0.1,
kubernetes-service-port: 443,
delete-local-data: true,
ignore-daemon-sets: true,
pod-termination-grace-period: 120,
node-termination-grace-period: 120,
enable-scheduled-event-draining: false,
enable-spot-interruption-draining: false,
enable-sqs-termination-draining: true,
enable-rebalance-monitoring: false,
metadata-tries: 3,
cordon-only: false,
taint-node: false,
json-logging: false,
log-level: info,
webhook-proxy: ,
webhook-headers: <not-displayed>,
webhook-url: ,
webhook-template: <not-displayed>,
uptime-from-file: ,
enable-prometheus-server: true,
prometheus-server-port: 9092,
aws-region: us-east-1,
queue-url: <snip>,
check-asg-tag-before-draining: true,
managed-asg-tag: aws-node-termination-handler/managed,
aws-endpoint: ,
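For reference, my mental model (not verified against the NTH source) is that these flags end up configuring a kubectl-style drain, with pod-termination-grace-period acting as the per-pod eviction grace period and node-termination-grace-period as the overall drain timeout, roughly like the sketch below using k8s.io/kubectl/pkg/drain. The field mapping is my assumption, not the actual NTH code:

```go
// Illustrative sketch only -- my guess at how the grace-period flags could
// feed a kubectl-style drain helper; not the actual NTH source.
package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/kubectl/pkg/drain"
)

func newDrainHelper(client kubernetes.Interface) *drain.Helper {
	return &drain.Helper{
		Ctx:                 context.TODO(),
		Client:              client,
		Force:               true,
		IgnoreAllDaemonSets: true,              // ignore-daemon-sets: true
		DeleteEmptyDirData:  true,              // delete-local-data: true
		GracePeriodSeconds:  120,               // pod-termination-grace-period: 120
		Timeout:             120 * time.Second, // node-termination-grace-period: 120 (my assumption)
		Out:                 os.Stdout,
		ErrOut:              os.Stderr,
	}
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	_ = newDrainHelper(kubernetes.NewForConfigOrDie(cfg))
}
```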
I unfortunately don't have the logs, but the last bit of output is always the list of pods being evicted, followed by the error "There was a problem while trying to cordon and drain the node" saying it reached the pod grace period. NTH then exits with code 1 and a new pod is spun up, which sees the old SQS message and picks up where the last one left off. It seems like NTH dies right when it is trying to forcibly kill the nuisance pods (assuming it actually does this), because by the time the next pod comes in all of the pods are gone and it simply wraps up the lifecycle event and deletes the SQS message.
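To spell out the sequence I think is happening (again just my guess at the control flow, not NTH's actual code): the drain hits the grace-period timeout, the error surfaces as the "There was a problem while trying to cordon and drain the node" log, the process exits non-zero, and since the SQS message was never deleted the replacement pod receives it again and finishes the job:

```go
// Sketch of the failure sequence I think I'm seeing. The drain helpers are
// the real k8s.io/kubectl/pkg/drain functions; everything else (names,
// deleteSQSMessage) is a hypothetical placeholder, not an NTH identifier.
package sketch

import (
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/kubectl/pkg/drain"
)

// deleteSQSMessage stands in for whatever completes the lifecycle action and
// removes the queue message once the drain succeeds.
func deleteSQSMessage() {}

func cordonAndDrain(helper *drain.Helper, node *corev1.Node) error {
	if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
		return err
	}
	// With Timeout set to the node-termination-grace-period, a pod that never
	// finishes terminating should make this return a timeout error rather
	// than the drain being forced through.
	return drain.RunNodeDrain(helper, node.Name)
}

func handleTermination(helper *drain.Helper, node *corev1.Node) {
	if err := cordonAndDrain(helper, node); err != nil {
		// Roughly where I see "There was a problem while trying to cordon and
		// drain the node" before the pod dies.
		log.Printf("There was a problem while trying to cordon and drain the node: %v", err)
		os.Exit(1) // the replacement pod re-reads the SQS message and retries
	}
	deleteSQSMessage() // only reached on success, matching what I observe
}
```

If that reading is right, it matches my expectation that the stuck pods would be force-killed at the grace period rather than the handler exiting.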