Description
Prerequisites
Please answer the following questions for yourself before submitting an issue.
- I am running the latest version
- I checked the documentation and found no answer
- I checked to make sure that this issue has not already been filed
Expected Behavior
The `terraform apply` command in the layer2-k8s folder must not return an error when nothing has changed.
Current Behavior
If you set `aws_loadbalancer_controller_enable = true` and run `terraform apply` twice in the layer2-k8s folder, you get an error caused by the terraform helm provider (hashicorp/terraform-provider-helm#711).
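For reference, enabling the controller is a one-line variable change; a minimal sketch (the variable name comes from this issue, the terraform.tfvars file name is an assumption):

```hcl
# terraform.tfvars (file name assumed; any variable-setting mechanism works)
aws_loadbalancer_controller_enable = true
```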
Failure Information (for bugs)
```
Error: Provider produced inconsistent final plan

When expanding the plan for
module.eks_alb_ingress[0].helm_release.aws_loadbalancer_controller to include new values
learned so far during apply, provider "registry.terraform.io/hashicorp/helm"
produced an invalid new value for .manifest: was
cty.StringVal("...long string-escaped JSON block of the old manifest..."),
but now
cty.StringVal("...long string-escaped JSON block of the new manifest...").

This is a bug in the provider, which should be reported in the provider's own
issue tracker.
```
If you set the following for the helm provider:

```hcl
experiments {
  manifest = false
}
```

you won't get the error above.
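For context, the experiments block sits inside the helm provider configuration. A minimal placement sketch (connection settings omitted; the comment reflects my understanding of the manifest experiment, which pre-renders the chart manifest for plan-time diffs):

```hcl
provider "helm" {
  # kubernetes connection settings omitted

  experiments {
    # With manifest = false the provider does not render and diff the
    # full chart manifest at plan time, so the plan-time and apply-time
    # values of .manifest cannot diverge.
    manifest = false
  }
}
```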
Steps to Reproduce
Please provide detailed steps for reproducing the issue.
- switch to the layer2-k8s folder
- set aws_loadbalancer_controller_enable = true
- run terraform apply
- run terraform apply again (a shell sketch follows below)
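The same steps as a shell sketch (the tfvars edit is just one assumed way to set the variable):

```sh
cd layer2-k8s

# assumed mechanism; set the variable however the layer normally does
echo 'aws_loadbalancer_controller_enable = true' >> terraform.tfvars

terraform apply   # first run: succeeds and installs the controller
terraform apply   # second run: fails with "Provider produced inconsistent final plan"
```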
Context
- Affected module version:
- OS:
- Terraform version: 0.15.1
Any other relevant info including logs
There is a workaround (kubernetes-sigs/aws-load-balancer-controller#2264), but it is not available in the helm chart repository yet.