Unable to modify launch configurations #1584
Comments
Adding:
causes:
and destroying just those two resources and attempting to recreate causes:
This is fixed with a combination of the `lifecycle` setting and removing the names from the launch configurations, so that a generated name is used instead!
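A minimal sketch of that fix (the resource name and AMI ID here are illustrative, not taken from this thread): omit the hard-coded `name` so Terraform generates one, and let the replacement launch configuration be created before the old one is destroyed.

```hcl
resource "aws_launch_configuration" "example" {
  # No "name" attribute: Terraform generates a unique name, so a
  # changed launch configuration can be created alongside the old one.
  image_id      = "ami-12345678"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}
```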
@maguec I'm still running into the same issue when I create a launch config. In this example, once the launch config was created, I changed the instance size from m1 to m3:
Running apply:
I've hit this same issue when switching AMIs. My solution for now is to
I'm also hitting this issue when switching the AMI in my
The combination of launch configurations being immutable and also unable to be deleted while referenced by an ASG makes this a tricky scenario. Adding:
to your launch configs should take care of the issue.
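The snippet elided from the comment above is presumably the `lifecycle` block; its usual form is (a sketch, not the comment's exact text):

```hcl
lifecycle {
  # Create the replacement launch configuration before destroying the
  # old one, so the ASG is never left referencing a deleted LC.
  create_before_destroy = true
}
```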
@phinze are you supposed to comment them all out? Otherwise, it runs into the cycle issue. I guess I'm confused about this.
I experienced the same issue today while trying to update the AMI ID for my autoscale launch configuration. Not sure whether the issues have been resolved? By adding "lifecycle" into the config, it ruined the
I am using Terraform v0.6.8.
@anhcuong I have no issues updating the AMI ID for an ASG, using the new name-prefix in the launch configuration and using
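The `name_prefix` approach mentioned here can be sketched like this (all names and IDs are hypothetical, not from the reporter's config):

```hcl
resource "aws_launch_configuration" "web" {
  # name_prefix gets a unique suffix appended, so replacements never collide
  name_prefix   = "web-lc-"
  image_id      = "ami-12345678"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  # Interpolating .name makes the ASG pick up each newly generated LC name
  launch_configuration = "${aws_launch_configuration.web.name}"
  availability_zones   = ["us-west-2a"]
  min_size             = 1
  max_size             = 2
}
```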
Currently I am using the
FWIW, issues with
I am unable to modify the launch configuration:
The following commands work
but when attempting to change the AMI associated with blue_ami I receive the following error:
provider "aws" {
region = "${var.region}"
}
resource "aws_route53_record" "clmweb-dns" {
zone_id = "${var.route53_zone_id}"
name = "${var.environment}-clm-web.${var.route53_domain}"
type = "CNAME"
ttl = "300"
records = ["${aws_elb.clmweb_lb.dns_name}"]
}
resource "aws_elb" "clmweb_lb" {
name = "clm-web-terraform-elb"
internal = true
security_groups = ["${aws_security_group.clm-web_server.id}"]
subnets = ["${var.subnet_ids.0}", "${var.subnet_ids.1}", "${var.subnet_ids.2}"]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTP:80/"
interval = 10
}
}
resource "aws_security_group" "clm-web_server" {
name = "clm-web_server"
description = "Used for all clm-web servers"
vpc_id = "${var.vpc_id}"
ingress { #SSH in from the VPC
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [ "10.0.0.0/8" ]
}
ingress { #HTTP in from the VPC
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = [ "10.0.0.0/8" ]
}
}
resource "aws_launch_configuration" "clm-web_asgconf_blue" {
name = "clm-web_asgconf_blue"
image_id = "${var.blue_ami}"
instance_type = "${var.instance_size}"
key_name = "ClearCareVPC"
user_data = "{\"run_env\":\"${var.environment}\", \"role\": \"clm-web\", \"atlas_token\": \"${var.atlas_token}\", \"atlas_account\": \"${var.atlas_account}\"}"
}
resource "aws_autoscaling_group" "clm-web_asg_blue" {
availability_zones = ["${var.availability_zones.0}", "${var.availability_zones.1}", "${var.availability_zones.2}"]
name = "clm-web_asg_blue"
max_size = "${var.blue_count}"
min_size = "${var.blue_count}"
health_check_grace_period = 300
load_balancers = ["${aws_elb.clmweb_lb.name}"]
health_check_type = "EC2"
desired_capacity = "${var.blue_count}"
force_delete = true
launch_configuration = "${aws_launch_configuration.clm-web_asgconf_blue.name}"
vpc_zone_identifier = ["${var.subnet_ids.0}", "${var.subnet_ids.1}", "${var.subnet_ids.2}"]
tag {
key = "Name"
value = "clm-web"
propagate_at_launch = true
}
tag {
key = "environment"
value = "${var.environment}"
propagate_at_launch = true
}
tag {
key = "role"
value = "clm-web"
propagate_at_launch = true
}
tag {
key = "color"
value = "blue"
propagate_at_launch = true
}
}
resource "aws_launch_configuration" "clm-web_asgconf_green" {
name = "clm-web_asgconf_green"
image_id = "${var.green_ami}"
instance_type = "${var.instance_size}"
key_name = "ClearCareVPC"
user_data = "{\"run_env\":\"${var.environment}\", \"role\": \"clm-web\", \"atlas_token\": \"${var.atlas_token}\", \"atlas_account\": \"${var.atlas_account}\"}"
}
resource "aws_autoscaling_group" "clm-web_asg_green" {
availability_zones = ["${var.availability_zones.0}", "${var.availability_zones.1}", "${var.availability_zones.2}"]
name = "clm-web_asg_green"
max_size = "${var.green_count}"
min_size = "${var.green_count}"
health_check_grace_period = 300
load_balancers = ["${aws_elb.clmweb_lb.name}"]
health_check_type = "EC2"
desired_capacity = "${var.green_count}"
force_delete = true
launch_configuration = "${aws_launch_configuration.clm-web_asgconf_green.name}"
vpc_zone_identifier = ["${var.subnet_ids.0}", "${var.subnet_ids.1}", "${var.subnet_ids.2}"]
tag {
key = "Name"
value = "clm-web"
propagate_at_launch = true
}
tag {
key = "environment"
value = "${var.environment}"
propagate_at_launch = true
}
tag {
key = "role"
value = "clm-web"
propagate_at_launch = true
}
tag {
key = "color"
value = "green"
propagate_at_launch = true
}
}
variable "blue_ami" {
description = "the AMI to use for Blue"
}
variable "blue_count" {
description = "the number of Blue instances to run"
}
variable "green_ami" {
description = "the AMI to use for Green"
}
variable "green_count" {
description = "the number of Green instances to run"
}
variable "environment" {
description = "The environment"
}
variable "availability_zones" {
description = "A mapping of the availability zones for the region"
default = {
"0" = "us-west-2a"
"1" = "us-west-2b"
"2" = "us-west-2c"
}
}
variable "subnet_ids" {
description = "A mapping of the subnet IDs for the region"
default = {
"0" = "subnet-12345678"
"1" = "subnet-12345679"
"2" = "subnet-12345670"
}
}
variable "route53_zone_id" {
description = "The zone_id of the route53 zone"
default = "8675309"
}
variable "route53_domain" {
description = "The name of the route53 zone"
default = "tommytutone.it"
}
variable "region" {
description = "the region we are going to run in"
}
variable "vpc_id" {
description = "VPC ID"
}
variable "instance_size" {
description = "EC2 instance size"
default = "t2.micro"
}
variable "atlas_token" {
description = "The atlas token for the consul services"
default = "jennyjennywhocanIturnto"
}
variable "atlas_account" {
description = "The account for the consul services"
default = "tommytutone"
}