Support ASG Instance Refresh #13785
If the syntax is acceptable, I'm open to writing a PR against it.

Any progress on this?

Hi all! 👋 Just wanted to let you know that we recognize the value and huge popularity of this feature for the provider and hope to have an update for you sometime after we ship v3.0.

@breathingdust would it be included in v2.x of the provider? Or just 3.x?

Hi @RyPeck 👋. At this time we will not be backporting features into v2.x releases, so all new features from this point on are to be found in v3.x releases exclusively.

Can the PR be merged now since v3 is out and all tests passed on the PR? I think this would be a fantastic feature that I am really keen to take advantage of :D

Any info on when this will get into Terraform? This is a must. There is a workaround with launch configurations to force a recreation of the ASG, but I don't have a way to do this when using launch templates...

Any updates on this?

@dragosrosculete could you please share the workaround with launch configuration?
Sorry for the late reply; this is how to do it with launch templates. I am doing it for an EKS worker, but you can take just the part you need. What you are interested in is the `name_prefix`: I generate one in the launch template and then reference it in the ASG.

```hcl
locals {
  kubernetes_node_self_managed_general_userdata = <<USERDATA
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh --apiserver-endpoint '${aws_eks_cluster.kubernetes.endpoint}' --b64-cluster-ca '${aws_eks_cluster.kubernetes.certificate_authority[0].data}' '${var.kubernetes_cluster_name}' --kubelet-extra-args '--node-labels=lifecycle=Ec2Spot,kube/nodetype=general'
USERDATA
}

resource "aws_launch_template" "kubernetes_node_self_managed_general" {
  iam_instance_profile {
    name = aws_iam_instance_profile.kubernetes_node.name
  }

  name_prefix            = "eks-general"
  update_default_version = true
  image_id               = var.kubernetes_worker_ami
  instance_type          = "t3a.large"

  block_device_mappings {
    device_name = "/dev/xvda"
    ebs {
      volume_size = 30
    }
  }

  vpc_security_group_ids = [aws_security_group.kubernetes_node_self_managed.id]
  user_data              = base64encode(local.kubernetes_node_self_managed_general_userdata)
  key_name               = var.kubernetes_ec2_ssh_key
}

resource "aws_autoscaling_group" "kubernetes_node_self_managed_general" {
  desired_capacity = 1
  max_size         = 4
  min_size         = 1

  # Forcing a rolling update: the name changes with every launch template
  # version, which (with create_before_destroy) replaces the whole ASG.
  # No other option until Instance Refresh is implemented.
  name_prefix = aws_launch_template.kubernetes_node_self_managed_general.latest_version

  vpc_zone_identifier = [
    data.terraform_remote_state.current_account_network.outputs.subnet_old_kubernetes_internal_a,
    data.terraform_remote_state.current_account_network.outputs.subnet_old_kubernetes_internal_b,
    data.terraform_remote_state.current_account_network.outputs.subnet_old_kubernetes_internal_c
  ]

  mixed_instances_policy {
    launch_template {
      launch_template_specification {
        launch_template_id = aws_launch_template.kubernetes_node_self_managed_general.id
        version            = "$Latest"
      }
      override {
        instance_type = "t3a.large"
      }
    }

    instances_distribution {
      on_demand_base_capacity                  = "1"
      on_demand_percentage_above_base_capacity = "0"
    }
  }

  lifecycle {
    ignore_changes        = [desired_capacity]
    create_before_destroy = true
  }

  tags = concat(
    [
      {
        "key"                 = "Name"
        "value"               = var.kubernetes_cluster_name
        "propagate_at_launch" = true
      },
      {
        "key"                 = "Managed"
        "value"               = "terraform"
        "propagate_at_launch" = true
      },
      {
        "key"                 = "kubernetes.io/cluster/${var.kubernetes_cluster_name}"
        "value"               = "owned"
        "propagate_at_launch" = true
      },
      {
        "key"                 = "k8s.io/cluster-autoscaler/${var.kubernetes_cluster_name}"
        "value"               = "owned"
        "propagate_at_launch" = true
      },
      {
        "key"                 = "k8s.io/cluster-autoscaler/enabled"
        "value"               = "true"
        "propagate_at_launch" = true
      }
    ]
  )
}
```
This has been released in version 3.22.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
Community Note
Description
When an auto-scaling group is updated in Terraform (e.g. modification of the Launch Template version), its instances remain unchanged, and a complete roll-out requires intervention by other means.
This is in contrast to AWS::AutoScaling::AutoScalingGroup in CloudFormation, which orchestrates a roll-out, and waits until it's completed or times out.
AWS have recently introduced ASG Instance Refresh, which can be used to recycle the instances of an auto-scaling group.
Use ASG Instance Refresh to give the user the option to automatically refresh an entire ASG in response to changes to an auto-scaling group. The proposed change introduces a new `aws_autoscaling_group` block, `instance_refresh`, that instructs Terraform to create and monitor an instance refresh in response to any changes to the ASG's properties, except for (1) those that do not affect individual instances, such as `max_size`; and (2) properties explicitly ignored by way of `lifecycle`. The `instance_refresh` block would be disabled by default, thus ensuring backwards compatibility.
New or Affected Resource(s)
Potential Terraform Configuration
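No configuration was included in this section of the issue body. As a sketch of what the proposal describes, the `instance_refresh` block that ultimately shipped in provider v3.22.0 attaches to an existing `aws_autoscaling_group` roughly like this (resource names and attribute values here are illustrative, not taken from the issue):

```hcl
resource "aws_autoscaling_group" "example" {
  name_prefix         = "example-"
  min_size            = 1
  max_size            = 4
  desired_capacity    = 2
  vpc_zone_identifier = [aws_subnet.example.id]

  launch_template {
    id      = aws_launch_template.example.id
    version = aws_launch_template.example.latest_version
  }

  # Opt-in: when this block is present, Terraform starts an instance
  # refresh after relevant changes (e.g. a new launch template version).
  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 50
    }
  }
}
```

With the block omitted, behavior is unchanged, which matches the backwards-compatibility requirement stated in the description.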
References