
Desired size for both self-managed and EKS-managed node groups not working after EKS deployment #1924

Closed
sourabhsharma487 opened this issue Mar 8, 2022 · 5 comments

sourabhsharma487 commented Mar 8, 2022

Scenario: I needed to provision a couple of EKS clusters using both EKS-managed and self-managed node groups. I provisioned them with the code below using version 18.8.1 and was able to deploy both node group types.

Problem statement: after the EKS cluster has been deployed, this code does not apply changes to `desired_size` (to add or remove nodes as needed).

Versions:
- terraform-aws-eks: 18.8.1
- Terraform: 1.1.6
- EKS: 1.21
- OS: Windows 10 Enterprise

`terraform-aws-eks/main.tf`:

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "18.8.1"

  vpc_id                          = var.vpc_id
  subnet_ids                      = var.public_subnets
  cluster_endpoint_private_access = var.cluster_endpoint_private_access
  cluster_endpoint_public_access  = var.cluster_endpoint_public_access
  cluster_name                    = var.cluster_name
  cluster_version                 = local.cluster_version
  tags                            = local.tags

  eks_managed_node_group_defaults = {
    ami_type               = "AL2_x86_64"
    disk_size              = 20
    instance_types         = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
    vpc_security_group_ids = [aws_security_group.all_worker_mgmt.id]
  }

  eks_managed_node_groups = {
    # blue = {}
    green = {
      min_size       = 1
      max_size       = 5
      desired_size   = 4
      instance_types = ["t3.large"]
      capacity_type  = "SPOT"
      labels = {
        Environment = "eks-terraform"
      }
      taints = {
        dedicated = {
          key    = "dedicated"
          value  = "gpuGroup"
          effect = "NO_SCHEDULE"
        }
      }
      tags = {
        Maintainer = "Sourabh Sharma"
      }
    }
  }
}
```

@bryantbiggs (Member) commented:

Desired size is ignored due to the nature of Kubernetes and the predominant use of autoscaling (e.g. Cluster Autoscaler, Karpenter). Therefore, changing `desired_size` after provisioning will not influence the number of nodes. If you need to force more nodes into the pool, you can raise `min_size` as a workaround.
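For context, this behavior comes from a `lifecycle` rule on the underlying `aws_eks_node_group` resource that tells Terraform to stop reconciling `desired_size` after creation. A minimal sketch of that pattern (simplified and with hypothetical names; not the module's exact source):

```hcl
resource "aws_eks_node_group" "green" {
  cluster_name    = "my-cluster"          # hypothetical values for illustration
  node_group_name = "green"
  node_role_arn   = aws_iam_role.node.arn # hypothetical IAM role
  subnet_ids      = var.public_subnets

  scaling_config {
    min_size     = 1
    max_size     = 5
    desired_size = 4 # honored only at creation time
  }

  lifecycle {
    # Subsequent changes to desired_size in code are ignored, leaving the
    # value under the control of an autoscaler (Cluster Autoscaler, Karpenter).
    ignore_changes = [scaling_config[0].desired_size]
  }
}
```

With this in place, autoscaler-driven changes to the desired capacity survive a `terraform apply` instead of being reverted to the value in code.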

@llacoste commented:

> Desired size is ignored due to the nature of Kubernetes and the predominant use of autoscaling (e.g. Cluster Autoscaler, Karpenter). Therefore, changing `desired_size` after provisioning will not influence the number of nodes. If you need to force more nodes into the pool, you can raise `min_size` as a workaround.

I'm afraid increasing the min size only works if it is smaller than or equal to your desired size. If you increase the minimum beyond the desired capacity, you will get an error: `InvalidParameterException: Minimum capacity <Some Number> can't be greater than desired size <Some smaller number>`
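In other words, the workaround has an ordering constraint: the new `min_size` must not exceed the node group's current desired capacity. A sketch with illustrative numbers, assuming the current desired size in EKS is 4:

```hcl
eks_managed_node_groups = {
  green = {
    # Current desired capacity in EKS: 4
    min_size = 4   # OK: min_size <= current desired size, forces at least 4 nodes
    # min_size = 6 # rejected: InvalidParameterException, min > desired
    max_size = 8
  }
}
```

To force the pool above its current desired capacity, it seems the desired size first has to be raised outside Terraform (console or API) before a larger `min_size` will be accepted.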

@gordonmurray commented:

> Desired size is ignored due to the nature of Kubernetes and the predominant use of autoscaling (e.g. Cluster Autoscaler, Karpenter).

This makes sense. However, the desired size, as well as min/max, can be changed in the EKS UI in the AWS Console. If the AWS UI can do it, I would assume it should also be possible for the Terraform module to do it.

*(Screenshot from 2022-08-11: the EKS console node group scaling settings.)*
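For what it's worth, the console works because it calls the EKS `UpdateNodegroupConfig` API directly. The plain `aws_eks_node_group` resource tracks `desired_size` the same way when the `ignore_changes` rule is absent; a minimal sketch outside the module (names are hypothetical):

```hcl
resource "aws_eks_node_group" "standalone" {
  cluster_name    = "my-cluster"          # hypothetical values for illustration
  node_group_name = "standalone"
  node_role_arn   = aws_iam_role.node.arn # hypothetical IAM role
  subnet_ids      = var.public_subnets

  scaling_config {
    min_size     = 1
    max_size     = 5
    desired_size = 2 # tracked: changing this value triggers an update on apply
  }

  # No lifecycle ignore_changes here, so Terraform reconciles desired_size;
  # the trade-off is that it will fight any autoscaler that also adjusts it.
}
```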


github-actions bot commented Nov 9, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators on Nov 9, 2022
