feat: Add support for additional volumes in launch templates and launch configurations #800
Conversation
Why is the code format check failing on local.tf? |
Is there a valid use case for extra volumes mounted to worker nodes? Can the same be done with managed nodes? My thinking is that Kubernetes nodes are autoscaled and would be recycled / scaled up and down by the Cluster Autoscaler. EBS volumes are generally destroyed when an instance is terminated. I'd think that stateful workloads in need of block storage such as Kafka, MySQL or Ceph would use PersistentVolumes and PersistentVolumeClaims along with an appropriate CSI driver to create, mount and handle the lifecycle of volumes? I might be wrong and may have missed some valuable context somewhere. I'm also not an expert in that field so always appreciate these learning opportunities 😄 |
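A minimal sketch of the PVC-based approach described above, driven from Terraform with the kubernetes provider; the claim name and the gp2 storage class are illustrative assumptions, not part of this PR:

```hcl
# Dynamically provision an EBS-backed volume through a StorageClass,
# letting the CSI driver create, attach and delete it for the workload.
resource "kubernetes_persistent_volume_claim" "kafka_data" {
  metadata {
    name = "kafka-data" # hypothetical claim name
  }

  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "gp2" # assumes the default EKS EBS storage class

    resources {
      requests = {
        storage = "200Gi"
      }
    }
  }
}
```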
First, I am also learning, not an expert at all. For most stateful applications, as you mention, claiming storage through CSI is the way to request it. However, for the storage layer itself, say Ceph, I found it easier to manage and control by manually specifying or adding devices, keeping a list of specific nodes and their devices. In my case, I was using rook.io, and to make it similar to on-prem, this was the only way. Of course, I would love to see other people's opinions on this matter. And of course, for this I would specify that the ASG should not scale in automatically (protect_from_scale_in = true). |
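A sketch of the kind of storage node group described above, combining this PR's `additional_volumes` with `protect_from_scale_in`; the module source is the registry address, and the group name, instance type and sizes are illustrative:

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ... cluster settings elided ...

  worker_groups_launch_template = [
    {
      name                 = "ceph-storage"
      instance_type        = "m5.large"
      asg_desired_capacity = 3

      # Keep storage nodes out of automatic scale-in so rook/Ceph
      # devices are not destroyed underneath the cluster.
      protect_from_scale_in = true

      # Raw devices for rook/Ceph to claim directly.
      additional_volumes = [{
        block_device_name     = "/dev/sdb"
        volume_size           = 500
        delete_on_termination = false
      }]
    },
  ]
}
```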
With the new AWS Bottlerocket OS, the AMI for the worker nodes automatically mounts a secondary volume with a default size of 20GB to act as the containerd cache for the node. In order to change the size of this volume, I need the ability to override this default size with something larger. I literally just wrote something very similar to this patch, so it would be great to get the PR merged in. |
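With this PR, overriding the Bottlerocket cache volume could look something like the following sketch, assuming Bottlerocket exposes its data volume as `/dev/xvdb` (check the AMI's block device mappings before relying on this):

```hcl
additional_volumes = [{
  # Matches the Bottlerocket data volume's device name, so this mapping
  # overrides the AMI's default 20GB volume rather than adding a new one.
  block_device_name     = "/dev/xvdb"
  volume_size           = 100
  delete_on_termination = true
}]
```

In other words, the volume used for the cache would be selected by reusing the device name the AMI already maps, not by a dedicated flag.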
It looks like the lint issue is a missing whitespace at
I messed up my git history a little bit; I think I fixed it. (Sorry, I'm not much of an expert.) @snstanton, does this PR help your use case? How would you specify which volume is the one being used for the cache? |
Now the user can specify additional volumes for their instances. This is especially useful when the cluster is going to be used for specific applications such as storage.

Example 1 (just required fields):

```hcl
additional_volumes = [{
  block_device_name     = "/dev/sdb"
  volume_size           = 200
  delete_on_termination = false
}]
```

Example 2 (multiple additional volumes and all features):

```hcl
additional_volumes = [{
  block_device_name     = "/dev/sdb"
  volume_size           = 200
  volume_type           = "io1"
  iops                  = 100
  encrypted             = true
  kms_key_id            = "arn:aws:kms:us-east-1:89349:key/a4-ds-34"
  delete_on_termination = false
}, {
  block_device_name     = "/dev/sdc"
  volume_size           = 500
  delete_on_termination = false
}]
```
Yes, in fact I branched and added this PR and it works perfectly. Here's what I have in my config: |
You can fix the Semantic Pull Request failure by changing the PR title to something like: `feat: Add support for additional volumes in launch templates and launch configurations` |
Thanks @jaimehrubiks for your PR. I just added some minor comments/questions.
Furthermore, can you add the same logic to the launch configuration?
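For the launch configuration side, the same mapping would presumably be a `dynamic "ebs_block_device"` block; a sketch with hypothetical variable and local names. Note one design constraint: unlike launch templates, `aws_launch_configuration` block devices do not accept a per-volume `kms_key_id`, so that field would have to be dropped there.

```hcl
resource "aws_launch_configuration" "workers" {
  image_id      = var.worker_ami_id # hypothetical variable
  instance_type = "m5.large"

  # Render one ebs_block_device block per entry in additional_volumes.
  dynamic "ebs_block_device" {
    for_each = local.additional_volumes # hypothetical local

    content {
      device_name           = ebs_block_device.value["block_device_name"]
      volume_size           = ebs_block_device.value["volume_size"]
      volume_type           = lookup(ebs_block_device.value, "volume_type", null)
      iops                  = lookup(ebs_block_device.value, "iops", null)
      encrypted             = lookup(ebs_block_device.value, "encrypted", null)
      delete_on_termination = ebs_block_device.value["delete_on_termination"]
    }
  }
}
```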
…lumes As @barryib pointed out, locals already defaults to an empty list, so it is not needed in the expression, making it more readable Co-Authored-By: Thierno IB. BARRY <[email protected]>
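The pattern being referenced, sketched with assumed names for the module's inputs and locals:

```hcl
# workers_group_defaults already supplies [] for additional_volumes,
# so the per-group lookup can fall back to that default directly:
additional_volumes = lookup(
  var.worker_groups_launch_template[count.index],
  "additional_volumes",
  local.workers_group_defaults["additional_volumes"],
)

# rather than re-stating the empty-list default a second time:
# lookup(..., "additional_volumes",
#   lookup(local.workers_group_defaults, "additional_volumes", []))
```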
Will take a look at the launch configuration tomorrow to see if it is similar. |
All changes applied. I did not test this feature on worker groups (launch configuration) |
Nice @jaimehrubiks. Thanks for your contribution. |
PR o'clock
Description
Now the user can specify additional volumes for their instances. This is especially useful when the cluster is going to be used for specific applications such as storage.
Example 1 (just required fields) and Example 2 (multiple additional volumes and all features) are shown in full in the conversation above.
Relevant Issues:
#687
#572
Note: I only implemented this feature for launch template workers
Checklist