Google Cloud: Update instances on instance group when instance_template is changed #3875
Good catch on the update issue. I will implement approach 1, since we'd rather not support the Alpha API as it's subject to change.
As you pointed out, it's bad in prod to restart all the instances at once. So I added the …
Thank you @lwander, it sounds like a good idea! Since we're testing Terraform on our new staging environment, I'll try out your PR locally to see if it works well.
If you make the IGM depend on the template, then changing the template should recreate the IGM (and all its instances). If this doesn't happen, that's a core bug. Of course that's not the behavior you want :) What I would actually do is create an IGM for each version of your software -- then you can independently scale them / tear them down and implement whatever phased rollout policy you want.
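The per-version IGM idea above might be sketched in Terraform like this. All names, the zone, the machine type, and the image are illustrative assumptions, not taken from the issue; the point is only that each version gets its own template and group, so rollouts happen by adjusting `target_size` on each group independently:

```hcl
# Hypothetical sketch: one template + one IGM per software version.
# Phased rollout = scale app_v2 up while scaling app_v1 down.

resource "google_compute_instance_template" "app_v2" {
  name         = "app-template-v2"      # versioned name (assumption)
  machine_type = "n1-standard-1"

  disk {
    source_image = "debian-cloud/debian-9"
  }

  network_interface {
    network = "default"
  }
}

resource "google_compute_instance_group_manager" "app_v2" {
  name               = "app-igm-v2"
  base_instance_name = "app-v2"
  zone               = "us-central1-a"
  instance_template  = "${google_compute_instance_template.app_v2.self_link}"
  target_size        = 1                # grow this as v1 shrinks
}
```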
No, …
Let's navel-gaze... In an ideal world one could imagine a declarative deployment agent being able to do 2 things:
This IGM rollout stuff gets you some but not all of the way to 2. You actually need application knowledge to do it properly, and I've only seen it done in PAAS providers. What IGM currently supports is definitely useful, although unfortunately it is exposed imperatively, so it requires some mapping to be exposed naturally in Terraform. However you can, with a bit of work, build (2) on top of (1) by executing multiple Terraform applies from a continuous deployment agent that is aware of your application's monitoring. However you'd need a ForceNew instance_template for that, which would invalidate the current support for (2). Maybe the right thing to do is like the startup_script -- have one ForceNew field and one updateable field.
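One way the ForceNew side of this can be wired up in Terraform is a replace-on-change template paired with `create_before_destroy`, so a changed template is built before the old one is destroyed and the IGM's reference stays valid during the swap. This is a sketch under assumed names and values, not the fix adopted in this issue:

```hcl
# Hypothetical sketch: ForceNew template rolled via create_before_destroy.
# name_prefix lets each replacement template get a fresh unique name.

resource "google_compute_instance_template" "app" {
  name_prefix  = "app-template-"
  machine_type = "n1-standard-1"

  disk {
    source_image = "debian-cloud/debian-9"
  }

  network_interface {
    network = "default"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "google_compute_instance_group_manager" "app" {
  name               = "app-igm"
  base_instance_name = "app"
  zone               = "us-central1-a"
  instance_template  = "${google_compute_instance_template.app.self_link}"
  target_size        = 3
}
```

A continuous deployment agent could then drive the phased rollout by running successive applies while watching application monitoring, as described above.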
For anyone reading this in the future: @sparkprime and I talked this out and agreed to keep …
Closing since #3892 has been merged |
Awesome, thanks! |
On the google_compute_instance_group_manager resource, you can specify an instance_template for the instance group. When I modify a google_compute_instance_template resource, the associated google_compute_instance_group_manager is updated, but its instances are not. I had to manually delete the VM instances so the instance group could recreate them with the new instance template. I can think of two possible approaches to this:

1. Recreate instances after updating instance groups
By using gcloud compute instance-groups managed recreate-instances after updating an instance group, you can trigger recreation of the instances. This is probably the easiest way to solve the problem, but the instance group will have no active instances for a while, which is not ideal in a production environment.
2. Use rolling update
By using gcloud alpha compute rolling-updates start, you can gracefully retire old instances and create new ones. It has several flags that can be used to control the instance count and the interval between batch updates. We used this command before we investigated Terraform, and it worked pretty well even though it's still in an alpha state.
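The two approaches above correspond roughly to the following gcloud invocations. Group, template, zone, and instance names are placeholders, and the exact flag set may differ by gcloud release, so treat these as illustrative rather than exact:

```shell
# Approach 1: recreate instances after the IGM's template is updated
# (instances in the list go down together, so the group briefly loses capacity)
gcloud compute instance-groups managed recreate-instances my-group \
    --zone us-central1-a \
    --instances instance-1,instance-2

# Approach 2 (alpha): rolling update that replaces instances in batches
gcloud alpha compute rolling-updates start \
    --group my-group \
    --template my-new-template \
    --zone us-central1-a
```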