waiting for CloudFormation stack to reach "DELETE_COMPLETE" status: RequestCanceled: waiter context canceled caused by: context deadline exceeded #843
Comments
You could try …
Thanks for the feedback, I'll retry through the console and report back.
Just looked at a previous failure and this is the error that caused the delete to fail:
I'm running …
Have a look at whether there is an ELB in that VPC. We have #103 to address that, and I certainly would like to prioritise it.
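As a quick way to check, something like the following lists load balancers attached to the cluster's VPC (a sketch assuming a configured AWS CLI; the VPC ID is a placeholder):

```shell
# Placeholder: substitute the ID of the VPC that eksctl is failing to delete.
VPC_ID="vpc-0123456789abcdef0"

# Classic load balancers (the kind #103 is about) report their VPC as VPCId:
aws elb describe-load-balancers \
  --query "LoadBalancerDescriptions[?VPCId=='${VPC_ID}'].LoadBalancerName" \
  --output text

# ALBs/NLBs are listed under the elbv2 namespace instead:
aws elbv2 describe-load-balancers \
  --query "LoadBalancers[?VpcId=='${VPC_ID}'].LoadBalancerName" \
  --output text
```

If both commands print nothing, a leftover ELB is probably not what is blocking the VPC delete.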
There isn't. I had an elasticache cache cluster and a cache cluster subnet on the VPC, but both are deleted before `eksctl delete` is run. I also tried adding `--timeout` and the operation still fails, so the problem might be somewhere else (and not just a matter of waiting longer).
Ok, so if you try deleting the VPC via the console, it should give you a few hints.
On Fri, 7 Jun 2019, 10:43 am Rafael Vanoni, ***@***.***> wrote:
There isn't. I had an elasticache cache cluster and a cache cluster subnet on the VPC, but both are deleted before eksctl delete is run. I also tried adding --timeout and the operation still fails, so the problem might be somewhere else (and not just a matter of waiting longer).
Just re-tried …
I'm still unable to delete the EKS cluster, even though all of its resources are gone. Any ideas?
Renaming this to more accurately indicate the error.
Ok, so you must have managed to delete the VPC. Please go to the CloudFormation console and delete the stack from there; if it fails on the first attempt, try to leave the VPC out (it will offer a menu).
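For reference, the console flow described above has a CLI equivalent (a sketch; the stack name and logical resource ID are placeholders, and `--retain-resources` is only honoured once the stack is already in `DELETE_FAILED`):

```shell
# Placeholder: substitute the actual eksctl-managed stack name.
STACK="eksctl-my-cluster-cluster"

# Retry the delete, leaving the blocking resource (here assumed to be the
# resource with logical ID "VPC") out of the deletion.
aws cloudformation delete-stack --stack-name "$STACK" --retain-resources "VPC"

# Block until the stack reaches DELETE_COMPLETE, the status the waiter in
# this issue was timing out on.
aws cloudformation wait stack-delete-complete --stack-name "$STACK"
```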
I've moved on from work involving this issue, so if you can't reproduce it on your end please feel free to close it. Thanks for taking a look at it.
Thanks for letting us know. This can be closed as it was addressed in #1010. |
I've been running into AWS timeouts when trying to delete a cluster (see output below), even with the wait option. As it stands, I have to go into the web console and delete the stack(s) manually.
I took a look at the eksctl source and I'm wondering if we could set `api.DefaultWaitTimeout` to an "infinite" value when the `-w` option is passed. I'd be happy to propose a PR if that makes sense to the dev team.