[GLBC] GCE resources of non-snapshotted ingresses are not deleted #31
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/remove-lifecycle rotten

Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/remove-lifecycle stale

/lifecycle frozen

This was fixed by #590 and is shipped in v1.5.0.
From @nicksardo on March 13, 2017 20:34
For the GLBC to know about load balancer GCE resources, it must first have snapshotted the ingress object that created those resources. The leak occurs when an ingress object is deleted while the GLBC is offline or starting up: the controller never snapshots that ingress, so it never learns which GCE resources belong to it.
The following are test logs from gce-gci-latest-upgrade-etcd:
Successful Test #334 : glbc.log
Failed Test #332 : glbc.log
The L7Pool `GC` func deletes resources of ingresses stored in the `l7.snapshotter` that are not mentioned by name in the arg slice. Because the test ingress was never stored in the snapshot cache, the GCE resources are never deleted. The failed test log also contains multiple blocks of the following:
The GLBC knows about the extraneous backends because the BackendPool uses the `CloudListingPool`. This implementation calls a `List` func to reflect the current state of GCE.

Copied from original issue: kubernetes/ingress-nginx#431
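To illustrate the leak described above, here is a minimal hedged sketch of the snapshot-based GC pattern. The types and names (`pool`, `snapshot`, `GC`) are illustrative only and are not the actual GLBC API; the point is that GC can only delete resources it has previously snapshotted.

```go
package main

import "fmt"

// pool is a hypothetical stand-in for a snapshot-based resource pool:
// it only tracks ingresses this controller instance has seen.
type pool struct {
	snapshot map[string]bool // names of snapshotted ingresses
}

// GC deletes every snapshotted resource whose name is not in keep,
// and returns the names it deleted. Resources created before the
// controller started (never snapshotted) are invisible here, so
// they leak.
func (p *pool) GC(keep []string) []string {
	keepSet := map[string]bool{}
	for _, k := range keep {
		keepSet[k] = true
	}
	var deleted []string
	for name := range p.snapshot {
		if !keepSet[name] {
			deleted = append(deleted, name)
			delete(p.snapshot, name)
		}
	}
	return deleted
}

func main() {
	p := &pool{snapshot: map[string]bool{"ing-a": true}}
	// Suppose "ing-b" also exists in GCE but was created before this
	// process started: it is absent from the snapshot, so GC can
	// never delete it, no matter what keep list is passed.
	fmt.Println(p.GC(nil)) // prints [ing-a]
}
```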
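By contrast, the listing approach described for the BackendPool can be sketched as follows. This is an assumption-laden illustration, not the real `CloudListingPool` implementation: the idea is that the pool's view is rebuilt from a `List` call against the cloud, so resources created before a restart are still discovered.

```go
package main

import "fmt"

// cloudListingPool is a hypothetical sketch: instead of trusting an
// in-memory snapshot, it calls a list function to reflect the current
// cloud state on demand.
type cloudListingPool struct {
	list func() []string // e.g. would wrap a GCE List API call
}

// Snapshot rebuilds the pool's view of existing resources from the
// cloud, deduplicating by name.
func (c *cloudListingPool) Snapshot() map[string]bool {
	out := map[string]bool{}
	for _, name := range c.list() {
		out[name] = true
	}
	return out
}

func main() {
	// Backend "be-old" exists in the cloud even though this process
	// never created it; listing still discovers it, so a GC built on
	// this view could delete it.
	c := &cloudListingPool{
		list: func() []string { return []string{"be-old", "be-new"} },
	}
	fmt.Println(len(c.Snapshot())) // prints 2
}
```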