Operation cannot be fulfilled exception #199
Comments
I think if it fails like this we should indeed not crash, but I don't think trying again will help, as the poller version that's trying to do this is out of date. It should have been killed and replaced by a new poller, but that probably just hasn't happened yet. I'll see if I can take a look at it soon ™️
I tested it locally; I think in Kubernetes it will be restarted and work fine. I will close this issue, I don't think it's important.
Yes, it should recover as is, but the restart counter will tick up :)
@arruzk I disagree: this is important and should be reopened. @Flydiverny It's more than a cosmetic increment of the restart counter: Kubernetes does exponential backoff when restarting exited containers. The more frequently the failure occurs, the longer Kubernetes will wait before attempting a restart, which results in a growing loss of availability for this pod, during which secrets are not polled or updated.

For example: in our development environment alone, the kubernetes-external-secrets container has crashed 9528 times over the last 4 days, 17 hours, with longer and longer periods where the parent pod sits in CrashLoopBackOff status, unable to maintain any secrets.

Currently this exception appears to be unhandled and the Node.js process is crashing, which likely preempts the poller replacement behavior you mention @Flydiverny. I agree this exception should be handled (and perhaps logged at info or debug levels) without crashing.
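As a rough illustration of the "handle and log instead of crash" behavior described above, here is a minimal sketch. It is not the project's actual poller code; `poll` and `logger` are hypothetical stand-ins for a single sync pass and a logging facility.

```js
// Minimal sketch: keep a periodic poll loop alive when a single sync fails.
// `poll` and `logger` are hypothetical; plug in the real poller and logger.
function startPollLoop(poll, logger, intervalMs = 10000) {
  return setInterval(async () => {
    try {
      await poll();
    } catch (err) {
      // Log and move on instead of letting an unhandled rejection crash the
      // process; the next tick (or a replacement poller) retries the sync.
      logger.info('poll failed, will retry on next interval', err);
    }
  }, intervalMs);
}
```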
Thanks for the quick fix @Flydiverny!
This is still happening on my cluster, with version
Note that I'm running 3 replicas simultaneously; only 2 replicas are crashing with this error, and the remaining replica is running perfectly. Edit: just noticed this was fixed in 2.2.0. I'm using the latest Helm chart (2.2.0), but it seems the app version is still 2.1.0, hence the confusion.
The application crashes if you change an ExternalSecret resource at the same time the application is trying to update the corresponding secret in Kubernetes.
Probably the best solution is to retry instead of crashing.
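To make the proposed retry concrete: the "Operation cannot be fulfilled" error is the Kubernetes API's 409 Conflict response when a write carries a stale resourceVersion. A minimal sketch of retry-on-conflict follows, assuming hypothetical `readSecret` / `updateSecret` helpers wrapping the Kubernetes API; this is not the project's actual implementation.

```js
// Minimal sketch: retry a secret update on 409 Conflict instead of crashing.
// `readSecret` and `updateSecret` are hypothetical wrappers around the
// Kubernetes API (read/replace Secret); `desired` is the Secret manifest.
async function upsertSecretWithRetry(readSecret, updateSecret, desired, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      // Re-read so the write carries the latest resourceVersion.
      const current = await readSecret(desired.metadata.name);
      desired.metadata.resourceVersion = current.metadata.resourceVersion;
      return await updateSecret(desired);
    } catch (err) {
      // 409 Conflict = "Operation cannot be fulfilled": the object changed
      // between our read and our write. Retry with a fresh read.
      const status = err.statusCode || (err.response && err.response.statusCode);
      if (status !== 409 || attempt === maxAttempts) throw err;
    }
  }
}
```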