This repository has been archived by the owner on Jul 26, 2022. It is now read-only.

Operation cannot be fulfilled exception #199

Closed
arruzk opened this issue Nov 7, 2019 · 6 comments · Fixed by #215

Comments

@arruzk
Contributor

arruzk commented Nov 7, 2019

The application crashes if you change an ExternalSecret resource at the same time as the application is trying to update the secret in Kubernetes.

Probably the best solution would be to try one more time instead of crashing.


Error: Operation cannot be fulfilled on externalsecrets.kubernetes-client.io "test-secrets-syslogger": the object has been modified; please apply your changes to the latest version and try again
    at /Users/u/projects/tmp/kubernetes-external-secrets/node_modules/kubernetes-client/backends/request/client.js:214:25
    at Request._callback (/Users/u/projects/tmp/kubernetes-external-secrets/node_modules/kubernetes-client/backends/request/client.js:162:14)
    at Request.self.callback (/Users/u/projects/tmp/kubernetes-external-secrets/node_modules/request/request.js:185:22)
    at Request.emit (events.js:210:5)
    at Request.EventEmitter.emit (domain.js:476:20)
    at Request.<anonymous> (/Users/u/projects/tmp/kubernetes-external-secrets/node_modules/request/request.js:1161:10)
    at Request.emit (events.js:210:5)
    at Request.EventEmitter.emit (domain.js:476:20)
    at IncomingMessage.<anonymous> (/Users/u/projects/tmp/kubernetes-external-secrets/node_modules/request/request.js:1083:12)
    at Object.onceWrapper (events.js:299:28) {
  code: 409,
  statusCode: 409
}
[nodemon] app crashed - waiting for file changes before starting...
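The retry suggested above could be sketched roughly like this (a minimal sketch only; `fetchLatest` and `updateFn` are hypothetical stand-ins for re-reading the ExternalSecret and issuing the update, not functions from this codebase):

```javascript
// Minimal retry-on-conflict sketch. A 409 means the object was modified
// concurrently, so we re-read it to pick up the fresh resourceVersion
// before attempting the update again.
async function updateWithConflictRetry(fetchLatest, updateFn, maxAttempts = 3) {
  let lastErr;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const latest = await fetchLatest(); // fresh copy with current resourceVersion
    try {
      return await updateFn(latest);
    } catch (err) {
      if (err.statusCode !== 409) throw err; // only conflicts are retryable
      lastErr = err;
    }
  }
  throw lastErr; // still conflicting after maxAttempts
}
```

Re-fetching before each attempt matters: retrying the same stale object would just hit the same 409 again.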

@Flydiverny
Member

I think if it fails like this we should indeed not crash. I don't think trying again will help, as the poller that's trying to do this is out of date; it should have been killed and replaced by a new poller, but that probably just hasn't happened yet. I'll see if I can take a look at it soon ™️

@arruzk
Contributor Author

arruzk commented Nov 8, 2019

I tested it locally; I think in Kubernetes it will be restarted and will work fine. I will close this issue, as I don't think it's important.

@arruzk arruzk closed this as completed Nov 8, 2019
@Flydiverny
Member

Yes, it should recover as is, but the restart counter will tick up :)

@iAnomaly
Contributor

iAnomaly commented Nov 12, 2019

@arruzk I disagree: this is important and should be reopened.

@Flydiverny It's more than a cosmetic increment of the restart counter: Kubernetes does exponential backoff when restarting exited containers. The more frequently the failure occurs, the longer Kubernetes waits before attempting a restart, resulting in a growing loss of availability for this pod during which secrets are not polled or updated.

For example, in our development environment alone the kubernetes-external-secrets container has crashed 9528 times over the last 4 days, 17 hours, with longer and longer periods during which the parent pod sits in CrashLoopBackOff status, unable to maintain any secrets.

Currently this exception appears to be unhandled, and the Node.js process is crashing, which likely preempts the poller-replacement behavior you mention @Flydiverny. I agree this exception should be handled (and perhaps logged at the info or debug level) without crashing.
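Handling the conflict without crashing could look roughly like this (a sketch only; `pollSecret` and `logger` are hypothetical placeholders, not the project's actual poller API):

```javascript
// Sketch: treat a 409 during polling as a benign race rather than a fatal
// error. A stale poller that loses the race simply skips this cycle; any
// other error is rethrown so real failures still surface.
async function safePoll(pollSecret, logger) {
  try {
    await pollSecret();
  } catch (err) {
    if (err.statusCode === 409) {
      logger.debug('object modified concurrently, skipping this poll cycle');
      return;
    }
    throw err;
  }
}
```

The key point is that only the 409 conflict is swallowed; unexpected errors still propagate and can crash loudly.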

@iAnomaly
Contributor

Thanks for the quick fix @Flydiverny!

@TarekAS

TarekAS commented Nov 18, 2019

This is still happening on my cluster, with version 2.1.0.

{"level":50,"time":1574077106905,"pid":19,"hostname":"secretsmanager-kubernetes-external-secrets-7857bc646c-s5gh7","type":"Error","stack":"Error: Operation cannot be fulfilled on externalsecrets.kubernetes-client.io \"my-external-secret\": the object has been modified; please apply your changes to the latest version and try again\n    at /app/node_modules/kubernetes-client/backends/request/client.js:214:25\n    at Request._callback (/app/node_modules/kubernetes-client/backends/request/client.js:162:14)\n    at Request.self.callback (/app/node_modules/request/request.js:185:22)\n    at Request.emit (events.js:210:5)\n    at Request.EventEmitter.emit (domain.js:476:20)\n    at Request.<anonymous> (/app/node_modules/request/request.js:1161:10)\n    at Request.emit (events.js:210:5)\n    at Request.EventEmitter.emit (domain.js:476:20)\n    at IncomingMessage.<anonymous> (/app/node_modules/request/request.js:1083:12)\n    at Object.onceWrapper (events.js:299:28)","code":409,"statusCode":409,"msg":"failure while polling the secret kube-system/my-external-secret","v":1}
Error: Operation cannot be fulfilled on externalsecrets.kubernetes-client.io "my-external-secret": the object has been modified; please apply your changes to the latest version and try again
    at /app/node_modules/kubernetes-client/backends/request/client.js:214:25
    at Request._callback (/app/node_modules/kubernetes-client/backends/request/client.js:162:14)
    at Request.self.callback (/app/node_modules/request/request.js:185:22)
    at Request.emit (events.js:210:5)
    at Request.EventEmitter.emit (domain.js:476:20)
    at Request.<anonymous> (/app/node_modules/request/request.js:1161:10)
    at Request.emit (events.js:210:5)
    at Request.EventEmitter.emit (domain.js:476:20)
    at IncomingMessage.<anonymous> (/app/node_modules/request/request.js:1083:12)
    at Object.onceWrapper (events.js:299:28) {
  code: 409,
  statusCode: 409
}
npm info lifecycle [email protected]~start: Failed to exec start script
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start: `./bin/daemon.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the [email protected] start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm timing npm Completed in 6067ms

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/node/.npm/_logs/2019-11-18T11_38_26_926Z-debug.log

Note that I'm running 3 replicas simultaneously; only two replicas are crashing with this error, while the remaining replica is running perfectly.

Edit: I just noticed this was fixed in 2.2.0. I'm using the latest Helm chart (2.2.0), but it seems the app version is still 2.1.0, which caused the confusion.
