
[ML] Gracefully handle and retry results bulk indexing failures #45711

Closed
benwtrent opened this issue Aug 19, 2019 · 3 comments · Fixed by #49508
Labels
>enhancement :ml Machine learning

Comments

@benwtrent
Member

Currently, the job results processor does not retry bulk indexing failures at all. For certain classes of failure, we should retry after a random, exponential back-off. This, of course, applies back pressure to the overall results processing, which should be taken into account in the implementation.

Details of how results are processed and indexed can be seen in org.elasticsearch.xpack.ml.job.process.autodetect.output.AutodetectResultProcessor.
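For illustration, here is a minimal sketch of what retrying with a random, exponential back-off could look like. Nothing here is from the actual fix: `executeWithRetry`, the attempt cap, and the base delay are all hypothetical, and the blocking sleep between attempts is exactly the source of the back pressure mentioned above.

```java
import java.util.Random;
import java.util.concurrent.TimeUnit;
import java.util.function.Predicate;
import java.util.function.Supplier;

// Hypothetical helper, not the real AutodetectResultProcessor change.
final class BulkRetrySketch {
    private static final int MAX_ATTEMPTS = 10;       // assumed cap on retries
    private static final long BASE_BACKOFF_MS = 50;   // assumed initial delay

    private final Random random = new Random();

    // Runs bulkCall until it succeeds or MAX_ATTEMPTS is reached, sleeping
    // a randomized, exponentially growing interval between attempts.
    <T> T executeWithRetry(Supplier<T> bulkCall, Predicate<T> failed) throws InterruptedException {
        long backoffMs = BASE_BACKOFF_MS;
        for (int attempt = 1; ; attempt++) {
            T response = bulkCall.get();
            if (failed.test(response) == false || attempt == MAX_ATTEMPTS) {
                return response; // success, or give up and surface the failure
            }
            // Jitter the delay so concurrent jobs do not retry in lockstep.
            long jitteredMs = backoffMs / 2 + (long) (random.nextDouble() * (backoffMs / 2));
            TimeUnit.MILLISECONDS.sleep(jitteredMs); // blocking here is what creates back pressure
            backoffMs *= 2; // exponential growth for the next attempt
        }
    }
}
```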

@elasticmachine
Collaborator

Pinging @elastic/ml-core

@droberts195
Contributor

We should also retry indexing of state documents that fail to index the first time due to an overloaded cluster. Failing to index a state document ruins the model snapshot it relates to and will stop the job restarting from that particular model snapshot, so in many ways it is even worse than losing a results document. (The method that needs changing is AutodetectStateProcessor.persist.)
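As a hypothetical illustration of treating an "overloaded cluster" differently from hard failures, a retry policy might only retry statuses that look transient. The `isRetryable` helper below is an assumption; the `RestStatus` constants are real Elasticsearch enum values, but choosing exactly these two is a sketch, not the actual fix.

```java
import org.elasticsearch.rest.RestStatus;

// Hypothetical classifier: retry only failures that suggest a transient
// overload, and surface everything else (e.g. mapping conflicts) immediately.
final class FailureClassifier {
    static boolean isRetryable(RestStatus status) {
        return status == RestStatus.TOO_MANY_REQUESTS      // 429: cluster throttling
            || status == RestStatus.SERVICE_UNAVAILABLE;   // 503: node temporarily unavailable
    }
}
```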

@droberts195
Contributor

> We should also retry indexing of state documents that fail to index the first time due to an overloaded cluster. Failing to index a state document ruins the model snapshot it relates to and will stop the job restarting from that particular model snapshot, so in many ways it is even worse than losing a results document. (The method that needs changing is AutodetectStateProcessor.persist.)

I split this out into a separate issue: #50143
