Improve handling of out-of-order chunks #4916
Comments
I haven't tested it myself, but for your quick and easy solution, see Line 148 in d08a12a.
Wouldn't
Ah, excellent - I was looking for a standalone metric and didn't realise it would be a label on another metric. Thanks for pointing that out! I'd still like to have a discussion about whether we could add an option to automate the backup and deletion of problematic time series (either at the time-series, chunk, or block level). I'd like to know if the Thanos team would be likely to accept this as a proposal, or if there are potential issues with it that I haven't foreseen.
I think this is a reasonable approach. Right now, when OOO chunks/series occur, the compactor halts, and the only thing I can do besides debugging is use the bucket rewrite tool to delete the problematic series. This could be provided as a flag that simply deletes the OOO chunks/series if users don't care about debugging this case.
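The bucket rewrite tool mentioned above can be invoked roughly as follows. This is a hedged sketch, not the thread's exact workflow: the block ULID, the series matcher, and the object-store config file are placeholders, and the flag names should be verified against `thanos tools bucket rewrite --help` for your Thanos version.

```shell
# Sketch: delete a problematic series from one block with the bucket
# rewrite tool. The matcher, ULID, and bucket.yaml are placeholders.
cat > deletions.yaml <<'EOF'
- matchers: '{__name__="problem_metric", instance="broken-pod:9090"}'
EOF

thanos tools bucket rewrite \
  --objstore.config-file=bucket.yaml \
  --id=01EXAMPLEULID \
  --rewrite.to-delete-config-file=deletions.yaml \
  --dry-run   # drop this flag to actually rewrite the block
```

The rewritten block is uploaded as a new block and the original is marked for deletion, which is why the discussion above also raises backing blocks up first.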
Hello 👋 Looks like there was no activity on this issue for the last two months. |
Closing for now as promised, let us know if you need this to be reopened! 🤗 |
Is your proposal related to a problem?
When Compactor encounters an out-of-order chunk, it can skip compaction of that block if `--compact.skip-block-with-out-of-order-chunks` is set (from this MR). This is very useful for stopping a single broken pod from breaking compaction for an entire deployment.
What I am concerned about is the long-term behaviour here. We have observed that, when blocks go uncompacted for too long, metrics query performance becomes extremely poor. If compaction is skipped for too many blocks, I expect they will eventually have the same effect on performance, and it will not be obvious to an operator why this is happening.
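For reference, the flag from the problem statement is passed to the compactor like this. This is only a sketch: the data directory and objstore config path are placeholders, and the surrounding flags are a minimal assumed setup rather than a recommended configuration.

```shell
# Sketch: run Compactor so that blocks containing out-of-order chunks
# are marked as no-compact and skipped, instead of halting the run.
# bucket.yaml is a placeholder for your object-store configuration.
thanos compact \
  --data-dir=/var/thanos/compact \
  --objstore.config-file=bucket.yaml \
  --wait \
  --compact.skip-block-with-out-of-order-chunks
```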
Describe the solution you'd like
EDIT: This solution is already implemented
As a quick and easy solution, we would like to add a metric to track blocks whose compaction is being skipped (e.g. `thanos_compact_skipped_compaction_blocks_total`). This would allow us to write dashboards and alerts that make it clear when this is happening.
Describe alternatives you've considered
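As the comments above note, the skip is surfaced as a label on an existing counter rather than as a standalone metric. A hedged sketch of checking and alerting on it follows; the metric name `thanos_compact_blocks_marked_total`, the `marker` label value, and the default HTTP port are assumptions to verify against your Thanos version's exposed metrics.

```shell
# Sketch: confirm the counter is exposed on the Compactor's HTTP port
# (10902 by default; metric and label names are assumptions to verify).
curl -s http://compactor:10902/metrics | grep thanos_compact_blocks_marked_total

# A PromQL alert expression built on it might look like:
#   increase(thanos_compact_blocks_marked_total{marker="no-compact-mark.json"}[1h]) > 0
```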
It would be nice if Compactor had the ability to clean up the storage account that contains the out-of-order chunk. This could be as simple as backing up the block to a separate storage account, and deleting the offending out-of-order time series (or the entire chunk, or the entire block). This could lead to some data loss from the storage account, but it would be retrievable, and would make Compactor much more self-sufficient (i.e. requiring less manual intervention when things go wrong).