Hello 👋 Looks like there was no activity on this issue for the last 30 days. Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this issue or push a commit. Thanks! 🤗
If there is no activity in the next week, this issue will be closed (we can always reopen an issue if needed!). Alternatively, use the remind command if you wish to be reminded at some point in the future.
Thanos, Prometheus and Golang version used:
master-2020-05-05-e5804d80
Object Storage Provider:
s3
What happened:
When rolling out a new version of Thanos receive, the latency of ingestion requests spikes massively. I investigated why and traced it to the fact that the replication strategy always waits for all replication requests to the other instances to finish, even when a quorum of them has already succeeded.
What you expected to happen:
Only wait for quorum success of replication requests.
How to reproduce it (as minimally and precisely as possible):
3x replication thanos receive setup with one instance being unavailable.
Full logs to relevant components:
n/a
Anything else we need to know:
My env is on Kubernetes, but this is irrelevant to the issue described.
@bwplotka @krasi-georgiev @metalmatze @squat @kakkoyun