
receive: Only wait for write quorum #2621

Merged: 1 commit into thanos-io:master on May 20, 2020

Conversation

@brancz (Member) commented May 18, 2020

  • I added CHANGELOG entry for this change.
  • Change is not relevant to the end user.

Changes

This patch modifies receive replication slightly, in that it no longer
always waits for all requests to complete. If a quorum of replication
requests has succeeded, it does not wait for the remaining requests to
finish, as they are no longer needed to reach quorum. In error cases
where quorum is not reached, it still waits for all requests to finish
in an attempt to return a quorum error.

Additionally, this patch moves the log lines printed in the
parallelizeRequests function to debug level. Calling functions already
print the resulting error(s), so these lines were previously just noise,
even in cases where requests actually succeeded.

Let me know if you think there should be a changelog entry for this; in reality there is not really a user-noticeable change, other than less noisy logs and lower latency for requests.
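
To illustrate the waiting behaviour described above, here is a minimal, self-contained Go sketch. The function name waitForWriteQuorum is made up for this example, and n, ec, and quorum only mirror the variables used in the diffs below; this is not the actual handler code.

```go
package main

import (
	"errors"
	"fmt"
)

// waitForWriteQuorum waits until either `quorum` requests have succeeded or
// all `n` results have arrived. ec carries one result (nil or an error) per
// replication request.
func waitForWriteQuorum(n, quorum int, ec <-chan error) error {
	var errs []error
	success := 0
	for i := 0; i < n; i++ {
		err := <-ec
		if err == nil {
			success++
			if success >= quorum {
				// Quorum reached: return early; the remaining requests can be
				// drained by a background goroutine.
				return nil
			}
			continue
		}
		errs = append(errs, err)
	}
	// Quorum was never reached: report all collected errors.
	return fmt.Errorf("write quorum not reached: %v", errs)
}

func main() {
	ec := make(chan error, 3)
	ec <- nil
	ec <- errors.New("replica unavailable")
	ec <- nil
	// Replication factor 3, quorum 2: two successes are enough.
	fmt.Println(waitForWriteQuorum(3, 2, ec)) // <nil>
}
```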

Verification

  • Tested various failure cases manually with the quickstart script to ensure that errors are still printed appropriately.
  • Duplicated the existing replication tests to 1) only wait for quorum and 2) run the tests plus a consistency delay; since on a test run there should never be any problem, this verifies overall correctness.

@bwplotka @krasi-georgiev @metalmatze @squat @kakkoyun

@brancz force-pushed the improved-replication branch 3 times, most recently from f60b63f to f6fb934 (May 18, 2020 09:18)
@kakkoyun (Member) left a comment:

It'd be nice to mention the added flag in the changelog.
I also believe this fixes #2567.

level.Error(h.logger).Log("msg", "storing locally", "err", err, "endpoint", endpoint)
}
ec <- err
ec <- errors.Wrapf(err, "storagin locally, endpoint %v", endpoint)
Suggested change
ec <- errors.Wrapf(err, "storagin locally, endpoint %v", endpoint)
ec <- errors.Wrapf(err, "storing locally, endpoint %v", endpoint)

@brancz (Member, Author) commented May 18, 2020

@kakkoyun good point! Added! :)

@brancz force-pushed the improved-replication branch 6 times, most recently from ce24cc8 to 2258e46 (May 18, 2020 14:58)
@kakkoyun (Member) left a comment:

🥇

@metalmatze (Contributor) left a comment:

Love those integration test additions!
For a full-on review I would need to check this out locally and start diving into the gritty details. From a reviewing point of view, all LGTM! 😊 👍

@bwplotka (Member) left a comment:

OK, I am generally happy with the direction, but I am not a fan of the current parallelizeRequests syntax; we might work on the API a bit.

Essentially it confused me, and then it also confused you (there is a small bug) ;p Let's maybe find something better..

Also, you use errors.Wrap(nil, ...) and tsdbError.MultiError.Add(nil) a lot, and it just looks too scary to me. It might be just me, but I feel like these should be antipatterns, mentioned in the code style (: They just scare the reader a lot, and for me they are just an opportunity for easier errors.

@@ -324,29 +326,39 @@ func (h *Handler) forward(ctx context.Context, tenant string, r replica, wreq *p
}
h.mtx.RUnlock()

return h.parallelizeRequests(ctx, tenant, replicas, wreqs)
n, ec := h.parallelizeRequests(ctx, tenant, replicas, wreqs)
Member:
why not just for err := range <-ec? without n?

Member (Author):
We need to know the potential number of results so we know when we have reached quorum. It could also return the quorum amount instead, but that would result in essentially the same code.

defer func() {
go func() {
for {
err, more := <-ec
Member:
defer cancel()
for err := range <-ec {

Would do the work, I think.

Member:
OK, because you relied on err != nil on the caller side... and you forgot about it here. I would really recommend my suggestion in the comment above =D

Member:
And yes, I understand this is how we know whether it was a success or not... but maybe we can come up with a cleaner API.

Member (Author):
The problem is that we want to drain the channel and stop when it's actually empty and closed, so we need the more value to be returned.

Any suggestions for a better API? The problem is that we need to know in advance how many potential results we could be getting from the channel.

Member:
But for ... <-ec has exactly the same semantics, no?

@bwplotka (Member), May 19, 2020:

It will stop iterating when it's closed. If the channel is empty, err, more := <-ec will block as well.
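
For readers following the channel discussion, a small standalone example of the semantics in question (not Thanos code): range over a channel keeps receiving until the channel is closed and drained, while the two-value receive exposes the same closed-and-drained state explicitly; a plain receive on an open, empty channel blocks.

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	ec := make(chan error, 3)
	ec <- nil
	ec <- errors.New("replication failed")
	ec <- nil
	close(ec)

	// range receives every buffered value and stops once the channel is
	// closed and drained; no explicit `more` bookkeeping is needed.
	for err := range ec {
		fmt.Println("received:", err)
	}

	// The two-value receive makes the closed-and-drained state explicit.
	if err, more := <-ec; !more {
		fmt.Println("channel closed and drained, got zero value:", err)
	}
}
```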

level.Error(h.logger).Log("msg", "storing locally", "err", err, "endpoint", endpoint)
}
ec <- err
ec <- errors.Wrapf(err, "storing locally, endpoint %v", endpoint)
Member:
I think we should just return if no error?

@bwplotka (Member), May 18, 2020:

I think you rely on err != nil on the caller side; maybe I would do it here for readability, but not a blocker.

Member (Author):
errors.Wrap returns nil if the input err is nil
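
For reference, a quick standalone check of that github.com/pkg/errors behaviour (the "replica-1" endpoint string is just made-up example data):

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func main() {
	// Wrap and Wrapf return nil when the wrapped error is nil, so sending the
	// wrapped value unconditionally still signals success on the channel.
	fmt.Println(errors.Wrap(nil, "storing locally") == nil)           // true
	fmt.Println(errors.Wrapf(nil, "endpoint %v", "replica-1") == nil) // true
	fmt.Println(errors.Wrap(fmt.Errorf("boom"), "storing locally"))   // storing locally: boom
}
```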

go func(endpoint string) {
ec <- h.replicate(ctx, tenant, wreqs[endpoint])
defer wg.Done()
ec <- errors.Wrap(h.replicate(ctx, tenant, wreqs[endpoint]), "could not replicate write request")
Member:
Should pass error only on error?

Member (Author):
errors.Wrap returns nil if the input err is nil

Member:
I am totally aware of that, but IMO it's extremely confusing and prone to error. Plus it adds major overhead on the critical path.

continue
}

if uint64(countCause(errs, isNotReady)) >= (h.options.ReplicationFactor+1)/2 {
Member:
Maybe worth keeping this important number in some function... (h.options.ReplicationFactor+1)/2
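
For example, a tiny helper along those lines could look like the following sketch; writeQuorum is a hypothetical name, not something this PR adds, and it assumes ReplicationFactor stays a uint64 option as in the diff above.

```go
// writeQuorum returns the number of successful writes needed before a write
// request is acknowledged, i.e. a majority of the replication factor.
func (h *Handler) writeQuorum() uint64 {
	return (h.options.ReplicationFactor + 1) / 2
}
```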

return nil
}
}
errs.Add(err)
Member:
I am so confused; why are we passing a nil err to the MultiError?

Member:
(it can flow through above)

Member (Author):
nil errors are not actually added
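
For readers unfamiliar with that behaviour, here is a simplified, self-contained sketch of a MultiError whose Add ignores nil values; the real type used here lives in the Prometheus TSDB errors package, and this is not its exact implementation.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// MultiError collects several errors into one. Adding a nil error is a no-op,
// so only real failures accumulate.
type MultiError []error

func (es *MultiError) Add(err error) {
	if err == nil {
		return
	}
	*es = append(*es, err)
}

func (es MultiError) Error() string {
	msgs := make([]string, 0, len(es))
	for _, err := range es {
		msgs = append(msgs, err.Error())
	}
	return strings.Join(msgs, "; ")
}

func main() {
	var errs MultiError
	errs.Add(nil)                    // ignored
	errs.Add(errors.New("conflict")) // kept
	errs.Add(nil)                    // ignored
	fmt.Println(len(errs), errs.Error()) // 1 conflict
}
```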

Member:
Again: I am totally aware of that, but IMO it's extremely confusing and prone to error.

@brancz (Member, Author) commented May 19, 2020

> Also, you use errors.Wrap(nil, ...) and tsdbError.MultiError.Add(nil) a lot, and it just looks too scary to me. It might be just me, but I feel like these should be antipatterns, mentioned in the code style (: They just scare the reader a lot, and for me they are just an opportunity for easier errors.

I don't see anything in the style guide that's violated here.

The only other API I could think of that wouldn't just be a rearrangement of the current code is using two channels, one for errors and one for successes, but that would complicate draining them a lot.

@brancz force-pushed the improved-replication branch 2 times, most recently from 3de93f8 to 9c72f1e (May 19, 2020 08:44)
@bwplotka (Member):

> I don't see anything in the style guide that's violated here.

Yeah, I am proposing to add that.

Plus, only this PR is doing so; 100% of the Thanos codebase does not put nil into a MultiError and does not wrap nils with a message.

@bwplotka (Member):

Let me think about the API. I totally see the aim of it; maybe we can find something cleaner.

return ctx.Err()
case err, more := <-ec:
if !more {
return errs
@bwplotka (Member), May 19, 2020:
I was thinking about this case, but I thought that this would never happen (: We either have a success quorum or an error quorum, kind of, in my previous version (:

@brancz (Member, Author), May 19, 2020:

The attempt here is to return the best possible error, at the possible cost of higher latency. For example, with 3x replication, if 2 requests return tsdb-not-ready/unavailable and 1 returns a conflict, we would end up with a generalized error, when in reality a retry is likely to resolve the error.

It's a trade-off: either better error reporting or lower latency. Since the request is failing in this case anyway, I prefer better errors over lower latency.

@bwplotka (Member) left a comment:

LGTM, just some questions (:

}
return errors.Wrap(err, "could not replicate write request")
if countCause(err, isConflict) >= quorum {
Member:
I still think it should be len - quorum.

Member (Author):
Let's take the example of replication factor 3, which has a quorum factor of 2, and we get 1 conflict error. 3-2=1, so we would be returning conflict, even though write quorum was met.

@bwplotka (Member), May 19, 2020:
Ok, then it has to be > len(reqs) - quorum, right? (not >=)

I think we are both right if we assume that quorum is always a majority (half + 1). This is, however, very dependent on the quorum value... This algorithm should never assume things like that. Let's say quorum is 1 for some reason, with replication 3; then this logic will not hold true, whereas > len(reqs) - quorum is always correct. (:
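
A small worked example of the conditions being compared, using the scenarios from this thread (plain arithmetic for illustration, not the actual handler code):

```go
package main

import "fmt"

func main() {
	reqs := 3 // replication factor 3, as in the examples above

	// Scenario 1: quorum 2, one conflict. Two requests can still succeed,
	// so quorum is still reachable and no conflict should be reported yet.
	quorum, conflicts := 2, 1
	fmt.Println(conflicts >= quorum)      // false: correct
	fmt.Println(conflicts >= reqs-quorum) // true:  ">= len-quorum" reports conflict too early
	fmt.Println(conflicts > reqs-quorum)  // false: correct

	// Scenario 2: a hypothetical quorum of 1, one conflict. Two requests can
	// still succeed, so quorum is still reachable here as well.
	quorum, conflicts = 1, 1
	fmt.Println(conflicts >= quorum)     // true:  ">= quorum" reports conflict too early
	fmt.Println(conflicts > reqs-quorum) // false: correct for any quorum value
}
```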

@brancz (Member, Author), May 19, 2020:

> This is, however, very dependent on the quorum value...

What do you mean by this?

Member:
We talked offline. Not a blocker, so merging.

This patch modifies receive replication slightly, in that it no longer
always waits for all requests to complete. If a quorum of replication
requests has succeeded, it does not wait for the remaining requests to
finish, as they are no longer needed to reach quorum. In error cases
where quorum is not reached, it still waits for all requests to finish
in an attempt to return a quorum error.

Additionally, this patch moves the log lines printed in the
parallelizeRequests function to debug level. Calling functions already
print the resulting error(s), so these lines were previously just noise,
even in cases where requests actually succeeded.

Signed-off-by: Frederic Branczyk <[email protected]>
@bwplotka bwplotka merged commit 929864c into thanos-io:master May 20, 2020
@bwplotka (Member):

Thanks!
