
lease: rate limit revoke runLoop #8099

Merged: 1 commit into etcd-io:master on Jun 14, 2017

Conversation

@gyuho (Contributor) commented on Jun 14, 2017:

Fix #8097.

}
revokerLimiter.Wait(context.Background())

Member commented:

Should we use c.ctx instead of context.Background()? And if Wait returns an error, how should we handle it?

@gyuho (Contributor, Author) replied:

Wait will never return an error if we pass Background, unless we misconfigure the limiter in the beginning.
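
For context, a minimal sketch (not etcd code) of the claim above: context.Background() has no deadline and is never canceled, and Wait(ctx) is WaitN(ctx, 1), so the only way it can fail is a misconfigured limiter, e.g. a burst smaller than 1.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	ok := rate.NewLimiter(rate.Limit(1000), 1000)
	// Background is never canceled and has no deadline, so Wait blocks
	// until a token is available and then returns nil.
	fmt.Println(ok.Wait(context.Background())) // <nil>

	// Misconfigured: a burst of 0 can never satisfy Wait's request for
	// one token, so this returns a non-nil error immediately.
	bad := rate.NewLimiter(rate.Limit(1000), 0)
	fmt.Println(bad.Wait(context.Background()))
}
```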

}
revokerLimiter.Wait(context.Background())
lid := lease.ID
s.goAttach(func() {

Member commented:

maxPendingRevokes is designed to allow at most maxPendingRevokes (16 in this case) lease revokes at the same time. By removing that, there can be 1000 goroutines revoking leases at the same time. Is that fine? I think the original design was to limit the number of simultaneous revoke calls to etcd.
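
To illustrate the distinction being drawn here, a hypothetical sketch (illustrative names, not the actual etcd code): a buffered channel bounds how many revokes are in flight at once, which is what maxPendingRevokes provided, while a rate limiter bounds how many start per second and places no cap on concurrency.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

const maxPendingRevokes = 16

// revoke stands in for the lease-revoke call to etcd.
func revoke(id int) {
	time.Sleep(10 * time.Millisecond)
	fmt.Println("revoked lease", id)
}

func main() {
	// Semaphore pattern: at most maxPendingRevokes revokes in flight at once.
	sem := make(chan struct{}, maxPendingRevokes)
	var wg sync.WaitGroup
	for id := 0; id < 100; id++ {
		sem <- struct{}{} // blocks while 16 revokes are already pending
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			defer func() { <-sem }()
			revoke(id)
		}(id)
	}
	wg.Wait()
}
```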

-	maxPendingRevokes = 16
+	// TODO: make this configurable?
+	// maximum number of leases to revoke per second
+	leaseRevokeRate = 1000

Contributor commented:

leaseRevokesPerSecond?

}
revokerLimiter.Wait(context.Background())

Contributor commented:

this rate limiting probably belongs in lessor.runLoop instead of the etcd server

@gyuho force-pushed the rate-limit-lease-expiration branch from cc9f075 to 1e5a8ac on June 14, 2017 at 19:55.
@gyuho changed the title from "etcdserver: rate limit lease revoke per second" to "lease: rate limit revoke runLoop" on Jun 14, 2017.
lease/lessor.go (outdated)
@@ -412,7 +419,8 @@ func (le *lessor) Stop() {
 func (le *lessor) runLoop() {
 	defer close(le.doneC)
 
-	for {
+	revokerLimiter := rate.NewLimiter(rate.Limit(leaseRevokesPerSecond), leaseRevokesPerSecond)
+	for revokerLimiter.Wait(context.Background()) == nil {
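
As background on the pacing this loop relies on, a standalone demo (not etcd code): rate.NewLimiter(rate.Limit(n), n) refills n tokens per second with a burst capacity of n, and each Wait consumes one token, so the loop body runs at most about n times per second once the initial burst is spent. Note that this paces loop iterations, not individual revokes.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// 5 tokens/sec with a burst of 1, small enough to see the pacing.
	limiter := rate.NewLimiter(rate.Limit(5), 1)
	start := time.Now()
	for i := 0; i < 10 && limiter.Wait(context.Background()) == nil; i++ {
		fmt.Printf("iteration %d at %v\n", i, time.Since(start).Round(10*time.Millisecond))
	}
}
```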

Contributor commented:

I don't think this will work, since findExpiredLeases can give len(ls) > leaseRevokesPerSecond. Maybe just have something like if len(ls) > leaseRevokesPerSecond/2 { ls = ls[:leaseRevokesPerSecond/2] } before posting to expiredC?
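
A minimal, self-contained sketch of the suggested cap (the int slice stands in for the leases returned by findExpiredLeases; not the actual etcd code): truncate each batch to half the per-second budget and leave the remainder for the next iteration, so a single large expiration burst cannot exceed the limit.

```go
package main

import "fmt"

const leaseRevokesPerSecond = 1000

func main() {
	// Pretend findExpiredLeases returned a burst far larger than the budget.
	expired := make([]int, 3000)
	for len(expired) > 0 {
		ls := expired
		if len(ls) > leaseRevokesPerSecond/2 {
			ls = ls[:leaseRevokesPerSecond/2] // post at most half the budget
		}
		expired = expired[len(ls):] // the rest waits for the next iteration
		fmt.Printf("revoking %d leases, %d left over\n", len(ls), len(expired))
	}
}
```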

@gyuho force-pushed the rate-limit-lease-expiration branch 2 times, most recently from 515ee34 to 6b9338c on June 14, 2017 at 21:16.
lease/lessor.go (outdated)
@@ -422,6 +426,10 @@ func (le *lessor) runLoop() {
 		le.mu.Unlock()
 
 		if len(ls) != 0 {
+			// keep half for next iteration

Contributor commented:

// rate limit?
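
Putting the two pieces together, a rough sketch of the shape the loop takes at this point in the PR, with hypothetical stand-ins (findExpiredLeases, expiredC) for the lessor internals; the actual code in the branch may differ in detail.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

const leaseRevokesPerSecond = 1000

// findExpiredLeases is a stand-in that returns a burst larger than the budget.
func findExpiredLeases() []int { return make([]int, 1500) }

func main() {
	expiredC := make(chan []int, 16)
	limiter := rate.NewLimiter(rate.Limit(leaseRevokesPerSecond), leaseRevokesPerSecond)
	// Three iterations stand in for the lessor's endless loop.
	for i := 0; i < 3 && limiter.Wait(context.Background()) == nil; i++ {
		ls := findExpiredLeases()
		if len(ls) > leaseRevokesPerSecond/2 {
			ls = ls[:leaseRevokesPerSecond/2] // keep half for next iteration
		}
		if len(ls) != 0 {
			select {
			case expiredC <- ls:
				fmt.Println("posted", len(ls), "expired leases")
			default:
				// the expiredC receiver is busy; the remaining leases are
				// still expired and get picked up on a later iteration
			}
		}
	}
}
```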

@heyitsanthony (Contributor) left a review:

lgtm thanks

@gyuho force-pushed the rate-limit-lease-expiration branch from 6b9338c to 0011b78 on June 14, 2017 at 21:22.

@fanminshi (Member) commented:
lgtm

@gyuho merged commit ee0c805 into etcd-io:master on Jun 14, 2017.
@gyuho deleted the rate-limit-lease-expiration branch on June 14, 2017 at 22:40.