Rolling window limit support #193

Closed
wants to merge 34 commits
Changes from 25 commits
Commits (34)
84e3323
add UnixNanoNow to TimeSource
walbertus Nov 19, 2020
8f517f4
add cache implementation for windowed rate limit
walbertus Nov 19, 2020
b1aadd1
add near limit test for windowed rate limit implementation
walbertus Nov 20, 2020
f2c6324
add random jitter test for windowed rate limit implementation
walbertus Nov 20, 2020
ab6f360
add benchmark for windowed ratelimit
walbertus Nov 20, 2020
e516c20
add local cache to windowed ratelimit
walbertus Nov 20, 2020
4a4c9cc
configure rate limit algorithm to use from setting
walbertus Nov 23, 2020
b69f04d
add readme on rate limit algorithm
walbertus Nov 23, 2020
d72f951
use constant for rate limit algorithm
walbertus Nov 25, 2020
e5704b6
move max min to utils
walbertus Dec 7, 2020
b07edff
add more explanation on rolling window implementation
walbertus Dec 7, 2020
48446cf
throw error when rate limit cache type is unknown
walbertus Dec 7, 2020
927b23e
move time conversion related to utils
walbertus Dec 31, 2020
f0d3c8b
move pipeline and cache key method from cache implementation
walbertus Jan 5, 2021
5e2676b
Fix fixed_window algorithm generate same cache keys
zufardhiyaulhaq Jan 13, 2021
ffff44a
fix conflicting files
zufardhiyaulhaq Jan 14, 2021
33f39f9
add memcached rolling window code
zufardhiyaulhaq Jan 14, 2021
846ba8e
generate memcached client mock with MockGen
zufardhiyaulhaq Jan 17, 2021
e4e25cc
refactor rate limit decision to algorithm package
zufardhiyaulhaq Jan 20, 2021
244e801
fix broken rolling window limiter with localcache
zufardhiyaulhaq Jan 20, 2021
510abf5
refactor redis & memcached unit tests
zufardhiyaulhaq Jan 21, 2021
307c27d
add compile time check and fix readme
zufardhiyaulhaq Jan 26, 2021
122cbf5
Merge github.com:envoyproxy/ratelimit into rolling-window-limit
zufardhiyaulhaq Feb 4, 2021
953958e
Merge github.com:envoyproxy/ratelimit into rolling-window-limit
zufardhiyaulhaq Feb 4, 2021
baf012b
refactor algorithm interfaces
zufardhiyaulhaq Feb 4, 2021
1face4c
fix mocks & add memcached windowed test
zufardhiyaulhaq Feb 9, 2021
5cc166b
Merge github.com:envoyproxy/ratelimit into rolling-window-limit
zufardhiyaulhaq Feb 9, 2021
8ce4708
fix format settings.go
zufardhiyaulhaq Feb 9, 2021
d2d32b4
fix memcached windowed unit test
zufardhiyaulhaq Feb 9, 2021
bc25eb7
add unit tests for windowed memcached
zufardhiyaulhaq Feb 10, 2021
10404f4
add fixed algorithm unit test
zufardhiyaulhaq Feb 10, 2021
3ec9cca
add rolling window algorithm unit test
zufardhiyaulhaq Feb 10, 2021
73df311
Refactor fixed and rolling window
Feb 15, 2021
a400046
refactor & add base window testing
zufardhiyaulhaq Feb 20, 2021
14 changes: 14 additions & 0 deletions README.md
@@ -181,6 +181,20 @@ The rate limit block specifies the actual rate limit that will be used when ther
Currently the service supports per second, minute, hour, and day limits. More types of limits may be added in the
future based on user demand.

### Rate limit algorithm

Ratelimit supports two algorithms:

1. Fixed window
For a limit of 60 requests per hour, there can only be 60 requests in a single time window (e.g. 01:00 - 01:59).
The fixed window algorithm does not care when the requests arrive: all 60 can arrive at 01:01 or at 01:50, and the limit will still reset at 02:00.

2. Rolling window
For a limit of 60 requests per hour, the rate limiter can initially take a burst of 60 requests at once; the available limit is then restored by 1 each minute. Requests are allowed as long as there is still some available limit.
Review comment (Member): :s/it is able/it is possible

Review comment (Member): It would be even more clear to rephrase like "Initially the rate limiter can take a burst of..."

Configure the rate limit algorithm with the `RATE_LIMIT_ALGORITHM` environment variable.
Valid values are `FIXED_WINDOW` and `ROLLING_WINDOW`.
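
To make the behavioral difference concrete, here is a minimal, illustrative sketch (not code from this PR; all names are hypothetical) of the two admission decisions for a limit of 60 requests per hour:

```go
package main

import (
	"fmt"
	"time"
)

// fixedWindowAllow: count hits in the current clock-aligned window; the counter
// resets at the window boundary (e.g. 02:00) no matter when the hits arrived.
func fixedWindowAllow(hitsInCurrentWindow, limit int64) bool {
	return hitsInCurrentWindow+1 <= limit
}

// rollingWindowAllow: a GCRA-style check. Each request costs one emission
// interval (period / limit), so for 60/hour the available limit is restored
// by one request every minute instead of all at once at the hour boundary.
func rollingWindowAllow(tat, now time.Time, limit int64, period time.Duration) (bool, time.Time) {
	emissionInterval := period / time.Duration(limit)
	if tat.Before(now) {
		tat = now
	}
	newTat := tat.Add(emissionInterval)
	allowAt := newTat.Add(-period)
	if now.Before(allowAt) {
		return false, tat // over limit
	}
	return true, newTat
}

func main() {
	now := time.Now()
	fmt.Println(fixedWindowAllow(59, 60)) // true: the 60th hit in this hour is still allowed

	// After a burst of 60, the theoretical arrival time (tat) sits a full hour
	// ahead, so the next request is rejected until capacity is restored.
	allowed, _ := rollingWindowAllow(now.Add(time.Hour), now, 60, time.Hour)
	fmt.Println(allowed) // false
}
```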

### Examples

#### Example 1
3 changes: 2 additions & 1 deletion go.mod
@@ -9,7 +9,7 @@ require (
github.com/coocood/freecache v1.1.0
github.com/envoyproxy/go-control-plane v0.9.7
github.com/fsnotify/fsnotify v1.4.7 // indirect
- github.com/golang/mock v1.4.1
+ github.com/golang/mock v1.4.4
github.com/golang/protobuf v1.4.2
github.com/gorilla/mux v1.7.4-0.20191121170500-49c01487a141
github.com/kavu/go_reuseport v1.2.0
@@ -26,4 +26,5 @@
google.golang.org/grpc v1.27.0
google.golang.org/protobuf v1.25.0 // indirect
gopkg.in/yaml.v2 v2.3.0
+ rsc.io/quote/v3 v3.1.0 // indirect
)
2 changes: 2 additions & 0 deletions go.sum
@@ -35,6 +35,8 @@ github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfU
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.4.1 h1:ocYkMQY5RrXTYgXl7ICpV0IXwlEQGwKIsery4gyXa1U=
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
+ github.com/golang/mock v1.4.4 h1:l75CXGRSwbaYNpl/Z2X1XIIAMSCquvXgpVZDhwEIJsc=
+ github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
131 changes: 131 additions & 0 deletions src/algorithm/fixed_window.go
@@ -0,0 +1,131 @@
package algorithm

import (
"math"

"github.com/coocood/freecache"
pb "github.com/envoyproxy/go-control-plane/envoy/service/ratelimit/v3"
"github.com/envoyproxy/ratelimit/src/config"
"github.com/envoyproxy/ratelimit/src/utils"
logger "github.com/sirupsen/logrus"
)

Review comment (Member): I would suggest adding a compile-time check to both window implementations, to ensure that the window type implements the RatelimitAlgorithm interface. It's a useful technique in Go, e.g.:
var _ RatelimitAlgorithm = (*FixedWindowImpl)(nil)

Review reply (Contributor): this is done

var _ RatelimitAlgorithm = (*FixedWindowImpl)(nil)

type FixedWindowImpl struct {
timeSource utils.TimeSource
cacheKeyGenerator utils.CacheKeyGenerator
localCache *freecache.Cache
nearLimitRatio float32
}

func (fw *FixedWindowImpl) GetResponseDescriptorStatus(key string, limit *config.RateLimit, results int64, isOverLimitWithLocalCache bool, hitsAddend int64) *pb.RateLimitResponse_DescriptorStatus {
if key == "" {
return &pb.RateLimitResponse_DescriptorStatus{
Code: pb.RateLimitResponse_OK,
CurrentLimit: nil,
LimitRemaining: 0,
}
}
if isOverLimitWithLocalCache {
fw.PopulateStats(limit, 0, uint64(hitsAddend), uint64(hitsAddend))
return &pb.RateLimitResponse_DescriptorStatus{
Code: pb.RateLimitResponse_OVER_LIMIT,
CurrentLimit: limit.Limit,
LimitRemaining: 0,
DurationUntilReset: utils.CalculateFixedReset(limit.Limit, fw.timeSource),
}
}

isOverLimit, limitRemaining, durationUntilReset := fw.IsOverLimit(limit, int64(results), hitsAddend)

if !isOverLimit {
return &pb.RateLimitResponse_DescriptorStatus{
Code: pb.RateLimitResponse_OK,
CurrentLimit: limit.Limit,
LimitRemaining: uint32(limitRemaining),
DurationUntilReset: utils.CalculateFixedReset(limit.Limit, fw.timeSource),
}
} else {
if fw.localCache != nil {
durationUntilReset = utils.MaxInt(1, durationUntilReset)

err := fw.localCache.Set([]byte(key), []byte{}, durationUntilReset)
if err != nil {
logger.Errorf("Failing to set local cache key: %s", key)
}
}

return &pb.RateLimitResponse_DescriptorStatus{
Code: pb.RateLimitResponse_OVER_LIMIT,
CurrentLimit: limit.Limit,
LimitRemaining: uint32(limitRemaining),
DurationUntilReset: utils.CalculateFixedReset(limit.Limit, fw.timeSource),
}
}
}

func (fw *FixedWindowImpl) IsOverLimit(limit *config.RateLimit, results int64, hitsAddend int64) (bool, int64, int) {
limitAfterIncrease := results
limitBeforeIncrease := limitAfterIncrease - int64(hitsAddend)
overLimitThreshold := int64(limit.Limit.RequestsPerUnit)
nearLimitThreshold := int64(math.Floor(float64(float32(overLimitThreshold) * fw.nearLimitRatio)))

if limitAfterIncrease > overLimitThreshold {
if limitBeforeIncrease >= overLimitThreshold {
fw.PopulateStats(limit, 0, uint64(hitsAddend), 0)
} else {
fw.PopulateStats(limit, uint64(overLimitThreshold-utils.MaxInt64(nearLimitThreshold, limitBeforeIncrease)), uint64(limitAfterIncrease-overLimitThreshold), 0)
}

return true, 0, int(utils.UnitToDivider(limit.Limit.Unit))
} else {
if limitAfterIncrease > nearLimitThreshold {
if limitBeforeIncrease >= nearLimitThreshold {
fw.PopulateStats(limit, uint64(hitsAddend), 0, 0)
} else {
fw.PopulateStats(limit, uint64(limitAfterIncrease-nearLimitThreshold), 0, 0)
}
}

return false, overLimitThreshold - limitAfterIncrease, int(utils.UnitToDivider(limit.Limit.Unit))
}
}

func (fw *FixedWindowImpl) IsOverLimitWithLocalCache(key string) bool {
if fw.localCache != nil {
_, err := fw.localCache.Get([]byte(key))
if err == nil {
return true
}
}
return false
}

func (fw *FixedWindowImpl) GenerateCacheKeys(request *pb.RateLimitRequest,
limits []*config.RateLimit, hitsAddend int64) []utils.CacheKey {
return fw.cacheKeyGenerator.GenerateCacheKeys(request, limits, uint32(hitsAddend), fw.timeSource.UnixNow())
}

func (fw *FixedWindowImpl) PopulateStats(limit *config.RateLimit, nearLimit uint64, overLimit uint64, overLimitWithLocalCache uint64) {
limit.Stats.NearLimit.Add(nearLimit)
limit.Stats.OverLimit.Add(overLimit)
limit.Stats.OverLimitWithLocalCache.Add(overLimitWithLocalCache)
}

func (fw *FixedWindowImpl) GetExpirationSeconds() int64 {
return 0
}

func (fw *FixedWindowImpl) GetResultsAfterIncrease() int64 {
return 0
}

func NewFixedWindowAlgorithm(timeSource utils.TimeSource, localCache *freecache.Cache, nearLimitRatio float32, cacheKeyPrefix string) *FixedWindowImpl {
return &FixedWindowImpl{
timeSource: timeSource,
cacheKeyGenerator: utils.NewCacheKeyGenerator(cacheKeyPrefix),
localCache: localCache,
nearLimitRatio: nearLimitRatio,
}
}
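
As a quick sanity check of the near-limit accounting above, here is a standalone sketch (not code from this PR) that reproduces the same arithmetic for an assumed limit of 100 requests per unit and a near-limit ratio of 0.8:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Inputs: limit of 100 per unit, near-limit ratio 0.8, and a request that
	// increments the counter from 75 to 85 (hitsAddend = 10).
	overLimitThreshold := int64(100)
	nearLimitRatio := float32(0.8)
	hitsAddend := int64(10)
	limitAfterIncrease := int64(85) // value returned by the cache after the increment

	nearLimitThreshold := int64(math.Floor(float64(float32(overLimitThreshold) * nearLimitRatio))) // 80
	limitBeforeIncrease := limitAfterIncrease - hitsAddend                                         // 75
	_ = limitBeforeIncrease

	// 85 <= 100, so the request is not over the limit, but it crossed into the
	// near-limit band (75 < 80 < 85): 85 - 80 = 5 hits count toward the NearLimit stat.
	fmt.Println("near-limit hits:", limitAfterIncrease-nearLimitThreshold) // 5
	fmt.Println("limit remaining:", overLimitThreshold-limitAfterIncrease) // 15
}
```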
20 changes: 20 additions & 0 deletions src/algorithm/ratelimit_algorithm.go
@@ -0,0 +1,20 @@
package algorithm

import (
pb "github.com/envoyproxy/go-control-plane/envoy/service/ratelimit/v3"
"github.com/envoyproxy/ratelimit/src/config"
"github.com/envoyproxy/ratelimit/src/utils"
)

type RatelimitAlgorithm interface {
IsOverLimit(limit *config.RateLimit, results int64, hitsAddend int64) (bool, int64, int)
IsOverLimitWithLocalCache(key string) bool

GetResponseDescriptorStatus(key string, limit *config.RateLimit, results int64, isOverLimitWithLocalCache bool, hitsAddend int64) *pb.RateLimitResponse_DescriptorStatus
GetExpirationSeconds() int64
GetResultsAfterIncrease() int64

GenerateCacheKeys(request *pb.RateLimitRequest,
limits []*config.RateLimit, hitsAddend int64) []utils.CacheKey
PopulateStats(limit *config.RateLimit, nearLimit uint64, overLimit uint64, overLimitWithLocalCache uint64)
}
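
For context, here is a hedged sketch of how a cache implementation might pick between the two RatelimitAlgorithm implementations based on the `RATE_LIMIT_ALGORITHM` value described in the README; the `newAlgorithm` helper itself is hypothetical and not part of this PR:

```go
package algorithm

import (
	"fmt"

	"github.com/coocood/freecache"
	"github.com/envoyproxy/ratelimit/src/utils"
)

// newAlgorithm is a hypothetical selector: it maps the RATE_LIMIT_ALGORITHM
// setting to one of the two concrete implementations in this package and
// returns an error on unknown values.
func newAlgorithm(name string, timeSource utils.TimeSource, localCache *freecache.Cache,
	nearLimitRatio float32, cacheKeyPrefix string) (RatelimitAlgorithm, error) {
	switch name {
	case "FIXED_WINDOW":
		return NewFixedWindowAlgorithm(timeSource, localCache, nearLimitRatio, cacheKeyPrefix), nil
	case "ROLLING_WINDOW":
		return NewRollingWindowAlgorithm(timeSource, localCache, nearLimitRatio, cacheKeyPrefix), nil
	default:
		return nil, fmt.Errorf("unknown rate limit algorithm %q", name)
	}
}
```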
161 changes: 161 additions & 0 deletions src/algorithm/rolling_window.go
@@ -0,0 +1,161 @@
package algorithm

import (
"math"

"github.com/coocood/freecache"
pb "github.com/envoyproxy/go-control-plane/envoy/service/ratelimit/v3"
"github.com/envoyproxy/ratelimit/src/config"
"github.com/envoyproxy/ratelimit/src/utils"
"github.com/golang/protobuf/ptypes/duration"
logger "github.com/sirupsen/logrus"
)

const DummyCacheKeyTime = 0

var _ RatelimitAlgorithm = (*RollingWindowImpl)(nil)

type RollingWindowImpl struct {
timeSource utils.TimeSource
cacheKeyGenerator utils.CacheKeyGenerator
Review comment (Member): same for cacheKeyGenerator, we no longer need this field in both window impls, as it has been moved to the base window.

localCache *freecache.Cache
nearLimitRatio float32
arrivedAt int64
tat int64
Review comment (Member): nit: either expand the field names or add explanatory comments.

newTat int64
diff int64
}

func (rw *RollingWindowImpl) GetResponseDescriptorStatus(key string, limit *config.RateLimit, results int64, isOverLimitWithLocalCache bool, hitsAddend int64) *pb.RateLimitResponse_DescriptorStatus {
if key == "" {
return &pb.RateLimitResponse_DescriptorStatus{
Code: pb.RateLimitResponse_OK,
CurrentLimit: nil,
LimitRemaining: 0,
}
}

if isOverLimitWithLocalCache {
rw.PopulateStats(limit, 0, uint64(hitsAddend), uint64(hitsAddend))

secondsToReset := utils.UnitToDivider(limit.Limit.Unit)
secondsToReset -= utils.NanosecondsToSeconds(rw.timeSource.UnixNanoNow()) % secondsToReset
return &pb.RateLimitResponse_DescriptorStatus{
Code: pb.RateLimitResponse_OVER_LIMIT,
CurrentLimit: limit.Limit,
LimitRemaining: 0,
DurationUntilReset: &duration.Duration{Seconds: secondsToReset},
}
}

isOverLimit, limitRemaining, durationUntilReset := rw.IsOverLimit(limit, int64(results), hitsAddend)

if !isOverLimit {
return &pb.RateLimitResponse_DescriptorStatus{
Code: pb.RateLimitResponse_OK,
CurrentLimit: limit.Limit,
LimitRemaining: uint32(limitRemaining),
DurationUntilReset: utils.NanosecondsToDuration(rw.newTat - rw.arrivedAt),
}
} else {
if rw.localCache != nil {
durationUntilReset = utils.MaxInt(1, durationUntilReset)

err := rw.localCache.Set([]byte(key), []byte{}, durationUntilReset)
if err != nil {
logger.Errorf("Failing to set local cache key: %s", key)
}
}

return &pb.RateLimitResponse_DescriptorStatus{
Code: pb.RateLimitResponse_OVER_LIMIT,
CurrentLimit: limit.Limit,
LimitRemaining: 0,
DurationUntilReset: utils.NanosecondsToDuration(int64(math.Ceil(float64(rw.tat - rw.arrivedAt)))),
}
}
}

func (rw *RollingWindowImpl) IsOverLimit(limit *config.RateLimit, results int64, hitsAddend int64) (bool, int64, int) {
now := rw.timeSource.UnixNanoNow()

// Time during computation should be in nanoseconds
rw.arrivedAt = now
// Tat (theoretical arrival time) is set to the current request timestamp if it was not set before
rw.tat = utils.MaxInt64(results, rw.arrivedAt)
totalLimit := int64(limit.Limit.RequestsPerUnit)
period := utils.SecondsToNanoseconds(utils.UnitToDivider(limit.Limit.Unit))
quantity := int64(hitsAddend)

// GCRA computation
// Emission interval is the cost of each request
emissionInterval := period / totalLimit
// The new tat defines the end of the window
rw.newTat = rw.tat + emissionInterval*quantity
// We allow the request if it's inside the window
allowAt := rw.newTat - period
rw.diff = rw.arrivedAt - allowAt

previousAllowAt := rw.tat - period
previousLimitRemaining := int64(math.Ceil(float64((rw.arrivedAt - previousAllowAt) / emissionInterval)))
previousLimitRemaining = utils.MaxInt64(previousLimitRemaining, 0)
nearLimitWindow := int64(math.Ceil(float64(float32(limit.Limit.RequestsPerUnit) * (1.0 - rw.nearLimitRatio))))
limitRemaining := int64(math.Ceil(float64(rw.diff / emissionInterval)))
hitNearLimit := quantity - (utils.MaxInt64(previousLimitRemaining, nearLimitWindow) - nearLimitWindow)

if rw.diff < 0 {
rw.PopulateStats(limit, uint64(utils.MinInt64(previousLimitRemaining, nearLimitWindow)), uint64(quantity-previousLimitRemaining), 0)

return true, 0, int(utils.NanosecondsToSeconds(-rw.diff))
} else {
if hitNearLimit > 0 {
rw.PopulateStats(limit, uint64(hitNearLimit), 0, 0)
}

return false, limitRemaining, 0
}
}

func (rw *RollingWindowImpl) IsOverLimitWithLocalCache(key string) bool {
if rw.localCache != nil {
_, err := rw.localCache.Get([]byte(key))
if err == nil {
return true
}
}
return false
}

func (rw *RollingWindowImpl) GetExpirationSeconds() int64 {
if rw.diff < 0 {
return utils.NanosecondsToSeconds(rw.tat-rw.arrivedAt) + 1
}
return utils.NanosecondsToSeconds(rw.newTat-rw.arrivedAt) + 1
}

func (rw *RollingWindowImpl) GetResultsAfterIncrease() int64 {
if rw.diff < 0 {
return rw.tat
}
return rw.newTat
}

func (rw *RollingWindowImpl) GenerateCacheKeys(request *pb.RateLimitRequest,
limits []*config.RateLimit, hitsAddend int64) []utils.CacheKey {
return rw.cacheKeyGenerator.GenerateCacheKeys(request, limits, uint32(hitsAddend), DummyCacheKeyTime)
}

func (rw *RollingWindowImpl) PopulateStats(limit *config.RateLimit, nearLimit uint64, overLimit uint64, overLimitWithLocalCache uint64) {
limit.Stats.NearLimit.Add(nearLimit)
limit.Stats.OverLimit.Add(overLimit)
limit.Stats.OverLimitWithLocalCache.Add(overLimitWithLocalCache)
}

func NewRollingWindowAlgorithm(timeSource utils.TimeSource, localCache *freecache.Cache, nearLimitRatio float32, cacheKeyPrefix string) *RollingWindowImpl {
return &RollingWindowImpl{
timeSource: timeSource,
cacheKeyGenerator: utils.NewCacheKeyGenerator(cacheKeyPrefix),
localCache: localCache,
nearLimitRatio: nearLimitRatio,
}
}
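
To follow the GCRA arithmetic in IsOverLimit above, here is a small illustrative sketch (not code from this PR) that works through a first request against a 60-requests-per-hour limit:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const requestsPerUnit = int64(60)
	period := time.Hour.Nanoseconds()            // unit divider in nanoseconds
	emissionInterval := period / requestsPerUnit // 1 minute: the "cost" of one request

	arrivedAt := time.Now().UnixNano()
	tat := arrivedAt     // first request: the theoretical arrival time starts at "now"
	quantity := int64(1) // hitsAddend

	newTat := tat + emissionInterval*quantity // end of the window after taking this request
	allowAt := newTat - period                // earliest time at which this request is allowed
	diff := arrivedAt - allowAt               // 59 minutes' worth of nanoseconds

	fmt.Println("allowed:", diff >= 0)                                         // true
	fmt.Println("limit remaining:", diff/emissionInterval)                    // 59
	fmt.Println("durationUntilReset (s):", (newTat-arrivedAt)/int64(time.Second)) // 60
}
```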