How to support dynamic pool? #714

Open
ningyougang opened this issue Sep 24, 2024 · 1 comment

Comments

ningyougang commented Sep 24, 2024

This repo (envoyproxy/ratelimit) uses mediocregopher/radix to manage Redis connections, and the pool has the following comment:

// Pool is dynamic in that it can create more connections on-the-fly to handle
// increased load. The maximum number of extra connections (if any) can be
// configured, along with how long they are kept after load has returned to
// normal.
(https://github.com/mediocregopher/radix/blob/v3/pool.go#L263)

So it seems envoyproxy/ratelimit supports a dynamic pool by default.
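
For reference, here is a minimal Go sketch of how a radix v3 pool can be configured to grow on the fly via its PoolOpts (the address, sizes, and durations are placeholder assumptions, not what ratelimit actually configures):

```go
package main

import (
	"log"
	"time"

	"github.com/mediocregopher/radix/v3"
)

func main() {
	// Base pool of 4 connections (placeholder value).
	pool, err := radix.NewPool("tcp", "127.0.0.1:6379", 4,
		// Allow up to 8 extra on-the-fly connections; the overflow buffer is
		// drained once per second after load returns to normal.
		radix.PoolOnFullBuffer(8, 1*time.Second),
		// When the pool is empty, wait 100ms for a free connection before
		// creating a new one instead of blocking indefinitely.
		radix.PoolOnEmptyCreateAfter(100*time.Millisecond),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	var pong string
	if err := pool.Do(radix.Cmd(&pong, "PING")); err != nil {
		log.Fatal(err)
	}
	log.Println(pong)
}
```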

I deployed this repo with the following k8s deployment spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
  labels:
    kind: ratelimit-service
  name: test-ratelimit
  namespace: test-namespace
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/name: test-ratelimit
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: test-ratelimit
        kind: ratelimit-service
    spec:
      containers:
      - args:
        - /bin/ratelimit
        env:
        - name: LOG_FORMAT
          value: text
        - name: LOG_LEVEL
          value: info
        - name: DECISION_LOG_SCOPE
          value: over_limit
        - name: REDIS_SOCKET_TYPE
          value: tcp
        - name: REDIS_TYPE
          value: cluster
        - name: REDIS_URL
          value: xxx.xxx.xxx.xxx:6379,xxx.xxx.xxx.xxx:6379,xxx.xxx.xxx.xxx:6379,xxx.xxx.xxx.xxx:6379
        - name: REDIS_POOL_SIZE
          value: "1"     <----------------- this is key config
        - name: REDIS_AUTH
          value: xxxx
        - name: REDIS_TLS
          value: "false"
        - name: REDIS_PIPELINE_WINDOW
          value: 150us
        - name: REDIS_PIPELINE_LIMIT
          value: "0"
        - name: RUNTIME_ROOT
          value: /data
        - name: RUNTIME_SUBDIRECTORY
          value: ratelimit
        - name: RUNTIME_IGNOREDOTFILES
          value: "true"
        - name: RUNTIME_WATCH_ROOT
          value: "false"
        - name: LOCAL_CACHE_SIZE_IN_BYTES
          value: "1048576"
        - name: CACHE_KEY_PREFIX
          value: test-prefix_
        - name: USE_STATSD
          value: "true"
        image: ratelimit:tag   # the image I built from this repo
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sleep
              - "10"
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: debug
            scheme: HTTP
          initialDelaySeconds: 15
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: ratelimit-service
        ports:
        - containerPort: 8080
          name: json
          protocol: TCP
        - containerPort: 8081
          name: rpc
          protocol: TCP
        - containerPort: 6070
          name: debug
          protocol: TCP
        - containerPort: 9102
          name: rl-metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: debug
            scheme: HTTP
          initialDelaySeconds: 15
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "4"
            memory: 2Gi
          requests:
            cpu: "0"
            memory: 128Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data/ratelimit/config
          name: rl-configs
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: ratelimit-cm
        name: rl-configs
      - configMap:
          defaultMode: 420
          name: ratelimit-st-cm
        name: st-configs

After the deployment succeeded, I ran the test below; it seems the dynamic pool is not working.
Test results:

| RPS   | ratelimit pod number | REDIS_POOL_SIZE | active connections number | total connections number | 429 requests ratio |
|-------|----------------------|-----------------|---------------------------|--------------------------|--------------------|
| 0     | 1 | 1 | 4 | 4 | N/A |
| 5000  | 1 | 1 | 4 | 4 | 100% |
| 10000 | 1 | 1 | 4 | 4 | 99.9% |
| 15000 | 1 | 1 | 4 | 4 | 34% (sometimes the ratelimit pod crashed) |
| 20000 | 1 | 1 | 4 (unchanged before the pod crash) | 4 | ratelimit pod crashed |

(The active connections number is taken from the cx_active metric; the total connections number is taken from the cx_total metric.)

I have two questions.

  • From the test above, it seems new connections are not created under higher RPS. Is there any way to have new connections created automatically when RPS increases?

  • It seems there is some relation between REDIS_POOL_SIZE and the active connections number, e.g.

    active connections number = REDIS_POOL_SIZE * 4

    Does anyone know the reason? (See the sketch after this list.)
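
One possible explanation for the ×4 relation (a hedged sketch, not verified against the ratelimit source): radix v3's Cluster creates one pool per cluster node, so the total connection count would be roughly REDIS_POOL_SIZE multiplied by the number of nodes in REDIS_URL (4 in the spec above). Addresses and sizes below are placeholders:

```go
package main

import (
	"log"

	"github.com/mediocregopher/radix/v3"
)

func main() {
	poolSize := 1 // corresponds to REDIS_POOL_SIZE=1 in the deployment above

	// One pool of poolSize connections is created for each cluster node,
	// so 4 nodes * poolSize 1 = 4 connections in total.
	poolFunc := func(network, addr string) (radix.Client, error) {
		return radix.NewPool(network, addr, poolSize)
	}

	cluster, err := radix.NewCluster(
		[]string{"10.0.0.1:6379", "10.0.0.2:6379", "10.0.0.3:6379", "10.0.0.4:6379"},
		radix.ClusterPoolFunc(poolFunc),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cluster.Close()
}
```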

@ningyougang changed the title from "How to supoprt dynamaic pool?" to "How to support dynamaic pool?" Sep 24, 2024
@ningyougang changed the title from "How to support dynamaic pool?" to "How to support dynamic pool?" Sep 24, 2024
@ningyougang reopened this Sep 27, 2024
ningyougang (Author) commented:

I opened an issue here: mediocregopher/radix#355
It seems automatically adjusting the number of connections based on RPS is not supported.
