Memory scaling issue with large number of services #945
Comments
In order to reduce the memory cost, we need to tune the scale-up/scale-in parameters, cc @nlgwcy
OK, maybe the scale-up/scale-in step is too big. I will optimize it later.
I tried to use Inspektor Gadget, but it could not account for the memory of the inner_map, so it showed no memory change.

I used the bpftool command to view the BPF maps and their statistics, and found that the memory does change.
before: [bpftool output omitted in this extract]
after: [bpftool output omitted in this extract]
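For reference, a minimal sketch of how BPF map memory can be inspected with bpftool; the exact invocation used above was not captured, and the bytes_memlock field is an assumption that holds on recent bpftool/kernel versions:

    # List all BPF maps; recent versions also print per-map memlock bytes
    bpftool map show

    # Sum memlock accounting across all maps via JSON output (requires jq)
    bpftool -j map show | jq '[.[].bytes_memlock // 0] | add'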
Below are the startup logs for kmesh. I started 1000 services, deleted them, and then brought them up again. [logs omitted in this extract]
Motivation:
Our business case requires that memory usage scale with the number of services, especially in very large-scale deployments.
1. What we did
Environment Details: [omitted in this extract]
We started scaling up in batches of 500 services, using a yaml file and command of the form sketched below.
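The original yaml file and command were not captured in this extract; the following is a minimal sketch of the kind of Service template and batch-creation loop that might have been used. The file name, service names, ports, and selector are placeholder assumptions, not the reporter's actual manifest.

    # Hypothetical template (svc-template.yaml); all names are placeholders
    apiVersion: v1
    kind: Service
    metadata:
      name: SVC_NAME
    spec:
      selector:
        app: test-backend
      ports:
        - port: 80
          targetPort: 8080

    # Hypothetical batch-creation loop for one batch of 500 services
    for i in $(seq 1 500); do
      sed "s/SVC_NAME/test-svc-$i/" svc-template.yaml | kubectl apply -f -
    done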
After every 500 services, we measured the memory consumption using Inspektor Gadget. The command used to measure the memory is:
kubectl gadget top ebpf --sort comm
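Note: Inspektor Gadget must be deployed to the cluster before the gadget can run; with the krew plugin this is typically done as follows.

    # Deploy Inspektor Gadget to the cluster, then run the top ebpf gadget
    kubectl gadget deploy
    kubectl gadget top ebpf --sort comm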
2. What we observed
The total memory usage of the kmesh BPF maps remained constant, even though the number of entries in the maps increased (table below). (One plausible explanation, offered here as an assumption: BPF maps are typically preallocated for their configured max_entries at creation time, so memlock accounting does not grow as entries are inserted.)
The detailed table with the memory consumption figures is attached below; please refer to the MAPMEMORY column.
memory_hce.txt
3. Why we think this is a problem
Our business case requires memory usage to scale as we deploy more services, instead of remaining fixed.