
The memory usage of jetstream pod is large [v2.10.20] #5870

Open
AllenZMC opened this issue Sep 10, 2024 · 13 comments
Labels
defect Suspected defect such as a bug or regression

Comments

@AllenZMC

AllenZMC commented Sep 10, 2024

Observed behavior

The memory usage of the JetStream pod is large and stays at around 2Gi, even though the JetStream streams do not contain any in-memory data. Why is this?
[Screenshot 2024-09-10 13:15:48]

In addition, when the pod is restarted, the memory usage becomes very small, around tens of MB.

What is the reason for this?

Expected behavior

When the JetStream streams do not contain any in-memory data, memory usage should in theory decrease accordingly.

Server and client version

nats-server: v2.10.20

Host environment

No response

Steps to reproduce

No response

@AllenZMC AllenZMC added the defect Suspected defect such as a bug or regression label Sep 10, 2024
@AllenZMC AllenZMC changed the title The memory usage of jetstream is large. The memory usage of jetstream pods is large. Sep 10, 2024
@AllenZMC AllenZMC changed the title The memory usage of jetstream pods is large. The memory usage of jetstream pod is large. Sep 10, 2024
@wallyqs
Member

wallyqs commented Sep 10, 2024

If the system is running in k8s, you need to make sure that you are setting GOMEMLIMIT so that the Go GC does not size itself based on the host memory instead of the container limit.
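
A minimal sketch of setting GOMEMLIMIT in a Kubernetes container spec, assuming a 2Gi memory limit on the pod (the container name, image tag, and the 1800MiB value are illustrative, not from this thread):

    containers:
      - name: nats
        image: nats:2.10.20-alpine
        env:
          - name: GOMEMLIMIT
            value: "1800MiB"   # keep the Go soft memory limit below the 2Gi container limit
        resources:
          limits:
            memory: 2Gi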

@wallyqs wallyqs changed the title The memory usage of jetstream pod is large. The memory usage of jetstream pod is large [v2.10.20] Sep 10, 2024
@neilalexander
Member

Can you please provide memory profiles so we can see what's going on?

Either nats server request profile allocs using the system account, or https://docs.nats.io/running-a-nats-service/nats_admin/profiling.

@AllenZMC
Author

AllenZMC commented Sep 10, 2024

Can you please provide memory profiles so we can see what's going on?

Either nats server request profile allocs using the system account, or https://docs.nats.io/running-a-nats-service/nats_admin/profiling.

@neilalexander
How do I configure prof_port = 65432 in JetStream?
The JetStream pod fails to start with the error message: "This parameter is not recognized"

@neilalexander
Member

It's a configuration option in the NATS Server config file. If you are running it under Kubernetes, you may need to do something like this:

config:
  merge:
    prof_port: 65432
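
Once prof_port is enabled, one hedged way to pull an allocation profile is via the standard Go pprof endpoints the server exposes on that port (the pod address below is a placeholder):

    curl -o mem.prof 'http://<pod-ip>:65432/debug/pprof/allocs'
    go tool pprof -top mem.prof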

@neilalexander
Member

Also note that you don't need prof_port enabled to use the NATS CLI; you just need access to the system account.
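
For example, a hedged sketch of requesting the profile with the CLI over the system account (the user and password values are placeholders):

    nats server request profile allocs --user sys --password '<sys-account-password>'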

@AllenZMC
Author

mem.prof.zip
@neilalexander Here is the memory profile.

@derekcollison
Member

Thanks @AllenZMC! @neilalexander will take a look.

@neilalexander
Member

Thanks for the profile, was this taken when the memory usage was inflated? It is only showing 30MB heap allocations. Are you using GOMEMLIMIT?

@AllenZMC
Author

AllenZMC commented Sep 20, 2024

Thanks for the profile, was this taken when the memory usage was inflated? It is only showing 30MB heap allocations. Are you using GOMEMLIMIT?

It was indeed captured while memory usage was very high. I am also confused by this.

And what is GOMEMLIMIT? NATS runs in a container; is this configured in the Go program or somewhere else?

@derekcollison
Member

How are you measuring memory?

@AllenZMC
Author

How are you measuring memory?

container_memory_working_set_bytes{pod='a', namespace='b', container='c'}

@derekcollison
Member

Is that RSS or VSS?

@AllenZMC
Author

AllenZMC commented Sep 24, 2024

Is that RSS or VSS?

container_memory_working_set_bytes is a Kubernetes (cAdvisor) metric. It is the amount of physical memory actually in use by the container, excluding memory that can be reclaimed, so it is neither RSS nor VSS.
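
For comparison, a hedged sketch that queries RSS alongside the working set (assuming cAdvisor metrics are scraped; the label values are placeholders):

    container_memory_rss{pod='a', namespace='b', container='c'}
    container_memory_working_set_bytes{pod='a', namespace='b', container='c'}

If the working set stays high while RSS is small, the gap is typically active file-page cache attributed to the container rather than Go heap.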
