
Responsible Memory Usage #562

Open
gavofyork opened this issue Mar 2, 2020 · 12 comments

Comments

@gavofyork
Member

gavofyork commented Mar 2, 2020

News: https://hackmd.io/pvj8fsC3QbSc7o54ZH4qJQ

There are memory leaks in the codebase. Not only do we need to fix them, we need a proper attitude to ensure that they do not happen again.

Thus we need to achieve three things:

  1. A solid and well-documented approach to heap tracing.

  2. To rewrite our optional allocations under an attitude of zero-cost optimisations and, in general, ensure that all dynamic allocations are bounded.

  3. To integrate all dynamic allocations into our triad of reporting apparatus: prometheus, telemetry and informant.

Heap tracing

It should not take days of research to figure out which lines in the codebase are responsible for so much memory being allocated. We need a clear, well-maintained document on how to set up and run Substrate (and derivative clients like Polkadot) in order to get full traces of allocations.

There are several tools that may be used (heaptrack, jemalloc, Massif/Valgrind, ...) and it is not clear which are viable, and/or sensible.
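As a starting point for such a document, the following is a sketch of how two of the candidate tools are commonly invoked; the exact binary name and flags (`substrate --dev`) are assumptions and would need to be validated for each client:

```shell
# Build with debug info so allocation backtraces resolve to source lines
# (ensure `debug = true` under [profile.release] in Cargo.toml).
cargo build --release

# Option 1: heaptrack — records every allocation with its backtrace.
heaptrack ./target/release/substrate --dev
heaptrack_gui heaptrack.substrate.*.gz   # inspect allocation hotspots

# Option 2: Valgrind's Massif — much slower, but needs no instrumentation.
valgrind --tool=massif ./target/release/substrate --dev
ms_print massif.out.<pid>
```

Part of the evaluation would be confirming which of these produce usable output at a full node's allocation rate.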

Zero-cost optimisations

This principle is a little like the "pay for what you use" mantra of Zero-Cost Abstractions that Rust gives. Essentially, it should be possible to "turn off" or reduce non-mandatory memory allocations to a large degree. This would come at a potentially large performance cost, but we will assume for now that under some circumstances this is acceptable, and in any case, having the ability to tune how much memory is allowed to be allocated in each of the various dynamic subsystems is useful.

We need to introduce CLI options such that, at the extreme end, a user (or developer) can minimise expected memory usage. In many cases, this memory usage (given by buffer sizes, cache limits, queue capacities and pre-allocation values) is hard-coded. In other cases, it's unbounded. It should, at all times, be bounded, with the bound configurable by the user over the CLI.

It should be configurable for all dynamically allocating subsystems. Every buffer, queue and cache. Every time something is allocated and kept around for later use. Every allocation that isn't strictly required for normal operation. If a buffer or queue is automatically grown under some circumstance, it should be instantly shrunk again once the need is gone.

One example would be the 1 GB that is given to RocksDB's cache - it should be possible to lower this, ideally all the way to zero. There are, I'm sure, many more examples.

Items that allocate large amounts but that are used transiently (such as the memory slab for wasm execution) should be configurable so that they remain allocated only as long as they are being actively used. This will mean that every execution will need to allocate afresh, but we don't care.

The end result of this effort is to be able to lower the expected amount of memory used in stable-state (i.e. when not importing or authoring) to around 250 MB, even with dozens of peers.

Footprint reporting

In addition to this, every dynamically allocated queue, buffer and cache should continuously report its size and footprint out to telemetry, informant and/or prometheus. We should obviously begin with the lowest-hanging fruit - the subsystems we suspect of behaving badly - but eventually we should have an attitude of reporting everything.

We probably want to do this only in a certain mode of operation, to avoid performance costs under normal operation.

Related: paritytech/substrate#4679

@burdges

This comment was marked as outdated.

@gnunicorn
Contributor

@burdges could you elaborate? I can't find that crate in our dependency tree.

@burdges

This comment was marked as outdated.

@darkfriend77

darkfriend77 commented Feb 16, 2021

What are the suggested tools for investigating the bad memory behaviour in v3.0.0?

paritytech/substrate#8117

heaptrack didn't work out for me here

@paritytech paritytech deleted a comment Feb 24, 2023
@ghost

This comment was marked as outdated.

@bkchr
Member

bkchr commented Feb 28, 2023

I thought this was an open project. Who deletes valid comments in such a nontransparent manner?

No one should have this right. Solana outages happen because of such arrogant behavior, especially by mvines (systemic-flaws/dapp-platforms#5).

Going around spamming issues is a reason to have comments removed. If you look around you will find, from time to time, comments that have been removed. No comments are removed that are on the topic of the issue. Sorry if that is against your beliefs.

@ghost

This comment was marked as outdated.

@ghost

This comment was marked as off-topic.

@bkchr
Member

bkchr commented Feb 28, 2023

At the point you commented on multiple issues, which looked like spam.

@ghost

This comment was marked as off-topic.

@ghost

This comment was marked as outdated.

@ghost

ghost commented Mar 5, 2023

(just for the sake of completeness)

@gavofyork , this is actually an issue suitable for a bounty:

  • Experience Level: high
  • Bounty Amount: in the range of $3K to $10K
    • You work things out alone (no Q&A with core-devs, they are busy with other work)
    • Payout Rules: we like it, we use it, we pay you
  • Task: provide a mechanism to detect and prevent memory leaks/over-usage early
    • Should be usable by all ParityTech rust repositories
  • Can depend on paid services (an open-source option must exist though)

Place such a simple(!) bounty (without essays about process etc.; it is a standard problem), and you should have a result soon (instead of having the issue open for a further 2 years).

Triage

Needs to be removed from https://github.com/paritytech/substrate/milestone/10 (a closed milestone should not have issues assigned)

@altonen altonen transferred this issue from paritytech/substrate Aug 24, 2023
claravanstaden pushed a commit to Snowfork/polkadot-sdk that referenced this issue Dec 8, 2023
helin6 pushed a commit to boolnetwork/polkadot-sdk that referenced this issue Feb 5, 2024
serban300 pushed a commit to serban300/polkadot-sdk that referenced this issue Mar 26, 2024