Ah, on Windows, large OS pages (2 MiB, called huge pages on Linux) and huge OS pages (1 GiB) are "pinned" in memory and cannot be decommitted. In particular, huge OS pages are always mapped to physical memory and cannot be shared between processes, for example (which is why you generally need to grant a process the permission to "lock" memory on Windows).
Maybe this is too restrictive, though, since these restrictions may not hold on Linux (in particular, we "decommit" on Linux by using mprotect with PROT_NONE).
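For context, here is a minimal sketch of the two "decommit" strategies being contrasted; the helper `os_decommit` is illustrative, not mimalloc's actual code (which lives in its OS primitives layer):

```c
#include <stddef.h>
#ifdef _WIN32
#include <windows.h>
#else
#include <sys/mman.h>
#endif

// Illustrative decommit helper showing the platform difference discussed above.
static int os_decommit(void* p, size_t size) {
#ifdef _WIN32
  // Windows: release the physical pages behind the range. This does not work
  // for large/huge OS pages, which stay pinned to physical memory.
  return VirtualFree(p, size, MEM_DECOMMIT) ? 0 : -1;
#else
  // Linux: mark the range inaccessible, as described in the comment above.
  // By itself this keeps the mapping; an madvise(MADV_DONTNEED) is typically
  // added so the kernel can actually reclaim the physical pages.
  return mprotect(p, size, PROT_NONE);
#endif
}
```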
Yes, for long-running processes like Dragonfly we would like to have some control over RSS usage. We have even implemented a special administrative command that calls mi_heap_collect for every heap in the system. It gives us confidence that Dragonfly itself does not leak memory and that all memory is accountable.
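A minimal sketch of what such an administrative command could look like, assuming the application keeps its own registry of heaps (mimalloc does not expose a way to enumerate all heaps in a process); `g_heaps`, `g_heap_count`, and `admin_collect_all` are hypothetical names, not Dragonfly's actual code:

```c
#include <mimalloc.h>

#define MAX_HEAPS 64
static mi_heap_t* g_heaps[MAX_HEAPS];  // heaps registered by the application
static size_t     g_heap_count = 0;

// Force-collect every registered heap so freed memory is returned to the OS.
void admin_collect_all(void) {
  for (size_t i = 0; i < g_heap_count; i++) {
    mi_heap_collect(g_heaps[i], /*force=*/true);
  }
  mi_collect(true);  // also collect the current thread's default heap
}
```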
Another issue we see when using large (2 MB) pages is that the RSS of a long-running process can grow steadily under some usage patterns: the gap between RSS and the total block size reported by mi_heap_visit_blocks over all heaps widens over time. We recently decided to stop using large 2 MB pages, and the memory usage of our servers has become more stable since.
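For illustration, a sketch of how that gap can be measured, using mimalloc's public mi_heap_visit_blocks API and reading RSS from /proc/self/statm on Linux; `heap_live_bytes` and `process_rss_bytes` are illustrative helpers, not Dragonfly's actual code:

```c
#include <stdio.h>
#include <unistd.h>
#include <mimalloc.h>

// Visitor that accumulates the sizes of live blocks in one heap.
static bool sum_block(const mi_heap_t* heap, const mi_heap_area_t* area,
                      void* block, size_t block_size, void* arg) {
  (void)heap; (void)area;
  if (block != NULL) *(size_t*)arg += block_size;
  return true;  // continue visiting
}

// Total bytes held in live blocks of a single heap.
static size_t heap_live_bytes(mi_heap_t* heap) {
  size_t total = 0;
  mi_heap_visit_blocks(heap, /*visit_all_blocks=*/true, &sum_block, &total);
  return total;
}

// Linux-only: resident set size from /proc/self/statm (second field, in pages).
static size_t process_rss_bytes(void) {
  long total_pages = 0, rss_pages = 0;
  FILE* f = fopen("/proc/self/statm", "r");
  if (f == NULL) return 0;
  if (fscanf(f, "%ld %ld", &total_pages, &rss_pages) != 2) rss_pages = 0;
  fclose(f);
  return (size_t)rss_pages * (size_t)sysconf(_SC_PAGESIZE);
}
```

The gap in question is then `process_rss_bytes()` minus the sum of `heap_live_bytes` over all heaps; watching that difference over time shows whether decommitted memory is actually being returned to the OS.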
It seems that mimalloc does not decommit memory that was committed using large pages. See:

mimalloc/src/arena.c, line 540 (commit 03020fb)

and

mimalloc/src/arena.c, line 832 (commit 03020fb)

What is the motivation behind this? Why can large pages not be decommitted?