2.0 regression: large overhead of libsolv's solver_unifyrules when multichannels are used #3393
Comments
I cannot reproduce the errors you report. On my machine, installing those packages takes around 1.5 GiB of memory. @ndevenish: could you provide the difference in your instances' resource usage?
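For example, something along these lines captures peak memory and wall-clock time (a sketch assuming GNU time is installed at /usr/bin/time; the environment name test-env is illustrative and the package list is reused from the original report):

# GNU time's -v output includes "Maximum resident set size" (peak memory) and wall-clock time
/usr/bin/time -v micromamba create -y -n test-env -c conda-forge gnuplot python numpy pymca "workflows>=1.7" xraylib zocalo

Running the same command under the previous and the current micromamba and comparing those figures would make the regression concrete.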
On this environment file:
This is exactly 700 GB, btw.
RHEL8, 16 GB memory machine:
Possibly because it seems to be in a package-cache-fetching loop?
This is a regression of micromamba 2.0.0.
Ah, excellent detective work. Removing the conda-forge:: prefix avoids the problem.
Yes, we must only parse the subdirectory once. |
conda-forge:: prefix on package specification was causing redownload and reparsing for every dependency. See mamba-org/mamba#3393
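As a sketch of the spec styles involved (the environment name repro is illustrative, the package names are reused from the original report, and the pre-fix behaviour is as described in the commit message above):

# affected form: each spec carries the channel prefix, which caused the conda-forge
# subdir to be re-downloaded and re-parsed for every dependency before the fix
micromamba create -y -n repro conda-forge::gnuplot conda-forge::python conda-forge::numpy

# workaround: pass the channel once with -c and drop the prefix from the specs
micromamba create -y -n repro -c conda-forge gnuplot python numpy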
Actually, the channel duplication is not the only cause: most of the runtime after its correction is also due to a costly quick sort execution in libsolv's solver_unifyrules.
With the conda-forge:: prefix the resolution remains noticeably slower than without it. I guess this might be due to the comparison function for package solvables when the resolution is run.
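One way to check that hypothesis (a sketch assuming perf is available and libsolv has debug symbols; solver_unifyrules is the libsolv function named in this issue's title, and the spec list is illustrative):

# profile the whole solve, then look for time spent in libsolv's rule unification
perf record -g -- micromamba create -y -n repro -c conda-forge gnuplot python numpy pymca
perf report --stdio | grep -i solver_unifyrules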
2.0 regression: large overhead of libsolv's solver_unifyrules when multichannels are used
From @ndevenish in the QS lobby on gitter:
"
Is there any known issues with current micromamba about resource usage, possibly related to Centos/RHEL? I've had two separate people come to me this week with issues with:
a) using micromamba in a container build dying because it filled their entire temp disk (when installing very few packages).
b) being what looked like OOM killed after taking >60% of their memory.
Both tasks which have worked before.
The out-of-disk-space instance was running:
micromamba create -y -c conda-forge gnuplot python numpy pymca workflows>=1.7 xraylib zocalo
and it took at least 4GB of scratch disk space (the smallest of possible locations that podman was using to do container working on their system).
The other instance didn't get past resolving (an admittedly rather large requirement) but was using >9GB of ram on a 16GB machine the last time I checked before it died.
"