Mamba does not respect Pytorch's cpuonly #336

Closed
alanhdu opened this issue Jun 17, 2020 · 22 comments

@alanhdu
Contributor

alanhdu commented Jun 17, 2020

To install PyTorch without CUDA, the official docs recommend installing a special cpuonly package (see https://pytorch.org/): that is, we can do something like conda create -n test -c pytorch python=3 pytorch=1.5 cpuonly.

But on Mamba, this will create an environment that installs both cpuonly and cudatoolkit (and installs a PyTorch version that depends on CUDA):

$ mamba --version
mamba 0.3.6
conda 4.8.3
$ mamba create -n test -c pytorch python=3 pytorch=1.5 cpuonly                                                                             
                  __    __    __    __
                 /  \  /  \  /  \  /  \
                /    \/    \/    \/    \
███████████████/  /██/  /██/  /██/  /████████████████████████
              /  / \   / \   / \   / \  \____
             /  /   \_/   \_/   \_/   \    o \__,
            / _/                       \_____/  `
            |/
        ███╗   ███╗ █████╗ ███╗   ███╗██████╗  █████╗
        ████╗ ████║██╔══██╗████╗ ████║██╔══██╗██╔══██╗
        ██╔████╔██║███████║██╔████╔██║██████╔╝███████║
        ██║╚██╔╝██║██╔══██║██║╚██╔╝██║██╔══██╗██╔══██║
        ██║ ╚═╝ ██║██║  ██║██║ ╚═╝ ██║██████╔╝██║  ██║
        ╚═╝     ╚═╝╚═╝  ╚═╝╚═╝     ╚═╝╚═════╝ ╚═╝  ╚═╝

        mamba (0.3.6) supported by @QuantStack

        GitHub:  https://github.com/QuantStack/mamba
        Twitter: https://twitter.com/QuantStack

█████████████████████████████████████████████████████████████

conda-forge/linux-64     Using cache
conda-forge/noarch       Using cache
pkgs/main/linux-64       [====================] (00m:00s) No change
pkgs/main/noarch         [====================] (00m:00s) No change
pkgs/r/linux-64          [====================] (00m:00s) No change
pkgs/r/noarch            [====================] (00m:00s) No change
pytorch/linux-64         [====================] (00m:00s) No change
pytorch/noarch           [====================] (00m:00s) No change

Looking for: ['python=3', 'pytorch=1.5', 'cpuonly']

Transaction

  Prefix: /opt/anaconda/envs/test

  Updating specs:

   - python==3
   - pytorch==1.5
   - cpuonly


  Package              Version  Build                           Channel                    Size
─────────────────────────────────────────────────────────────────────────────────────────────────
  Install:
─────────────────────────────────────────────────────────────────────────────────────────────────

  _libgcc_mutex            0.1  conda_forge                     conda-forge/linux-64     Cached
  _openmp_mutex            4.5  0_gnu                           conda-forge/linux-64     435 KB
  blas                    2.15  mkl                             conda-forge/linux-64      10 KB
  ca-certificates   2020.4.5.2  hecda079_0                      conda-forge/linux-64     Cached
  certifi           2020.4.5.2  py38h32f6830_0                  conda-forge/linux-64     152 KB
  cpuonly                  1.0  0                               pytorch/noarch             2 KB
  cudatoolkit          10.2.89  hfd86e86_1                      pkgs/main/linux-64       365 MB
  intel-openmp          2020.1  217                             pkgs/main/linux-64       780 KB
  ld_impl_linux-64        2.34  h53a641e_5                      conda-forge/linux-64     Cached
  libblas                3.8.0  15_mkl                          conda-forge/linux-64     Cached
  libcblas               3.8.0  15_mkl                          conda-forge/linux-64      10 KB
  libffi                 3.2.1  he1b5a44_1007                   conda-forge/linux-64     Cached
  libgcc-ng              9.2.0  h24d8f2e_2                      conda-forge/linux-64     Cached
  libgfortran-ng         7.5.0  hdf63c60_6                      conda-forge/linux-64     Cached
  libgomp                9.2.0  h24d8f2e_2                      conda-forge/linux-64     Cached
  liblapack              3.8.0  15_mkl                          conda-forge/linux-64      10 KB
  liblapacke             3.8.0  15_mkl                          conda-forge/linux-64      10 KB
  libstdcxx-ng           9.2.0  hdf63c60_2                      conda-forge/linux-64     Cached
  mkl                   2020.1  217                             pkgs/main/linux-64       129 MB
  ncurses                  6.1  hf484d3e_1002                   conda-forge/linux-64     Cached
  ninja                 1.10.0  hc9558a2_0                      conda-forge/linux-64     Cached
  numpy                 1.18.5  py38h8854b6b_0                  conda-forge/linux-64       5 MB
  openssl               1.1.1g  h516909a_0                      conda-forge/linux-64     Cached
  pip                   20.1.1  py_1                            conda-forge/noarch       Cached
  python                 3.8.3  cpython_he5300dc_0              conda-forge/linux-64      71 MB
  python_abi               3.8  1_cp38                          conda-forge/linux-64       4 KB
  pytorch                1.5.0  py3.8_cuda10.2.89_cudnn7.6.5_0  pytorch/linux-64         425 MB
  readline                 8.0  hf8c457e_0                      conda-forge/linux-64     Cached
  setuptools            47.3.1  py38h32f6830_0                  conda-forge/linux-64     637 KB
  sqlite                3.30.1  hcee41ef_0                      conda-forge/linux-64     Cached
  tk                    8.6.10  hed695b0_0                      conda-forge/linux-64     Cached
  wheel                 0.34.2  py38_0                          conda-forge/linux-64      43 KB
  xz                     5.2.5  h516909a_0                      conda-forge/linux-64     Cached
  zlib                  1.2.11  h516909a_1006                   conda-forge/linux-64     Cached

I'm on Fedora 31 with the latest conda and mamba versions.

@wolfv
Member

wolfv commented Jun 17, 2020

Thanks @alanhdu for opening this issue.

Indeed, mamba doesn't respect the cpuonly package the same way conda does. cpuonly is implemented using track_features, and the corresponding pytorch packages carry the matching feature.

That mechanism has effectively been deprecated by conda; conda-forge, for example, now mostly uses mutex packages instead.

I am not sure if we should attempt to support track_features in mamba. As far as I am aware it has some strong drawbacks, and mutex packages are comparatively simple. Maybe we can convince the pytorch maintainers to switch?

cc @SylvainCorlay @xhochy what do you think?

@wolfv
Member

wolfv commented Jun 17, 2020

Quoting from this blog post: https://www.anaconda.com/blog/understanding-and-improving-condas-performance

Prioritize solutions that minimize the number of “track_features” entries in the environment. Track_features are legacy metadata once used for lining up mkl and debug packages. They are deprecated in general, but are still sometimes used to effectively implement a default variant of a package by “weighing down” alternatives with track_features.
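
For illustration, here is a rough sketch of the two mechanisms as conda recipe metadata. This is a simplified, hypothetical example (the real cpuonly and pytorch recipes differ), just to show the shape of each approach:

# track_features approach (deprecated): a metapackage "tracks" a feature,
# which the solver penalizes by default.
package:
  name: cpuonly
  version: "1.0"
build:
  track_features:
    - cpuonly

# The CPU builds of the library then carry the matching feature:
#
#   build:
#     features:
#       - cpuonly
#
# CPU builds are "weighed down" and lose by default; installing cpuonly
# activates the feature and makes them win.

# mutex approach: one metapackage name with mutually exclusive build
# strings (here a hypothetical _device_mutex with builds "cpu" and "cuda").
# A CPU build of the library depends on:
#   - _device_mutex 1.0 cpu
# and a CUDA build depends on:
#   - _device_mutex 1.0 cuda
# Since only one build of _device_mutex can be installed at a time, the
# solver can never mix CPU and CUDA variants in one environment.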

@SylvainCorlay
Member

Maybe we can convince the pytorch maintainers to switch?

I think that would be the most reasonable approach, since this mechanism is deprecated in conda.

@wolfv
Member

wolfv commented Jun 18, 2020

Here is a Google Drive doc from Mike Sarahan (I believe) that describes mutex packages very well: https://docs.google.com/document/d/1S2D4lTYfDV4GNGbRSDWobH2PWilUVXTHRgl-2tnzdxY

@wolfv
Member

wolfv commented Jun 18, 2020

I opened a bug report on pytorch. I guess this will take a while to materialize...

@wolfv
Member

wolfv commented Jun 18, 2020

pytorch/pytorch#40213

@wolfv
Member

wolfv commented Mar 16, 2021

Work in progress: pytorch/builder#488

@zbs

zbs commented Mar 23, 2021

Are there any workarounds for this in the meantime?

@azagniotov

azagniotov commented May 9, 2021

I worked around it by explicitly pinning PyTorch and friends to CPU builds via their build strings, e.g. from my Dockerfile:

ARG PYTORCH_VERSION=1.8.0
ARG TORCHVISION_VERSION=0.9.0
...
...
...
RUN mamba install --quiet --yes -c conda-forge \
   "pytorch==${PYTORCH_VERSION}=cpu_py38hd248515_1" \
   "torchvision==${TORCHVISION_VERSION}=py38h89b28b9_0_cpu" 

@wolfv
Member

wolfv commented Sep 17, 2021

Finally, the latest PR was merged, and pytorch has started to use mutex packages the way conda-forge and many other channels do.

@wolfv wolfv closed this as completed Sep 17, 2021
@rgommers

Note that it's not yet rolled out to the pytorch channel, only to pytorch-nightly. The PyTorch 1.10.0 release will come with the new mutex.

@wolfv
Member

wolfv commented Sep 17, 2021

Thanks @rgommers, good to know!

@jucor

jucor commented Sep 21, 2021

Sorry to be a bother, but I have the opposite problem: I can't for the life of me get pytorch's cuda version installed with mamba :/ Any idea?

@IvanYashchuk

Sorry to be a bother, but I have the opposite problem: I can't for the life of me get pytorch's cuda version installed with mamba :/ Any idea?

Which channel are you using, pytorch, pytorch-nightly or conda-forge? Is it the case that mamba install -c pytorch -c nvidia pytorch doesn't install the CUDA build of PyTorch for you? Could you try running the command in a fresh environment or restricting the build with mamba install -c pytorch -c nvidia pytorch=*=*cuda*?

@jucor

jucor commented Sep 21, 2021 via email

@wolfv
Member

wolfv commented Sep 21, 2021

@jucor can you share more about your system?

  • what OS?
  • what version of mamba?
  • the output of conda info

@wolfv
Member

wolfv commented Sep 21, 2021

Ah, sorry, the proper syntax is

CONDA_SUBDIR=linux-64 mamba create -n test "pytorch=*=*cuda*" -c pytorch -c nvidia --dry-run

So use "pytorch=*=*cuda*" to properly match on the build string.

@jucor

jucor commented Sep 21, 2021 via email

@wolfv
Member

wolfv commented Nov 6, 2021

Can you add - pytorch * *cuda* to the env file?
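
For example, a minimal environment file might look like this (an untested sketch; the nvidia channel is assumed for the CUDA dependencies, and the env name is made up):

name: torch-cuda
channels:
  - pytorch
  - nvidia
dependencies:
  - python=3
  - pytorch * *cuda*   # pin the build string to a CUDA build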

@FeryET

FeryET commented Nov 29, 2021

Can you add - pytorch * *cuda* in the env file?

Oh, never mind, I misread that. I've deleted my previous comment because I'm actually trying to install the cpuonly version.

How do I use the mutex package? Should I still declare cpuonly in my environment.yaml? I have only the pytorch channel in my environment file. According to this, the new mutex should be installed, which should work with Mamba:

Note that it's not yet rolled out to the pytorch channel, only to pytorch-nightly. The PyTorch 1.10.0 release will come with the new mutex.

But I cannot install the cpu-only version using mamba from my environment.yaml. Any ideas?

EDIT: Actually, it's installing both cpuonly and cudatoolkit, which I don't need.

@IvanYashchuk

IvanYashchuk commented Nov 29, 2021

Here's a simple test.yaml file that installs a CPU-only version of PyTorch from the pytorch channel.

name: pytorch-cpuonly
channels:
  - pytorch
  - defaults
dependencies:
  - python=3
  - pytorch
  - cpuonly

With test.yaml in place, the environment can be created with mamba env create --file=test.yaml -n pytorch-cpuonly.

I think it's better to ask these kinds of questions on the PyTorch Forum (https://discuss.pytorch.org/), where you're likely to receive a quick response. This issue is closed, as the original problem is resolved. If you think there's a bug in Mamba, feel free to open a new issue.

@hmaarrfk
Contributor

hmaarrfk commented Jun 5, 2022

I got pointed here from another post.

On the conda-forge channel, there is no cpuonly package; it is a feature specific to the pytorch channel.
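
On conda-forge you can instead select the CPU variant by pinning the build string, since the CPU builds there carry a cpu_* build string (as in the cpu_py38hd248515_1 build shown in the Dockerfile workaround above). A minimal, untested sketch:

name: torch-cpu-cf
channels:
  - conda-forge
dependencies:
  - python=3
  - pytorch=*=cpu*   # match conda-forge's cpu_* build variants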
