Remove "track_features" from cpuonly conda package #823

Merged · 6 commits · Sep 17, 2021

Conversation

@IvanYashchuk (Contributor) commented Aug 4, 2021

This PR fixes pytorch/pytorch#40213. The command `mamba install -c nvidia -c ivanyashchuk pytorch cpuonly` now works.

Currently, the PyTorch conda package uses "track_features" to enable/disable the CUDA/CPU builds.
Using track_features doesn't work well with mamba, a fast alternative to conda. This PR introduces a new metapackage, pytorch-mutex, that controls the PyTorch variant; it's similar to how blas and mpi are handled in conda.
The cpuonly package now has a runtime dependency on a pinned pytorch-mutex.
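As a rough sketch of the mutex pattern (illustrative only; the actual recipes in this PR may differ in the details), pytorch-mutex is built twice with the build strings cpu and cuda, and cpuonly pins the cpu build at runtime:

  # pytorch-mutex/meta.yaml (sketch): one build with string "cpu", one with "cuda"
  package:
    name: pytorch-mutex
    version: 1.0
  build:
    string: cpu            # the CUDA variant is built with string "cuda"

  # cpuonly/meta.yaml (sketch): depend on the cpu build of the mutex
  requirements:
    run:
      - pytorch-mutex 1.0 cpu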

At this point, mamba was not automatically choosing the latest version of cpuonly and preferred the old one (bumping the version number or build number didn't help; maybe mamba-org/mamba#747 is related). So the CPU variant of PyTorch was picked up by mamba only with an explicit `mamba install pytorch cpuonly=2` request. Adding run_constrained to pytorch-nightly/meta.yaml fixed this.
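The effect of the run_constrained section is roughly the following (a sketch; the cpuonly <0 spec is visible in the solver error quoted later in this thread, while the CPU-build spec here is just an assumption): the CUDA builds declare a constraint on cpuonly that can never be satisfied, so they conflict with any installed cpuonly, while the CPU builds stay compatible with it.

  # In the CUDA builds of pytorch (sketch):
  requirements:
    run_constrained:
      - cpuonly <0          # unsatisfiable, so a CUDA build can never coexist with cpuonly

  # In the CPU builds of pytorch (sketch):
  requirements:
    run_constrained:
      - cpuonly             # any cpuonly version is fine alongside a CPU build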

I introduced a separate package instead of using an outputs: section in pytorch-nightly/meta.yaml because working with outputs: is not easy and would require too many changes to make everything work.

The order of updates to the conda pytorch channel is: pytorch-mutex -> cpuonly -> pytorch. The packages have been uploaded to the ivanyashchuk channel for testing.

Here are the outputs of some conda/mamba commands:
conda install -c nvidia -c ivanyashchuk pytorch
# correctly gives pytorch cuda version by default

The following NEW packages will be INSTALLED:

  cudatoolkit        nvidia/linux-64::cudatoolkit-10.2.89-h8f6ccaa_8
  pytorch            ivanyashchuk/linux-64::pytorch-1.9.0-py3.8_cuda10.2_cudnn7.6.5_1
  pytorch-mutex      ivanyashchuk/noarch::pytorch-mutex-1.0-cuda
conda install -c nvidia -c ivanyashchuk pytorch cpuonly

The following NEW packages will be INSTALLED:

  cpuonly            ivanyashchuk/noarch::cpuonly-2.0-0
  pytorch            ivanyashchuk/linux-64::pytorch-1.9.0-py3.8_cpu_1
  pytorch-mutex      ivanyashchuk/noarch::pytorch-mutex-1.0-cpu
mamba install -c nvidia -c ivanyashchuk pytorch

  Package                Version  Build                        Channel                     Size
─────────────────────────────────────────────────────────────────────────────────────────────────
  Install:
─────────────────────────────────────────────────────────────────────────────────────────────────

  + cudatoolkit          10.2.89  h8f6ccaa_8                   nvidia/linux-64           450 MB
  + pytorch                1.9.0  py3.8_cuda10.2_cudnn7.6.5_1  ivanyashchuk/linux-64     706 MB
  + pytorch-mutex            1.0  cuda                         ivanyashchuk/noarch         3 KB
mamba install -c nvidia -c ivanyashchuk pytorch cpuonly

  Package                Version  Build           Channel                     Size
────────────────────────────────────────────────────────────────────────────────────
  Install:
────────────────────────────────────────────────────────────────────────────────────

  + cpuonly                  2.0  0               ivanyashchuk/noarch         2 KB
  + pytorch                1.9.0  py3.8_cpu_1     ivanyashchuk/linux-64      73 MB
  + pytorch-mutex            1.0  cpu             ivanyashchuk/noarch         3 KB
Install CUDA pytorch, then install `cpuonly`:

With conda, the old cpuonly is chosen and PyTorch is downgraded to the corresponding CPU version with build number 0.

conda install -c ivanyashchuk cpuonly

The following NEW packages will be INSTALLED:

cpuonly            ivanyashchuk/noarch::cpuonly-1.0-0

The following packages will be DOWNGRADED:

pytorch                 1.9.0-py3.8_cuda10.2_cudnn7.6.5_1 --> 1.9.0-py3.8_cpu_0

Mamba picks up the new cpuonly and corresponding PyTorch CPU with build number 1.

 mamba install -c ivanyashchuk cpuonly

  Package          Version  Build                        Channel                    Size
──────────────────────────────────────────────────────────────────────────────────────────
Install:
──────────────────────────────────────────────────────────────────────────────────────────

+ cpuonly            2.0  0                            ivanyashchuk/noarch        2 KB

Change:
──────────────────────────────────────────────────────────────────────────────────────────

- pytorch          1.9.0  py3.8_cuda10.2_cudnn7.6.5_1  installed                      
+ pytorch          1.9.0  py3.8_cpu_1                  ivanyashchuk/linux-64     73 MB
- pytorch-mutex      1.0  cuda                         installed                      
+ pytorch-mutex      1.0  cpu                          ivanyashchuk/noarch        3 KB

Conda can be forced to pick the new cpuonly and it downgrades CUDA PyTorch to CPU.

conda install -c ivanyashchuk cpuonly=2

The following NEW packages will be INSTALLED:

cpuonly            ivanyashchuk/noarch::cpuonly-2.0-0

The following packages will be DOWNGRADED:

pytorch                 1.9.0-py3.8_cuda10.2_cudnn7.6.5_1 --> 1.9.0-py3.8_cpu_1
pytorch-mutex                                    1.0-cuda --> 1.0-cpu

Three other people confirmed that they get the same output as above.

cc: @rgommers, @seemethere, @malfet


Review comment from @IvanYashchuk (Contributor Author) on the recipe lines:

  requirements:
    run:
      - pytorch-mutex 1.0 cpu

An alternative option to pytorch-mutex 1.0 cpu is to use pytorch=*=*cpu*. However, running

conda install -c nvidia -c pytorch pytorch=*=*cpu*

prefers the main channel

pytorch            pkgs/main/linux-64::pytorch-1.8.1-cpu_py38h60491be_0

So I think using pytorch=*=*cpu* in the cpuonly package could break conda.

mamba, in contrast, chooses the pytorch channel:

  + pytorch               1.9.0  py3.8_cpu_0    pytorch/linux-64        73 MB

@IvanYashchuk (Contributor Author)

CI workflow "binary_builds" tested here is all green when using the updated package ivanyashchuk::cpuonly.

@IvanYashchuk (Contributor Author)

Hey @wolfv, this PR fixes mamba-org/mamba#336. Could you please take a look at the updated recipes and say whether the changes make sense from the Mamba project's perspective?

@IvanYashchuk (Contributor Author)

CI workflow "binary_builds" tested here is all green when using the updated package ivanyashchuk::cpuonly.

@seemethere, I set up the testing branch following your previous attempt (pytorch/pytorch#54900). Could you please check whether additional testing is required?

@wolfv commented Aug 17, 2021

sorry for the long delay, looks good to me!

@IvanYashchuk (Contributor Author)

@seemethere, @malfet could you please take a look at this PR?

@malfet (Contributor) commented Aug 27, 2021

Let's see what will happen if we publish the new cpuonly metapackage to the nightly channel.

@IvanYashchuk (Contributor Author)

What would happen to the installation of old packages from the pytorch channel if a new version of the cpuonly package is added? Will it break the behavior for old packages?
Unfortunately, yes: when trying to install an older PyTorch version together with the cpuonly package using conda, the CUDA variant is chosen. Restricting the version with cpuonly=1 forces conda to choose the CPU variant. mamba always chooses the CUDA variant (as before this PR).

There was an expectation to make everything work such that the current instructions from the PyTorch website remain the same (pytorch/pytorch#40213 (comment)). I think it would be nice to keep the instructions unchanged. Changing the name to cpu-only or something else would require updating the instructions and people would have to learn the new name; if we decide to change the instructions anyway, then the cpuonly package might not be needed at all. We could tell people to use `conda install pytorch pytorch-mutex=1.0=cpu` or `conda install pytorch pytorch-mutex=1.0=cuda` instead.

If someone wants to install an older version of PyTorch they already need to add a version pin such as =1.6.0; they could just as well pin the build variant too, with =1.6.0=*cpu* or =1.6.0=*cuda*, instead of relying on cpuonly, which is flaky in this situation and doesn't work with mamba anyway.
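For example (using the channels from the examples in this thread, purely to illustrate the spec syntax):

conda install -c nvidia -c ivanyashchuk pytorch=1.6.0=*cpu*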

Will the new cpuonly, pytorch-mutex and pytorch packages work with older conda versions the same way as with the new one?
Yes; the only change from this PR that will be ignored by older conda versions is the run_constrained section. That part ensures that the CUDA variant and the cpuonly package can't be installed simultaneously (only for packages built after merging this PR, not old ones). Current conda and mamba would give an error:

mamba install -c nvidia -c ivanyashchuk pytorch=1.9.0=*cuda10.2_cudnn7.6.5_1 cpuonly
Encountered problems while solving:
  - package pytorch-1.9.0-py3.8_cuda10.2_cudnn7.6.5_1 has constraint cpuonly <0 conflicting with cpuonly-1.0-0

I uploaded PyTorch 1.6.0 CPU and CUDA variants to my channel for testing.

The following correctly gives PyTorch CUDA variant:

conda install -c nvidia -c ivanyashchuk pytorch=1.6.0

The following NEW packages will be INSTALLED:

  cudatoolkit        nvidia/linux-64::cudatoolkit-10.2.89-h8f6ccaa_8
  pytorch            ivanyashchuk/linux-64::pytorch-1.6.0-py3.8_cuda10.2.89_cudnn7.6.5_0

If we add cpuonly, then unfortunately the new version of cpuonly is picked up and the PyTorch CUDA variant is chosen (conda behaves the same way mamba does here):

conda install -c nvidia -c ivanyashchuk pytorch=1.6.0 cpuonly
# chooses pytorch-cuda and cpuonly=2

The following NEW packages will be INSTALLED:

  cpuonly            ivanyashchuk/noarch::cpuonly-2.0-0
  cudatoolkit        nvidia/linux-64::cudatoolkit-10.2.89-h8f6ccaa_8
  pytorch            ivanyashchuk/linux-64::pytorch-1.6.0-py3.8_cuda10.2.89_cudnn7.6.5_0
  pytorch-mutex      ivanyashchuk/noarch::pytorch-mutex-1.0-cpu

Restricting to cpuonly=1 gives the PyTorch CPU variant:

conda install -c nvidia -c ivanyashchuk pytorch=1.6.0 cpuonly=1

The following NEW packages will be INSTALLED:

  cpuonly            ivanyashchuk/noarch::cpuonly-1.0-0
  pytorch            ivanyashchuk/linux-64::pytorch-1.6.0-py3.8_cpu_0

Here are the outputs for mamba. The basic pytorch installation with mamba chooses PyTorch CUDA:

mamba install -c nvidia -c ivanyashchuk pytorch=1.6.0

  Package                Version  Build                        Channel                     Size
─────────────────────────────────────────────────────────────────────────────────────────────────
  Install:
─────────────────────────────────────────────────────────────────────────────────────────────────

  + cudatoolkit          10.2.89  h8f6ccaa_8                   nvidia/linux-64           450 MB
  + pytorch          1.6.0  py3.8_cuda10.2.89_cudnn7.6.5_0  ivanyashchuk/linux-64     539 MB

Adding cpuonly has no effect in mamba and the CUDA variant is still chosen:

mamba install -c nvidia -c ivanyashchuk pytorch=1.6.0 cpuonly
# chooses pytorch-cuda and cpuonly=2

  Package                Version  Build           Channel                     Size
────────────────────────────────────────────────────────────────────────────────────
  Install:
────────────────────────────────────────────────────────────────────────────────────

  + cpuonly                  2.0  0               ivanyashchuk/noarch         2 KB
  + pytorch           1.6.0  py3.8_cuda10.2.89_cudnn7.6.5_0  ivanyashchuk/linux-64     539 MB
  + pytorch-mutex            1.0  cpu             ivanyashchuk/noarch         3 KB

Restricting to cpuonly=1 still has no effect with mamba and we get the CUDA variant:

mamba install -c nvidia -c ivanyashchuk pytorch=1.6.0 cpuonly=1
# chooses pytorch-cuda and cpuonly=1

  Package                Version  Build           Channel                     Size
────────────────────────────────────────────────────────────────────────────────────
  Install:
────────────────────────────────────────────────────────────────────────────────────

  + cpuonly                  2.0  0               ivanyashchuk/noarch         2 KB
  + pytorch           1.6.0  py3.8_cuda10.2.89_cudnn7.6.5_0  ivanyashchuk/linux-64     539 MB
  + pytorch-mutex            1.0  cpu             ivanyashchuk/noarch         3 KB

@rgommers (Contributor) commented Sep 1, 2021

> There was an expectation to make everything work such that the current instructions from the PyTorch website remain the same (pytorch/pytorch#40213 (comment)). I think it would be nice to keep the instructions unchanged. Changing the name to cpu-only or something else would require updating the instructions and people would have to learn the new name; if we decide to change the instructions anyway, then the cpuonly package might not be needed at all. We could tell people to use `conda install pytorch pytorch-mutex=1.0=cpu` or `conda install pytorch pytorch-mutex=1.0=cuda` instead.

I think pytorch pytorch-mutex=1.0=cpu is not very usable; most people cannot remember that. I think a choice needs to be made between naming it cpuonly or cpu-only. The former has some backwards-compat impact; it's not too bad, but perhaps enough to prefer cpu-only and just update all install instructions. @malfet, @seemethere WDYT?

@malfet (Contributor) commented Sep 1, 2021

Since breaking the previous installation instructions sounds pretty bad, I suggest we go with cpu-only (if conda's naming scheme allows hyphens) or just cpu for new packages.

Q: can the legacy cpuonly package contain both the feature and the constraint? That way, users could still use both the old and the new installation instructions and gradually migrate to the new one.

@wolfv commented Sep 1, 2021

Hyphens (and underscores) are both allowed in package names. I think it should be fine to have both a feature and a constraint, yep!

@IvanYashchuk (Contributor Author)

Yes, the cpuonly package can have both "track_features" and constraints at the same time, so we don't need to use a new name. Here's what happens if we keep "track_features" in cpuonly/meta.yaml:
Conda behaves correctly for older packages, but for the current version, where pytorch-1.9.0 build 0 relies on "features" and pytorch-1.9.0 build 1 relies on constraints, conda prefers build 0. This could be fixed by not removing the "features" section from pytorch-nightly/meta.yaml; then conda and mamba would behave the same and choose the same build. However, I think keeping the "features" section in pytorch-nightly/meta.yaml is unnecessary, because future PyTorch versions would have only one build per variant, and then conda and mamba would behave the same anyway.
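A rough sketch of what such a combined cpuonly recipe could look like (purely illustrative; the real recipe may organize these sections differently):

  # cpuonly/meta.yaml (sketch): legacy track_features plus the new mutex dependency
  package:
    name: cpuonly
    version: 2.0
  build:
    noarch: generic
    track_features:
      - cpuonly              # legacy mechanism, keeps the old install instructions working
  requirements:
    run:
      - pytorch-mutex 1.0 cpu    # new constraint-based mechanism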

Outputs of conda/mamba commands when there are both "track_features" and constraints in the same `cpuonly` package:
conda install -c nvidia -c ivanyashchuk pytorch=1.6.0 cpuonly
# chooses pytorch-cpu and cpuonly=2

The following NEW packages will be INSTALLED:

  cpuonly            ivanyashchuk/noarch::cpuonly-2.0-0
  pytorch            ivanyashchuk/linux-64::pytorch-1.6.0-py3.8_cpu_0
  pytorch-mutex      ivanyashchuk/noarch::pytorch-mutex-1.0-cpu

Restricting cpuonly=1 gives PyTorch CPU variant without pytorch-mutex:

conda install -c nvidia -c ivanyashchuk pytorch=1.6.0 cpuonly=1

The following NEW packages will be INSTALLED:

  cpuonly            ivanyashchuk/noarch::cpuonly-1.0-0
  pytorch            ivanyashchuk/linux-64::pytorch-1.6.0-py3.8_cpu_0

Keeping "track_features" in cpuonly makes conda choose the 0th build (instead of 1st) for 1.9.0. At least the build variant was correctly chosen, just not the latest build.

conda install -c nvidia -c ivanyashchuk pytorch cpuonly
# chooses pytorch-1.9.0-py3.8_cpu_0 and cpuonly=2

The following NEW packages will be INSTALLED:

  cpuonly            ivanyashchuk/noarch::cpuonly-2.0-0
  pytorch            ivanyashchuk/linux-64::pytorch-1.9.0-py3.8_cpu_0
  pytorch-mutex      ivanyashchuk/noarch::pytorch-mutex-1.0-cpu

Mamba chooses the latest build:

mamba install -c nvidia -c ivanyashchuk pytorch cpuonly

 Package               Version  Build          Channel                     Size
──────────────────────────────────────────────────────────────────────────────────
  Install:
──────────────────────────────────────────────────────────────────────────────────

  + cpuonly                 2.0  0              ivanyashchuk/noarch         2 KB
  + pytorch               1.9.0  py3.8_cpu_1    ivanyashchuk/linux-64     Cached
  + pytorch-mutex           1.0  cpu            ivanyashchuk/noarch       Cached

@IvanYashchuk (Contributor Author) commented Sep 6, 2021

@malfet, once the testing PR (pytorch/pytorch#62718) is all green (UPD: it's all green now), we can proceed with copying the cpuonly and pytorch-mutex packages to the nightly channel to make sure these packages are harmless. Once it's confirmed that the new cpuonly and pytorch-mutex do not break anything, we can merge this PR so that nightlies switch to the new constraint-based mechanism for choosing between the CPU and CUDA variants.

@malfet (Contributor) commented Sep 10, 2021

Copied the two above-mentioned packages to `pytorch-nightly`:
% anaconda copy ivanyashchuk/pytorch-mutex/1.0 --to-owner pytorch-nightly
Using Anaconda API: https://api.anaconda.org
Copied file: noarch/pytorch-mutex-1.0-cpu.tar.bz2
Copied file: noarch/pytorch-mutex-1.0-cuda.tar.bz2
Copied 2 files!
% anaconda copy ivanyashchuk/cpuonly/2.0 --to-owner pytorch-nightly
Using Anaconda API: https://api.anaconda.org
Copied file: noarch/cpuonly-2.0-0.tar.bz2
Copied 1 files!

@malfet (Contributor) commented Sep 17, 2021

No reports of regression, merging.

@malfet merged commit 4d84b21 into pytorch:main on Sep 17, 2021
@wolfv commented Sep 17, 2021

Thanks everyone! This is great!

IvanYashchuk added a commit to IvanYashchuk/builder that referenced this pull request Sep 20, 2021
`CONDA_CPU_ONLY_FEATURE` and the `features` section were removed in pytorch#823, but the merge conflict was resolved incorrectly in pytorch@a3909be.
malfet pushed a commit that referenced this pull request Sep 20, 2021
`CONDA_CPU_ONLY_FEATURE` and the `features` section were removed in #823, but the merge conflict was resolved incorrectly in a3909be.
malfet added commits to pytorch/vision and pytorch/audio that referenced this pull request Oct 19, 2021
This is a follow-up after pytorch/builder#823 that gets rid of `feature` and migrates it to `run_constrained`
malfet added a commit to pytorch/vision that referenced this pull request Oct 20, 2021
* Move TorchVision conda package to use pytorch-mutex

This is a follow-up after pytorch/builder#823 that gets rid of `feature` and migrates it to `run_constrained`

* Update packaging/torchvision/meta.yaml

Co-authored-by: Eli Uriegas <[email protected]>

* Update packaging/torchvision/meta.yaml

Co-authored-by: Eli Uriegas <[email protected]>

Co-authored-by: Eli Uriegas <[email protected]>
malfet added a commit to pytorch/audio that referenced this pull request Oct 20, 2021
This is a follow-up after pytorch/builder#823 that gets rid of `feature` and migrates it to `run_constrained`
malfet added a commit to pytorch/audio that referenced this pull request Oct 21, 2021
This is a follow-up after pytorch/builder#823 that gets rid of `feature` and migrates it to `run_constrained`
facebook-github-bot pushed a commit to pytorch/vision that referenced this pull request Oct 26, 2021
Summary:
* Move TorchVision conda package to use pytorch-mutex

This is a follow-up after pytorch/builder#823 that gets rid of `feature` and migrates it to `run_constrained`

* Update packaging/torchvision/meta.yaml

* Update packaging/torchvision/meta.yaml

Reviewed By: NicolasHug

Differential Revision: D31916325

fbshipit-source-id: 47589017b333948caee5b0412c30afaeb6eed4f8

Co-authored-by: Eli Uriegas <[email protected]>
Co-authored-by: Eli Uriegas <[email protected]>
Co-authored-by: Eli Uriegas <[email protected]>
cyyever pushed a commit to cyyever/vision that referenced this pull request Nov 16, 2021 (the same TorchVision change as above)
@alinpahontu2912

Hello @IvanYashchuk, I've been investigating this pytorch issue and I think it might be related to your PR. From what I understand from this PR thread, it should be possible to go from the cpu-only variant to the cuda variant and vice versa by running the conda install commands from the website, while in reality it does not work. You can go from cuda to cpu-only, but not the other way around. Can you offer some insights into the changes made?

@IvanYashchuk (Contributor Author)

> Hello @IvanYashchuk, I've been investigating this pytorch issue and I think it might be related to your PR. From what I understand from this PR thread, it should be possible to go from the cpu-only variant to the cuda variant and vice versa by running the conda install commands from the website, while in reality it does not work. You can go from cuda to cpu-only, but not the other way around. Can you offer some insights into the changes made?

You can go from cpuonly to cuda by removing the cpuonly package and installing pytorch-mutex=1.0=cuda.

The following commands would successfully create an environment:

mamba create -n testenv pytorch pytorch-cuda=12.1 cuda-toolkit pytorch-mutex=1.0=cuda -c pytorch-nightly -c nvidia --dry-run
mamba create -n testenv pytorch pytorch-cuda=12.1 cuda-toolkit pytorch-mutex=1.0=cpu -c pytorch-nightly -c nvidia --dry-run
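For an existing environment, a minimal sketch of the switch (same channels as above; the pytorch-cuda pin is taken from the commands just shown) would be:

# remove the CPU pin first, then request the CUDA mutex so the solver swaps pytorch to the CUDA build
conda remove cpuonly
conda install pytorch pytorch-cuda=12.1 pytorch-mutex=1.0=cuda -c pytorch-nightly -c nvidia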

The real problem you might be facing is that there's a delay in building the torchvision and torchaudio packages, and the old torch nightly packages might already be pruned. So this doesn't work today:

mamba create -n testenv torchvision pytorch pytorch-cuda=12.1 cuda-toolkit pytorch-mutex=1.0=cuda -c pytorch-nightly -c nvidia --dry-run
Encountered problems while solving:
  - nothing provides pytorch 2.2.0.dev20231106 needed by torchvision-0.17.0.dev20231106-py39_cu121

The environment can't be solved, aborting the operation

There's no PyTorch from the 6th of November (only today's nightlies are available; you can check the files here: https://anaconda.org/pytorch-nightly/pytorch/files), but the latest torchvision is from the 6th of November (https://anaconda.org/pytorch-nightly/torchvision/files).

@Blackhex (Contributor) commented Nov 10, 2023

@IvanYashchuk The scenario you are describing seems like an edge case that would actually explain the issue, but it might not be worth doing anything about it except documenting how to overcome it. Thank you for the insights.

What we are observing is that even if one first installs the CPU version of the pytorch package alone:

conda install pytorch cpuonly -c pytorch-nightly

and then reinstalls it with the CUDA version, forgetting to remove the cpuonly package:

conda remove pytorch
conda install pytorch pytorch-cuda=12.1 -c pytorch-nightly -c nvidia

the issue still happens.

Do you think this "works as expected" and it should be explicitly stated in the documentation that each time one installs the CUDA version over the CPU version, cpuonly must be removed first? Or is this scenario's issue worth fixing?

PS: If this happens, the fix afterwards is to remove the cpuonly package and reinstall the CUDA pytorch package:

conda remove cpuonly
conda install pytorch pytorch-cuda=12.1 -c pytorch-nightly -c nvidia

@malfet (Contributor) commented Nov 10, 2023

I had a PR a while back to make pytorch-cuda incompatible with the cpuonly package: #1151.
Perhaps we should land it and indeed document somewhere how to go from cpuonly to cuda and back.

@alinpahontu2912

Hey @malfet, how can I help with landing the PR?

Successfully merging this pull request may close these issues:

Conda Package: cpuonly should be a mutex package