Remove "track_features" from cpuonly
conda package
#823
Conversation
```yaml
requirements:
  run:
    - pytorch-mutex 1.0 cpu
```
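For context, a mutex metapackage of this kind can be sketched as a minimal conda recipe. This is a hedged sketch, not the actual recipe from this PR; the version and the variant build strings are assumptions:

```yaml
# Hypothetical sketch of a pytorch-mutex meta.yaml.
# Each variant is an empty package distinguished only by its build string,
# so "pytorch-mutex 1.0 cpu" and "pytorch-mutex 1.0 cuda" can never be
# installed at the same time -- that exclusivity is what makes it a mutex,
# similar to how blas and mpi variants are handled in conda.
package:
  name: pytorch-mutex
  version: "1.0"

build:
  string: "{{ variant }}"   # "cpu" or "cuda", selected at build time
```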
An alternative option to `pytorch-mutex 1.0 cpu` is to use `pytorch=*=*cpu*`. However, running

```
conda install -c nvidia -c pytorch pytorch=*=*cpu*
```

prefers the main channel:

```
pytorch  pkgs/main/linux-64::pytorch-1.8.1-cpu_py38h60491be_0
```

So I think using `pytorch=*=*cpu*` for the `cpuonly` package could break conda. mamba chooses the pytorch channel:

```
+ pytorch  1.9.0  py3.8_cpu_0  pytorch/linux-64  73 MB
```
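The `*cpu*` part of `pytorch=*=*cpu*` is a glob over the build string. Conda's real matching goes through its `MatchSpec` machinery, but the build-string component behaves like a shell-style glob, which can be illustrated with Python's `fnmatch` (the build strings below are taken from the outputs above; the CUDA one is added for contrast):

```python
from fnmatch import fnmatch

# Build strings from the two CPU outputs above, plus a CUDA build string
# of the usual pytorch-channel shape for contrast.
builds = [
    "cpu_py38h60491be_0",           # pkgs/main/linux-64 pytorch-1.8.1
    "py3.8_cpu_0",                  # pytorch/linux-64 pytorch-1.9.0
    "py3.8_cuda11.1_cudnn8.0.5_0",  # a CUDA build (for contrast)
]

# pytorch=*=*cpu* constrains only the build string, so both CPU builds
# match -- and channel preference then decides which one the solver picks.
cpu_builds = [b for b in builds if fnmatch(b, "*cpu*")]
print(cpu_builds)  # the two CPU builds; the CUDA build does not match
```

This is why the spec alone cannot force the pytorch channel: both the pkgs/main and pytorch-channel CPU builds satisfy it.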
The CI workflow "binary_builds" tested here is all green when using the updated package.
Hey @wolfv, this PR fixes mamba-org/mamba#336. Could you please take a look at the updated recipes and tell us whether the changes make sense for the Mamba project?
@seemethere, I set up the testing branch following your previous attempt (pytorch/pytorch#54900). Could you please check whether additional testing is required?
Sorry for the long delay, looks good to me!
@seemethere, @malfet could you please take a look at this PR?
Let's see what happens if we publish the new cpuonly metapackage to the nightly channel.
What would happen to the installation of old packages from the pytorch channel if a new version of the

There was an expectation to make everything work such that the current instructions from the PyTorch website remain the same (pytorch/pytorch#40213 (comment)). I think it would be nice to keep the instructions unchanged. Changing the name to

If someone wants to install an older version of PyTorch they need to add, for example,

Will the new
I uploaded PyTorch 1.6.0 CPU and CUDA variants to my channel for testing. The following correctly gives PyTorch CUDA variant:
If we add
Restricting
Here are outputs for Mamba. The basic pytorch installation with Mamba chooses PyTorch CUDA:
Adding
Restricting
I think

Since breaking previous installation instructions sounds pretty bad, I suggest we go with

Q: can legacy

Hyphens (and underscores) are both allowed in package names. I think it should be fine to have both a feature and a constraint, yep!
Yes, the

Outputs of conda/mamba commands when there are both "track_features" and constraints in the same `cpuonly` package:
Restricting
Keeping "track_features" in
Mamba chooses the latest build:
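A `cpuonly` recipe carrying both mechanisms at once, as discussed above, might look roughly like the following. This is a hedged sketch under the assumptions that the feature name matches the package name and that version 2.0 is the mutex-pinned release; it is not the actual recipe from this PR:

```yaml
# Hypothetical cpuonly meta.yaml combining the legacy feature with the
# new mutex constraint during the transition period.
package:
  name: cpuonly
  version: "2.0"

build:
  track_features:
    - cpuonly               # legacy mechanism, kept so old packages still resolve

requirements:
  run:
    - pytorch-mutex 1.0 cpu  # new mutex-based constraint
```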
@malfet, once the testing PR (pytorch/pytorch#62718) is all green (UPD: it's all green now) we can proceed with

Copied the two above-mentioned packages to `pytorch-nightly`.

No reports of regression, merging.

Thanks everyone! This is great!
`CONDA_CPU_ONLY_FEATURE` and the `features` section were removed in pytorch#823, but the merge conflict was resolved incorrectly in pytorch@a3909be
This is a follow-up after pytorch/builder#823 that gets rid of `feature` and migrates it to `run_constrained`
* Move TorchVision conda package to use pytorch-mutex. This is a follow-up after pytorch/builder#823 that gets rid of `feature` and migrates it to `run_constrained`
* Update packaging/torchvision/meta.yaml

Co-authored-by: Eli Uriegas <[email protected]>
Summary:
* Move TorchVision conda package to use pytorch-mutex. This is a follow-up after pytorch/builder#823 that gets rid of `feature` and migrates it to `run_constrained`
* Update packaging/torchvision/meta.yaml

Reviewed By: NicolasHug
Differential Revision: D31916325
fbshipit-source-id: 47589017b333948caee5b0412c30afaeb6eed4f8
Co-authored-by: Eli Uriegas <[email protected]>
Hello @IvanYashchuk, I've been investigating this PyTorch issue and I think it might be related to your PR. From what I understand from this PR thread, it should be possible to go from the cpu-only to the cuda variant and vice versa by running the conda install commands from the website, while in reality it does not work. You can go from cuda to cpu-only, but not the other way around. Can you offer some insights into the changes made?
You can go from cpuonly to cuda by removing

The following commands would successfully create an environment:
The real problem you might be facing is that there's a delay in building the torchvision and torchaudio packages, and the old torch nightly package might be pruned. So this doesn't work today:
There's no PyTorch from the 6th of November (only today's nightlies are available; you can check the files here: https://anaconda.org/pytorch-nightly/pytorch/files), but the latest
@IvanYashchuk The scenario you are describing seems like an edge case that would actually explain the issue, but it might not be worth doing anything about it except documenting how to overcome it. Thank you for such insights. What we are observing is that even if one first installs the CPU version of

and then reinstalls it with the CUDA version, omitting to remove

the issue still happens. Do you think this is "working as expected", and should it be explicitly said in the documentation that each time one installs the CUDA version over the CPU version, the

PS: If this happens, the corresponding fix afterwards is to remove the
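For the CPU-to-CUDA direction discussed above, the workaround can be sketched as follows. This is a hedged sketch, not an official instruction: the exact package set and `cudatoolkit` version are assumptions based on the era's install lines from pytorch.org; the key step is removing `cpuonly` (and with it the `pytorch-mutex * cpu` pin) before requesting the CUDA variant:

```
# Hypothetical workaround for switching an existing environment
# from the CPU to the CUDA variant of PyTorch.

# 1. Drop the cpuonly metapackage so the CPU mutex pin no longer applies.
conda remove cpuonly

# 2. The solver is now free to pick the CUDA build of pytorch.
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
```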
I had a PR a while back to make

Hey @malfet, how can I help with landing the PR?
This PR fixes pytorch/pytorch#40213. This command

```
mamba install -c nvidia -c ivanyashchuk pytorch cpuonly
```

now works.

Currently, the PyTorch conda package uses "track_features" to enable/disable cuda/cpu builds. Using `track_features` doesn't work well with `mamba`, a fast alternative to `conda`. This PR introduces a new package, `pytorch-mutex`, a metapackage that controls the PyTorch variant; it's similar to how `blas` and `mpi` are handled in conda. The `cpuonly` package now has a runtime dependency on a pinned `pytorch-mutex`.

At this point, `mamba` was not automatically choosing the latest version of `cpuonly`, preferring the old one (bumping the version number or build number didn't help there; maybe mamba-org/mamba#747 is related). So the mamba CPU variant of PyTorch was picked up only with a `mamba install pytorch cpuonly=2` request. Adding `run_constrained` to `pytorch-nightly/meta.yaml` helped to fix it.

I introduced a separate package instead of using the `outputs:` section in `pytorch-nightly/meta.yaml` because working with `outputs:` is not easy and introduces too many changes to make everything work.

The order of updates to the conda pytorch channel: pytorch-mutex -> cpuonly -> pytorch. The packages are uploaded to the `ivanyashchuk` channel for testing.

Here are the outputs of some conda/mamba commands:

Install cuda pytorch, then install `cpuonly`

With conda, the old `cpuonly` is chosen and PyTorch is downgraded to the corresponding CPU version with build number 0. Mamba picks up the new `cpuonly` and the corresponding PyTorch CPU with build number 1. Conda can be forced to pick the new `cpuonly`, and it downgrades CUDA PyTorch to CPU.

Three other people confirmed that they get the same output as above.

cc: @rgommers, @seemethere, @malfet