
Release 1.10.1 and migrate protobuf and mkl #84

Merged: 3 commits into conda-forge:master on Jan 26, 2022

Conversation

hmaarrfk
Contributor

Checklist

  • Used a personal fork of the feedstock to propose changes
  • Bumped the build number (if the version is unchanged)
  • Reset the build number to 0 (if the version changed)
  • Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
  • Ensured the license file is being packaged.

@conda-forge-linter

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.

@ngam
Contributor

ngam commented Jan 24, 2022

@hmaarrfk, could you please test one minor change? Could you comment out this line?

https://github.com/hmaarrfk/pytorch-cpu-feedstock/blob/61df6776c8785faae92c7822010b4e85a7d9f749/recipe/build_pytorch.sh#L75

    if [[ "$target_platform" == "osx-arm64" ]]; then
        export BLAS=OpenBLAS
        export USE_MKLDNN=0
        # There is a problem with pkg-config
        # See https://github.com/conda-forge/pkg-config-feedstock/issues/38
        export USE_DISTRIBUTED=0
    fi

The motivation: cmake will look for whatever BLAS is available, and since OpenBLAS is included it would fall back on it anyway; in practice, though, it would more likely pick up Apple's Accelerate instead of OpenBLAS.
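
For reference, one way to check which BLAS/LAPACK a finished build actually picked up (the same check used further down in this thread) is to inspect torch's build config:

    # print torch's build configuration and keep only the BLAS/LAPACK lines
    python -c "import torch; print(torch.__config__.show())" | grep -Ei "blas|lapack"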

Since this seems to be passing, I will test it locally very quickly and let you know what happens.

Edit: not so quickly after all, but I will try to report back. Feel free to proceed without this for now, though; nothing urgent.

@ngam
Contributor

ngam commented Jan 24, 2022

As expected, commenting out that line (which forces OpenBLAS) changes the CMake configure output from:

  -- Trying to find preferred BLAS backend of choice: OpenBLAS
  -- Found OpenBLAS libraries: /Users/ngam/Repos/pytorch-cpu-feedstock/miniforge3/conda-bld/pytorch-recipe_1643007540369/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/lib/libopenblas.dylib
  -- Found OpenBLAS include: /Users/ngam/Repos/pytorch-cpu-feedstock/miniforge3/conda-bld/pytorch-recipe_1643007540369/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/include

to:

  -- Trying to find preferred BLAS backend of choice: MKL
  -- MKL_THREADING = OMP
  -- Looking for sys/types.h
  -- Looking for sys/types.h - found
  -- Looking for stdint.h
  -- Looking for stdint.h - found
  -- Looking for stddef.h
  -- Looking for stddef.h - found
  -- Check size of void*
  -- Check size of void* - done
  -- MKL_THREADING = OMP
  CMake Warning at cmake/Dependencies.cmake:177 (message):
    MKL could not be found.  Defaulting to Eigen
  Call Stack (most recent call first):
    CMakeLists.txt:653 (include)


  CMake Warning at cmake/Dependencies.cmake:205 (message):
    Preferred BLAS (MKL) cannot be found, now searching for a general BLAS
    library
  Call Stack (most recent call first):
    CMakeLists.txt:653 (include)


  -- MKL_THREADING = OMP
  -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - iomp5 - pthread - m]
  --   Library mkl_intel_lp64: not found
  -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - iomp5 - pthread - m]
  --   Library mkl_intel: not found
  -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - guide - pthread - m]
  --   Library mkl_intel_lp64: not found
  -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - guide - pthread - m]
  --   Library mkl_intel: not found
  -- Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - pthread - m]
  --   Library mkl_intel_lp64: not found
  -- Checking for [mkl_intel - mkl_intel_thread - mkl_core - pthread - m]
  --   Library mkl_intel: not found
  -- Checking for [mkl_intel_lp64 - mkl_sequential - mkl_core - m]
  --   Library mkl_intel_lp64: not found
  -- Checking for [mkl_intel - mkl_sequential - mkl_core - m]
  --   Library mkl_intel: not found
  -- Checking for [mkl_intel_lp64 - mkl_core - iomp5 - pthread - m]
  --   Library mkl_intel_lp64: not found
  -- Checking for [mkl_intel - mkl_core - iomp5 - pthread - m]
  --   Library mkl_intel: not found
  -- Checking for [mkl_intel_lp64 - mkl_core - guide - pthread - m]
  --   Library mkl_intel_lp64: not found
  -- Checking for [mkl_intel - mkl_core - guide - pthread - m]
  --   Library mkl_intel: not found
  -- Checking for [mkl_intel_lp64 - mkl_core - pthread - m]
  --   Library mkl_intel_lp64: not found
  -- Checking for [mkl_intel - mkl_core - pthread - m]
  --   Library mkl_intel: not found
  -- Checking for [mkl - guide - pthread - m]
  --   Library mkl: not found
  -- MKL library not found
  -- Checking for [blis]
  --   Library blis: BLAS_blis_LIBRARY-NOTFOUND
  -- Checking for [Accelerate]
  --   Library Accelerate: /opt/MacOSX11.0.sdk/System/Library/Frameworks/Accelerate.framework
  -- Looking for sgemm_
  -- Looking for sgemm_ - found
  -- Found a library with BLAS API (accelerate). Full path: (/opt/MacOSX11.0.sdk/System/Library/Frameworks/Accelerate.framework)

cc @isuruf (ref comment)

@ngam
Contributor

ngam commented Jan 24, 2022

So, this happens despite OpenBLAS being available. Why? Because OpenBLAS is basically treated as a fail-safe. It seems to me that upstream (not just here) prefers Accelerate over OpenBLAS, and prefers MKL most of all, as seen above. I guess we could use BLIS as well...
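
To make the two options concrete, here is a rough sketch of the choice in the build script; the backend names are the ones PyTorch's own cmake accepts, and vecLib as the spelling for Accelerate is an assumption worth double-checking:

    # Option 1: leave BLAS unset and let PyTorch's cmake search in its
    # preferred order (MKL, then BLIS, then Accelerate, as in the log above)
    unset BLAS

    # Option 2: force a specific backend explicitly
    # export BLAS=OpenBLAS   # what the feedstock currently does on osx-arm64
    # export BLAS=vecLib     # Apple Accelerate (spelling assumed; verify upstream)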

@ngam
Contributor

ngam commented Jan 24, 2022

And finally:

from:

(pytorch) ~/Repos/pytorch-cpu-feedstock$ python -c "import torch; print(torch.__config__.show())"
PyTorch built with:
  - GCC 4.2
  - C++ Version: 201402
  - clang 11.1.0
  - OpenMP 201811
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: NO AVX
  - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CXX_COMPILER=/Users/ngam/Repos/pytorch-cpu-feedstock/miniforge3/conda-bld/pytorch-recipe_1643007540369/_build_env/bin/arm64-apple-darwin20.0.0-clang++, CXX_FLAGS=-ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -stdlib=libc++  -std=c++14 -fmessage-length=0 -isystem /Users/ngam/Repos/pytorch-cpu-feedstock/miniforge3/conda-bld/pytorch-recipe_1643007540369/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/include -fdebug-prefix-map=/Users/ngam/Repos/pytorch-cpu-feedstock/miniforge3/conda-bld/pytorch-recipe_1643007540369/work=/usr/local/src/conda/pytorch-1.10.1 -fdebug-prefix-map=/Users/ngam/Repos/pytorch-cpu-feedstock/miniforge3/conda-bld/pytorch-recipe_1643007540369/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol=/usr/local/src/conda-prefix -Wno-deprecated-declarations -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp=libomp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const, LAPACK_INFO=open, TORCH_VERSION=1.10.1, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, 

to:

(pytorch_accelerate) ~/Repos/pytorch-cpu-feedstock$ python -c "import torch; print(torch.__config__.show())"
PyTorch built with:
  - GCC 4.2
  - C++ Version: 201402
  - clang 11.1.0
  - OpenMP 201811
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: NO AVX
  - Build settings: BLAS_INFO=accelerate, BUILD_TYPE=Release, CXX_COMPILER=/Users/ngam/Repos/pytorch-cpu-feedstock/miniforge3/conda-bld/pytorch-recipe_1643009389853/_build_env/bin/arm64-apple-darwin20.0.0-clang++, CXX_FLAGS=-ftree-vectorize -fPIC -fPIE -fstack-protector-strong -O2 -pipe -stdlib=libc++  -std=c++14 -fmessage-length=0 -isystem /Users/ngam/Repos/pytorch-cpu-feedstock/miniforge3/conda-bld/pytorch-recipe_1643009389853/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol/include -fdebug-prefix-map=/Users/ngam/Repos/pytorch-cpu-feedstock/miniforge3/conda-bld/pytorch-recipe_1643009389853/work=/usr/local/src/conda/pytorch-1.10.1 -fdebug-prefix-map=/Users/ngam/Repos/pytorch-cpu-feedstock/miniforge3/conda-bld/pytorch-recipe_1643009389853/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehol=/usr/local/src/conda-prefix -Wno-deprecated-declarations -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp=libomp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -fcolor-diagnostics -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const, LAPACK_INFO=accelerate, TORCH_VERSION=1.10.1, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, 

@ngam
Contributor

ngam commented Jan 24, 2022

Note though that it is using Accelerate's LAPACK.

@hmaarrfk
Contributor Author

I'm going to refer to the comment made in #83 (comment).

I would suggest that, for this kind of optimization, you demonstrate some real gains to be had by moving to Accelerate. Maybe you can provide a benchmark showing the improvement? It is pretty easy to build things yourself and upload them to your own channel; I strongly suggest doing that rather than waiting too long on the community for your own work anyway.

Once we've established the gains, we can work on integrating it here.

@hmaarrfk
Contributor Author

Again, OSX optimizations here are "easy" in the sense that they follow the standard conda-forge model of building things on shared CIs.

We can always "delay" this to another small PR and avoid rebuilding the Linux builds there.

@hmaarrfk
Contributor Author

Hmmm, the problem might be that we pinned too hard. We will need a repodata patch to make the pinnings on existing packages a little looser.

@ngam
Contributor

ngam commented Jan 24, 2022

> I'm going to refer to the comment made in #83 (comment).
>
> I would suggest that, for this kind of optimization, you demonstrate some real gains to be had by moving to Accelerate. Maybe you can provide a benchmark showing the improvement? It is pretty easy to build things yourself and upload them to your own channel; I strongly suggest doing that rather than waiting too long on the community for your own work anyway.
>
> Once we've established the gains, we can work on integrating it here.

I guess it should be the other way around: you don't have to set BLAS=OpenBLAS, so why did you? There's a mechanism already in place to find the most appropriate BLAS according to the upstream design.

I would argue the following: by doing this sort of pick-and-choose and ignoring the wishes of the upstream developers, you're essentially making a customized fork of pytorch, and it should then be renamed to cftorch or something. This is not about the improvements per se, but about the philosophy of redistribution. Aren't we supposed to keep this as close to upstream's standard as possible? If that's the case, I don't get why we have an interventionist environment variable.

I think there's some fetishizing going on in always wanting OpenBLAS as the common denominator; that's a harmful strategy.

> Once we've established the gains, we can work on integrating it here.

There is no work. It is literally just commenting out that line:

     if [[ "$target_platform" == "osx-arm64" ]]; then
-         export BLAS=OpenBLAS
+         # no need to force building again OpenBLAS
+         # cmake can figure out default config from upstream
+         # export BLAS=OpenBLAS
          export USE_MKLDNN=0
          # There is a problem with pkg-config
          # See https://github.com/conda-forge/pkg-config-feedstock/issues/38
          export USE_DISTRIBUTED=0
     fi

@ngam
Contributor

ngam commented Jan 24, 2022

Also, @isuruf's comment is unsupported. Why would all these developers fall back to Accelerate by default then? And Apple still uses and supports Accelerate; it is in their developer documentation about BNNS and the like.

@hmaarrfk
Contributor Author

I totally understand all the points above.

The good news for you is that this is an isolated request that we can focus on after this.

Merging an OSX patch into CF is easy since it all runs on the bots. You decided to back out of the other isolated patch you were doing, and I don't think I have the bandwidth to iterate on OSX in this merge request.

I had to rebuild for loosening the pinnings already.

Let's just try to focus on one thing at a time.

@hmaarrfk
Contributor Author

And we are building a "cf-pytorch". Upstream developers are seldom interested in integrating into the larger ecosystem; they are interested in their business demos, not in the larger system.

cf is about making it easy to install many libraries at the same time, something that is impossible with just the "pytorch" channel.

You can see how, instead of using a central library system, pytorch vendors many libraries that could be shared libraries, increasing build complexity. Either way, this is best discussed in a separate issue.

@ngam
Contributor

ngam commented Jan 24, 2022

> I totally understand all the points above.
>
> The good news for you is that this is an isolated request that we can focus on after this.
>
> Merging an OSX patch into CF is easy since it all runs on the bots. You decided to back out of the other isolated patch you were doing, and I don't think I have the bandwidth to iterate on OSX in this merge request.
>
> I had to rebuild for loosening the pinnings already.
>
> Let's just try to focus on one thing at a time.

Yes, that's fine. This osx stuff is very minor compared to the other work you're doing, so unless you have an opening where everything is going smoothly, ignore it for now and we can revisit.

> cf is about making it easy to install many libraries at the same time, something that is impossible with just the "pytorch" channel.

Yes, but if we deviate a lot, we should rename this, like I've been suggesting with julia due to its deviation.

@hmaarrfk
Contributor Author

> Yes, that's fine. This osx stuff is very minor compared to the other work you're doing, so unless you have an opening where everything is going smoothly, ignore it for now and we can revisit.

Again, we are happy to review the requests, and @isuruf has provided many good inputs.

OSX updates can be merged independently. It is the Linux (and Windows) builds that take the longest, since they require manual intervention.

@hmaarrfk closed this on Jan 24, 2022
@hmaarrfk reopened this on Jan 24, 2022
@ngam
Contributor

ngam commented Jan 24, 2022

@hmaarrfk, for your consideration re the MKL update: pytorch/pytorch#68812 (comment)

Edit: keep an eye out for osx failures, pytorch/pytorch@a07d3dc

@ngam
Contributor

ngam commented Jan 24, 2022

Also, @hmaarrfk, I forgot to mention this earlier: when I was compiling the cuda builds for mxnet, I noticed a deprecation warning:

nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated, and may be removed in a future release (Use -Wno-deprecated-gpu-targets to suppress warning).

In this case, to decrease compile time, I would remove this line https://github.com/hmaarrfk/pytorch-cpu-feedstock/blob/23612dc7e4b69545153bf66e37a3423089997108/recipe/build_pytorch.sh#L93, though this may need more coordination across the "ecosystem" ...
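
Purely as a hypothetical illustration (assuming the referenced line sets TORCH_CUDA_ARCH_LIST, the variable PyTorch's build uses to select CUDA architectures; the actual list in the feedstock may differ), dropping the archs that nvcc flags as deprecated would look something like:

    # hypothetical current list, including archs nvcc now marks as deprecated
    # export TORCH_CUDA_ARCH_LIST="3.5;5.0;6.0;6.1;7.0;7.5;8.0;8.6+PTX"
    # trimmed list keeping only non-deprecated archs
    export TORCH_CUDA_ARCH_LIST="6.0;6.1;7.0;7.5;8.0;8.6+PTX"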

@hmaarrfk
Contributor Author

I think I'm going to keep all the architectures for now for the CUDA versions I get around to building.

Are you asking me to include a patch? I might be able to at this stage.

Please try to keep this discussion focused on the improvement at hand; otherwise we will never get anything released.

I'm happy to have larger discussions in the issues.

@ngam
Contributor

ngam commented Jan 25, 2022

No, we will figure out a better way to handle the BLAS stuff later.

I just came across that MKL issue and thought I would flag it to you, since the title of this PR has "MKL" in it.

For the CUDA builds, that's just a thought about excessive build times. It is also relevant here in general, since the builds are timing out. However, I can't actually see how far they get; if it is around 80--90%, then removing the additional deprecated cuda archs might let them finish in time, in under 6 hours.

@ngam
Contributor

ngam commented Jan 25, 2022

For 10.2, it's timing out at 4935/5485. I don't think removing the 3.5--5.0 archs alone will help, but maybe, just maybe, removing those plus 6.1 will make the cut:

  [4935/5485] Building NVCC (Device) object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/native/cuda/torch_cuda_generated_ReduceLogicKernel.cu.o
##[error]The operation was canceled.
Finishing: Run docker build

https://github.com/pytorch/audio/blob/565f8d417ec8d210c277021752ebd72cd4f179f5/packaging/pkg_helpers.bash#L48-L128

@hmaarrfk
Contributor Author

The issue is that on my machine, I get to 4900 in 1 hour, and the remainder takes another hour. So I'm not sure how to cut down on the compilation time; we would need to cut it down by a factor of 2.

OK, since there is no action item, I'm going to keep pushing on building this.

@ngam
Contributor

ngam commented Jan 25, 2022

> The issue is that on my machine, I get to 4900 in 1 hour, and the remainder takes another hour. So I'm not sure how to cut down on the compilation time; we would need to cut it down by a factor of 2.
>
> OK, since there is no action item, I'm going to keep pushing on building this.

Yes, keep going :)

Thanks!

@hmaarrfk
Contributor Author

I had to rerender, but I'm about 10/12 builds through. I believe the file contents in ci_support are the same, but the commit message will have changed.

I'll let whoever reviews decide on the course of action. I likely won't have another machine for 24+ hours to build all the builds.

@hmaarrfk
Contributor Author

hmaarrfk commented Jan 25, 2022

The builds were made before rebasing:

$ git diff --compact-summary release_1.10.1 origin/release_1.10.1
 .azure-pipelines/azure-pipelines-linux.yml                                                                                                       | 90 +++++++++++++++++++++++++++++++++++++++---------------------------------------
 ...n3.7.____cpython.yaml => linux_64_c_compiler_version7cuda_compiler_version10.2cudnn7cxx_compiler_version7numpy1.18python3.7.____cpython.yaml} |  0
 ...n3.8.____cpython.yaml => linux_64_c_compiler_version7cuda_compiler_version10.2cudnn7cxx_compiler_version7numpy1.18python3.8.____cpython.yaml} |  0
 ...n3.9.____cpython.yaml => linux_64_c_compiler_version7cuda_compiler_version10.2cudnn7cxx_compiler_version7numpy1.19python3.9.____cpython.yaml} |  0
 ...n3.7.____cpython.yaml => linux_64_c_compiler_version9cuda_compiler_version11.0cudnn8cxx_compiler_version9numpy1.18python3.7.____cpython.yaml} |  0
 ...n3.8.____cpython.yaml => linux_64_c_compiler_version9cuda_compiler_version11.0cudnn8cxx_compiler_version9numpy1.18python3.8.____cpython.yaml} |  0
 ...n3.9.____cpython.yaml => linux_64_c_compiler_version9cuda_compiler_version11.0cudnn8cxx_compiler_version9numpy1.19python3.9.____cpython.yaml} |  0
 ...n3.7.____cpython.yaml => linux_64_c_compiler_version9cuda_compiler_version11.1cudnn8cxx_compiler_version9numpy1.18python3.7.____cpython.yaml} |  0
 ...n3.8.____cpython.yaml => linux_64_c_compiler_version9cuda_compiler_version11.1cudnn8cxx_compiler_version9numpy1.18python3.8.____cpython.yaml} |  0
 ...n3.9.____cpython.yaml => linux_64_c_compiler_version9cuda_compiler_version11.1cudnn8cxx_compiler_version9numpy1.19python3.9.____cpython.yaml} |  0
 ...n3.7.____cpython.yaml => linux_64_c_compiler_version9cuda_compiler_version11.2cudnn8cxx_compiler_version9numpy1.18python3.7.____cpython.yaml} |  0
 ...n3.8.____cpython.yaml => linux_64_c_compiler_version9cuda_compiler_version11.2cudnn8cxx_compiler_version9numpy1.18python3.8.____cpython.yaml} |  0
 ...n3.9.____cpython.yaml => linux_64_c_compiler_version9cuda_compiler_version11.2cudnn8cxx_compiler_version9numpy1.19python3.9.____cpython.yaml} |  0
 ..._cpython.yaml => linux_64_c_compiler_version9cuda_compiler_versionNonecudnnundefinedcxx_compiler_version9numpy1.18python3.7.____cpython.yaml} |  0
 ..._cpython.yaml => linux_64_c_compiler_version9cuda_compiler_versionNonecudnnundefinedcxx_compiler_version9numpy1.18python3.8.____cpython.yaml} |  0
 ..._cpython.yaml => linux_64_c_compiler_version9cuda_compiler_versionNonecudnnundefinedcxx_compiler_version9numpy1.19python3.9.____cpython.yaml} |  0
 README.md                                                                                                                                        | 60 ++++++++++++++++++++++++++--------------------------
 conda-forge.yml                                                                                                                                  |  7 +++---
 18 files changed, 79 insertions(+), 78 deletions(-)

I think we will be OK to upload; the full diff text can be found here:
full_diff.txt

@hmaarrfk
Contributor Author

hmaarrfk commented Jan 25, 2022

Logs available:
linux_64_pr84.zip

The uploads can be found at:

https://anaconda.org/mark.harfouche/pytorch
https://anaconda.org/mark.harfouche/pytorch-gpu
https://anaconda.org/mark.harfouche/pytorch-cpu

under the unique label cfep03.
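
For anyone who wants to test these candidate builds, they should be installable with the usual anaconda.org label syntax, something along the lines of:

    # pull the candidate pytorch build from the personal channel under the cfep03 label,
    # with conda-forge supplying the remaining dependencies
    conda install -c mark.harfouche/label/cfep03 -c conda-forge pytorch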

@isuruf, if you could upload these, that would be greatly appreciated.

I thought the builds were taking a long time to upload; then I realized that pytorch's own builds are even bigger than ours, at 1.5 GB/build vs 900 MB/build for us.

Edit: please wait an hour or so for the builds to finish uploading.

@isuruf
Member

isuruf commented Jan 25, 2022

It seems that there are no pytorch-cpu builds.

@hmaarrfk
Contributor Author

Those have been completing successfully on the CIs.

@isuruf merged commit 552b241 into conda-forge:master on Jan 26, 2022
@hmaarrfk
Contributor Author

Thank you!

@ngam
Contributor

ngam commented Jan 26, 2022

THANK YOU @isuruf and @hmaarrfk!!

@isuruf, since you're the one with the keen eye on licensing stuff --- is there a specific policy about channels copying stuff directly from the conda-forge channel? I have a specific one in mind.

@isuruf
Member

isuruf commented Jan 26, 2022

Can you open an issue in https://github.com/conda-forge/conda-forge.github.io?

@ngam
Contributor

ngam commented Jan 26, 2022

Sure, let me dig around a little more to get all the details right.
