Documentation for use on Apple Silicon #488
Hi, I have been using tch-rs on an M1 for a bit now. Cheers,
@ssoudan
You can basically install (or unpack) the vanilla wheel for macOS ARM64 and take the
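A minimal sketch of that route, assuming it means pointing torch-sys at the torch package installed from the macOS ARM64 wheel (the virtualenv and paths below are assumptions, not taken from this comment):

# Hedged sketch: reuse the lib/ and include/ shipped inside the installed torch wheel.
python3 -m venv .venv && . .venv/bin/activate
pip install torch
export LIBTORCH="$(python -c 'import os, torch; print(os.path.dirname(torch.__file__))')"
export DYLD_LIBRARY_PATH="$LIBTORCH/lib:$DYLD_LIBRARY_PATH"
cargo build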
That doesn't work. The C++ API is not fully stable. You need to get the declarations YAML file from Torch nightly and regenerate some Rust source files using the OCaml program in gen/.
By the way, if you are interested in the relative performance: https://twitter.com/danieldekok/status/1453070009266315267

For an M1 compared to a Ryzen 5700X, see the slide "Dutch model performance" in https://danieldk.eu/Research/Presentations/clin2021.pdf. These benchmarks use a multi-task transformer model.

If you are curious about MPS compared to PyTorch without MPS (which uses the AMX matrix multiplication co-processor): I am currently working on support for PyTorch with MPS in spaCy. So far the numbers are:
See: https://twitter.com/danieldekok/status/1529057447700226048

This should also be roughly the right ballpark, since the AMX co-processor in the M1 Pro/Max is ~2.4 TFLOPS single-precision, whereas the GPUs in the Pro and Max are ~5 TFLOPS and ~10 TFLOPS respectively.
No, only from AMX through Accelerate. For MPS you need to use an
Thanks. It's almost a two-part problem, to be honest.
Thanks!
I ran into a namespacing issue with pytorch; this seems to be a new thing. Any idea how to solve it? Perhaps it'd be great if you could share the lib/ and include/ so that I can compile against those?

(tch-rs-demo) tch-m1 sunilmallya$ cargo run
warning: clang: warning: -Wl,-rpath=/Users/sunilmallya/workspace/rust-pt/tch-m1/torch/lib: 'linker' input unused [-Wunused-command-line-argument]
error: failed to run custom build command for

Caused by:
--- stderr
error occurred: Command "c++" "-O0" "-ffunction-sections" "-fdata-sections" "-fPIC" "-g" "-fno-omit-frame-pointer" "-arch" "arm64" "-I" "/Users/sunilmallya/workspace/rust-pt/tch-m1/torch/include" "-I" "/Users/sunilmallya/workspace/rust-pt/tch-m1/torch/include/torch/csrc/api/include" "-Wl,-rpath=/Users/sunilmallya/workspace/rust-pt/tch-m1/torch/lib" "-std=c++14" "-D_GLIBCXX_USE_CXX11_ABI=1" "-o" "/Users/sunilmallya/workspace/rust-pt/tch-m1/target/debug/build/torch-sys-e9927d9e03b8bf15/out/libtch/torch_api.o" "-c" "libtch/torch_api.cpp" with args "c++" did not execute successfully (status code exit status: 1).
@sunilmallya are you still running into that error? I'm experiencing the same error.
Hi, same issue for me as well. Tried all libtorch zips (both Linux and Mac) and nothing works. I have a Dockerfile for replicating this as well, so please let me know how I can help.
Compilation is successful, but when running I get the error below:
Apple Silicon M2 Mac here. I have separate conda environments set up for arm64 and for x86_64 under Rosetta, but I am getting the same linker errors trying to compile in either environment, and the same errors with an earlier torch-sys version. On the other hand, PyTorch is working great for Python with MPS support in arm64. Happy to test or help if possible. Posting arm64 errors first, then x86_64 errors.

arm64 errors:
warning: clang: warning: -Wl,-rpath=/Users/lc/rust_projects/cr/libtorch/lib/lib: 'linker' input unused [-Wunused-command-line-argument]
error: failed to run custom build command for

Caused by:
--- stderr
error occurred: Command "c++" "-O0" "-ffunction-sections" "-fdata-sections" "-fPIC" "-gdwarf-2" "-fno-omit-frame-pointer" "-arch" "arm64" "-I" "/Users/lc/rust_projects/cr/torch-sys/include/include" "-I" "/Users/lc/rust_projects/cr/torch-sys/include/include/torch/csrc/api/include" "-Wl,-rpath=/Users/lc/rust_projects/cr/libtorch/lib/lib" "-std=c++14" "-D_GLIBCXX_USE_CXX11_ABI=1" "-o" "/Users/lc/cr/tch-m1/target/debug/build/torch-sys-689ca3535ca01ed0/out/libtch/torch_api.o" "-c" "libtch/torch_api.cpp" with args "c++" did not execute successfully (status code exit status: 1).

x86_64 errors:
warning: clang: warning: -Wl,-rpath=/Users/lc/rust_projects/cr/libtorch/lib/lib: 'linker' input unused [-Wunused-command-line-argument]
error: failed to run custom build command for

Caused by:
--- stderr
error occurred: Command "c++" "-O0" "-ffunction-sections" "-fdata-sections" "-fPIC" "-gdwarf-2" "-fno-omit-frame-pointer" "-m64" "-arch" "x86_64" "-I" "/Users/lc/rust_projects/cr/torch-sys/include/include" "-I" "/Users/lc/rust_projects/cr/torch-sys/include/include/torch/csrc/api/include" "-Wl,-rpath=/Users/lc/rust_projects/cr/libtorch/lib/lib" "-std=c++14" "-D_GLIBCXX_USE_CXX11_ABI=1" "-o" "/Users/lc/cr/tch-m1/target/debug/build/torch-sys-bffbc597d35ab5cb/out/libtch/torch_api.o" "-c" "libtch/torch_api.cpp" with args "c++" did not execute successfully (status code exit status: 1).
warning: build failed, waiting for other jobs to finish...
Not sure what your actual setup is, but maybe you're setting the
That was true! It was looking in lib/lib.
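In other words, the libtorch variables should point at the unpacked libtorch root rather than its lib subdirectory, since the build appends lib itself. A hedged sketch using the path from the logs above (adjust to your own layout):

# LIBTORCH is the unpacked root; include/ and lib/ are derived from it.
export LIBTORCH="/Users/lc/rust_projects/cr/libtorch"
unset LIBTORCH_LIB LIBTORCH_INCLUDE   # or set them to "$LIBTORCH", not to ".../lib"
export DYLD_LIBRARY_PATH="$LIBTORCH/lib:$DYLD_LIBRARY_PATH"
cargo clean && cargo build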
It built in the x86_64 conda environment on the Apple Silicon M2 Mac! The Rust toolchain had to come from the conda environment; there was initially a path problem linking back to the arm64 rustc. LIBTORCH_INCLUDE now points to the include directory, but LIBTORCH_LIB points just to the LIBTORCH dir. The permissions were also wrong on the downloaded libtorch: there was no execute permission, only read/write, so I needed to change permissions or run with sudo.
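If the missing execute permission is the blocker, here's a small sketch of fixing it without sudo (assuming LIBTORCH points at the unpacked directory):

# Restore read/execute on the unpacked libtorch tree so the build and loader can use it.
chmod -R u+rx "$LIBTORCH"
ls -l "$LIBTORCH/lib" | head   # sanity check: the dylibs should now show the x bit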
Totally new to Rust here, so please bear with me if my words don't make sense. After trying for 1 hour, I finally got
both are necessary!
My environment:
https://formulae.brew.sh/formula/pytorch

I don't quite understand what about
The current version of
Sharing the process that successfully compiles and runs on my M1/M2:
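A hedged sketch of what such a setup typically looks like with the Homebrew pytorch formula discussed above (the assumption that the formula exposes include/ and lib/ under its prefix should be verified locally; these are not the exact commands used here):

# Hypothetical sketch only; check `brew --prefix pytorch` and its layout first.
brew install pytorch
export LIBTORCH="$(brew --prefix pytorch)"
export LIBTORCH_INCLUDE="$LIBTORCH"
export LIBTORCH_LIB="$LIBTORCH"
export DYLD_LIBRARY_PATH="$LIBTORCH/lib:$DYLD_LIBRARY_PATH"
cargo clean && cargo build && cargo run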
Your comment and solution do not get enough love! Thank you!
Not sure if anyone else is still having issues or if this was already solved elsewhere, but I managed to get a working build on an Apple M1 Pro (16 GB, macOS 14.1.1) using libtorch built from source.

TL;DR
This is how to use libtorch nightly, built from source on an M1 Pro, with MPS. Resulting tch changes
Here's the rough outline of the steps:

Build libtorch
Follow these steps to build libtorch from source. What I ran:

git clone -b main --recurse-submodules https://github.com/pytorch/pytorch.git
mkdir pytorch-build
cd pytorch-build
export USE_MPS=1
export USE_PYTORCH_METAL=1
cmake -DBUILD_SHARED_LIBS:BOOL=ON -DCMAKE_BUILD_TYPE:STRING=Release -DPYTHON_EXECUTABLE:PATH=`which python3` -DCMAKE_INSTALL_PREFIX:PATH=../pytorch-install ../pytorch
cmake --build . --target install

That'll take a while; not sure how long, because it got to ~90% and then slowed way down, so I went to bed.

Build tch-rs
I'd started with a fresh crate. My Cargo.toml ended up as:

[package]
name = "foobar"
version = "0.1.0"
edition = "2021"
[dependencies]
tch = "0.14.0"
ndarray = { version = "0.15.6", features = [
"rayon",
"blas",
"matrixmultiply-threading",
] }
safetensors = "0.4.1"
rust-bert = { path = "../rust-bert" }
[patch.crates-io]
ahash = { path = "../aHash" }
tch = { path = "../tch-rs" }

(I would suggest not including

Update (and create if it doesn't exist) the
At this point running
This was caused by setting

The other was the longer one in this thread that prints this note out a bunch of times:
Following @danieldk's suggestion here, the goal is to regenerate the files where the errors are coming from.

Install OCaml and deps
brew install opam
brew install dune
opam install yaml
opam install stdio

Get the declarations file
You'll want to get a yml file that should've been generated when libtorch was built. It should be wherever you chose to build libtorch. Copy that file to the local

Run the regen script
Update the path here so that it points to the nightly version. Then run:

dune exec gen/gen.exe --profile release

Rebuild
Finally, you should be able to go back to the empty crate and run:

cargo clean
cargo build

I put all the resulting changes (except for
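For completeness, a hedged sketch of the environment such a build typically needs; the pytorch-install prefix comes from the cmake invocation above, while the exact path is an assumption:

# Point torch-sys at the install prefix produced by the cmake step (CMAKE_INSTALL_PREFIX=../pytorch-install).
export LIBTORCH="$HOME/pytorch-install"        # adjust to wherever ../pytorch-install actually landed
export DYLD_LIBRARY_PATH="$LIBTORCH/lib:$DYLD_LIBRARY_PATH"
cargo clean && cargo build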
I still get a build error for a Tauri app that I am running; the log is as follows:
Currently I have libtorch v2.2.1 unzipped in a local directory and I have LIBTORCH pointing to it. I have tried @ShaojieJiang's approach as well, to no avail. I get this same error when I try to follow rust-bert's installation mechanism. The error at the start suggested that I didn't have C++17 on my machine, but I do; I can compile a simple C++ program if I set the
According to your logs, you're using tch-rs 0.13.0; I think this version targets libtorch 2.0.0. Version 0.15.0 should be used for libtorch 2.2.0 (and might work with 2.2.1).
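Concretely, that means bumping the dependency so the tch release matches the unpacked libtorch; a sketch assuming tch is a direct dependency of your crate and a recent cargo with cargo add:

# tch 0.15.x targets libtorch 2.2.x, per the comment above.
cargo add tch@0.15.0
cargo clean && cargo build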
This resolved it, thanks. 💯
I'm facing the same issue here too:
error:
libtorch version 2.4.0. Update: finally got it to work. Please use this link to download a compatible libtorch: https://download.pytorch.org/libtorch/cpu/
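For an Apple Silicon machine, the file to grab from that index is the macOS arm64 CPU archive; the exact file name below is an assumption, so check the index for the current one:

# Hypothetical example: download and unpack an arm64 libtorch, then point the build at it.
curl -LO https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.4.0.zip
unzip libtorch-macos-arm64-2.4.0.zip -d "$HOME"
export LIBTORCH="$HOME/libtorch"
export DYLD_LIBRARY_PATH="$LIBTORCH/lib:$DYLD_LIBRARY_PATH"
cargo clean && cargo build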
I'm doing some development on an M1 Mac, and I'm having some trouble installing LibTorch. Though I assumed it wouldn't work, I tried just installing from the link that usually provides an x86_64 build. Of course, it didn't:
I've also tried installing following the guide for the Metal-accelerated version. I also didn't expect this to work, and it did not.
I also don't see an obvious solution here.
So, what do I need to do? I'd be happy to provide additional docs if there's a solution.