Need a method to build native libraries for pip imports #48
Some discussion here: https://groups.google.com/forum/#!topic/bazel-sig-python/ESnkuJ4udkc
Another scenario I'm struggling with: building *.so's for PyTorch. PyTorch installed through rules_python works just fine, but to compile native extensions against it I need a way to get at both the headers and the *.so's that are bundled inside the PyTorch wheel installed by Bazel. Currently, to the best of my knowledge, these are not exposed. There should be an option to expose them, though.
@1e100 I met the same issue, did you find any solution?
I'm going to patch |
@yijianli-autra, after some amount of hacking I was able to accomplish what I wanted without patching, in part because rules_python supports injecting build rules into pip packages via package annotations. Here's what I did. In the WORKSPACE:

load("@rules_python//python:pip.bzl", "package_annotation", "pip_parse")

PIP_ANNOTATIONS = {
    "torch": package_annotation(
        additive_build_content = """\
cc_library(
    name = "libtorch",
    hdrs = glob([
        "site-packages/torch/include/torch/**/*.h",
        "site-packages/torch/include/c10/**/*.h",
        "site-packages/torch/include/ATen/**/*.h",
        "site-packages/torch/include/caffe2/**/*.h",
    ]),
    srcs = [
        "site-packages/torch/lib/libc10.so",
        "site-packages/torch/lib/libtorch.so",
        "site-packages/torch/lib/libtorch_cpu.so",
        "site-packages/torch/lib/libcaffe2_nvrtc.so",
    ],
    includes = [
        "site-packages/torch/include",
        "site-packages/torch/include/torch/csrc/api/include",
    ],
    deps = [
        "@pybind11",
        "@local_config_cuda//cuda",
    ],
    linkopts = ["-ltorch", "-ltorch_cpu", "-lc10", "-lcaffe2_nvrtc"],
    copts = ["-D_GLIBCXX_USE_CXX11_ABI=0"],
)
""",
    ),
}

pip_parse(
    name = "pypi",
    annotations = PIP_ANNOTATIONS,
    requirements_lock = "//:requirements.txt",
)
load("@pypi//:requirements.bzl", "install_deps")
install_deps()

Some of the stuff in the rule may be unnecessary - this is a result of a lot of stumbling in the dark and trying to compile. Then, in the BUILD files under third_party/dcn_v2/src:
cc_library(
    name = "cpu",
    srcs = [
        "dcn_v2_cpu.cpp",
        "dcn_v2_im2col_cpu.cpp",
        "dcn_v2_psroi_pooling_cpu.cpp",
    ],
    hdrs = [
        "dcn_v2_im2col_cpu.h",
        "vision.h",
    ],
    deps = ["@pypi_torch//:libtorch"],
    copts = ["-D_GLIBCXX_USE_CXX11_ABI=0"],
)
cc_binary(
    name = "dcn_v2.so",
    srcs = [
        "dcn_v2.h",
        "vision.cpp",
    ],
    linkshared = True,
    linkstatic = False,
    linkopts = ["-Wl,-rpath,$$ORIGIN"],
    deps = [
        "//third_party/dcn_v2/src/cpu",
        "@pypi_torch//:libtorch",
    ],
    copts = ["-D_GLIBCXX_USE_CXX11_ABI=0"],
)

Again, some of this might not be necessary. And the py_library that wraps the extension:
load("@pypi//:requirements.bzl", "requirement")

py_library(
    name = "dcn_v2",
    srcs = glob(
        ["*.py"],
        exclude = ["**/*_test.py"],
    ),
    deps = [
        requirement("torch"),
    ],
    data = ["//third_party/dcn_v2/src:dcn_v2.so"],
)

Obviously, modulo your own paths and library names. Then, to load the extension from Python:
import torch

torch.ops.load_library("third_party/dcn_v2/src/dcn_v2.so")
dcn_v2 = torch.ops.dcn_v2
Note that I haven't yet got this to work with the actual CUDA libs, but not because it's failing - I simply didn't get to it yet. The CPU extension builds, loads, and passes the tests.
Is there any idea of how to accommodate this in a bzlmod world as well? Without the flat repository namespace, just adding additional BUILD files becomes less viable for sharing dependencies across package managers - say, a multi-language repository sharing a common zlib dependency between PIP-installed Python and C++.
@shahms can you elaborate more?
I'm not sure which aspect you want elaboration on, but consider a repository which contains both C++ and Python and uses protocol buffers from both. There is a PIP protobuf package for the Python side, but this package still needs protoc to be invoked. There are a few different (conceptual) ways of stitching this together, but they all need coordination between rules_python, pip and Bazel. When bzlmod is involved you can't easily just drop a BUILD file into the pip repository.
@meteorcloudy I think that relying on toolchain resolution for this would be overkill. The same patch that one would apply in a WORKSPACE setup should also work with Bzlmod. For that, we may need to make a "reverse use_repo" possible. @tyler-french recently approached me with a similar situation: he needed to patch a Go repository managed by the go_deps module extension.
Hmm, I'm not sure how difficult it is to implement this. Can we somehow just pass a resolved canonical label through module extension tags?
We can pass a canonical label for an arbitrary target visible from the root module into the extension via tags, but that has some serious usability problems.
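For concreteness, a minimal sketch of what that tag-based plumbing could look like (extension and attribute names here are hypothetical, not an existing rules_python API):

```starlark
# Hypothetical names throughout - this only illustrates passing a root-module
# label through a module extension tag, as discussed above.
_override = tag_class(attrs = {
    "package": attr.string(doc = "pip package whose native dependency this overrides"),
    "target": attr.label(doc = "Bazel target providing the native library"),
})

def _pip_native_impl(module_ctx):
    overrides = {}
    for mod in module_ctx.modules:
        for tag in mod.tags.override:
            # Label attributes are resolved in the context of the module that
            # set the tag, so str() yields a canonical label usable anywhere.
            overrides[tag.package] = str(tag.target)
    # ... hand `overrides` to the repository rules that generate BUILD files ...

pip_native = module_extension(
    implementation = _pip_native_impl,
    tag_classes = {"override": _override},
)
```

The root module would set the tag with a label from its own namespace, and the extension would receive it already canonicalized.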
@fmeum I ran into a similar situation with Go as well, specifically with https://github.com/google/brotli/tree/master/go, where the root module depends on the C brotli library. In that case it was easy enough to patch the Go library's BUILD file to use the canonical name from the root module, as that dependency wasn't using modules yet.

I think the fundamental issue is around bridging Bazel-managed module namespaces and pip, go, cargo, etc. While the rules in question will need some way of consuming those dependencies from Bazel, Bazel will need to provide a mechanism for making them available. But that all assumes (so far) that the root module already has a direct dependency on the shared dependency which it can make available. It seems like this is bazelbuild/bazel#19301.
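As a point of reference for the bzlmod question above: recent rules_python releases expose the same annotation mechanism through the pip extension's whl_mods tag, which covers injecting BUILD content into a wheel's repo (though not the cross-language dependency sharing discussed here). A sketch, assuming a recent rules_python, with hub names and versions purely illustrative:

```starlark
# MODULE.bazel - sketch only; hub names and versions are illustrative.
bazel_dep(name = "rules_python", version = "0.27.0")

pip = use_extension("@rules_python//python/extensions:pip.bzl", "pip")
pip.whl_mods(
    hub_name = "whl_mods_hub",
    whl_name = "torch",
    # Same idea as package_annotation's additive_build_content above.
    additive_build_content = """\
cc_library(
    name = "libtorch",
    hdrs = glob(["site-packages/torch/include/**/*.h"]),
    srcs = glob(["site-packages/torch/lib/*.so"]),
    includes = ["site-packages/torch/include"],
)
""",
)
use_repo(pip, "whl_mods_hub")

pip.parse(
    hub_name = "pypi",
    python_version = "3.11",
    requirements_lock = "//:requirements.txt",
    whl_modifications = {
        "@whl_mods_hub//:torch.json": "torch",
    },
)
use_repo(pip, "pypi")
```

The whl_modifications attribute points pip.parse at the JSON file the whl_mods hub generates for each modified wheel.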
This is probably a long-term / aspirational wish, though I'd be happy to be shown otherwise.
Packages installed via pip often depend on native libraries. As a random example, https://pypi.python.org/pypi/cairocffi is a set of Python bindings around the native cairo library. The bindings themselves don't include the native library, but they do require it to be installed on the system in a way which can be found by setuptools when compiling native code to build the relevant wheel. It seems like requiring a system-installed library (which could change between builds) would violate hermeticity. We should instead provide a way to specify the dependent libraries required for a given pip import as Bazel targets, which can themselves be built.
This is likely related to bazelbuild/bazel#1475
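To make the ask concrete, one possible shape for such an interface (purely hypothetical - neither the attribute nor the @cairo repository exists in rules_python today) could be:

```starlark
# Hypothetical sketch of what this issue asks for: declaring the native
# libraries a requirement needs as Bazel targets, so they are built
# hermetically rather than found on the system by setuptools.
pip_parse(
    name = "pypi",
    requirements_lock = "//:requirements.txt",
    # Hypothetical attribute: headers and libraries from these cc_library
    # targets would be made available when the named package's wheel is built.
    native_deps = {
        "cairocffi": ["@cairo//:cairo"],
    },
)
```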