
Provide co-resident gcc 8 toolchain for CUDA #867

Merged · 16 commits into master from gcc-8-for-cuda-only · Dec 16, 2021
Conversation

madisongh (Member)
OE-Core now includes Rust support, and that toolchain requires gcc 11 to build its standard library. Because of that, we need to rework how we support building CUDA libraries and programs, since CUDA 10.2 is stuck at gcc 8.

This PR proposes that we bite the bullet and provide the gcc 8 toolchain directly in the layer. The recipes are modified to allow this toolchain to co-reside with the default toolchain from the core, and are intended to be used only when CUDA compatibility is required. A new cuda-gcc.bbclass is provided; recipes that need to be built with the older toolchain can inherit that class (and cuda.bbclass is updated to do that).
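As a sketch of how a recipe would opt in (the recipe metadata here is hypothetical, only the class names come from this PR), a CUDA-using recipe keeps inheriting cuda as before, and cuda.bbclass pulls in the new cuda-gcc.bbclass for it:

```bitbake
# Hypothetical recipe sketch: a library containing CUDA kernels.
SUMMARY = "Example library with CUDA kernels"
LICENSE = "MIT"

# cuda.bbclass is updated to inherit cuda-gcc.bbclass itself; recipes
# that only need the CUDA-compatible compiler (not the toolkit
# dependencies) can inherit cuda-gcc directly instead.
inherit cuda
```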

This allows us to build everything else with the latest toolchain available. We no longer need to force users to switch to the older toolchain at the distro level.

@madisongh madisongh force-pushed the gcc-8-for-cuda-only branch 2 times, most recently from bcf9a9b to 18896ee Compare December 15, 2021 13:28
@madisongh (Member, Author)
After further testing and examining how Debian handles this, it looks like we can use the main toolchain for everything in most cases, and just pass the old compiler to nvcc. So CXX and CC will invoke the main compiler by default; if a particular package's build files make it difficult to mix the toolchains (as for the CUDA sample programs), you can just set CC to ${CC_FOR_CUDA} and/or CXX to ${CXX_FOR_CUDA} in the recipe.
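A minimal sketch of that override, assuming a recipe whose build files can't mix the two toolchains:

```bitbake
# Hypothetical sketch: force the whole recipe onto the gcc 8 toolchain
# when its build files can't mix compilers (as with the CUDA samples).
inherit cuda-gcc

CC = "${CC_FOR_CUDA}"
CXX = "${CXX_FOR_CUDA}"
```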

I've also updated the gcc-8 recipes to version 8.5.0 to pick up all of the available fixes, including some CVE patches that had been back-ported to 8.3.0 in the original recipes.

@madisongh madisongh marked this pull request as ready for review December 15, 2021 13:54
@madisongh madisongh changed the title WIP: provide co-resident gcc 8 toolchain for CUDA Provide co-resident gcc 8 toolchain for CUDA Dec 15, 2021
@ichergui (Member)
> After further testing and examining how Debian handles this, it looks like we can use the main toolchain for everything in most cases, and just pass the old compiler to nvcc. So CXX and CC will invoke the main compiler by default; if a particular package's build files make it difficult to mix the toolchains (as for the CUDA sample programs), you can just set CC to ${CC_FOR_CUDA} and/or CXX to ${CXX_FOR_CUDA} in the recipe.
>
> I've also updated the gcc-8 recipes to version 8.5.0 to pick up all of the available fixes, including some CVE patches that had been back-ported to 8.3.0 in the original recipes.

Thanks @madisongh for doing this.
Please let me know if I can help with the testing.

This is a trimmed-down version of the gcc recipes, modified
to be co-resident with the normal gcc toolchain, for use with
recipes that need to compile CUDA code.

Main differences:

* Recipes are based on the 8.3.0 OE-Core recipes, but the
  toolchain is updated to 8.5.0 to pick up the latest
  available fixes.

* Binaries are suffixed with the version number, so they
  can be installed alongside the normal gcc toolchain.
  A version-suffixed directory under ${bindir} is also
  created with non-version-suffixed symlinks for the
  tools, in case that's needed somewhere.

* No shared libraries for the target are packaged, as we
  use the ones from the primary toolchain instead. The
  dev package does include static libraries and header files.

Signed-off-by: Matt Madison <[email protected]>
for redirecting the gcc toolchain used by nvcc (and,
if necessary, directly compiled C/C++ code) for
compatibility with CUDA.  This class can be inherited
by recipes that need to compile CUDA-compatible code
but do not need the CUDA toolkit dependencies.

The class sets variables CC_FOR_CUDA and CXX_FOR_CUDA.
Recipes needing to use this version of the compiler
must set CC and/or CXX to point to these variables.

Signed-off-by: Matt Madison <[email protected]>
* Use CXX_FOR_CUDA to determine the compiler to pass to nvcc
* Add support for setting CMAKE_CUDA_ARCHITECTURES based on
  the CUDA_ARCHITECTURES variable

Signed-off-by: Matt Madison <[email protected]>
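For illustration (the exact value depends on the target GPU; only the CUDA_ARCHITECTURES variable name comes from this PR), a recipe or distro config might set:

```bitbake
# Hypothetical example: Xavier's GPU is compute capability 7.2, so
# pass "72" through to CMake's CMAKE_CUDA_ARCHITECTURES.
CUDA_ARCHITECTURES = "72"
```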
for tegra platforms.

Signed-off-by: Matt Madison <[email protected]>
Recent versions of CMake require us to pass just the
architecture numbers through a variable setting, so
break that out into a separate bitbake variable.

Signed-off-by: Matt Madison <[email protected]>
The samples need to be compiled with the older compiler.

Signed-off-by: Matt Madison <[email protected]>
* The CXX_FOR_CUDA variable is used to set CUDAHOSTCXX
* CMAKE_CUDA_ARCHITECTURES support is added

Signed-off-by: Matt Madison <[email protected]>
to set OPENCV_CUDA_DETECTION_NVCC_FLAGS, which tells
the OpenCV CUDA module which host compiler to use without
generating a warning during the configure step.

Signed-off-by: Matt Madison <[email protected]>
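A hedged sketch of what that looks like in a recipe (the exact flag spelling passed through is an assumption; --compiler-bindir is nvcc's option for selecting the host compiler):

```bitbake
# Hypothetical sketch: tell OpenCV's CUDA detection step which host
# compiler nvcc should invoke.
EXTRA_OECMAKE += "-DOPENCV_CUDA_DETECTION_NVCC_FLAGS=--compiler-bindir=${CXX_FOR_CUDA}"
```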
to remove the reference to the gcc-8 toolchain recipes,
which are no longer provided.

Signed-off-by: Matt Madison <[email protected]>
Now that we can build with the latest toolchain from OE-Core,
we no longer need this recipe.

Signed-off-by: Matt Madison <[email protected]>
as some recipes in OE-Core now expect a provider for this.

Signed-off-by: Matt Madison <[email protected]>
to point to libglvnd.

Signed-off-by: Matt Madison <[email protected]>
to add the gcc-8 cross-compiler dependency on
linux-libc-headers, mirroring what's done in
OE-Core for the main toolchain.

Signed-off-by: Matt Madison <[email protected]>
@madisongh (Member, Author)
Everything looks good with the build, SDK, and runtime testing I've done.

@madisongh madisongh merged commit e7abe94 into master Dec 16, 2021
@madisongh madisongh deleted the gcc-8-for-cuda-only branch December 16, 2021 18:12