Update black to 22.3.0, update usage of libcudf macros, and remove direct column indexing #511

Merged
merged 5 commits into rapidsai:branch-22.06 on Apr 6, 2022

Conversation

charlesbluca
Member

@charlesbluca charlesbluca commented Apr 1, 2022

This PR resolves three separate issues caused by changes to our dependencies:

  1. The version of Black we were using broke due to a change in click (xref black fails due to _unicodefun psf/black#2963). This PR bumps us to the latest version, which resolves those issues (and makes some minor styling changes).
  2. rapidsai/cudf#10589 changed the CUDA_TRY macro to CUDF_CUDA_TRY and CHECK_CUDA to CUDF_CHECK_CUDA, so cuspatial needs to make that change as well (see the sketch after this list).
  3. rapidsai/cudf#10516 removed support for direct indexing of columns via __getitem__. This PR replaces those calls with the appropriate methods.
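As a rough illustration of item 2, here is a minimal sketch of the rename at a call site, assuming the macros come from `<cudf/utilities/error.hpp>`; the function below is invented for this example and is not code from the PR:

```cpp
#include <cudf/utilities/error.hpp>

#include <rmm/cuda_stream_view.hpp>

#include <cuda_runtime_api.h>

// Hypothetical call site. Before rapidsai/cudf#10589 these checks were spelled
// CUDA_TRY(...) and CHECK_CUDA(...); afterwards the CUDF_-prefixed names apply.
void copy_async_example(void* dst, void const* src, std::size_t size, rmm::cuda_stream_view stream)
{
  CUDF_CUDA_TRY(cudaMemcpyAsync(dst, src, size, cudaMemcpyDefault, stream.value()));
  CUDF_CHECK_CUDA(stream.value());
}
```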

@charlesbluca charlesbluca requested a review from a team as a code owner April 1, 2022 16:33
@github-actions github-actions bot added the "Python" (Related to Python code) label Apr 1, 2022
@charlesbluca
Member Author

rerun tests

@charlesbluca
Member Author

Getting build failures that seem unrelated:

$SRC_DIR/cpp/src/join/quadtree_poly_filtering.cu(186): error: namespace "thrust" has no member "stable_sort_by_key"
          detected during instantiation of "std::unique_ptr<cudf::table, std::default_delete<cudf::table>> cuspatial::detail::<unnamed>::dispatch_quadtree_bounding_box_join::operator()<T,<unnamed>>(const cudf::table_view &, const cudf::table_view &, double, double, double, int8_t, rmm::cuda_stream_view, rmm::mr::device_memory_resource *) [with T=float, <unnamed>=(void *)nullptr]" 
$PREFIX/include/cudf/utilities/type_dispatcher.hpp(456): here

$SRC_DIR/cpp/src/join/detail/traversal.cuh(123): error: namespace "thrust" has no member "constant_iterator"
          detected during:
            instantiation of "std::unique_ptr<cudf::table, std::default_delete<cudf::table>> cuspatial::detail::<unnamed>::join_quadtree_and_bboxes(const cudf::table_view &, const cudf::table_view &, T, T, T, int8_t, rmm::cuda_stream_view, rmm::mr::device_memory_resource *) [with T=float]" 
$SRC_DIR/cpp/src/join/quadtree_poly_filtering.cu(229): here
            instantiation of "std::unique_ptr<cudf::table, std::default_delete<cudf::table>> cuspatial::detail::<unnamed>::dispatch_quadtree_bounding_box_join::operator()<T,<unnamed>>(const cudf::table_view &, const cudf::table_view &, double, double, double, int8_t, rmm::cuda_stream_view, rmm::mr::device_memory_resource *) [with T=float, <unnamed>=(void *)nullptr]" 
$PREFIX/include/cudf/utilities/type_dispatcher.hpp(456): here

$SRC_DIR/cpp/src/join/detail/traversal.cuh(123): error: type name is not allowed
          detected during:
            instantiation of "std::unique_ptr<cudf::table, std::default_delete<cudf::table>> cuspatial::detail::<unnamed>::join_quadtree_and_bboxes(const cudf::table_view &, const cudf::table_view &, T, T, T, int8_t, rmm::cuda_stream_view, rmm::mr::device_memory_resource *) [with T=float]" 
$SRC_DIR/cpp/src/join/quadtree_poly_filtering.cu(229): here
            instantiation of "std::unique_ptr<cudf::table, std::default_delete<cudf::table>> cuspatial::detail::<unnamed>::dispatch_quadtree_bounding_box_join::operator()<T,<unnamed>>(const cudf::table_view &, const cudf::table_view &, double, double, double, int8_t, rmm::cuda_stream_view, rmm::mr::device_memory_resource *) [with T=float, <unnamed>=(void *)nullptr]" 
$PREFIX/include/cudf/utilities/type_dispatcher.hpp(456): here

3 errors detected in the compilation of "$SRC_DIR/cpp/src/join/quadtree_poly_filtering.cu".

@vyasr
Contributor

vyasr commented Apr 4, 2022

This failure is likely caused by the Thrust 1.16 update in libcudf. cugraph reported similar issues. If so, we'll likely need to resolve this upstream.
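For reference, the missing-symbol errors above are the kind that appear when Thrust 1.16 stops including headers transitively; a hedged sketch of the explicit includes the two files from the log would need (not a change made in this PR):

```cpp
// cpp/src/join/quadtree_poly_filtering.cu and cpp/src/join/detail/traversal.cuh
// reference these Thrust symbols, so they would need the headers pulled in explicitly:
#include <thrust/iterator/constant_iterator.h>  // thrust::constant_iterator
#include <thrust/sort.h>                        // thrust::stable_sort_by_key
```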

@harrism harrism added the "improvement" (Improvement / enhancement to an existing function) and "non-breaking" (Non-breaking change) labels Apr 4, 2022
rapids-bot bot pushed a commit to rapidsai/cudf that referenced this pull request Apr 5, 2022
PR #10489 updated from Thrust 1.15 to Thrust 1.16. However, this appears to be causing conflicts with other repositories -- [cuSpatial](rapidsai/cuspatial#511 (comment)) and cuGraph have reported issues where their builds are finding Thrust 1.16 from libcudf instead of Thrust 1.15 which is [currently pinned by rapids-cmake](https://github.com/rapidsai/rapids-cmake/blob/06a657281cdd83781e49afcdbb39abc491eeab17/rapids-cmake/cpm/versions.json#L26).

This PR is intended to unblock local builds and CI builds for other RAPIDS packages until we are able to identify the root cause (which may be due to CMake include path orderings involving rapids-cmake).

Last time Thrust was updated, [rapids-cmake was updated](rapidsai/rapids-cmake#138) one day before [libcudf was updated](#9912). That may explain why we didn't notice this problem with the 1.15 update.

The plan I currently have in mind is:

1. Merge this PR to roll back libcudf to Thrust 1.15 (and revert the patch for Thrust 1.16 [10577](#10577)). This will hopefully unblock CI for cugraph and cuspatial.
2. Try to work out whatever issues with CMake / include paths may exist.
3. Prepare all rapids-cmake repos for Thrust 1.16 compatibility. I've [done this for RMM already](rapidsai/rmm#1011), and I am working on [PR 4675](rapidsai/cuml#4675) to cuML now. I am planning to make the same fixes for `#include`s in cuCollections, raft, cuSpatial, and cuGraph so they will be compatible with Thrust 1.16.
4. Try to upgrade libcudf to Thrust 1.16 again (and re-apply the updated patch). If (2) has been resolved, I hope we won't see any issues in other RAPIDS libraries.
5. Upgrade rapids-cmake to Thrust 1.16.

Authors:
  - Bradley Dice (https://github.com/bdice)

Approvers:
  - Vyas Ramasubramani (https://github.com/vyasr)
  - Mark Harris (https://github.com/harrism)

URL: #10586
@bdice
Contributor

bdice commented Apr 5, 2022

This failure is likely caused by the Thrust 1.16 update in libcudf. cugraph reported similar issues. If so, we'll likely need to resolve this upstream.

A libcudf fix to roll back Thrust to 1.15 has been merged: rapidsai/cudf#10586

@harrism
Member

harrism commented Apr 5, 2022

rerun tests

2 similar comments
@vyasr
Contributor

vyasr commented Apr 5, 2022

rerun tests

@vyasr
Contributor

vyasr commented Apr 5, 2022

rerun tests

@vyasr
Contributor

vyasr commented Apr 5, 2022

rapidsai/cudf#10589 changed the name of a macro that cuspatial was relying on. Since we need to make that change as well in order for tests to pass, I'm pushing those onto this branch as well.

@github-actions github-actions bot added the "libcuspatial" (Relates to the cuSpatial C++ library) label Apr 5, 2022
@vyasr vyasr changed the title from "Update black to 22.3.0" to "Update black to 22.3.0 and update CUDA_TRY to CUDF_CUDA_TRY" Apr 5, 2022
@charlesbluca charlesbluca requested a review from a team as a code owner April 5, 2022 19:28
@bdice
Contributor

bdice commented Apr 5, 2022

rapidsai/cudf#10589 changed the name of a macro that cuspatial was relying on. Since we need to make that change as well in order for tests to pass, I'm pushing those onto this branch as well.

Are macros considered part of cuDF's public API? I am not sure if libcudf macros are supposed to be used outside of libcudf. RAFT, for instance, declares RAFT_CUDA_TRY rather than re-using libcudf's macros. cc: @jrhemstad @harrism rapidsai/cudf#9660 rapidsai/raft#128
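For comparison, a minimal sketch of the RAFT-style alternative of owning the error-checking macro instead of reusing libcudf's; the name `CUSPATIAL_CUDA_TRY` and the exception handling here are assumptions for illustration, not an existing cuSpatial API:

```cpp
#include <cuda_runtime_api.h>

#include <stdexcept>
#include <string>

// Hypothetical project-owned macro in the spirit of RAFT_CUDA_TRY: run a CUDA
// runtime call and throw if it does not return cudaSuccess.
#define CUSPATIAL_CUDA_TRY(call)                                           \
  do {                                                                     \
    cudaError_t const status = (call);                                     \
    if (status != cudaSuccess) {                                           \
      throw std::runtime_error(std::string{"CUDA error: "} +               \
                               cudaGetErrorString(status));                \
    }                                                                      \
  } while (0)
```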

@vyasr
Contributor

vyasr commented Apr 5, 2022

Are macros considered part of cuDF's public API? I am not sure if libcudf macros are supposed to be used outside of libcudf. RAFT, for instance, declares RAFT_CUDA_TRY rather than re-using libcudf's macros. cc: @jrhemstad @harrism rapidsai/cudf#9660 rapidsai/raft#128

No, they are not, but cuspatial does so anyway. In general, cuspatial uses a significant number of cudf internals (in both C++ and Python) that we would prefer it didn't. @harrism and I have previously discussed the need to improve this situation, but it hasn't been prioritized given the relative pace of cuspatial development over the past year. I don't think anyone would be opposed to a fix. #477 is part of the move away from the libcuspatial->libcudf dependency. Fixing the cuspatial->cudf Python dependency would take a lot more work and probably won't happen in the near to intermediate term.

@vyasr vyasr changed the title from "Update black to 22.3.0 and update CUDA_TRY to CUDF_CUDA_TRY" to "Update black to 22.3.0 and update usage of libcudf macros" Apr 5, 2022
@vyasr vyasr changed the title from "Update black to 22.3.0 and update usage of libcudf macros" to "Update black to 22.3.0, update usage of libcudf macros, and remove direct column indexing" Apr 5, 2022
@vyasr
Contributor

vyasr commented Apr 6, 2022

@gpucibot merge

@rapids-bot rapids-bot bot merged commit 92266b6 into rapidsai:branch-22.06 Apr 6, 2022
abellina pushed a commit to abellina/cudf that referenced this pull request Apr 11, 2022