Thrust and CUB: README: Fix copy-paste from libcu++ and links #1878

Merged · 3 commits · Jun 28, 2024
12 changes: 6 additions & 6 deletions cub/README.md
@@ -3,23 +3,23 @@

 CUB provides state-of-the-art, reusable software components for every layer
 of the CUDA programming model:
-- [<b><em>Device-wide primitives</em></b>](https://nvlabs.github.io/cub/group___device_module.html)
+- [<b><em>Device-wide primitives</em></b>](https://nvidia.github.io/cccl/cub/device_wide.html)
   - Sort, prefix scan, reduction, histogram, etc.
   - Compatible with CUDA dynamic parallelism
-- [<b><em>Block-wide "collective" primitives</em></b>](https://nvlabs.github.io/cub/group___block_module.html)
+- [<b><em>Block-wide "collective" primitives</em></b>](https://nvidia.github.io/cccl/cub/block_wide.html)
   - I/O, sort, prefix scan, reduction, histogram, etc.
   - Compatible with arbitrary thread block sizes and types
-- [<b><em>Warp-wide "collective" primitives</em></b>](https://nvlabs.github.io/cub/group___warp_module.html)
+- [<b><em>Warp-wide "collective" primitives</em></b>](https://nvidia.github.io/cccl/cub/warp_wide.html)
   - Warp-wide prefix scan, reduction, etc.
   - Safe and architecture-specific
-- [<b><em>Thread and resource utilities</em></b>](https://nvlabs.github.io/cub/group___util_io.html)
+- <b><em>Thread and resource utilities</em></b>
   - PTX intrinsics, device reflection, texture-caching iterators, caching memory allocators, etc.
 
-![Orientation of collective primitives within the CUDA software stack](http://nvlabs.github.io/cub/cub_overview.png)
+![Orientation of collective primitives within the CUDA software stack](https://nvidia.github.io/cccl/cub/_images/cub_overview.png)
 
 CUB is included in the NVIDIA HPC SDK and the CUDA Toolkit.
 
-We recommend the [CUB Project Website](http://nvlabs.github.io/cub) for further information and examples.
+We recommend the [CUB Project Website](https://nvidia.github.io/cccl/cub/) for further information and examples.
 
 <br><hr>
 <h3>A Simple Example</h3>
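The device-wide primitives listed in the README above all follow CUB's two-phase pattern: a first call with a null workspace pointer only reports the required temporary-storage size, and a second call does the work. A minimal sum-reduction sketch using CUB's public `cub::DeviceReduce::Sum` API (untested here; assumes a CUDA toolchain with CUB available):

```cuda
// Sketch of CUB's device-wide reduction and its two-phase temp-storage idiom.
#include <cub/cub.cuh>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int num_items = 8;
    int h_in[num_items] = {1, 2, 3, 4, 5, 6, 7, 8};
    int *d_in, *d_out;
    cudaMalloc(&d_in, sizeof(h_in));
    cudaMalloc(&d_out, sizeof(int));
    cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);

    // Phase 1: null workspace -> CUB only writes the required size into temp_bytes.
    void *d_temp = nullptr;
    size_t temp_bytes = 0;
    cub::DeviceReduce::Sum(d_temp, temp_bytes, d_in, d_out, num_items);
    cudaMalloc(&d_temp, temp_bytes);

    // Phase 2: same call with a real workspace performs the reduction.
    cub::DeviceReduce::Sum(d_temp, temp_bytes, d_in, d_out, num_items);

    int h_out = 0;
    cudaMemcpy(&h_out, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("sum = %d\n", h_out);  // 1+2+...+8 = 36

    cudaFree(d_temp); cudaFree(d_in); cudaFree(d_out);
    return 0;
}
```

The size-query call is the reason most CUB device-wide entry points take `d_temp_storage` and `temp_storage_bytes` as their first two parameters.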
6 changes: 3 additions & 3 deletions thrust/README.md
@@ -9,10 +9,10 @@
 It builds on top of established parallel programming frameworks (such as CUDA,
 It also provides a number of general-purpose facilities similar to those found
 in the C++ Standard Library.
 
-The NVIDIA C++ Standard Library is an open source project; it is available on
+Thrust is an open source project; it is available on
 [GitHub] and included in the NVIDIA HPC SDK and CUDA Toolkit.
 If you have one of those SDKs installed, no additional installation or compiler
-flags are needed to use libcu++.
+flags are needed to use Thrust.
 
 ## Examples
 
@@ -186,7 +186,7 @@
 Thrust is an open source project developed on [GitHub].
 Thrust is distributed under the [Apache License v2.0 with LLVM Exceptions].
 Some parts are distributed under the [Apache License v2.0] and the [Boost License v1.0].
 
-[GitHub]: https://github.com/nvidia/thrust
+[GitHub]: https://github.com/NVIDIA/cccl/tree/main/thrust
 
 [contributing section]: https://nvidia.github.io/thrust/contributing.html
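For the Examples section the diff above leaves untouched, a minimal Thrust program looks like the following (standard Thrust API; an untested sketch assuming the CUDA Toolkit's bundled Thrust headers):

```cuda
// Sketch of Thrust's STL-like interface: containers plus parallel algorithms.
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <cstdio>

int main() {
    int data[] = {3, 1, 4, 1, 5, 9, 2, 6};

    // device_vector copies the host data to the GPU on construction.
    thrust::device_vector<int> v(data, data + 8);

    thrust::sort(v.begin(), v.end());                // parallel sort on the device
    int total = thrust::reduce(v.begin(), v.end());  // parallel sum

    printf("min = %d, sum = %d\n", (int)v[0], total);  // min = 1, sum = 31
    return 0;
}
```

Because Thrust mirrors the C++ Standard Library's iterator conventions, the same `sort`/`reduce` calls retarget to other backends (OpenMP, TBB, serial CPU) by switching the device system at compile time.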