Add a pointer to GPUs in parallel computing (#36043)
* Add a pointer to GPUs

* Update parallel-computing.md

* Update parallel-computing.md

* Update parallel-computing.md

* Update doc/src/manual/parallel-computing.md

Co-authored-by: Dilum Aluthge <[email protected]>

* whitespace fix

* Update parallel-computing.md

* whitespace fix

* whitespace fix

* Update doc/src/manual/parallel-computing.md

Co-authored-by: Jonas Schulze <[email protected]>

Co-authored-by: Dilum Aluthge <[email protected]>
Co-authored-by: Jonas Schulze <[email protected]>
3 people authored May 28, 2020
1 parent adf6d52 commit 0ad6dcf
Showing 1 changed file with 28 additions and 14 deletions.

doc/src/manual/parallel-computing.md
@@ -1,20 +1,34 @@
 # Parallel Computing
 
-Julia supports three main categories of features for concurrent and parallel programming:
+Julia supports these four categories of concurrent and parallel programming:
 
-1. Asynchronous "tasks", or coroutines
-2. Multi-threading
-3. Distributed computing
+1. **Asynchronous "tasks", or coroutines**:
 
-Julia Tasks allow suspending and resuming computations
-for I/O, event handling, producer-consumer processes, and similar patterns.
-Tasks can synchronize through operations like [`wait`](@ref) and [`fetch`](@ref), and
-communicate via [`Channel`](@ref)s.
+    Julia Tasks allow suspending and resuming computations
+    for I/O, event handling, producer-consumer processes, and similar patterns.
+    Tasks can synchronize through operations like [`wait`](@ref) and [`fetch`](@ref), and
+    communicate via [`Channel`](@ref)s. While strictly not parallel computing by themselves,
+    Julia lets you schedule `Task`s on several threads.
 
-Multi-threading functionality builds on tasks by allowing them to run simultaneously
-on more than one thread or CPU core, sharing memory.
+2. **Multi-threading**:
 
-Finally, distributed computing runs multiple processes with separate memory spaces,
-potentially on different machines.
-This functionality is provided by the `Distributed` standard library as well as
-external packages like `MPI.jl` and `DistributedArrays.jl`.
+    Julia's [multi-threading](@ref man-multithreading) provides the ability to schedule Tasks
+    simultaneously on more than one thread or CPU core, sharing memory. This is usually the easiest way
+    to get parallelism on one's PC or on a single large multi-core server. Julia's multi-threading
+    is composable. When one multi-threaded function calls another multi-threaded function, Julia
+    will schedule all the threads globally on available resources, without oversubscribing.
+
+3. **Distributed computing**:
+
+    Distributed computing runs multiple Julia processes with separate memory spaces. These can be on the same
+    computer or multiple computers. The `Distributed` standard library provides the capability for remote execution
+    of a Julia function. With this basic building block, it is possible to build many different kinds of
+    distributed computing abstractions. Packages like [`DistributedArrays.jl`](https://github.com/JuliaParallel/DistributedArrays.jl)
+    are an example of such an abstraction. On the other hand, packages like [`MPI.jl`](https://github.com/JuliaParallel/MPI.jl) and
+    [`Elemental.jl`](https://github.com/JuliaParallel/Elemental.jl) provide access to the existing MPI ecosystem of libraries.
+
+4. **GPU computing**:
+
+    The Julia GPU compiler provides the ability to run Julia code natively on GPUs. There
+    is a rich ecosystem of Julia packages that target GPUs. The [JuliaGPU.org](https://juliagpu.org)
+    website provides a list of capabilities, supported GPUs, related packages and documentation.
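
Illustrative sketches of the four categories described by the new text follow; none of this code is part of the commit. For tasks, a minimal sketch of suspension, synchronization with `wait`/`fetch`, and communication over a `Channel` (the `producer` function and its values are made up for illustration):

```julia
# Tasks run concurrently on a single thread; put! suspends the producer
# until a consumer is ready, and iteration over the channel resumes it.
function producer(ch::Channel)
    for i in 1:5
        put!(ch, i^2)
    end
end

ch = Channel(producer)           # binds a new Task that runs producer(ch)
for value in ch                  # takes values until the channel is closed
    println("consumed ", value)
end

t = @async (sleep(0.1); "done")  # schedule a Task; sleep yields rather than blocks
println(fetch(t))                # wait for the Task and retrieve its result
```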
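For multi-threading, a sketch of scheduling Tasks on several threads with `Threads.@spawn` (start Julia with, e.g., `julia --threads 4`; splitting the sum in two is purely illustrative):

```julia
# Threads.@spawn schedules a Task on any available thread, sharing memory;
# fetch waits for its result. Such tasks compose: if heavy_sum spawned
# further tasks internally, they would share the same global thread pool.
function heavy_sum(xs)
    mid = length(xs) ÷ 2
    left = Threads.@spawn sum(@view xs[1:mid])  # runs on some available thread
    right = sum(@view xs[mid+1:end])            # runs on the current thread
    return fetch(left) + right
end

xs = rand(1_000_000)
println("threads: ", Threads.nthreads())
println(heavy_sum(xs) ≈ sum(xs))                # true
```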
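For distributed computing, a sketch of remote execution with the `Distributed` standard library (the two local workers and the `heavy` function are illustrative):

```julia
using Distributed

addprocs(2)                            # start two workers with separate memory spaces

@everywhere heavy(n) = sum(sin, 1:n)   # make the function available on every process

# remotecall_fetch runs a function on one chosen worker and returns the result;
# pmap distributes many calls across all workers.
one_result  = remotecall_fetch(heavy, workers()[1], 10^6)
all_results = pmap(heavy, [10^6, 2 * 10^6, 3 * 10^6])
println(one_result, " ", all_results)
```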
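For GPU computing, a sketch using CUDA.jl, one of the JuliaGPU packages catalogued at JuliaGPU.org; the commit names no specific package, and this assumes an NVIDIA GPU with CUDA.jl installed:

```julia
using CUDA   # external package: using Pkg; Pkg.add("CUDA")

# Array-style programming: broadcasts over CuArrays compile to GPU kernels.
x = CUDA.fill(1.0f0, 1024)   # array stored in GPU memory
y = 2.0f0 .* x .+ 1.0f0      # computed on the GPU
println(sum(y))              # 3072.0

# Julia functions can also be compiled to hand-written kernels and launched directly.
function axpy!(a, x, y)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(y)
        @inbounds y[i] += a * x[i]
    end
    return nothing
end

@cuda threads=256 blocks=4 axpy!(2.0f0, x, y)   # 4 × 256 threads cover 1024 elements
println(sum(y))              # now 5120.0
```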
