
Bump torch from 1.5.1 to 1.6.0 #82

Merged

tamuhey merged 1 commit into master from dependabot/pip/torch-1.6.0 on Aug 19, 2020
Conversation

dependabot-preview[bot] (Contributor) commented:

Bumps torch from 1.5.1 to 1.6.0.

Release notes

Sourced from torch's releases.

Stable release of automatic mixed precision (AMP). New Beta features include a TensorPipe backend for RPC, memory profiler, and several improvements to distributed training for both RPC and DDP.

PyTorch 1.6.0 Release Notes

  • Highlights
  • Backwards Incompatible Changes
  • Deprecations
  • New Features
  • Improvements
  • Bug Fixes
  • Performance
  • Documentation

Highlights

The PyTorch 1.6 release includes a number of new APIs, tools for performance improvement and profiling, as well as major updates to both distributed data parallel (DDP) and remote procedure call (RPC) based distributed training.

A few of the highlights include:

  1. Automatic mixed precision (AMP) training is now natively supported and a stable feature - thanks to NVIDIA’s contributions;
  2. Native TensorPipe support now added for tensor-aware, point-to-point communication primitives built specifically for machine learning;
  3. New profiling tools providing tensor-level memory consumption information; and
  4. Numerous improvements and new features for both distributed data parallel (DDP) training and the remote procedure call (RPC) packages.

Additionally, from this release onward, features will be classified as Stable, Beta, and Prototype. Prototype features are not included in the binary distribution and are instead available by building from source, using nightlies, or via a compiler flag. You can learn more about what this change means in the post here.

[Stable] Automatic Mixed Precision (AMP) Training

AMP allows users to easily enable automatic mixed precision training, which can deliver higher performance and memory savings of up to 50% on Tensor Core GPUs. Using the natively supported torch.cuda.amp API, AMP provides convenience methods for mixed precision, where some operations use the torch.float32 (float) datatype and other operations use torch.float16 (half). Some ops, like linear layers and convolutions, are much faster in float16. Other ops, like reductions, often require the dynamic range of float32. Mixed precision tries to match each op to its appropriate datatype.

  • Design doc | Link
  • Documentation | Link
  • Usage examples | Link
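
For concreteness, here is a minimal training-loop sketch of the torch.cuda.amp pattern described above. The toy model, optimizer, and random data are hypothetical stand-ins (and a CUDA device is assumed); only the autocast/GradScaler usage reflects the API this release stabilizes.

import torch
import torch.nn as nn

# Hypothetical toy model and data, purely to exercise the AMP API (requires a CUDA GPU)
model = nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    inputs = torch.randn(32, 128, device="cuda")
    targets = torch.randint(0, 10, (32,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():   # ops run in float16 or float32 as appropriate
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()     # scale the loss so small float16 gradients don't underflow
    scaler.step(optimizer)            # unscales gradients, then calls optimizer.step()
    scaler.update()                   # adjusts the scale factor for the next iteration

The scaler exists because float16 has a narrow dynamic range: scaling the loss keeps small gradient values representable, and GradScaler unscales them before the optimizer step.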

[Beta] TensorPipe backend for RPC

PyTorch 1.6 introduces a new backend for the RPC module that leverages the TensorPipe library, a tensor-aware point-to-point communication primitive targeted at machine learning. It is intended to complement the current primitives for distributed training in PyTorch (Gloo, MPI, ...), which are collective and blocking. The pairwise and asynchronous nature of TensorPipe lends itself to new networking paradigms that go beyond data parallelism: client-server approaches (e.g., a parameter server for embeddings, actor-learner separation in Impala-style RL), model- and pipeline-parallel training (think GPipe), gossip SGD, and so on.

# One-line change needed to opt in
torch.distributed.rpc.init_rpc(
    ...
    backend=torch.distributed.rpc.BackendType.TENSORPIPE,
)
# No changes to the rest of the RPC API
torch.distributed.rpc.rpc_sync(...)
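
As a fuller illustration, the following self-contained sketch spawns two workers on one machine and issues a synchronous RPC over the new backend. The worker names, master address and port, and the torch.add call are illustrative assumptions, not part of the release notes.

import os
import torch
import torch.distributed.rpc as rpc
import torch.multiprocessing as mp

def run(rank, world_size):
    # Hypothetical single-machine rendezvous; the address and port are arbitrary choices
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"
    rpc.init_rpc(
        name=f"worker{rank}",
        rank=rank,
        world_size=world_size,
        backend=rpc.BackendType.TENSORPIPE,  # the one-line opt-in shown above
    )
    if rank == 0:
        # Synchronous point-to-point call; the tensors travel over TensorPipe
        result = rpc.rpc_sync("worker1", torch.add,
                              args=(torch.ones(2), torch.ones(2)))
        print(result)  # tensor([2., 2.])
    rpc.shutdown()  # blocks until all outstanding RPC work is done

if __name__ == "__main__":
    mp.spawn(run, args=(2,), nprocs=2)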

Commits
  • b31f58d Revert "[release/1.6] .circleci: Don't use SCCACHE for windows release builds...
  • 29fe90e [release/1.6] [JIT] Dont include view ops in autodiff graphs (#42029)
  • 35ad2d8 Revert "[jit] fix tuple alias analysis (#41992)"
  • 994b37b [release/1.6] .circleci: Don't use SCCACHE for windows release builds (#42024)
  • 8aa878f [jit] fix tuple alias analysis (#41992)
  • 7c7c9c3 scatter/gather - check that inputs are of the same dimensionality (#41890)
  • a2922f5 [1.6.0] Mark torch.set_deterministic and torch.is_deterministic as experi...
  • 8acfeca [1.6] Add optimizer_for_mobile doc into python api root doc (#41491)
  • 860e18a Update torch.set_default_dtype doc (#41263)
  • 8f804ba Doc note for complex (#41252)
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
  • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
  • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
  • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
  • @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in your Dependabot dashboard:

  • Update frequency (including time of day and day of week)
  • Pull request limits (per update run and/or open at any time)
  • Out-of-range updates (receive only lockfile updates, if desired)
  • Security updates (receive only security updates, if desired)

Bumps [torch](https://github.com/pytorch/pytorch) from 1.5.1 to 1.6.0.
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Commits](pytorch/pytorch@v1.5.1...v1.6.0)

Signed-off-by: dependabot-preview[bot] <[email protected]>
@dependabot-preview added the "dependencies" label on Aug 19, 2020
@tamuhey merged commit ac25c0a into master on Aug 19, 2020
@tamuhey deleted the dependabot/pip/torch-1.6.0 branch on August 19, 2020 at 14:24