SamuelHorvath/Compressed_SGD_PyTorch

Implementation of Compressed SGD with Compressed Gradients in PyTorch

Code guidelines

This implementation is based on PyTorch (1.5.0) in Python (3.8).

It enables running simulated distributed optimization with a master node and any number of workers, built on the PyTorch SGD optimizer with gradient compression. Communication can be compressed at both the worker and the master level, and Error Feedback is also supported. For more details, please see our manuscript.
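For illustration, below is a minimal, self-contained sketch of the underlying idea, not the repository's actual API: each worker compresses its gradient with a Top-K operator, keeps the compression error locally, and adds it back before compressing in the next round (Error Feedback); the master averages the compressed messages and takes a plain SGD step. The names `top_k` and `ErrorFeedbackWorker` are hypothetical and used only for this example.

import torch


def top_k(tensor, k):
    """Keep the k largest-magnitude entries of the tensor, zero out the rest."""
    flat = tensor.flatten()
    _, indices = torch.topk(flat.abs(), k)
    compressed = torch.zeros_like(flat)
    compressed[indices] = flat[indices]
    return compressed.view_as(tensor)


class ErrorFeedbackWorker:
    """One simulated worker sending Top-K compressed gradients with error feedback."""

    def __init__(self, param_shape, k):
        self.k = k
        self.error = torch.zeros(param_shape)  # residual left over from previous rounds

    def compress_gradient(self, grad):
        corrected = grad + self.error       # add back what was lost earlier
        message = top_k(corrected, self.k)  # compressed message sent to the master
        self.error = corrected - message    # store the new compression error locally
        return message


# Toy round: two workers, one parameter tensor, SGD step on the master.
torch.manual_seed(0)
param = torch.randn(10)
workers = [ErrorFeedbackWorker(param.shape, k=3) for _ in range(2)]
lr = 0.1

grads = [torch.randn(10) for _ in workers]                     # per-worker gradients
messages = [w.compress_gradient(g) for w, g in zip(workers, grads)]
aggregated = torch.stack(messages).mean(dim=0)                 # master averages messages
param -= lr * aggregated                                       # SGD update on the master

In the actual repository, the compression operators, error feedback, and master/worker aggregation are configurable options of the provided optimizer; the sketch above only illustrates the mechanism.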

Installation

To install the requirements, run:

$ pip install -r requirements.txt

Example Notebook

To run our code, see the example notebook.

Citing

In case you find this code useful, please consider citing:

@article{horvath2020better,
  title={A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning},
  author={Horv\'{a}th, Samuel and Richt\'{a}rik, Peter},
  journal={arXiv preprint arXiv:2006.11077},
  year={2020}
}

License

License: MIT
