
Fix Distributed Data Samplers in PyTorch Examples #2012

Merged
merged 1 commit into kubeflow:master from fix-pytorch-ddp on Mar 7, 2024

Conversation

andreyvelich (Member)

I fixed the PyTorch training examples where we forgot to distribute the data across PyTorch workers using DistributedSampler(dataset). With that change, each PyTorch worker correctly processes its own chunk of the training data.

Also, for the FashionMNIST example I removed the check for whether the script is running in distributed mode, since for PyTorch we can simply set arbitrary values for the env variables and run the script in 1 Worker.
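For context, here is a minimal sketch of the pattern this change adopts; the dataset, batch size, and epoch count are placeholders, not the exact example code:

import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

# In a PyTorchJob the controller is expected to inject RANK, WORLD_SIZE,
# MASTER_ADDR, and MASTER_PORT, so init_process_group can read them
# from the environment.
dist.init_process_group(backend="gloo")

# Placeholder dataset standing in for e.g. FashionMNIST.
train_dataset = TensorDataset(torch.randn(1024, 28 * 28), torch.randint(0, 10, (1024,)))

# DistributedSampler hands each worker a disjoint shard of the data.
sampler = DistributedSampler(train_dataset)
loader = DataLoader(train_dataset, batch_size=64, sampler=sampler)

for epoch in range(2):
    sampler.set_epoch(epoch)  # reshuffle the shards each epoch
    for data, target in loader:
        pass  # forward/backward/optimizer step as usual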

/assign @johnugeorge @tenzen-y @kuizhiqing


coveralls commented Mar 5, 2024

Pull Request Test Coverage Report for Build 8163673973

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.02%) to 42.908%

Totals Coverage Status
  • Change from base Build 8118435958: +0.02%
  • Covered Lines: 3757
  • Relevant Lines: 8756

💛 - Coveralls

tenzen-y (Member) left a comment:

Thanks!

@@ -10,141 +10,216 @@
 import torch.nn as nn
 import torch.nn.functional as F
 import torch.optim as optim

-WORLD_SIZE = int(os.environ.get('WORLD_SIZE', 1))
+from torch.utils.data import DistributedSampler
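This ties into the second point in the description: once the distributed-mode check is gone, a local single-worker run only needs the env variables set by hand before init_process_group. A minimal sketch with illustrative values (inside a PyTorchJob the controller injects the real ones):

import os
import torch.distributed as dist

# Arbitrary single-worker values, purely for running the script locally.
os.environ.setdefault("RANK", "0")
os.environ.setdefault("WORLD_SIZE", "1")
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(backend="gloo")  # a world of one process
print(dist.get_rank(), dist.get_world_size())  # prints: 0 1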
andreyvelich (Member, Author) replied:

Sure, I will submit a separate PR.

(Member)

I'd like to run these examples in CI to verify that they are valid, but we can track that in another issue.

tenzen-y (Member) commented Mar 7, 2024

/lgtm
/approve

@google-oss-prow google-oss-prow bot added the lgtm label Mar 7, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andreyvelich, tenzen-y

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:
  • OWNERS [andreyvelich,tenzen-y]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@google-oss-prow google-oss-prow bot merged commit 57aa34d into kubeflow:master Mar 7, 2024
36 checks passed
@andreyvelich andreyvelich deleted the fix-pytorch-ddp branch March 7, 2024 15:28
tedhtchang pushed a commit to tedhtchang/training-operator that referenced this pull request Apr 5, 2024
Signed-off-by: Andrey Velichkevich <[email protected]>
(cherry picked from commit 57aa34d)
deepanker13 pushed a commit to deepanker13/deepanker-training-operator that referenced this pull request Apr 8, 2024
johnugeorge pushed a commit to johnugeorge/training-operator that referenced this pull request Apr 28, 2024