[Train] Strip "module." from state dict #30705
Conversation
Signed-off-by: Antoni Baum <[email protected]>
@@ -111,6 +119,8 @@ def train_func():
    assert predictions.count() == 3

    # We can't really test for prepare_model here as we can't detect what the user
    # has saved without loading (and thus triggering the exception anyway)
For my understanding, can you elaborate on why `prepare_model` causes this test to fail?
`prepare_model` will wrap the model in DDP. If the user doesn't manually unwrap it before saving, an exception will be thrown when the checkpoint is loaded.
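A minimal illustration of the wrapping behavior (not Ray code; the DDP part is sketched in comments because it needs an initialized process group):

```python
import torch.nn as nn

model = nn.Linear(2, 2)
print(list(model.state_dict()))  # ['weight', 'bias']

# After DDP wrapping (requires an initialized process group, so shown in
# comments only):
#   ddp = nn.parallel.DistributedDataParallel(model)
#   list(ddp.state_dict())  ->  ['module.weight', 'module.bias']
#   torch.save(ddp, path)   ->  loading this outside the process group
#                               raises, hence the unwrap via ddp.module
```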
Sorry, I guess I mean more: why is it not going through the `_encode_dict` path?
If a checkpoint is created from a directory, we aren't really able to detect what's actually in the files without deserializing them in the first place (which would not only add overhead but also cause the error anyway), and we can't apply `_encode_dict` to already-serialized data.
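To make that asymmetry concrete, here is a hedged sketch (illustrative only; the checkpoint layout and the role of `_encode_dict` are assumptions, not Ray internals):

```python
import io

import torch
import torch.nn as nn

model = nn.Linear(2, 2)

# Dict-style checkpoint: the state dict is a live object, so key rewrites
# (like stripping a "module." prefix) can run before serialization.
live_state = {k.removeprefix("module."): v for k, v in model.state_dict().items()}

# Directory-style checkpoint: only opaque serialized bytes exist. Inspecting
# them requires torch.load, i.e. exactly the deserialization (and potential
# DDP error) we want to avoid.
buffer = io.BytesIO()  # stands in for a file inside the checkpoint directory
torch.save(model.state_dict(), buffer)
buffer.seek(0)
recovered = torch.load(buffer)  # no way to inspect without deserializing
```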
Then, for this dir checkpoint, why does it get deserialized in the first place?
Well, we don't have a native way of supporting torch models from files (as mentioned by the TODO in this test). Therefore, the test implements its own predictor. Using dir checkpoints with torch is not what we want users to do right now, but the purpose of this test is to make sure that it works regardless.
We can add `prepare_model` here, but we'd have to unwrap the model before saving anyway, meaning we wouldn't really test anything extra here.
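For concreteness, a rough sketch (illustrative names only, not the actual test code) of what adding it would look like:

```python
import torch
import torch.nn as nn
from ray.train.torch import prepare_model


def train_func():
    model = nn.Linear(1, 1)
    model = prepare_model(model)  # may wrap the model in DDP
    if isinstance(model, nn.parallel.DistributedDataParallel):
        model = model.module  # must be unwrapped again before saving
    # The saved file ends up the same as in the un-wrapped case, so the
    # checkpoint-loading path under test gains no extra coverage.
    torch.save(model.state_dict(), "model.pt")
```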
The regression was introduced by #30705. Also added some documentation to TorchTrainer so users know there is quite some magic happening :) Tested manually in workspace. Follow-up PR to add more strict assertions to the test. Signed-off-by: xwjiang2010 <[email protected]>
Why are these changes needed?

This PR adds logic to automatically strip the `"module."` prefix from a user-saved state dict in `TorchCheckpoint`, which is present if a user obtains the state dict from a `DistributedDataParallel` module directly. We already obtain the underlying module if a user saves the model object, so this merely makes the logic consistent.

This PR also edits our examples to remove instances where this operation was conducted in the example itself. This led to issues if `train.torch.prepare_model` was used with `num_workers=1` (e.g. on Google Colab), as the module was not wrapped, thus leading to the `.module` attribute being missing.
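As a rough illustration of the stripping logic (a minimal sketch; the helper name and its exact placement inside `TorchCheckpoint` are assumptions, not the actual implementation):

```python
from collections import OrderedDict


def strip_module_prefix(state_dict):
    # Keys saved from a DistributedDataParallel-wrapped model look like
    # "module.layer.weight"; strip the leading "module." so the state dict
    # loads into the bare nn.Module.
    prefix = "module."
    return OrderedDict(
        (k[len(prefix):] if k.startswith(prefix) else k, v)
        for k, v in state_dict.items()
    )
```

For example, a key like `module.fc.weight` becomes `fc.weight`, which is what `load_state_dict` on the unwrapped module expects.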
Related issue number

Checks

- I've signed off every commit (`git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.