Improve typing of distributed.comp_modules.utils.all_gather #1368
Comments
@vfdev-5 I took another look at the code, and your suggested typing is not feasible.
Looking at the definition in ignite/distributed/comp_models/base.py (lines 163 to 167 at 380dd3b), there is a reference to the function `_collective_op`, and there is no way to get a result of type `List[Number]` from it (ignite/distributed/comp_models/base.py, lines 137 to 155 at 380dd3b),
unless I change the return in line 152 to:
If that is OK with you, I would make the change, adjust all the typings, and create the PR.
@gruebel you are right.
Comparing with the all-reduce case, `_collective_op` does:
Probably, we can add the following case under `if tensor_to_number:`

```python
if tensor.numel() == 1:
    return cast(Number, tensor.item())
else:
    return cast(List[Number], tensor.tolist())
```

I think a return type annotation of `def _collective_op(...) -> Union[torch.Tensor, Number, List[Number], List[str]]:` would cover both the all-reduce and all-gather cases.
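To make the branching above concrete without requiring torch, here is a minimal standalone sketch of the same logic. The function name `tensor_to_number_sim` and the use of a plain list in place of a tensor are assumptions for illustration only; `len(values)` stands in for `tensor.numel()`, and the two branches mirror `tensor.item()` and `tensor.tolist()`:

```python
from typing import List, Union, cast

Number = Union[int, float]

def tensor_to_number_sim(values: List[Number]) -> Union[Number, List[Number]]:
    # Mirrors the proposed branch: a single-element tensor collapses to a
    # scalar Number (tensor.item()), anything larger stays a List[Number]
    # (tensor.tolist()).
    if len(values) == 1:
        return cast(Number, values[0])
    return cast(List[Number], values)
```

This is why the return annotation needs the `Union`: the static type cannot be narrowed further, since the choice between `Number` and `List[Number]` depends on the runtime element count.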
OK, that makes sense. I also saw the reference to
🚀 Feature
As a follow-up to issue #1344 and comment #1355 (comment), the typing of
`utils.all_gather`
and its different realizations (serial, native, hvd, xla) needs to be reworked.
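To illustrate why `all_gather` needs a widened return type across realizations, here is a hedged, torch-free simulation. The function `all_gather_sim` is hypothetical (not part of ignite); it only models the shape of the results the issue discusses: strings gather into `List[str]`, numbers gather into `List[Number]`, and a single-process (serial-like) run is assumed here to return the scalar unchanged:

```python
from typing import List, Union

Number = Union[int, float]

def all_gather_sim(
    value: Union[Number, str], world_size: int = 2
) -> Union[Number, List[Number], List[str]]:
    # Every simulated process contributes the same `value`.
    if isinstance(value, str):
        # String inputs gather into a list of strings.
        return [value] * world_size
    if world_size == 1:
        # Assumed serial-like degenerate case: the scalar passes through.
        return value
    # Numeric inputs gather into a list of numbers.
    return [value] * world_size
```

The `Union` return type here is the same shape as the proposed `_collective_op` annotation, which is why reworking the typing touches all four realizations at once.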