
If the input is contiguous, short-circuit infer_size_dv in reshape #95216

Closed
wants to merge 2 commits

Conversation

@ezyang (Contributor) commented Feb 21, 2023

Stack from ghstack (oldest at bottom):

The main improvement is that this avoids guards from infer_size_dv, though it also counts as a minor perf improvement.

Signed-off-by: Edward Z. Yang [email protected]
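For context on where the guards come from, here is a minimal, self-contained sketch of the size-inference step that a function like infer_size_dv performs in reshape: resolve at most one -1 dimension and validate the total element count. The function name and error messages below are illustrative, not the ATen source; the point is that under symbolic shapes each comparison and the divisibility check become guards, which a contiguous fast path can skip entirely.

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Illustrative re-implementation of reshape's size-inference step:
// resolve a single -1 ("infer me") dimension so that the product of
// the output sizes equals numel. Each check here turns into a guard
// when the sizes involved are symbolic.
std::vector<int64_t> infer_size_sketch(std::vector<int64_t> shape,
                                       int64_t numel) {
  int64_t known = 1;   // product of the explicitly given sizes
  int infer_dim = -1;  // index of the -1 dimension, if any
  for (size_t i = 0; i < shape.size(); ++i) {
    if (shape[i] == -1) {
      if (infer_dim != -1)
        throw std::invalid_argument("only one dimension can be inferred");
      infer_dim = static_cast<int>(i);
    } else {
      known *= shape[i];  // guard: multiplication over symbolic sizes
    }
  }
  if (infer_dim != -1) {
    if (known == 0 || numel % known != 0)  // guard: divisibility check
      throw std::invalid_argument("shape is invalid for input size");
    shape[infer_dim] = numel / known;
  } else if (known != numel) {  // guard: equality check
    throw std::invalid_argument("shape is invalid for input size");
  }
  return shape;
}
```

When the input is contiguous and the requested shape contains no -1, none of these checks are needed to produce a valid view, which is what the short circuit in this PR exploits.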

@pytorch-bot bot commented Feb 21, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/95216

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 0f5279e:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

ezyang added a commit that referenced this pull request Feb 21, 2023

ghstack-source-id: dd135df7572e6d3c9523571f7ce6ccc711a0aa88
Pull Request resolved: #95216
@ezyang added the release notes: composability, topic: not user facing, and ciflow/trunk (Trigger trunk jobs on your pull request) labels on Feb 21, 2023
@albanD (Collaborator) left a comment

I guess that's ok as long as you're not doing anything funky with dispatch_sizes_strides_policy.

You do need to add the same `if (!self.is_xla() && !self.is_lazy() && !self.is_ipu() && !at::isTensorSubclassLike(self)) {` as below, though.

@ezyang (Contributor, Author) commented Feb 21, 2023

I think it is sound to omit the checks. Let us enumerate the cases:

  • What about mkldnn? MKLDNN tensors are never contiguous, so they will never hit this case.
  • What about the xla/lazy/ipu/tensor-subclass short circuit? The whole point of that condition is to ensure we hit a proper view. When we short-circuit on is_contiguous we always go to view, so the same behavior applies in that case.
  • What about the non-short-circuit case? It is always safe to take the slower route; in fact it is required, because we did not actually compute the strides.
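The case analysis above can be condensed into a toy model of the dispatch. This is a sketch under assumed names, not the ATen source: only a contiguous input takes the fast path straight to view, and anything non-contiguous (mkldnn included) falls through to the existing slow path where strides are actually computed.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy model of the control flow discussed above (names illustrative,
// not the ATen source). A contiguous tensor can always be viewed with
// default contiguous strides for the new shape, so reshape can skip
// infer_size_dv entirely for such inputs.
struct ToyTensor {
  std::vector<int64_t> sizes;
  std::vector<int64_t> strides;

  // Row-major contiguity check: each stride must equal the running
  // product of the trailing sizes (size-1 dims may have any stride).
  bool is_contiguous() const {
    int64_t expected = 1;
    for (int i = static_cast<int>(sizes.size()) - 1; i >= 0; --i) {
      if (sizes[i] != 1 && strides[i] != expected) return false;
      expected *= sizes[i];
    }
    return true;
  }
};

enum class ReshapePath { FastView, SlowInferSize };

// The short circuit this PR adds: contiguous inputs go straight to
// the view path; everything else takes the slower, always-safe route.
ReshapePath reshape_path(const ToyTensor& t) {
  return t.is_contiguous() ? ReshapePath::FastView
                           : ReshapePath::SlowInferSize;
}
```

A transposed (and hence non-contiguous) tensor exercises the slow path in this model, matching the third bullet above.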

@ezyang (Contributor, Author) commented Feb 21, 2023

> What about mkldnn? MKLDNN tensors are never contiguous, so they will never hit this case.

lol but CI HAS PROVED ME WRONGGGG

ezyang added a commit that referenced this pull request Feb 21, 2023

ghstack-source-id: 507b0f1df3e80b65cbf17a00fae7f57262e5cbd7
Pull Request resolved: #95216
@ezyang (Contributor, Author) commented Feb 21, 2023

@pytorchbot merge

@pytorchmergebot (Collaborator) commented
Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


@albanD (Collaborator) left a comment

Sounds good!

cyyever pushed a commit to cyyever/pytorch_private that referenced this pull request Mar 5, 2023
Pull Request resolved: pytorch/pytorch#95216
Approved by: https://github.com/albanD
pruthvistony added a commit to ROCm/pytorch that referenced this pull request May 2, 2023
@facebook-github-bot facebook-github-bot deleted the gh/ezyang/1835/head branch June 8, 2023 16:51
jhavukainen pushed a commit to kulinseth/pytorch that referenced this pull request Mar 15, 2024
Pull Request resolved: pytorch#95216
Approved by: https://github.com/albanD