
Fix deprecated call #67

Open
wants to merge 1 commit into
base: master

Conversation

ClementPinard
Contributor

I noticed that the tutorial was updated but not the codebase, so here it goes.
It is now on par with the tutorial: http://pytorch.org/tutorials/advanced/cpp_extension.html

change AT_ASSERTM to TORCH_CHECK
change .type() to .scalar_type()
change PackedAccessor to PackedAccessor32 (a minimal sketch of these renames is below)
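For reference, here is a hedged sketch of what the three renames look like in a typical extension, following the layout of the C++ extension tutorial. The function name `example_forward`, the 2-D accessor and the dispatch name are placeholders for illustration, not this repo's actual code:

```cpp
// Hypothetical check/dispatch boilerplate for a C++/CUDA extension.
#include <torch/extension.h>

// Old: AT_ASSERTM(cond, "message")  ->  New: TORCH_CHECK(cond, "message")
#define CHECK_CUDA(x) TORCH_CHECK(x.is_cuda(), #x " must be a CUDA tensor")
#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)

torch::Tensor example_forward(torch::Tensor input) {
  CHECK_INPUT(input);
  // Old: AT_DISPATCH_FLOATING_TYPES(input.type(), ...)
  // New: dispatch on .scalar_type() instead of the deprecated .type().
  AT_DISPATCH_FLOATING_TYPES(input.scalar_type(), "example_forward", ([&] {
    // Old: input.packed_accessor<scalar_t, 2, torch::RestrictPtrTraits, size_t>()
    // New: packed_accessor32 / PackedTensorAccessor32 (32-bit indexing).
    auto acc = input.packed_accessor32<scalar_t, 2, torch::RestrictPtrTraits>();
    // ... launch the CUDA kernel with `acc` here ...
  }));
  return input;
}
```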

This hopefully fixes #65 and #66 (although 1.6 normally only returns a deprecation warning).

Second significant change:
change fminf and fmaxf to their fmin and fmax counterparts, and make sure the right template is used by casting the 0.0 to scalar_t. This is probably not needed anymore, as it was probably a bug with nvcc and gcc 7, but it might help people with old configs get grad_check working.
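To show the intent, here is an illustrative templated kernel fragment (the kernel name and 1-D shape are made up, not the repo's actual kernel): fminf/fmaxf are float-only, so under a scalar_t template a double tensor would be silently truncated to float; fmin/fmax have both float and double device overloads, and casting the literal to scalar_t selects the right one.

```cuda
#include <torch/extension.h>

// Illustrative clamp-at-zero kernel, not this repo's actual code.
template <typename scalar_t>
__global__ void clamp_min_kernel(
    const torch::PackedTensorAccessor32<scalar_t, 1, torch::RestrictPtrTraits> in,
    torch::PackedTensorAccessor32<scalar_t, 1, torch::RestrictPtrTraits> out) {
  const int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < in.size(0)) {
    // Old: out[i] = fmaxf(in[i], 0.0f);  // float-only, wrong for double
    // New: cast the literal so the scalar_t overload of fmax is picked.
    out[i] = fmax(in[i], static_cast<scalar_t>(0.0));
  }
}
```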

This fixes #27 and #42

It is now on par with the tutorial http://pytorch.org/tutorials/advanced/cpp_extension.html
change AT_ASSERTM to TORCH_CHECK
change .type() to .scalar_type()
change PackedAccessor to PackedAccessor32
change fminf and fmaxf to their fmin and fmax counterparts; make sure the right template is used by casting the 0.0 to scalar_t

Successfully merging this pull request may close these issues.

Compiler error /cuda/setup.py
This repo can not compile using Pytorch 1.6.0