
[Strategy] Support for Int8 schedules - CUDA/x86 #5031

Merged: 8 commits merged into apache:master on Mar 12, 2020

Conversation

@anijain2305 (Contributor) commented on Mar 10, 2020:

The recently introduced op strategy currently has some issues with task extraction in AutoTVM. This PR fixes them for x86/CUDA.

@kevinthesun @icemelon9
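
For context, here is a minimal sketch of the AutoTVM task-extraction flow the description refers to; the module, params, and target are placeholders and not taken from this PR:

```python
import tvm
from tvm import relay, autotvm

def extract_int8_conv_tasks(mod, params, target="cuda"):
    """Extract tunable conv2d tasks from a (quantized) Relay module.

    With the op-strategy mechanism, extraction goes through each op's
    registered strategy, which is where the x86/CUDA int8 fixes land.
    """
    tasks = autotvm.task.extract_from_program(
        mod["main"],
        target=target,
        params=params,
        ops=(relay.op.get("nn.conv2d"),),
    )
    return tasks
```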

@anijain2305 anijain2305 changed the title [CUDA] Op strategy changes for Int8 schedules. [Strategy] Support for Int8 schedules - CUDA/x86 Mar 11, 2020
@anijain2305 anijain2305 marked this pull request as ready for review March 11, 2020 16:41
@kevinthesun (Contributor) left a comment:

LGTM

@anijain2305 (Contributor, Author) commented:
@vinx13 Adding you as well, because I have padded the C dimension for GPU using Legalize so that the DP4A schedules can be used. Otherwise, we would have to put a check in the strategy.
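
A minimal sketch, assuming NCHW data and OIHW kernel layouts, of what padding the channel dimension during Legalize can look like so that the DP4A-based int8 schedules (which consume 4 input channels per instruction) stay applicable. This is illustrative rather than the exact pass added in this PR, and the helper names are hypothetical:

```python
from tvm import relay

def _round_up(value, multiple=4):
    """Round `value` up to the nearest multiple of `multiple`."""
    rem = value % multiple
    return value if rem == 0 else value + (multiple - rem)

def pad_conv2d_channels(data, kernel, in_channel, out_channel):
    """Zero-pad channels of an int8 NCHW/OIHW conv2d to multiples of 4.

    Zero padding leaves the convolution result on the original output
    channels unchanged, so the extra output channels can be stripped
    with a strided_slice afterwards.
    """
    new_in = _round_up(in_channel)
    new_out = _round_up(out_channel)
    if new_in != in_channel:
        diff = new_in - in_channel
        data = relay.nn.pad(data, pad_width=((0, 0), (0, diff), (0, 0), (0, 0)))
        kernel = relay.nn.pad(kernel, pad_width=((0, 0), (0, diff), (0, 0), (0, 0)))
    if new_out != out_channel:
        diff = new_out - out_channel
        kernel = relay.nn.pad(kernel, pad_width=((0, diff), (0, 0), (0, 0), (0, 0)))
    return data, kernel, new_out
```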

@icemelon (Member) left a comment:

I think this line https://github.com/apache/incubator-tvm/pull/5031/files#diff-bf1d7b23844ba1082c770babaa524806R178 should pass both the final output (outs[0].op) and the conv output to _schedule_conv2d_NCHWc_int8. Otherwise, len(s[output].op.axis) == 5 will always be true, right? Correct me if I'm wrong.
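
For illustration, a hedged sketch of the pattern being discussed: the schedule entry point receives the final op (outs[0].op) and walks back to the conv op, so passing both tensors lets the inner _schedule_conv2d_NCHWc_int8 (assumed to live in the same module) tell a 4D NCHW output apart from a 5D NCHWc one. Module paths follow the topi layout of that time and are not copied from the PR:

```python
from tvm import te
from topi.util import traverse_inline  # tvm.topi.utils in later releases

def schedule_conv2d_NCHWc_int8(outs):
    """Create a schedule rooted at the final output(s) of the graph."""
    outs = [outs] if isinstance(outs, te.tensor.Tensor) else list(outs)
    s = te.create_schedule([x.op for x in outs])

    def _callback(op):
        if "conv2d_NCHWc_int8" in op.tag:
            conv = op.output(0)
            # Pass both the conv output and the final output, so the inner
            # schedule (from conv2d_int8.py) can check len(s[output].op.axis)
            # on the real output (4 for NCHW, 5 for NCHWc) instead of always
            # seeing 5.
            _schedule_conv2d_NCHWc_int8(s, conv, outs[0].op.output(0))

    traverse_inline(s, outs[0].op, _callback)
    return s
```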

Review thread on topi/python/topi/cuda/conv2d_int8.py: resolved.
@icemelon (Member) commented:
Could you add a few tests for conv2d_nchw_int8 in topi/tests/python/test_topi_conv2d_int8.py?

Otherwise, LGTM.
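
For reference, a rough sketch of what such a topi int8 test could look like, checking the int8 NCHW conv2d against a numpy reference. The compute/schedule entry points, target setup, and API spellings are assumptions (they follow recent TVM releases), and the real test belongs in topi/tests/python/test_topi_conv2d_int8.py:

```python
import numpy as np
import tvm
import tvm.testing
import tvm.topi.testing
from tvm import te, topi

def verify_conv2d_nchw_int8(batch, in_ch, size, out_ch, kernel, stride, padding):
    A = te.placeholder((batch, in_ch, size, size), name="A", dtype="int8")
    W = te.placeholder((out_ch, in_ch, kernel, kernel), name="W", dtype="int8")

    a_np = np.random.randint(-64, 64, size=(batch, in_ch, size, size)).astype("int8")
    w_np = np.random.randint(-64, 64, size=(out_ch, in_ch, kernel, kernel)).astype("int8")
    # Reference result accumulated in int32 to avoid overflow.
    c_np = tvm.topi.testing.conv2d_nchw_python(
        a_np.astype("int32"), w_np.astype("int32"), stride, padding)

    target = "cuda"
    dev = tvm.device(target, 0)
    with tvm.target.Target(target):
        C = topi.cuda.conv2d_nchw_int8(A, W, stride, padding, dilation=1, out_dtype="int32")
        s = topi.cuda.schedule_conv2d_nchw_int8([C])

    func = tvm.build(s, [A, W, C], target)
    a = tvm.nd.array(a_np, dev)
    w = tvm.nd.array(w_np, dev)
    c = tvm.nd.array(np.zeros([int(d) for d in C.shape], dtype="int32"), dev)
    func(a, w, c)
    tvm.testing.assert_allclose(c.numpy(), c_np, rtol=0)
```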

@vinx13 (Member) commented on Mar 12, 2020:

I think padding channels would be helpful. It would be good if we had comparison results (channel padding + int8 template vs. the direct template).
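
A small sketch of how such a comparison could be measured with TVM's time_evaluator, assuming both variants have already been built for the same workload; the function handles and device below are placeholders:

```python
import tvm

def mean_runtime_ms(func, args, dev, number=100):
    """Return the mean runtime in milliseconds of a built TVM function."""
    timer = func.time_evaluator(func.entry_name, dev, number=number)
    return timer(*args).mean * 1e3

# Usage, with func_padded_int8 / func_direct built elsewhere for the same workload:
#   dev = tvm.cuda(0)  # tvm.gpu(0) on older releases
#   print("padded int8:", mean_runtime_ms(func_padded_int8, tvm_args, dev), "ms")
#   print("direct     :", mean_runtime_ms(func_direct, tvm_args, dev), "ms")
```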

@icemelon icemelon merged commit 681df4f into apache:master Mar 12, 2020
@icemelon (Member) commented:
Thanks @anijain2305 @kevinthesun @vinx13. This is now merged.

trevor-m pushed a commit to trevor-m/tvm that referenced this pull request Apr 16, 2020
* [CUDA] Op strategy changes for Int8 schedules.

* Applying Haichen's suggestions.

* Make 4D output work for task extraction.

* Make x86 work.

* Fix lint.

* Lint fixes.

* Tests, comments, out channel a multiple of 4.

* Topi test.

Co-authored-by: Ubuntu <[email protected]>
zhiics pushed a commit to neo-ai/tvm that referenced this pull request Apr 17, 2020