
[AutoScheduler] Enable schedule sharing in dispatch context #7344

Merged: comaniac merged 2 commits into apache:main from the ansor_sche_share branch on Jan 27, 2021

Conversation

comaniac (Contributor)

This is the follow-up PR for #7317 to enable schedule sharing in the auto_scheduler dispatch context.

cc @merrymercy @jcf94
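Below is a minimal usage sketch of what the shared dispatch context could look like from the user side. It is for illustration only: the log file path is made up, and the `include_compatible` flag is assumed to be the knob that turns on schedule sharing (check `python/tvm/auto_scheduler/dispatcher.py` for the actual interface).

```python
# A minimal sketch, assuming schedule sharing is exposed through an
# `include_compatible` argument on ApplyHistoryBest (assumption for
# illustration; the log path below is hypothetical).
import tvm
from tvm import relay, auto_scheduler
from tvm.relay import testing

# Any Relay module works; resnet-18 is just a convenient test workload.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

log_file = "resnet18_tuning.json"  # hypothetical path to existing tuning records

with auto_scheduler.ApplyHistoryBest(log_file, include_compatible=True):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target="llvm", params=params)
```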

masahi (Member) commented Jan 27, 2021

@comaniac Does this mean tuning support for dynamic workloads (dynamic batch size, etc.) is coming soon? I'm very excited about this; it would tremendously help my MaskRCNN!

comaniac (Contributor, Author)

> @comaniac Does this mean tuning support for dynamic workloads (dynamic batch size, etc.) is coming soon? I'm very excited about this; it would tremendously help my MaskRCNN!

Ah, this is not the perfect solution for dynamic shapes. It is more of a solution to make tuned logs more useful. For example, you can apply a tuning log collected with batch size 1 to all batch sizes. You can even tune several prime-number batch sizes to achieve better performance for their multiples. Meanwhile, we are working on dynamic shape support in auto_scheduler, but it may not be ready to be upstreamed before this summer or fall.
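A rough sketch of that reuse workflow, under the assumption that compatible-record sharing is exposed through an `include_compatible`-style flag; the log file name and trial count are illustrative only:

```python
# Sketch: tune once at batch size 1, then reuse the same log when compiling
# larger batch sizes. Workloads without an exact record are assumed to fall
# back to a compatible (batch-1) record.
import tvm
from tvm import relay, auto_scheduler
from tvm.relay import testing

target = tvm.target.Target("llvm")
log_file = "resnet18_batch1.json"  # hypothetical log, tuned at batch_size=1

# 1) Tune once with batch size 1.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
tuner.tune(auto_scheduler.TuningOptions(
    num_measure_trials=200,  # illustrative trial budget
    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
))

# 2) Reuse the batch-1 log when compiling larger batch sizes.
for batch_size in (4, 8):
    mod, params = testing.resnet.get_workload(num_layers=18, batch_size=batch_size)
    with auto_scheduler.ApplyHistoryBest(log_file, include_compatible=True):
        with tvm.transform.PassContext(
            opt_level=3, config={"relay.backend.use_auto_scheduler": True}
        ):
            lib = relay.build(mod, target=target, params=params)
```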

jcf94 (Contributor) commented Jan 27, 2021

> @comaniac Does this mean tuning support for dynamic workloads (dynamic batch size, etc.) is coming soon? I'm very excited about this; it would tremendously help my MaskRCNN!

> Ah, this is not the perfect solution for dynamic shapes. It is more of a solution to make tuned logs more useful. For example, you can apply a tuning log collected with batch size 1 to all batch sizes. You can even tune several prime-number batch sizes to achieve better performance for their multiples. Meanwhile, we are working on dynamic shape support in auto_scheduler, but it may not be ready to be upstreamed before this summer or fall.

😃 Looking forward to the dynamic shape support, too! It will be really useful.

masahi (Member) commented Jan 27, 2021

Yes, I believe dynamic tuning and codegen are among the biggest challenges for TVM this year. I'm glad at least there are folks looking at the problem.

MaskRCNN should serve as a good benchmark: it has both dynamic dense (very large) and dynamic conv2d + conv2d_transpose. All of them are currently bottlenecks; without tuning them I cannot beat PyTorch.

comaniac merged commit fd39122 into apache:main on Jan 27, 2021
comaniac (Contributor, Author)

Thanks @merrymercy

comaniac deleted the ansor_sche_share branch on January 27, 2021 at 22:55
alexwong pushed a commit to alexwong/tvm that referenced this pull request Feb 11, 2021
[AutoScheduler] Enable schedule sharing in dispatch context (#7344)

* [AutoScheduler] Enable schedule sharing in dispatch context

* Update python/tvm/auto_scheduler/dispatcher.py
electriclilies pushed a commit to electriclilies/tvm that referenced this pull request Feb 18, 2021
[AutoScheduler] Enable schedule sharing in dispatch context (#7344)

* [AutoScheduler] Enable schedule sharing in dispatch context

* Update python/tvm/auto_scheduler/dispatcher.py
Lokiiiiii pushed a commit to Lokiiiiii/tvm that referenced this pull request Mar 2, 2021
[AutoScheduler] Enable schedule sharing in dispatch context (#7344)

* [AutoScheduler] Enable schedule sharing in dispatch context

* Update python/tvm/auto_scheduler/dispatcher.py
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request Mar 2, 2021
[AutoScheduler] Enable schedule sharing in dispatch context (#7344)

* [AutoScheduler] Enable schedule sharing in dispatch context

* Update python/tvm/auto_scheduler/dispatcher.py