[AutoScheduler] Enable schedule sharing in dispatch context #7344
Conversation
@comaniac Does this mean tuning support for dynamic workloads (dynamic batch size, etc.) is coming soon? I'm very excited for this; it would tremendously help my MaskRCNN!!
Ah, this is not a perfect solution for dynamic shapes. It is more of a way to make tuned logs more useful. For example, you can apply a tuning log from batch size 1 to all batch sizes. You can even tune several prime-number batch sizes to achieve better performance for their multiples. Meanwhile, we are working on dynamic shape support in auto_scheduler, but it may not be ready to be upstreamed before this summer or fall.
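A minimal sketch of how that reuse could look, assuming the `include_compatible` option on `auto_scheduler.ApplyHistoryBest` is the knob this PR exposes; the log file name, model, and batch sizes below are hypothetical:

```python
# Sketch: reuse an auto_scheduler log tuned at batch size 1 when compiling
# the same network at batch size 8. The log file name and model choice are
# hypothetical; include_compatible is assumed to be the flag this PR adds.
import tvm
from tvm import relay, auto_scheduler
from tvm.relay import testing

# Hypothetical log file produced by tuning the batch-1 workloads.
log_file = "resnet18_batch1_tuned.json"

# Build the same network at a different batch size than the one that was tuned.
mod, params = testing.resnet.get_workload(batch_size=8, num_layers=18)

# With include_compatible=True, records for compatible workloads (e.g. the
# batch-1 schedules) are applied when no exact match exists in the log.
with auto_scheduler.ApplyHistoryBest(log_file, include_compatible=True):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target="llvm", params=params)
```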
😃 Looking forward to the dynamic shape support, too! It will be very useful.
Yes, I believe dynamic tuning and codegen are among the biggest challenges for TVM this year. I'm glad at least there are folks looking at the problem. MaskRCNN should serve as a good benchmark: it has both dynamic dense (very large) and dynamic conv2d + conv2d transpose. All of them are current bottlenecks; without tuning them I cannot beat PyTorch.
Thanks @merrymercy |
This is the follow-up PR to #7317 to enable schedule sharing in the auto_scheduler dispatch context.
cc @merrymercy @jcf94