[DEV] TVM v0.6 Roadmap #2623

Closed
9 of 28 tasks
ZihengJiang opened this issue Feb 19, 2019 · 36 comments

@ZihengJiang
Contributor

ZihengJiang commented Feb 19, 2019

This is the roadmap for TVM v0.6. TVM is a community-driven project and we love your feedback and proposals on where we should be heading. Please open discussions in the discussion forum as well as bring up RFCs.

  • Feel free to volunteer if you are interested in trying out some items (they do not have to be on the list).
  • Please also check the help-wanted list in the GitHub issues for things that need help.

Features

  • Quantization
    • Support for configuring mixed precision
    • Per-channel scale
    • Graph packing
    • Smarter calibration algorithm
    • Model coverage
    • Support for importing quantized models from other frameworks
  • Relay
    • Algebraic Data Types
    • Runtime support for dynamic models
    • Support for the Any syntax (dynamic shapes; see the sketch after this list)
    • Pass Manager
    • Official text format support
  • Automated tuning and scheduling
    • Graph-level automated optimization
  • Ultra low-bit support
    • Tutorials for low-bit ops
    • Customized accelerator support
  • VTA enhancements
    • Support for generic high-level models
    • Enhanced operator/model coverage
    • Ultra-96, ZCU102 support
    • Amazon F1 preliminary support
    • Low-bit support, bit-serial support
    • Chisel version
  • Micro-asm kernel exploration
    • Core micro-asm primitives for certain ops
  • Hybrid Python programming model
    • Transition of vision operators to hybrid mode
  • RPC and Device API
    • Support a C++ version of the cross-platform RPC
  • Training
    • First-order automatic differentiation
    • Gradient operators
    • Higher-order automatic differentiation
  • Arithmetic
    • Formalize Integer Arithmetic Analysis
  • Tutorials
    • Tutorials for low-bit ops using Relay
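
As a concrete illustration of the "Any syntax" item, here is a minimal sketch (assuming the Relay Python API; names are illustrative, not a final design) of declaring a tensor whose leading dimension is only known at runtime:

```python
from tvm import relay

# Hypothetical example: the batch dimension is unknown at compile time.
x = relay.var("x", shape=(relay.Any(), 3), dtype="float32")
y = relay.nn.relu(x)
func = relay.Function([x], y)
print(func)  # the printed type should show a "?" for the Any dimension
```
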
@icemelon
Member

Does "runtime support for dynamic models" refer to the runtime for Relay? If not, we could also add the Relay runtime to the 0.6 roadmap.

@ZihengJiang
Contributor Author

@edmBernard Great to hear that! Added!

@ZihengJiang
Contributor Author

@icemelon9 Right, it refers to the runtime for Relay

@icemelon
Member

Cool. @jroesch @zhiics @wweic and I can work on runtime for Relay.

@FrozenGene
Member

I wish TVM v0.6 could finish this item: #2351, i.e. support for importing existing quantized TFLite models. This can be a starting point for importing pre-quantized models in general (i.e. not restricted to TFLite).
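
For context, a rough sketch (not the actual #2351 implementation; the file name and input name below are placeholders) of how a pre-quantized model could go through the Relay TFLite frontend:

```python
import tflite.Model
from tvm import relay

# Placeholder model path and input name; pre-quantized TFLite models
# typically use uint8 inputs.
buf = open("model_quantized.tflite", "rb").read()
tfl_model = tflite.Model.Model.GetRootAsModel(buf, 0)
mod, params = relay.frontend.from_tflite(
    tfl_model,
    shape_dict={"input": (1, 224, 224, 3)},
    dtype_dict={"input": "uint8"},
)
```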

@zhiics
Member

zhiics commented Feb 19, 2019

Pass manager for Relay: we should be able to finish the discussion soon, haha

@yzhliu
Member

yzhliu commented Feb 19, 2019

does "graph level automated optimization" mean #2184 or something else?

@ZihengJiang
Contributor Author

@yzhliu I think so. It is a legacy item from roadmap v0.5

@ZihengJiang pinned this issue Feb 19, 2019
@antinucleon
Contributor

Will work together with @icemelon9 on dynamic runtime.

@antinucleon
Contributor

I think we should also fully deprecate nnvm in 0.6. So far there is still some legacy code in topi that depends on nnvm.

@icemelon
Member

icemelon commented Feb 21, 2019

Could we also add Any dimension support in Relay to the roadmap? I think it's an important feature to have.

@joshpoll
Contributor

I think official text format support should be part of 0.6, i.e., the parser and printer should be able to support all Relay constructs.

@yzhliu
Member

yzhliu commented Mar 2, 2019

TVM Monthly - Feb 2019

Community

In Feb 2019, we successfully released TVM v0.5 (release notes) and made the roadmap for v0.6.

The community welcomes new reviewer Zhao Wu (@FrozenGene), committer Jared Roesch (@jroesch), and PMC member Lianmin Zheng (@merrymercy).

The TVM community has voted through the Apache Incubation proposal (#2543). Markus Weimer posted a proposal to [email protected] seeking ideas and suggestions. The official vote is ongoing on general@ right now.

Features and Improvements

Operator Support

  • A special operator annotation.stop_fusion to prevent an expression from being fused with previous expressions ([RELAY] Stop_fusion annotation #2624); a usage sketch follows this list.
  • batch_matmul supported (#2561).
  • reverse_reshape supported (#2503).
  • Faster-RCNN proposal operator for CUDA (#2420).
  • Vision operator for YOLO yolo_reorg (#1941).
  • slice operator for MXNet (#2662).
  • arange supported (#2621).
  • Vision operator roi_align (#2618).
  • where operator for MXNet (#2647).
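
A small usage sketch of the stop_fusion annotation above (variable names are illustrative; see #2624 for the actual change):

```python
from tvm import relay

x = relay.var("x", shape=(1, 16), dtype="float32")
y = relay.nn.relu(x)
# Fusion barrier: downstream expressions will not be fused with the
# relu computed above.
y = relay.annotation.stop_fusion(y)
z = relay.add(y, relay.const(1.0, "float32"))
func = relay.Function([x], z)
```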

User Interface and Frontend

  • Introduced HybridModule (#2477) so that a normal TVM schedule can be compiled to the hybrid target, run, and dumped to Hybrid Script.
  • Most frameworks are now supported in Relay, including ONNX, Keras, TensorFlow, Caffe2, CoreML, NNVMv1, and MXNet (#2246). Siju is working on DarkNet.
  • Relay now supports saving and loading parameter dictionaries (#2620); see the sketch after this list.
  • Rust frontend (#2292).
  • TensorFlow SavedModel is now supported for NNVM (#2493); Relay support is ongoing (#2586).
  • Added max_num_threads to Hybrid Script, which allows users to get the maximum number of threads for GPU targets (#2672).
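
A minimal sketch of the parameter save/load support from #2620 (the parameter name and values here are made up):

```python
import numpy as np
import tvm
from tvm import relay

params = {"fc_weight": tvm.nd.array(np.ones((4, 4), dtype="float32"))}
blob = relay.save_param_dict(params)             # serialize to a byte blob
loaded = relay.load_param_dict(bytearray(blob))  # deserialize it back
```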

Runtime and Hardware Support

  • RFC for bringing TVM to bare-metal devices (#2563).
  • Made it easier for external libraries to extend TVM's NDArray (#2613).

Performance Improvement

  • The AlterOpLayout pass is now enabled for x86 in Relay (#2585). It is essential for getting decent performance for CNN-based models on Intel CPUs; see the sketch below.
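
For reference, a hedged sketch of how the pass gets exercised when compiling for x86; the workload and target string are just examples, and AlterOpLayout is applied as part of the standard pipeline at opt_level 3:

```python
import numpy as np
import tvm
from tvm import relay

# A tiny conv2d workload, for illustration only.
data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")
out = relay.nn.conv2d(data, weight, kernel_size=(3, 3), channels=16, padding=(1, 1))
func = relay.Function([data, weight], out)
params = {"weight": tvm.nd.array(np.random.rand(16, 3, 3, 3).astype("float32"))}

with relay.build_config(opt_level=3):  # opt_level=3 enables AlterOpLayout
    graph, lib, new_params = relay.build(func, target="llvm -mcpu=core-avx2", params=params)
```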

Documents and Tutorials

  • Tutorials for deep learning framework support in Relay.
  • Tutorial for running AutoTVM with Relay (#2594).
  • Document for Algebraic Data Types (#2575).

High-level Optimizations

Tensor Expression

  • RFC for formalizing Integer Arithmetic Analysis (#2588). It aims at better context-dependent analysis, bound analysis, centralized arithmetic logic, and arithmetic simplification; a small illustration follows.
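
To illustrate the kind of rewriting this aims to centralize, a small example using one of the existing simplification entry points (assuming the pre-unification tvm.ir_pass API; the RFC proposes consolidating such logic):

```python
import tvm

n = tvm.var("n")
expr = (n * 4 + 8) - n * 4
# Expected to simplify down to the constant 8.
print(tvm.ir_pass.CanonicalSimplify(expr))
```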

Contribution and Commits

Thanks to Wei @wweic for providing the tools.

People Who Reviewed Pull Requests

  • tqchen Runtime, Relay, Tensor Expression, Document, Frontend, AutoTVM
  • were Runtime, Hybrid Script, TOPI
  • junrushao1994 Runtime, Relay, Hybrid Script
  • ZihengJiang Relay, Rust, Quantization, Tensor Expression
  • sgrechanik-h Tensor Expression
  • derisavi Tensor Expression
  • wweic Relay
  • nhynes Rust, Quantization
  • ehsanmok Rust
  • mjs-arm Pylint
  • MarisaKirisame Relay
  • srkreddy1238 Relay, Tensorflow Frontend, Golang
  • vinx13 Hybrid Script, Relay, TOPI, Quantization
  • kazum Relay
  • FrozenGene TFLite Frontend, AutoTVM, Quantization
  • jroesch Relay, Rust
  • zhiics Relay, Runtime, Tensorflow Frontend
  • imorinaga Relay (heterogeneous annotation)
  • merrymercy AutoTVM, Tensor Expression, Quantization
  • yzhliu AutoTVM, Relay, Tensor Expression, Frontend
  • eqy AutoTVM, Quantization, Relay
  • icemelon9 AutoTVM, Tensor Expression, Runtime, Relay, Frontend
  • reminisce Runtime
  • kevinthesun Tensor Expression, TOPI
  • Anthony-Mai Tensor Expression
  • slyubomirsky Relay
  • joshpoll Relay Document
  • eric-haibin-lin Bugfix
  • ZhennanQin Bugfix
  • grwlf Runtime
  • xqdan Tensor Expression, Hybrid Script
  • Laurawly Relay, Hybrid Script, TOPI
  • masahi Relay, TOPI, Document, CodeGen
  • siju-samuel Relay
  • PariksheetPinjari909 Operator
  • liangfu Quantization
  • lixiaoquan Quantization
  • ajtulloch Quantization
  • zhreshold Operator

People Who Committed

  • tqchen CI, Runtime, Tensor Expression, Relay Text Printer
  • ruslo Documents
  • mjs-arm Pylint, CI
  • vinx13 Relay, TOPI, Operator
  • lixiaoquan Relay
  • wweic Relay Document
  • kazum Tutorial and Document
  • derisavi Tensor Expression (IntSet)
  • antinucleon AutoTVM
  • slyubomirsky Relay (ADT, AutoDiff)
  • MarisaKirisame Relay
  • junrushao1994 Runtime, Build
  • were Hybrid Script
  • yidawang Relay (MAC Calculation)
  • ariwaranosai Operator
  • ZihengJiang Relay, Quantization
  • jroesch Relay
  • hlu1 Runtime
  • headupinclouds TOPI
  • zhiics Relay, Tensor Expression
  • weberlo Relay (param save/load)
  • icemelon9 Tensor Expression, Operator
  • yzhliu Tensor Expression
  • eqy Relay, Quantization, AutoTVM tutorial
  • geexie Bug-fix
  • abergeron Conda package
  • larroy NodeEntry implementation improvement
  • ptrendx Bug-fix
  • Anthony-Mai Namespace Fix
  • jdavies-huawei Tensor Expression
  • srkreddy1238 Tensorflow Frontend, Golang
  • yongwww Tensorflow Frontend, Relay Document
  • alexeyr Code Improvement
  • nhynes Rust
  • makihiro Caffe2 Frontend
  • kevinthesun AutoTVM
  • Huyuwei Tutorial
  • ehsanmok Rust
  • siju-samuel Operator
  • sgrechanik-h Tensor Expression
  • haojin2 Operator
  • SiNZeRo Document
  • denis0x0D CodeGen
  • Laurawly Bug-fix
  • apivovarov Typo-fix
  • take-cheeze Typo-fix

List of Commits

Refer to https://discuss.tvm.ai/t/tvm-monthly-feb-2019/1801

@jroesch
Member

jroesch commented Mar 8, 2019

We (@yongwww, @zhiics and I) will be providing support for loops and conditionals for the TensorFlow frontend soon.

@FrozenGene
Member

@jroesch Awesome. Are you using the AOT Relay runtime?

@ajtulloch
Contributor

@jroesch that's wonderful, very excited to see how that turns out :)

@zhiics
Member

zhiics commented Mar 11, 2019

@FrozenGene It uses the interpreter.
@ajtulloch We used something like "pattern matching" to identify the scope/execution frame of a control flow op and converted them into Relay recursive calls and If constructs. We will send out the PR soon.
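
Roughly, a converted loop has the following shape in Relay (a hand-written sketch, not the converter output; names and the constant bound are made up):

```python
from tvm import relay

i = relay.var("i", shape=(), dtype="int32")
loop = relay.var("while_loop")
# If the condition holds, recurse with i + 1; otherwise return i.
body = relay.If(
    relay.less(i, relay.const(10, "int32")),
    relay.Call(loop, [relay.add(i, relay.const(1, "int32"))]),
    i,
)
func = relay.Function([i], body)
# Bind the recursive function and start the loop at i = 0.
prog = relay.Let(loop, func, relay.Call(loop, [relay.const(0, "int32")]))
```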

@tqchen
Member

tqchen commented Mar 11, 2019

@zhiics @jroesch As per community tradition, it would be great to send out an RFC first to describe the proposal for such a major change before we do the PR.

@zhiics
Member

zhiics commented Mar 12, 2019

@tqchen No worries. We will.

@tqchen changed the title from "TVM v0.6 Roadmap" to "[DEV] TVM v0.6 Roadmap" Mar 20, 2019
@merrymercy
Member

TVM Monthly - March 2019

https://discuss.tvm.ai/t/tvm-monthly-march-2019/2083

@ZihengJiang
Contributor Author

TVM Monthly - April 2019

https://discuss.tvm.ai/t/tvm-monthly-april-2019/2426

@icemelon
Member

icemelon commented Jun 3, 2019

TVM Monthly - May 2019

https://discuss.tvm.ai/t/tvm-monthly-may-2019/2793

@yzhliu
Member

yzhliu commented Jul 3, 2019

TVM Monthly - June 2019

https://discuss.tvm.ai/t/tvm-monthly-june-2019

@kparzysz-quic
Contributor

What is the planned release date for 0.6?

@tqchen
Member

tqchen commented Jul 18, 2019

There have been quite a lot of improvements recently. While it is up to the community, I think we might be able to get something out around September.

@Lyken17
Contributor

Lyken17 commented Aug 8, 2019

  • graph level automated optimization

Does it mean traditional fusion/layout transforms, or more recent graph substitution like this paper? If the latter, I would like to port my ONNX implementation to TVM.

@MarisaKirisame
Contributor

Higher-order automatic differentiation was done about half a year ago. Please check that.

@merrymercy
Member

@Lyken17 #1585

@icemelon
Member

TVM Monthly - July 2019

https://discuss.tvm.ai/t/tvm-monthly-july-2019

@ZihengJiang
Contributor Author

@MarisaKirisame Checked. I listed it here just because it did not go into the last release cycle.

@snowolfhawk

Hybrid python programming model
transition of vision operators to hybrid mode.

What's the plan for this feature?

@zhiics
Member

zhiics commented Sep 2, 2019

TVM Monthly - August 2019

https://discuss.tvm.ai/t/tvm-monthly-august-2019

@icemelon
Member

icemelon commented Oct 3, 2019

TVM Monthly - September 2019

https://discuss.tvm.ai/t/tvm-monthly-september-2019

@xqdan
Contributor

xqdan commented Oct 28, 2019

When will we have the 0.6 release? Thanks.

@yzhliu
Member

yzhliu commented Nov 2, 2019

TVM Monthly - October 2019

https://discuss.tvm.ai/t/tvm-monthly-oct-2019

@tqchen unpinned this issue Nov 23, 2019
@tqchen
Member

tqchen commented Nov 27, 2019

Move to #4259

@tqchen closed this as completed Nov 27, 2019