This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

[Discussion] MXNet 2.0 Roadmap (was: APIs that might be a good idea to break in 2.0) #9686

Closed
szha opened this issue Feb 2, 2018 · 77 comments


@szha
Member

szha commented Feb 2, 2018

Let's start a discussion here about the roadmap towards MXNet 2.0. We are looking for:

  • New features that are useful to your research and development.
  • Improvements and patches to existing features.
  • APIs that should be fixed.

If you have any item that you'd like to propose to have in the roadmap, please do:

  • Create (or locate an existing) issue for the item, and note the issue number.
  • Comment in this issue: 1) the above issue number, 2) one sentence of what the item is about and why it's useful to you.
  • Indicate whether you'd be willing to help out on the item.

Given that this would be a major release, we'd have the opportunity to make backward-incompatible changes. This would allow us to revisit topics that require large changes, such as dropping support for Python 2, transitioning fully to CMake, making the tensor library numpy-compatible, or even new programming models.


Now that we have decided to follow semantic versioning for releases, it would be a good idea to coordinate features and API changes to make the best use of the next major release. I therefore propose that we use this issue to track the APIs we'd like to change in the next major version.

The candidates I've collected so far:

  1. remove legacy ops such as batch-norm v1
  2. reorganize the namespace for utility functions such as download ("Exp backoff for downloads" #9671)
  3. the transform argument in the constructor of the existing vision dataset API.

Once there are more such requests, I will try to organize these API-breaking requests better.

@bhavinthaker
Contributor

Do we have sufficient automated testing to catch accidental lapses?

If not, can we have a volunteer to work on writing these automated test-cases? How do we track this task?

@larroy
Contributor

larroy commented Feb 15, 2018

Refactoring of the cpp-package and other C++ APIs. I would like that.

@fhieber
Contributor

fhieber commented Feb 15, 2018

  • consistent use of "axis"/"dim(s)" keyword arguments in all operators (see the sketch below), for example:
    • swapaxes uses dim1, dim2
    • expand_dims uses axis
  • expose optimizer in Module API
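
A minimal illustration of the naming inconsistency, using the existing NDArray API:

```python
import mxnet as mx

x = mx.nd.ones((2, 3, 4))
# swapaxes names its axis arguments dim1/dim2 ...
y = mx.nd.swapaxes(x, dim1=0, dim2=2)   # shape (4, 3, 2)
# ... while expand_dims (like most other operators) uses axis
z = mx.nd.expand_dims(x, axis=0)        # shape (1, 2, 3, 4)
```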

@anirudhacharya
Member

@sandeep-krishnamurthy Please tag this - API Change, Call for Contribution, Roadmap.

@szha
Member Author

szha commented Apr 9, 2018

#9881

@szha changed the title from "APIs that might be a good idea to remove in 2.0" to "APIs that might be a good idea to break in 2.0" on Apr 9, 2018
@eric-haibin-lin
Member

kvstore should not be a public API

@szha
Member Author

szha commented May 10, 2018

We should merge the elementwise ops with the broadcast ops and dispatch to the appropriate implementation based only on shape, so that symbol and ndarray +-*/ are consistent.
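
A minimal sketch of the current inconsistency, assuming the 1.x behavior in which NDArray arithmetic broadcasts while Symbol arithmetic lowers to the shape-strict elemwise ops:

```python
import mxnet as mx

a = mx.nd.ones((2, 3))
b = mx.nd.ones((1, 3))
print((a + b).shape)  # (2, 3): NDArray '+' broadcasts like numpy

sa = mx.sym.Variable('a')
sb = mx.sym.Variable('b')
sc = sa + sb  # Symbol '+' lowers to an elemwise op, so shapes must match
try:
    sc.infer_shape(a=(2, 3), b=(1, 3))
except mx.base.MXNetError:
    print('symbolic + does not broadcast')
```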

@szha
Member Author

szha commented May 17, 2018

contrib.ctc_loss should be promoted to a supported operator.

@szha
Member Author

szha commented May 29, 2018

#11031

@szha
Member Author

szha commented May 30, 2018

#10807

@RogerChern

fix_gamma=False for mxnet.symbol.BatchNorm

@szha
Member Author

szha commented Jun 4, 2018

#11141

@szha
Member Author

szha commented Jun 4, 2018

#11134

@szha
Member Author

szha commented Jul 30, 2018

Gluon RNN layer parameters are currently saved through the unfused cells, which produces names like "_unfused.0.l_cell.weight". This caused trouble in #11482 when I removed the unfused cells. The workaround is to override the _collect_params_with_prefix function to add the prefix. In 2.0, we should:

  • remove _collect_params_with_prefix from the Gluon RNN layer
  • write a converter for the parameter formats (see the sketch below)
  • start versioning parameter files.
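
A rough sketch of what such a converter might look like, assuming the fix amounts to stripping the legacy "_unfused.0.l_cell." segment from the saved parameter names (the exact mapping would need to follow the final naming scheme):

```python
import mxnet as mx

def convert_rnn_params(old_path, new_path, legacy='_unfused.0.l_cell.'):
    # mx.nd.load returns a dict of parameter name -> NDArray
    params = mx.nd.load(old_path)
    renamed = {name.replace(legacy, ''): arr for name, arr in params.items()}
    mx.nd.save(new_path, renamed)
```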

@szha
Member Author

szha commented Aug 3, 2018

#11953

@szha
Member Author

szha commented Aug 16, 2018

#12197: use integer types for indices instead of float.

@anirudhacharya
Member

anirudhacharya commented Oct 17, 2018

Taking a brief look at the data iterators, they appear to be split between the mx.io and mx.image modules, and there does not seem to be any method to the split (correct me if I am wrong). For instance:

  • ImageRecordIter and ImageRecordUInt8Iter (along with CSVIter, NDArrayIter, and LibSVMIter) are under mx.io,
  • whereas ImageIter and ImageDetIter are under the mx.image module.

Is there any specific reason for this design? It might be good to take another look and reorganize this, even if it breaks a few APIs.

There is similar functionality in the gluon interface too (I am not including that in this discussion).

@anirudhacharya
Member

> 3. the transform argument in the constructor of the existing vision dataset API.

What is the proposed change here? Is the plan to remove transform as an argument to the constructor?

@szha
Member Author

szha commented Oct 17, 2018

@anirudhacharya yes, because the dataset interface has a .transform method that serves the same purpose but is strictly more flexible.
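
For illustration, the two styles side by side; the constructor argument is the one proposed for removal:

```python
from mxnet.gluon.data.vision import MNIST

def aug(data, label):
    return data.astype('float32') / 255, label

# 1.x style: transform baked into the constructor (proposed for removal)
old_style = MNIST(train=True, transform=aug)

# preferred: the Dataset.transform method, which is lazy and chainable
new_style = MNIST(train=True).transform(aug)
```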

@zhreshold
Member

@anirudhacharya

I can see your concern, but the iterators included in mxnet.image are specifically designed for images and serve as complements to the purpose-agnostic iterators in mxnet.io.

The same story applies to the image transformation functions provided in mxnet.image: users can basically use them as an OpenCV alternative that operates on mx.nd.NDArray instead of numpy.array.

@anirudhacharya
Member

@zhreshold but ImageRecordIter and ImageRecordUInt8Iter, which are image specific, are defined under mx.io.

With regard to image transforms, I was thinking the symbolic interface should also have something similar to the interface available in the GluonCV transforms (https://gluon-cv.mxnet.io/api/data.transforms.html), which is very intuitive and uncluttered, because we have users who have gone to production using MXNet's symbolic interface. It would be better to discuss this in person.
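
For reference, this is roughly the Gluon transforms interface I mean:

```python
from mxnet.gluon.data.vision import transforms

transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406),
                         std=(0.229, 0.224, 0.225)),
])
```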

@ptrendx
Member

ptrendx commented Jun 25, 2019

I would like to remove the reliance on the topological ordering of inputs in communication between the frontend and backend: #15362. The cleanest solution is to change the C API to pass dictionaries instead of lists to the backend (and get dictionaries back).

@szha
Member Author

szha commented Jun 28, 2019

#9182

@apeforest
Contributor

Remove deprecated operators from the code base (or at least hide them from users).

Some operators, such as SoftmaxActivation, have been deprecated for several minor releases. Ideally, we should have a process to remove deprecated operators in the next major release, given that users have had sufficient time to migrate to the newer versions during the minor releases.
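
For example, assuming mx.nd.softmax remains the intended replacement:

```python
import mxnet as mx

x = mx.nd.random.uniform(shape=(2, 5))
y_old = mx.nd.SoftmaxActivation(x)  # deprecated; emits a warning
y_new = mx.nd.softmax(x)            # the supported replacement
```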

@larroy
Contributor

larroy commented Jul 15, 2019

I would like to drop Amalgamation and instead have a dynamic operator registry. Imagine for a moment that you could register your operators in a set of YAML files that do the same job as NNVM_REGISTER. Before build time, you could configure which operators to compile and produce a very lean library, similar to what Amalgamation does but cleaner and on a per-operator basis, with a codegen step that parses the operator registry. It could also skip compiling the training code if you just want inference. This would make "MXNet lite builds" possible. Would this be desirable?

@braindotai

braindotai commented Jul 16, 2019

I have some suggestions:

  • MXNet only has 4 prebuilt datasets; I think we should have more.
    Here are some important missing datasets:
    KMNIST, EMNIST, LSUN, STL10, SVHN, PhotoTour, SBU, Flickr8k, Flickr30k, Cityscapes, SBD, and FakeData (a fake data generator that generates fake images, useful for GANs).

  • Now that Gluon is getting the Estimator (supposedly to be released with MXNet 1.5), we could have many predefined estimators (very similar to scikit-learn, but with GPU support), for example:

```python
from mxnet.gluon import estimator  # hypothetical module
model = estimator.linearregression
model = estimator.logisticregression
model = estimator.ridgeregression
model = estimator.lassoregression
model = estimator.knearestneighbors
model = estimator.kmeansclustering
model = estimator.svm
# ... etc.
```

These classical ML algorithms work better than DL for some specific tasks, and many users want such ML algorithms with GPU support, so that would be quite awesome.

  • We need a good, up-to-date website design. For instance, at the bottom of the homepage, even the copyright statement (2017 - 2018) is out of date, which makes it sound like MXNet is no longer sponsored by the Apache Incubator (BTW, is this true?).
    • Also, you can't reach this useful link at all, because it is unnecessarily buried deep inside the website. (Fortunately I got it in an email from my friend, and I don't know how he got it!)
    • We should update the benchmarks available here.
    • We should make these FAQ tutorials reachable directly from the homepage, because they are really good tutorials for a beginner in MXNet (but hidden under docs).
    • If you click on any documentation link under some section, say "Gluon API", then afterwards the links to other docs (on the left side) outside of the "Gluon API" section don't work. This behaviour is common to all sections.
    • We should show more information about MXNet right on the homepage (instead of just links), for example its key features, ecosystem, community, and resources like this holy book of MXNet, GluonCV, GluonNLP, etc.

The reason I am so worried about the website is because it is important: the more we show users directly, the better their understanding can be. (For instance, me! When I opened the website for the first time, it was very difficult to find good tutorials and examples; instead, I had to rely on GitHub and ask in the forum.)

  • The Build Status badge on the GitHub homepage in "README.md" is not working.

  • The KVStore API should not be public: first, I haven't really seen it used in any tutorial or implementation; second, gluon.Trainer already handles the KVStore functionality internally (see the example below); third, at the top of the KVStore tutorial it is clearly written that

    > .. note:: Direct interactions with KVStore are dangerous and not recommended.

    so why are we telling users how to use it if it's so dangerous and not recommended?
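
For instance, gluon.Trainer already takes the kvstore as a plain string and manages it internally, so users never need to touch the KVStore API:

```python
from mxnet import gluon

net = gluon.nn.Dense(10)
net.initialize()
# Trainer creates and manages the underlying KVStore itself.
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.1}, kvstore='device')
```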

That's a lot to take, I know.
Sorry if I've written something wrong above.

@arcadiaphy
Member

I think we should provide a user-friendly, thread-safe inference API for deployment in C++, Java, etc. We can focus on the naive engine for inference, since it is very hard to refactor the threaded engine to be thread-safe. A good, easy-to-use executor should have the following properties:

  • One instance of the executor is enough for multi-threaded inference, i.e. it can be used simultaneously from different threads.
  • The immutable ndarrays in the executor should be shared across threads to reduce the memory footprint.

We now have MXPredCreateMultiThread in the C API, but it is buggy and we still need to create one executor per thread.

@cloudhan

cloudhan commented Jul 17, 2019

> I think we should provide a user-friendly, thread-safe inference API for deployment in C++, Java, etc. We can focus on the naive engine for inference, since it is very hard to refactor the threaded engine to be thread-safe. A good, easy-to-use executor should have the following properties:
>
>   • One instance of the executor is enough for multi-threaded inference, i.e. it can be used simultaneously from different threads.
>   • The immutable ndarrays in the executor should be shared across threads to reduce the memory footprint.
>
> We now have MXPredCreateMultiThread in the C API, but it is buggy and we still need to create one executor per thread.

Sounds like refactoring the execution engine with TBB and adding some buffering mechanism?

@marcoabreu
Contributor

Agreed, the entire API (at the C API level) should be designed to be fully thread-safe for all requests, whether inference or training. This includes parallel calls from different threads, i.e. no locking or sticky threads.

@marcoabreu
Contributor

Could we get rid of all the pre-processor statements in the codebase that accumulated for the different accelerators (USE_CUDA, USE_TVM, USE_MKLDNN, etc.) and fully replace them with the accelerator API from @samskalicky? This would greatly improve maintainability.

In terms of operator definitions, we could use ONNX as the standard (or derive from it if it's not sufficient). At the moment, there's a tight coupling between the operator definitions and the accelerator choice.

@szha
Member Author

szha commented Jul 19, 2019

> different pre-processor statements ... replace them with the accelerator API

No, we cannot. They don't serve the same purpose.

> we could use ONNX as the standard

I don't believe we should. ONNX is only good for model exchange and not much else. Also, the community has already reached consensus on moving towards numpy, so it's probably not a good idea to get married to ONNX.

@samskalicky
Contributor

@marcoabreu We can definitely remove some of these pre-processor statements with the accelerator API (e.g. MKLDNN), but not all of them, as @szha points out. USE_CUDA needs to stay, since GPU support is embedded pretty tightly. We might be able to push it out into an accelerator library, but not in time for 2.0.

I agree with @szha that ONNX is not the way to do this. We need to keep our operator registration in NNVM for now. What we could separate out, though, are the operator definitions (the NNVM registration) from the compute functions (infer shape/type, fcompute, etc.). But again, I think we should take this slowly: enable actual accelerators first, then see if it makes sense for TensorRT/MKLDNN, and then maybe GPU.

I would like to see the accelerator API (or a first pass at it) as part of 2.0, though. Is this feasible from your perspective, @szha?

@szha
Member Author

szha commented Jul 19, 2019

@samskalicky I'm not sure about the accelerator API. It seems that the existing subgraph API, in combination with 1) better third-party operator support and 2) exposing graph partitioning as an API, should be able to serve the same goal as the accelerator API. Those items are useful in other contexts too and deserve more attention, so I'd suggest them as an alternative to a specialized API just for accelerators.

@samskalicky
Contributor

samskalicky commented Jul 19, 2019

@szha I agree that third-party operator support could be more useful to the broader community; I have been continually working on it in my spare time and would be interested in collaborating with others to move it along faster. Should we consider it as part of the 1.6 release?

But after discussing with @zheng-da today, the subgraph API plus operator support does not serve the same goal as the accelerator API. Some additional external APIs (for example, external subgraph properties, support for compiled binaries for subgraphs, or binding accelerators to subgraphs) would be needed to serve the same goal.

@szha
Member Author

szha commented Jul 19, 2019

I think we're talking about the same thing (i.e., item 2 in my last response).

@samskalicky
Contributor

samskalicky commented Jul 19, 2019

Maybe I misunderstood item 2; I assumed it meant better APIs for partitioning. Could you clarify what you mean by item 2? Do you mean third-party graph partitioning (similar to third-party operator support)?

@Neutron3529
Contributor

Neutron3529 commented Jul 25, 2019

(1) #10840: add einsum, since it is useful and could simplify linear algebra ops; it is now supported by TensorFlow and PyTorch (see the numpy example below).
(2) Separate MXNet.dll into smaller files to reduce the import time.
nvcc generates a HUGE file with -arch=(...a lot of sm_*...);
it seems that splitting up the HUGE KNOWN_CUDA_ARCHS build may improve import performance.
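
For context, here is what einsum looks like in numpy, which an MXNet version would presumably mirror; several linear algebra ops collapse into one primitive:

```python
import numpy as np

a = np.random.rand(3, 4)
b = np.random.rand(4, 5)

np.einsum('ij,jk->ik', a, b)   # matrix multiply, same as a.dot(b)
np.einsum('ii->', np.eye(3))   # trace
np.einsum('ij->ji', a)         # transpose
```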

@Zha0q1
Contributor

Zha0q1 commented Jul 25, 2019

I would like to bring up one issue with the profiler: currently there is a flag, kSymbolic, that supposedly controls whether to profile operators called in symbolic mode. However, there is rarely a use case where users would want to set it to False; also, in the backend, this flag is not used at all, meaning that even if it is set to False, the profiler will still profile symbolic operators.

I have an issue about this: #15658.

I think maybe we should remove this flag in 2.0 to avoid confusion?
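
For reference, the flag as exposed on the Python side (assuming profile_symbolic is the frontend name for kSymbolic):

```python
import mxnet as mx

# Setting profile_symbolic=False currently has no effect in the backend,
# which is the confusing part.
mx.profiler.set_config(profile_symbolic=False, profile_imperative=True,
                       filename='profile.json')
```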

@anirudh2290
Member

The module.bind API has a param named data_shapes, which is misleading because the param is not limited to just shapes: the entries are data descriptors and accept DataDesc instances. I think this should be fixed in 2.0.
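
For illustration, data_shapes already accepts DataDesc instances rather than bare shapes, which is what makes the name misleading:

```python
import mxnet as mx

data = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data, num_hidden=10)
mod = mx.mod.Module(net, data_names=['data'], label_names=None)

# Despite the name, data_shapes takes data descriptors, not just shapes.
mod.bind(data_shapes=[mx.io.DataDesc('data', (32, 100))],
         for_training=False)
```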

@iblislin
Member

iblislin commented Sep 7, 2019

Julia-related issue

@ShownX

ShownX commented Sep 13, 2019

Expect more image operations: adjust_colors (not random), rotate, and more

@eric-haibin-lin
Member

Remove the deprecated infer_range argument of nd.arange; the option is actually not supported.

@access2rohit
Contributor

We need to fix #16216 as well; it's a breaking change.

@eric-haibin-lin
Member

  • mx.symbol.LogisticRegressionOutput
  • mx.symbol.LinearRegressionOutput
  • mx.symbol.MAERegressionOutput
  • mx.symbol.SVMOutput

@marcoabreu
Contributor

Are there any plans to move the training logic (dataset handling, distributed training, etc.) into the core, to avoid having all that logic in the frontend languages?

@szha szha closed this as completed Sep 13, 2020