[Autodiff] Optimize and eliminate the Jacobian tensor for te.autodiff #6078
Conversation
I have read the paper and roughly understand how it works. I will continue reviewing shortly.
// For Div we simply use the condition of the numerator.
if (nz_a.value.same_as(op->a)) {
refactor this into a function.
Do you mean providing a function bool value_equals(const PrimExpr&) in NonzeroConditionResult?
return ReuseNZ(nz_a, op);
If this is the only place that uses the function, do we really need to create a new one? Correct me if I misunderstood.
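For illustration only, a minimal sketch of what such a helper might look like, following the names used in this thread (ReuseNZ, NonzeroConditionResult); the field names, return type, and the Div reconstruction are assumptions, not the actual PR code:

```cpp
// Hypothetical helper, as suggested above: reuse the operand's nonzero
// condition when the rewritten operand is unchanged, otherwise rebuild
// the division with the rewritten numerator.
// NOTE: the fields (cond, value) and this exact construction are assumptions.
NonzeroConditionResult ReuseNZ(const NonzeroConditionResult& nz_a, const DivNode* op) {
  if (nz_a.value.same_as(op->a)) {
    // Numerator untouched: keep the original Div node and carry the condition over.
    return {nz_a.cond, GetRef<PrimExpr>(op)};
  }
  // Numerator rewritten: rebuild the Div with the new numerator.
  return {nz_a.cond, Div(nz_a.value, op->b)};
}
```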
return NonzeroConditionFunctor().NonzeroCondition(expr);
}

struct FactorOutAtomicFormulasResult {
perhaps you should store them in conjunctive normal form.
do you mean to change the struct & member name to mention CNF?
Your approach is a pure conjunction. I was suggesting that a disjunction of conjunctions might allow more optimization. It is fine if you don't do it.
I see. I'll keep it in mind and maybe change it later.
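For context, the two representations being discussed, with purely illustrative formulas: the current structure keeps a single conjunction of atomic formulas, while a disjunction of conjunctions can also express case splits that a single conjunction cannot.

```latex
% Pure conjunction of atoms (the shape the current code factors out):
(0 \le i) \land (i < n) \land (j = k)

% Disjunction of conjunctions (the suggested, more general shape):
\big( (i = 0) \land (j = k) \big) \lor \big( (i = n - 1) \land (j = 0) \big)
```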
@MarisaKirisame @sergei-grechanik @tqchen I addressed most of the comments, please take a look again.
Thank you @yzhliu, looks good to me. My only concern is that without performance testing the quality of the generated code may deteriorate with subsequent changes of the simplifiers, but that problem probably shouldn't be addressed in this PR.
@sergei-grechanik I agree. Perhaps we need a performance integration test run periodically, as mentioned in https://discuss.tvm.ai/t/efforts-on-benchmarking-for-tvm/. @tqchen could you also take a look?
include/tvm/node/container.h
* \param other Map to be merged.
* \return The merged Map. Original Map is kept unchanged.
*/
Map<K, V> Merge(const Map<K, V>& other) const {
Shall we make it a global function instead of a member, so it is not ambiguous (i.e., it is clear that the result is a new map)?
include/tvm/runtime/container.h
* \param other Array to be concatenated.
* \return The concatenated Array. Original Array is kept unchanged.
*/
Array<T> Concat(const Array<T>& other) const {
Consider making it a global function, which also enables copy-on-write on the lhs (this).
@tqchen I made them static now.
include/tvm/runtime/container.h
* \param lhs first Array to be concatenated.
* \return The concatenated Array. Original Arrays are kept unchanged.
*/
static Array<T> Concat(Array<T> lhs, const Array<T>& rhs) {
We can make it a global template function, so you can do Concat(lhs, rhs) without having to refer to the original type. Note that the type signature should constrain the function to the specific type, so we won't risk over-generalization.
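A minimal sketch of that direction, assuming a free inline template over Array<T>, with an SFINAE constraint limiting it to ObjectRef-derived element types and lhs taken by value so copy-on-write can reuse its storage; names and details here are illustrative, not necessarily the code that was finally merged:

```cpp
// Sketch: a free function so callers can write Concat(lhs, rhs) without
// spelling out Array<T>. The enable_if constraint keeps the template from
// matching unrelated types (avoiding over-generalization).
template <typename T,
          typename = typename std::enable_if<std::is_base_of<ObjectRef, T>::value>::type>
inline Array<T> Concat(Array<T> lhs, const Array<T>& rhs) {
  // lhs is a by-value copy of the caller's array; push_back can extend it
  // in place via copy-on-write when the underlying storage is uniquely owned.
  for (const auto& entry : rhs) {
    lhs.push_back(entry);
  }
  return lhs;
}
```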
include/tvm/node/container.h
template <typename K, typename V,
          typename = typename std::enable_if<std::is_base_of<ObjectRef, K>::value>::type,
          typename = typename std::enable_if<std::is_base_of<ObjectRef, V>::value>::type>
static Map<K, V> Merge(Map<K, V> lhs, const Map<K, V>& rhs) {
static -> inline, since it is in the header.
@tqchen my bad, I forgot to change it when copy-pasting. Modified.
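For reference, a hedged sketch of the inline free-function form discussed in this thread; the loop body uses Map::Set and is illustrative of the intent (merge rhs into the caller's copy of lhs), not necessarily the exact merged code:

```cpp
// Sketch: inline rather than static, as requested, since the definition
// lives in a header and should not get a separate internal copy per
// translation unit.
template <typename K, typename V,
          typename = typename std::enable_if<std::is_base_of<ObjectRef, K>::value>::type,
          typename = typename std::enable_if<std::is_base_of<ObjectRef, V>::value>::type>
inline Map<K, V> Merge(Map<K, V> lhs, const Map<K, V>& rhs) {
  // lhs is taken by value, so inserting the rhs entries mutates only the
  // local copy; the caller's original maps stay unchanged.
  for (const auto& kv : rhs) {
    lhs.Set(kv.first, kv.second);
  }
  return lhs;
}
```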
@tqchen it's ready.
@tqchen kindly ping.
Thanks @yzhliu @sergei-grechanik @MarisaKirisame !
…apache#6078) * [Autodiff] Optimize and eliminate the Jacobian tensor for te.autodiff Co-authored-by: Sergei Grechanik <[email protected]> * fix lint * fix clang-format * add comments and magic number * clang-lint * address some comments * remove FreeVarsVisitor * fix constexpr lint * fix lint * fix lint * add Map.Merge * lint * change Array::Concat & Map::Merge to global functions * fix lint * move functions to global * static -> inline Co-authored-by: Sergei Grechanik <[email protected]>
Co-authored-by: Sergei Grechanik [email protected]
This is the PR that aims to remove the intermediate large Jacobian tensor in te.autodiff, as well as to do some optimizations such as inlining. We have made a series of PRs for https://discuss.tvm.ai/t/rfc-bring-in-tensor-expression-autodiff and this will be the last one.
@sgrechanik-h @MarisaKirisame @junrushao1994 @xqdan @tqchen Please take a look.