
[ConvertLayout] Squeeze and reduce ops #7835

Merged 2 commits · Apr 15, 2021

Conversation

@lixiaoquan (Contributor) commented Apr 13, 2021

  1. Add FInferCorrectLayout for squeeze
  2. Handle keep_dims = False for reduce ops

@anijain2305 Could you please review? thanks.

Comment on lines 39 to 40
getent group "${CI_BUILD_GID}" || addgroup --force-badname --gid "${CI_BUILD_GID}" "${CI_BUILD_GROUP}"
getent passwd "${CI_BUILD_UID}" || adduser --force-badname --gid "${CI_BUILD_GID}" --uid "${CI_BUILD_UID}" \
Contributor
Do not change unrelated files. Please file a separate PR for this change if necessary.

Contributor Author

Thanks for your comments. I've addressed all of those issues; please take another look.

Comment on lines 177 to 178
return std::make_tuple(params->keepdims ? stripe(layout) : Layout(new_layout_string),
new_r_axes);
Contributor

It seems to me that infer could be further improved, since the desired layouts of the two branches (keepdims or not) are totally different. Could we make it like the following, so that you don't need to traverse the layout twice when keepdims is true?

std::string new_layout_string = "";
for (auto iter_var : layout->axes) {
  // ...
  if (layout_axis.IsPrimal()) {
    if (params->keepdims || !old_r_dims.count(layout_dim)) {
      new_layout_string += layout_dim;
    }
    axis_index++;
  }
}
return std::make_tuple(new_layout_string, new_r_axes);

If the above solution works, we could further consider merging infer and stripe to reduce duplicated logic:

auto infer = [&](const Layout& layout, const bool keepdims) { /* ... */ };
// ...
// Origin: std::tie(inferred_out, new_r_axes) = infer(new_in_layouts[0]);
// Origin: inferred_in = stripe(new_in_layouts[0]);
std::tie(inferred_out, new_r_axes) = infer(new_in_layouts[0], params->keepdims);
std::tie(inferred_in, std::ignore) = infer(new_in_layouts[0], false);

}
}

if (axis.size() == 0) {
Contributor

Add a comment saying there is nothing to squeeze, or simply move this into the above else branch to make it clearer.

Contributor Author

I've removed this check since it can be handled by the following logic, and a test case has been added to cover the nothing-to-squeeze case.

return Array<Array<Layout>>{{inferred_input}, {inferred_output}};
}

auto kept = [&](size_t i, Array<Integer> axis) {
Contributor

  • Add comments, or give this function a better name.
  • I'd suggest moving this function down closer to its call site to reduce confusion.

@comaniac (Contributor) left a comment

LGTM.
cc @anijain2305 to take a final look.

@anijain2305 (Contributor) left a comment

Thanks for the contribution. LGTM!

@comaniac merged commit cc7f529 into apache:main on Apr 15, 2021
@comaniac (Contributor)

Thanks @lixiaoquan @anijain2305

mehrdadh pushed a commit to mehrdadh/tvm that referenced this pull request Apr 22, 2021
trevor-m pushed a commit to trevor-m/tvm that referenced this pull request May 6, 2021
trevor-m pushed a commit to neo-ai/tvm that referenced this pull request May 11, 2021
4 participants