Support fusing 3D Conv with Add/Mul. #23

Merged 5 commits from fusion-resnet3d into master on Dec 3, 2018

Conversation

weixingzhang (Contributor):

With this PR, the subgraph 3D Conv->Add->Mul in Resnet3D can be fused into a single 3D Conv.
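
For context, the fold works because a per-channel Add followed by a per-channel Mul can be absorbed into the convolution's own initializers: (Conv(X, W, B) + A) * M == Conv(X, W * M, (B + A) * M) when A and M broadcast along the output-channel axis. The following is a minimal, hypothetical sketch of that arithmetic; the function name, the flattened-vector layout, and the add_B/mul_B parameters are illustrative only and are not the code merged in this PR.

#include <cstddef>
#include <vector>

// Illustrative only: fold a per-channel Add (add_B) and Mul (mul_B) into the
// conv weight W (shape [C_out, C_in, D, H, W], flattened) and bias B ([C_out]).
void FuseAddMulIntoConv(std::vector<float>& W, std::vector<float>& B,
                        const std::vector<float>& add_B,
                        const std::vector<float>& mul_B, size_t c_out) {
  const size_t per_channel = W.size() / c_out;
  for (size_t c = 0; c < c_out; ++c) {
    const float a = add_B.size() == 1 ? add_B[0] : add_B[c];  // scalar or per-channel
    const float m = mul_B.size() == 1 ? mul_B[0] : mul_B[c];
    B[c] = (B[c] + a) * m;            // bias absorbs the Add, then the Mul
    for (size_t i = 0; i < per_channel; ++i) {
      W[c * per_channel + i] *= m;    // weights absorb only the Mul
    }
  }
}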

@ke1337 (Contributor) previously approved these changes on Nov 28, 2018:

:shipit:

@weixingzhang requested a review from a team as a code owner on November 29, 2018 23:23
continue;
}
conv_B = std::make_unique<Initializer>(conv_B_tensor_proto);
}

// Calculate new value of initializers of conv node
conv_W->scale_by_axis(*mul_B, 1);
// Caculate new value of initializers of conv node
Contributor:

Caculate->Calculate

Contributor Author:

Good catch. Thanks!

conv_W_tensor_proto->dims_size() < 4 ||
!(mul_B_tensor_proto->dims_size() == 0 ||
(mul_B_tensor_proto->dims_size() == conv_W_tensor_proto->dims_size() - 1 &&
conv_W_tensor_proto->dims(0) == mul_B_tensor_proto->dims(0)))) {
Contributor:

Does this have the same problem as the conv_add_fusion where the other dims of the multiplier should be 1?

Contributor Author:

Yeah, it is better to check it.
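
One possible shape of that extra check (a hypothetical helper, not necessarily the exact code that was merged) is to require every dimension of the Mul initializer after the channel dimension to be 1, so the multiplier broadcasts only along the output-channel axis:

// Illustrative sketch: reject the fusion unless every dim of mul_B after the
// channel dim is 1, i.e. mul_B broadcasts per output channel only.
bool OtherDimsAreOne(const ONNX_NAMESPACE::TensorProto& mul_B_tensor_proto) {
  for (int i = 1; i < mul_B_tensor_proto.dims_size(); ++i) {
    if (mul_B_tensor_proto.dims(i) != 1) {
      return false;
    }
  }
  return true;
}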

if (conv_B->size() != add_B->size()) {
continue;
}
// Caculate new value of initializers of conv node
Contributor:

Caculate->Calculate

@@ -255,13 +255,13 @@ class Initializer final {
switch (data_type_) {
case ONNX_NAMESPACE::TensorProto_DataType_FLOAT: {
for (int i = 0; i < size_; i++) {
data<float>()[i] *= other.data<float>()[i];
data<float>()[i] *= other.data<float>()[other.size() == 1 ? 0 : i];
Contributor:

These helpers are a little weird with the broadcasting rules. mul now behaves different than add/sub/etc. Maybe just have a scale_by_scalar to handle the case for now?

Contributor Author:

Agree. The PR is updated. Please review it! Thanks!
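
For illustration, a scale_by_scalar helper along the lines the reviewer suggests could be sketched as an Initializer member roughly like the following (float case only; the structure mirrors the existing mul code above, but this is a hedged sketch, not the exact code in the updated PR):

// Hypothetical sketch: multiply every element by the single value held in
// `other`, leaving mul() with strict element-wise semantics.
Initializer& scale_by_scalar(const Initializer& other) {
  switch (data_type_) {
    case ONNX_NAMESPACE::TensorProto_DataType_FLOAT: {
      const float scale = other.data<float>()[0];  // `other` holds one value
      float* dst = data<float>();
      const int n = size_;
      for (int i = 0; i < n; i++) {
        dst[i] *= scale;
      }
      break;
    }
    default:
      break;
  }
  return *this;
}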

@weixingzhang (Contributor Author):

It would be better to hoist the other.size() work outside the for(j) loop.

I would also encourage you to look at the disassembly for these routines: the calls to data() inside the loop end up being calls to go fetch the data pointer on every cycle through the loop for something that should be loaded once. Also if size_ were cached in a local, then the compiler would be able to vectorize the loop. As it stands, the compiler doesn't have enough aliasing info to know that writing to data[] may not trash size_. This cleanup can happen in a different PR though.
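
For illustration, the suggested cleanup (deferred to a follow-up PR) would hoist the data pointers, the size, and the broadcast decision into locals, so the compiler no longer has to assume the stores might alias size_ and can vectorize the loop. A hedged sketch of the float case, mirroring the fragment above:

case ONNX_NAMESPACE::TensorProto_DataType_FLOAT: {
  float* dst = data<float>();                    // fetched once, not every iteration
  const float* src = other.data<float>();
  const int n = size_;                           // local copy lets the compiler vectorize
  const int stride = other.size() == 1 ? 0 : 1;  // broadcast decision hoisted out of the loop
  for (int i = 0; i < n; i++) {
    dst[i] *= src[i * stride];
  }
  break;
}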

@raymondxyang:

@weixingzhang it's good for merge now

weixingzhang merged commit aa549cd into master on Dec 3, 2018
raymondxyang deleted the fusion-resnet3d branch on December 3, 2018 21:11
tmccrmck pushed a commit to tmccrmck/onnxruntime that referenced this pull request Aug 28, 2019
…obuf_status

Create a mapping from ONNX Runtime Status to Protobuf Status