add dilation in x86 NCHWc depthwise conv support #6267

Merged
merged 1 commit into apache:master on Aug 14, 2020

Conversation

wjliu1998
Contributor

As mentioned in #4962, the NCHWc depthwise convolution does not support dilation > 1. This PR fixes the problem.

dilated_kernel_h = (filter_height - 1) * dh + 1
dilated_kernel_w = (filter_width - 1) * dw + 1
pad_top, pad_left, pad_down, pad_right = get_pad_tuple(
    padding, (dilated_kernel_h, dilated_kernel_w))
print("padding", pad_top, pad_left, pad_down, pad_right, dilated_kernel_h, padding)
Contributor

To output messages, it is better to use the logging functionality. See the logger object created at the beginning of this file.

The same comment applies to the other print statements below.
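For illustration, a minimal sketch of switching the print above to a module-level logger; the logger name "topi" and the example values are assumptions, the file already creates its own logger near the top:

import logging

logger = logging.getLogger("topi")  # assumed name; reuse the logger already defined in this file

# Example values standing in for the ones computed above.
pad_top, pad_left, pad_down, pad_right, dilated_kernel_h = 2, 2, 2, 2, 5

# Debug-level message instead of print(); it stays silent unless debug logging is enabled.
logger.debug(
    "padding top=%d left=%d down=%d right=%d dilated_kernel_h=%d",
    pad_top, pad_left, pad_down, pad_right, dilated_kernel_h,
)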

@@ -122,13 +122,24 @@ def depthwise_conv2d_NCHWc(cfg, data, kernel, strides, padding, dilation,

strides = strides if isinstance(strides, (tuple, list)) else (strides, strides)
HSTR, WSTR = strides
- pad_top, pad_left, pad_down, pad_right = get_pad_tuple(padding, (filter_height, filter_width))
+ #pad_top, pad_left, pad_down, pad_right = get_pad_tuple(padding, (filter_height, filter_width))
Contributor

If this is no longer useful, please remove it rather than keeping it commented out here. The same applies to the other commented-out lines below.
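
For reference, the key point of the fix is that the padding (and hence the output extent) must be derived from the dilated kernel footprint rather than the raw filter size. A minimal plain-Python sketch with assumed example values (3x3 kernel, dilation 2, stride 1, 16x16 input, padding 2 per side):

# Sketch of the dilation-aware sizes used by the fix; no TVM needed.
filter_height = filter_width = 3
dh = dw = 2  # dilation along H and W

# Effective kernel footprint once dilation is applied.
dilated_kernel_h = (filter_height - 1) * dh + 1  # 5
dilated_kernel_w = (filter_width - 1) * dw + 1   # 5

# get_pad_tuple(padding, (dilated_kernel_h, dilated_kernel_w)) in the diff above derives
# the per-side padding from this footprint; the output extents then follow from it.
HSTR = WSTR = 1
in_height = in_width = 16
pad_top = pad_down = pad_left = pad_right = 2
out_height = (in_height + pad_top + pad_down - dilated_kernel_h) // HSTR + 1  # 16
out_width = (in_width + pad_left + pad_right - dilated_kernel_w) // WSTR + 1  # 16
print(out_height, out_width)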

@@ -268,7 +268,7 @@ def depthwise_conv2d_with_workload_NCHWc(batch, in_channel, in_height, channel_m
filter_width = filter_height
stride_h = stride_w = stride

- assert dilation == 1, "depthwise_conv2d_NCHWc currently does not support dilation."
+ #assert dilation == 1, "depthwise_conv2d_NCHWc currently does not support dilation."
Contributor

If this is not useful anymore, please remove it rather than keeping it commented out here.

Contributor Author

Thanks for the review! The commented-out lines and the debug print statements have been removed.

Contributor

Sure, thanks for accepting the suggestions.

wjliu1998 force-pushed the master branch 2 times, most recently from 2aaea26 to 4abc65c on August 13, 2020 09:15
masahi (Member) commented Aug 13, 2020

@wjliu1998 Thanks, please update the PyTorch frontend test below that has been disabled due to the lack of dilation support in x86:

https://github.com/apache/incubator-tvm/blob/master/tests/python/frontend/pytorch/test_forward.py#L1556-L1560

(Remove [cuda_ctx], do the test in the same way as fcn above)

masahi self-assigned this on Aug 13, 2020
wjliu1998 (Contributor, Author)

> @wjliu1998 Thanks, please update the PyTorch frontend test below that has been disabled due to the lack of dilation support in x86:
>
> https://github.com/apache/incubator-tvm/blob/master/tests/python/frontend/pytorch/test_forward.py#L1556-L1560
>
> (Remove [cuda_ctx], do the test in the same way as fcn above)

Thanks for the comment! The PyTorch frontend test has been enabled for x86!
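For reference, a minimal standalone check of the newly supported path might look roughly like the following. This is a sketch, not the actual test_forward.py case referenced above; the input name "input0", the shapes, and module names such as graph_executor are assumptions based on a recent TVM API and may differ from the 2020 codebase:

import numpy as np
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# A depthwise Conv2d with dilation=2, compiled for x86 through the PyTorch frontend.
model = torch.nn.Conv2d(8, 8, kernel_size=3, dilation=2, padding=2, groups=8).eval()
inp = torch.rand(1, 8, 16, 16)
scripted = torch.jit.trace(model, inp)

mod, params = relay.frontend.from_pytorch(scripted, [("input0", list(inp.shape))])
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

dev = tvm.cpu(0)
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("input0", tvm.nd.array(inp.numpy()))
rt.run()
np.testing.assert_allclose(rt.get_output(0).numpy(), model(inp).detach().numpy(),
                           rtol=1e-4, atol=1e-4)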

masahi merged commit ad0dbe0 into apache:master on Aug 14, 2020
masahi (Member) commented Aug 14, 2020

Thanks @wjliu1998 @leandron
