Add conversions for 1x1 conv_2d to matmul #18736

Open · IanWood1 wants to merge 9 commits into main
Conversation

IanWood1 (Contributor) commented Oct 9, 2024

Convert 1x1 conv_2d ops to linalg.matmul when the H and W dimensions are dynamic, and convert linalg.conv_2d_nhwc_hwcf when the N dimension is not 1. linalg.conv_2d_nchw_fchw is unchanged for now (see the linked issue for discussion). Matmul is simpler and easier for the compiler to reason about, which enables better optimizations.
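For illustration, a minimal sketch of the NHWC case (the shapes, value names, and exact reshape structure are made up here; the actual pass output may differ):

// Before: conv_2d_nhwc_hwcf with a 1x1 filter, unit strides/dilations,
// and dynamic H/W.
%conv = linalg.conv_2d_nhwc_hwcf
    {dilations = dense<1> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>}
    ins(%input, %filter : tensor<1x?x?x16xf32>, tensor<1x1x16x32xf32>)
    outs(%init : tensor<1x?x?x32xf32>) -> tensor<1x?x?x32xf32>

// After: collapse N/H/W into a single M dimension and drop the unit filter
// dims, leaving a plain matmul (the expand back to NHWC is omitted here).
%lhs = tensor.collapse_shape %input [[0, 1, 2], [3]]
    : tensor<1x?x?x16xf32> into tensor<?x16xf32>
%rhs = tensor.collapse_shape %filter [[0, 1, 2], [3]]
    : tensor<1x1x16x32xf32> into tensor<16x32xf32>
%mm = linalg.matmul
    ins(%lhs, %rhs : tensor<?x16xf32>, tensor<16x32xf32>)
    outs(%out : tensor<?x32xf32>) -> tensor<?x32xf32>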

IanWood1 marked this pull request as ready for review October 10, 2024 16:09
hanhanW requested a review from Max191 October 10, 2024 17:06
hanhanW (Contributor) commented Oct 10, 2024

Can you make the PR description more descriptive when it is ready for review? Here are some examples: https://google.github.io/eng-practices/review/developer/cl-descriptions.html

IanWood1 changed the title from "Add conversion for 1x1 conv_2d" to "Add conversions for 1x1 conv_2d to matmul" Oct 10, 2024
Max191 (Contributor) left a comment

Can you support the NCHW case as well? I added a suggestion for how to do it.

Additional thought: It may be a reasonable idea to create a linalg.generic op with expanded H and W dims (so one M/N contraction dim for each of the N, H, and W convolution dims). Having fewer reshapes is probably better at GlobalOptimization level, but I am not sure if it is worth it yet. I think the collapse_shape on the H and W dimensions can typically be propagated through the producers/consumers, so it might not make a difference. You may have a better idea of which form is better for dispatch region formation.

Also, this additional thought is not a requirement for this PR, just an idea to try afterwards.
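A rough sketch of the expanded-generic form described above, assuming the 1x1 filter has already been collapsed to its channel dims (names, shapes, and the exact maps are illustrative only):

#in  = affine_map<(n, h, w, f, c) -> (n, h, w, c)>
#fil = affine_map<(n, h, w, f, c) -> (c, f)>
#out = affine_map<(n, h, w, f, c) -> (n, h, w, f)>
// One parallel dim each for N, H, and W instead of a single collapsed M dim,
// so no reshape is needed on the input or the output.
%0 = linalg.generic
    {indexing_maps = [#in, #fil, #out],
     iterator_types = ["parallel", "parallel", "parallel", "parallel", "reduction"]}
    ins(%input, %filter : tensor<1x?x?x16xf32>, tensor<16x32xf32>)
    outs(%init : tensor<1x?x?x32xf32>) {
^bb0(%a: f32, %b: f32, %acc: f32):
  %m = arith.mulf %a, %b : f32
  %s = arith.addf %acc, %m : f32
  linalg.yield %s : f32
} -> tensor<1x?x?x32xf32>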

ScottTodd removed their request for review October 15, 2024 17:01
Use linalg::generalizeNamedOp to generalize the 1x1 conv and then remove the
unit-extent affine symbols from the input's affine map. Additionally, clean
up the test cases (since expand/extract is introduced) to just check that
the affine maps are correct.

Signed-off-by: Ian Wood <[email protected]>
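(For context, a rough sketch of the maps involved for the NHWC 1x1 case with unit strides/dilations; the exact generalized form may differ:)

// Maps after linalg::generalizeNamedOp, with iterators (n, h, w, f, kh, kw, c):
affine_map<(n, h, w, f, kh, kw, c) -> (n, h + kh, w + kw, c)>  // input
affine_map<(n, h, w, f, kh, kw, c) -> (kh, kw, c, f)>          // filter
affine_map<(n, h, w, f, kh, kw, c) -> (n, h, w, f)>            // output
// The filter's kh/kw dims have extent 1, so they can be dropped from the
// input's map:
affine_map<(n, h, w, f, kh, kw, c) -> (n, h, w, c)>            // input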
Max191 (Contributor) left a comment

I think we may not even need this pass anymore. Does it still work if you simply add a case for 1x1 filters here?

if (isa_and_nonnull<linalg::AbsOp, linalg::AddOp, linalg::BroadcastOp,
                    linalg::CeilOp, linalg::CopyOp, linalg::DivOp,
                    linalg::DivUnsignedOp, linalg::ElemwiseBinaryOp,
                    linalg::ElemwiseUnaryOp, linalg::ExpOp, linalg::FloorOp,
                    linalg::LogOp, linalg::MapOp, linalg::MaxOp,
                    linalg::MulOp, linalg::NegFOp, linalg::ReduceOp,
                    linalg::SubOp, linalg::TransposeOp>(
        linalgOp.getOperation())) {
  namedOpCandidates.push_back(linalgOp);
}

I would expect generalizing the op would be enough, and hopefully the indexing maps will simplify to contraction maps.

IanWood1 (Contributor, Author) commented

(Quoting Max191's comment above:) I think we may not even need this pass anymore. Does it still work if you simply add a case for 1x1 filters here? [...] I would expect generalizing the op would be enough, and hopefully the indexing maps will simplify to contraction maps.

The affine maps don't get simplified to contraction maps until FoldUnitExtentDimsPass, which runs directly after this pass.
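(For reference, once FoldUnitExtentDimsPass drops the unit kh/kw loops, the maps reduce to ordinary contraction maps; roughly, for the NHWC sketch above:)

affine_map<(n, h, w, f, c) -> (n, h, w, c)>  // input
affine_map<(n, h, w, f, c) -> (c, f)>        // filter
affine_map<(n, h, w, f, c) -> (n, h, w, f)>  // output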

Max191 (Contributor) commented Oct 17, 2024

(Quoting IanWood1:) The affine maps don't get simplified to contraction maps until FoldUnitExtentDimsPass, which runs directly after this pass.

Nice, that's what I was hoping for! As long as the unit dims are getting folded, it should be good.

MaheshRavishankar (Contributor) left a comment

I think the changes to generalize are fine, but maybe keep the conv -> matmul conversion pass around (it doesn't have to be used in the GlobalOptimization pass pipeline).

Successfully merging this pull request may close these issues.

[GlobalOptimization] 1x1 filter convolutions not converted to matmul