

Look for TupleType instead of TupleNode in LayoutRewriter #5018

Closed
wants to merge 3 commits

Conversation


@trevor-m trevor-m commented Mar 9, 2020

When getting the input shapes from the args, LayoutRewriter checked for tuples only by testing whether the node is an instance of TupleNode. With that check, the following Relay program hits an error during LayoutRewriter, because the argument to the function is a relay.Var whose type is a TupleTypeNode rather than the TensorTypeNode the pass expected.

def @main(%input_0: Tensor[(1, 2, 6, 6), float32], %input_1: Tensor[(1, 3, 6, 6), float32]) -> Tensor[(1, 5, 6, 6), float32] {
  %0 = (%input_0, %input_1);
  %1 = fn (%x: (Tensor[(1, 2, 6, 6), float32], Tensor[(1, 3, 6, 6), float32])) -> Tensor[(1, 5, 6, 6), float32] {
    concatenate(%x, axis=1) /* ty=Tensor[(1, 5, 6, 6), float32] */
  };
  %1(%0) /* ty=Tensor[(1, 5, 6, 6), float32] */
}
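The intent of the fix can be sketched in standalone Python. This is an illustrative toy, not TVM's actual C++ LayoutRewriter code: the class names below are stand-ins for Relay's node and type classes, and the two functions contrast the old node-based check with the fixed type-based check.

```python
# Illustrative sketch only: toy stand-ins for Relay's node/type classes,
# showing why checking the *type* catches cases the *node* check misses.

class TensorType:
    def __init__(self, shape):
        self.shape = shape

class TupleType:
    def __init__(self, fields):
        self.fields = fields

class Expr:
    def __init__(self, checked_type):
        self.checked_type = checked_type

class TupleNode(Expr):  # a literal tuple expression, e.g. (%a, %b)
    def __init__(self, fields):
        super().__init__(TupleType([f.checked_type for f in fields]))
        self.fields = fields

class Var(Expr):  # a function parameter; it may still have tuple *type*
    pass

def input_shapes_old(arg):
    # Old check: only a literal TupleNode is treated as a tuple, so a Var
    # of tuple type falls through and is wrongly assumed to be a tensor.
    if isinstance(arg, TupleNode):
        return [f.checked_type.shape for f in arg.fields]
    return [arg.checked_type.shape]

def input_shapes_new(arg):
    # Fixed check: inspect the checked type, so a Var of tuple type works too.
    if isinstance(arg.checked_type, TupleType):
        return [t.shape for t in arg.checked_type.fields]
    return [arg.checked_type.shape]

x = Var(TupleType([TensorType((1, 2, 6, 6)), TensorType((1, 3, 6, 6))]))
print(input_shapes_new(x))  # [(1, 2, 6, 6), (1, 3, 6, 6)]
```

With the old check, the same `x` raises an AttributeError, since a TupleType has no `shape`; that failure is the toy analogue of the error the PR fixes.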

@zhiics @anijain2305 Could you please review?

@anijain2305
Contributor

Trevor, can you please add a test case? The implementation is good, but we need a test case.

Member

@zhiics zhiics left a comment


IMHO, I don't think we need to test this functionality in the partitioning pass, since this is a unit test of the layout pass itself. Instead, you can directly construct the "partitioned" graph, run the AlterOpLayout pass, and assert on the IR it should emit.


trevor-m commented Mar 10, 2020

IMHO, I don't think we need to test this functionality in the partitioning pass, since this is a unit test of the layout pass itself. Instead, you can directly construct the "partitioned" graph, run the AlterOpLayout pass, and assert on the IR it should emit.

Hi Zhi, I don't think it is possible to create the partitioned graph without actually running partitioning. I recreated the subgraph below:

from tvm import relay

input_type = relay.TensorType((1, 5, 6, 6), "float32")
x = relay.var("x", relay.TupleType([input_type, input_type]))
out = relay.concatenate(x, axis=1)
func = relay.Function([x], out)

However, the relay.concatenate python API does not accept a Var node as the input:

  File "test_pass_alter_op_layout.py", line 1053, in test_concatenate
    out = relay.concatenate(x, axis=1)

  File "/data/neo-ai-tvm/python/tvm/relay/op/tensor.py", line 888, in concatenate
    data = list(data)

TypeError: 'Var' object is not iterable

Edit: I was able to bypass this by calling tvm.relay.op._make.concatenate directly. Updating the PR...
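For context, the failure mode in the traceback above can be sketched in plain Python. The names below are hypothetical stand-ins, not TVM's real implementation: a high-level wrapper that normalizes its input with `list(...)` rejects a single non-iterable expression, while the raw constructor it wraps accepts one directly.

```python
# Toy illustration (hypothetical names, not TVM's actual code): why a
# wrapper that calls list(data) rejects a bare Var, while the underlying
# constructor accepts it.

class Var:  # stands in for relay.Var -- a single expression, not iterable
    def __init__(self, name):
        self.name = name

class Concatenate:  # stands in for the low-level op constructor
    def __init__(self, data, axis):
        self.data = data
        self.axis = axis

def concatenate_wrapper(data, axis):
    # The wrapper assumes `data` is a sequence of expressions and eagerly
    # materializes it; a bare Var is not iterable, so this raises TypeError.
    data = list(data)
    return Concatenate(tuple(data), axis)

x = Var("x")
try:
    concatenate_wrapper(x, axis=1)
except TypeError as e:
    print(e)  # 'Var' object is not iterable

# Bypassing the wrapper and calling the constructor directly works,
# mirroring the tvm.relay.op._make.concatenate workaround above.
node = Concatenate(x, axis=1)
```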

@anijain2305
Contributor

#5066

The above PR has the same line changes. It might be a good idea to try that one first.


tqchen commented Mar 30, 2020

@trevor-m @anijain2305 @zhiics please follow up

@tqchen tqchen added the labels "status: need test case" (need test cases to cover the change) and "status: need update" (need update based on feedback) Mar 30, 2020
@trevor-m
Contributor Author

Closing since #5066 also solves this issue.

@trevor-m trevor-m closed this Mar 30, 2020