Around 10%[^1] of the total failures in the IREE ONNX node tests are caused by `failed to legalize operation 'torch.aten.sum.dim_IntList' that was explicitly marked illegal`. Every single case I checked was due to a dynamic dim being fed to line 488 in Reduction.cpp here.

I thought it might be a problem with the test cases, but per the discussion at nod-ai/SHARK-TestSuite#308 this does not seem to be the case.
```cpp
bool isNoneOrEmptyDimList = isa<Torch::NoneType>(op.getDim().getType());
if (matchPattern(op.getDim(), m_TorchListOfConstantInts(dimList))) {
  // Fix negative dimensions, if any, before adding to the list.
  for (int64_t dim : dimList) {
    dim = toPositiveDim(dim, inputType.getRank());
    // Drop invalid dimensions
    if (isValidDim(dim, inputType.getRank()))
      opInfo.dimSet.insert(dim);
  }
  if (dimList.empty())
    isNoneOrEmptyDimList = true;
} else if (matchPattern(op.getDim(), m_TorchConstantInt(&dim))) {
  dim = toPositiveDim(dim, inputType.getRank());
  if (!isValidDim(dim, inputType.getRank()))
    return rewriter.notifyMatchFailure(
        op, "`dim` argument must be valid, invalid received.");
  opInfo.dimSet.insert(dim);
} else if (!isNoneOrEmptyDimList) {
  return rewriter.notifyMatchFailure(
      op, "`dim` argument must be a constant int list or None");
}
```
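To make the failure mode concrete: both `matchPattern` branches above require the `dim` operand to fold to *constant* ints, so a dynamic dim value falls through to `notifyMatchFailure` and the op never legalizes. Here is a small Python sketch (hypothetical helper names, not part of torch-mlir) mirroring just the constant-int-list normalization logic:

```python
def to_positive_dim(dim: int, rank: int) -> int:
    """Mirror of toPositiveDim: map a negative dim to its positive equivalent."""
    return dim + rank if dim < 0 else dim

def is_valid_dim(dim: int, rank: int) -> bool:
    """Mirror of isValidDim: dim must lie in [0, rank)."""
    return 0 <= dim < rank

def build_dim_set(dim_list: list[int], rank: int) -> set[int]:
    """Mirror of the constant-int-list branch: normalize negatives,
    silently drop invalid dims, collect the rest into dimSet."""
    dim_set: set[int] = set()
    for dim in dim_list:
        dim = to_positive_dim(dim, rank)
        if is_valid_dim(dim, rank):
            dim_set.add(dim)
    return dim_set
```

For example, for a rank-3 input, `build_dim_set([-1, 0], 3)` yields `{0, 2}`, while an out-of-range entry like `5` is dropped; the key point is that this path only exists for dims known at compile time.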
```cpp
Value torch_to_linalg::createReductionLinalgGeneric(
    OpBuilder &b, Location loc, const ReductionOpInfo &opInfo, Value initElem,
    function_ref<void(OpBuilder &, Location, ValueRange)> bodyBuild) {
  auto inputType = cast<RankedTensorType>(opInfo.tensorOperand.getType());
  // Get the result shape by obtaining the size of each
  // dimension in the input tensor that is not getting reduced.
  // If `opInfo.keepDim` is true, the rank of the output tensor
  // is kept the same as the rank of the input tensor, and the
  // reduced dimensions are set to have size 1.
  auto c1 = b.create<arith::ConstantIndexOp>(loc, /*value=*/1);
  SmallVector<Value> resultShape;
  for (int64_t i = 0; i < inputType.getRank(); i++) {
    auto currentDimSize = b.create<tensor::DimOp>(loc, opInfo.tensorOperand, i);
    if (!opInfo.dimSet.contains(i))
      resultShape.push_back(currentDimSize);
    else if (opInfo.keepDim)
      resultShape.push_back(c1);
  }
  // Create the affine expressions that will be used to
  // iterate over the input and output tensors.
  // Here we also set the type of iterator: parallel or reduction.
  SmallVector<AffineExpr> exprs;
  SmallVector<utils::IteratorType> iteratorTypes;
  SmallVector<AffineExpr> resultExprs;
  for (auto size :
       llvm::enumerate(makeShapeTorchCompatible(inputType.getShape()))) {
    exprs.push_back(b.getAffineDimExpr(size.index()));
    if (opInfo.dimSet.contains(size.index())) {
      iteratorTypes.push_back(utils::IteratorType::reduction);
      // If `opInfo.keepDim`, create affine map to the first element
      // in the current dimension.
      if (opInfo.keepDim)
        resultExprs.push_back(b.getAffineConstantExpr(0));
    } else {
      iteratorTypes.push_back(utils::IteratorType::parallel);
      resultExprs.push_back(b.getAffineDimExpr(size.index()));
    }
  }
```
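Note that once `dimSet` is populated, this second function is purely mechanical: each dimension in the set becomes a `reduction` iterator (and, with `keepDim`, maps to constant index 0 in the result), and every other dimension becomes `parallel`. A Python sketch (hypothetical names, using strings in place of MLIR affine exprs and iterator types) of that per-dimension classification:

```python
def reduction_maps(rank: int, dim_set: set[int], keep_dim: bool):
    """Mirror of the loop above: classify each dimension and build the
    result-map expressions. 'd<i>' stands in for getAffineDimExpr(i),
    '0' for getAffineConstantExpr(0)."""
    iterator_types: list[str] = []
    result_exprs: list[str] = []
    for i in range(rank):
        if i in dim_set:
            iterator_types.append("reduction")
            # keepDim: the reduced dim survives with size 1, pinned to index 0.
            if keep_dim:
                result_exprs.append("0")
        else:
            iterator_types.append("parallel")
            result_exprs.append(f"d{i}")
    return iterator_types, result_exprs
```

So for rank 3 reducing dim 1 with `keepDim=True`, this yields iterators `["parallel", "reduction", "parallel"]` and result exprs `["d0", "0", "d2"]`. This is why a correct `dimSet` is essential: the whole `linalg.generic` structure is derived from it, and a dynamic dim that never made it into `dimSet` cannot be recovered here.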
We should figure out how to support dynamic dims in reduction ops. The errors are thrown during `computeReductionOpInfoForDimVariantOp`, and the computed `ReductionOpInfo` is used in `createReductionLinalgGeneric`.
Footnotes

1. By my count in https://gist.github.com/renxida/2706b7522b38d8c6271bfff95fb62609 ↩