[mlir][linalg] Use hasPureTensorSemantics in TransposeMatmul methods. #146438

Open

wants to merge 1 commit into main

Conversation

Contributor

@hanhanW commented Jul 1, 2025

The issue is triggered by ee070d0, which checks `TensorLikeType` when downstream projects use the pattern without registering bufferization::BufferizationDialect. The registration is needed because the interface implementations for the builtin types live in `BufferizationDialect::initialize()`. However, requiring that registration is not the right fix. The proper fix is to use the Linalg interface method, i.e., hasPureTensorSemantics.
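For illustration, here is a rough sketch of what the `hasPureTensorSemantics` check amounts to (this is not the actual LinalgInterfaces implementation, which also handles details such as scalar operands; the point is that it only inspects builtin types, so it does not depend on any external type-interface models registered by BufferizationDialect):

```cpp
// Rough sketch only; the real check is the LinalgOp interface method in
// LinalgInterfaces and may differ in detail (e.g. scalar operands are
// glossed over here).
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Operation.h"
#include "llvm/ADT/STLExtras.h"

static bool hasPureTensorSemanticsSketch(mlir::Operation *op) {
  // Only builtin RankedTensorType is inspected, so no dialect-interface
  // registration (and thus no BufferizationDialect dependency) is needed.
  return llvm::all_of(op->getOperands(), [](mlir::Value v) {
    return mlir::isa<mlir::RankedTensorType>(v.getType());
  });
}
```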

Contributor Author

@hanhanW commented Jul 1, 2025

cc @andrey-golubev (I can't add you as a reviewer.)

Member

@llvmbot commented Jul 1, 2025

@llvm/pr-subscribers-mlir-linalg

@llvm/pr-subscribers-mlir

Author: Han-Chung Wang (hanhanW)

Changes

Full diff: https://github.com/llvm/llvm-project/pull/146438.diff

1 file affected:

  • (modified) mlir/lib/Dialect/Linalg/Transforms/TransposeMatmul.cpp (+2-2)
diff --git a/mlir/lib/Dialect/Linalg/Transforms/TransposeMatmul.cpp b/mlir/lib/Dialect/Linalg/Transforms/TransposeMatmul.cpp
index e624f589917d1..233f180e48be0 100644
--- a/mlir/lib/Dialect/Linalg/Transforms/TransposeMatmul.cpp
+++ b/mlir/lib/Dialect/Linalg/Transforms/TransposeMatmul.cpp
@@ -38,7 +38,7 @@ FailureOr<Operation *> mlir::linalg::transposeMatmul(RewriterBase &rewriter,
         matmulOp, "only matmul ops with non-extended semantics are supported");
   }
 
-  if (!bufferization::hasTensorSemantics(matmulOp))
+  if (!matmulOp.hasPureTensorSemantics())
     return rewriter.notifyMatchFailure(
         matmulOp, "only matmul ops with tensors are supported");
 
@@ -93,7 +93,7 @@ mlir::linalg::transposeBatchMatmul(RewriterBase &rewriter,
         batchMatmulOp, "ops with user-defined maps are not supported");
   }
 
-  if (!bufferization::hasTensorSemantics(batchMatmulOp))
+  if (!batchMatmulOp.hasPureTensorSemantics())
     return rewriter.notifyMatchFailure(
         batchMatmulOp, "only matmul ops with tensors are supported");
 

Contributor Author

@hanhanW commented Jul 1, 2025

The functionality is already well tested in transpose-matmul.mlir. I'm not able to reproduce the failure using the transform dialect; my guess is that the interpreter pass already loads all the upstream dialects.

I can only reproduce it by writing a new C++ pass. If a test is a requirement, I can do so, but my feeling is that people will delete it in the future.
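For reference, a minimal sketch of the kind of standalone C++ pass I mean (the pass name and structure are hypothetical, not code from this PR): it drives `transposeMatmul` without registering bufferization::BufferizationDialect, which is the setup where the old `bufferization::hasTensorSemantics` check could not find the `TensorLikeType` interface for builtin tensors.

```cpp
// Hypothetical reproducer sketch; names are made up for illustration only.
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Dialect/Linalg/IR/Linalg.h"
#include "mlir/Dialect/Linalg/Transforms/Transforms.h"
#include "mlir/IR/PatternMatch.h"
#include "mlir/Pass/Pass.h"
#include "llvm/ADT/SmallVector.h"

namespace {
struct ReproTransposeMatmulPass
    : public mlir::PassWrapper<ReproTransposeMatmulPass,
                               mlir::OperationPass<mlir::func::FuncOp>> {
  // Deliberately no getDependentDialects() entry for BufferizationDialect,
  // which is the situation that exposed the issue before this change.
  void runOnOperation() override {
    llvm::SmallVector<mlir::linalg::MatmulOp> candidates;
    getOperation().walk(
        [&](mlir::linalg::MatmulOp op) { candidates.push_back(op); });
    mlir::IRRewriter rewriter(getOperation().getContext());
    for (mlir::linalg::MatmulOp op : candidates) {
      rewriter.setInsertionPoint(op);
      (void)mlir::linalg::transposeMatmul(rewriter, op);
    }
  }
};
} // namespace
```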

Contributor

@andrey-golubev left a comment

LGTM (I wonder how I didn't catch this when introducing the patch...). Could you add a test for this case so that the regression is covered?

Edit: I saw your note that the reproduction requires a different setup. Up to you, then, whether you want a test or not. I guess we could add a precautionary note saying "don't delete this test; it exercises a different setup".
