Integrate LLVM at a854c266b98468ad4479a7d3c56a3fa76437e30d #4017
Conversation
sjarus left a comment:
PRs have been individually tested by @justin-ngo-arm and reviewed. Approving.
Thank you for working on this! The IREE<-LLVM integrate is blocked on this, since we have had to revert llvm/llvm-project#125789, and the revert is now conflicting with newer TOSA changes.
* In TOSA v1.0, tosa.mul's `shift` is an input. This commit updates Torch to TOSA in alignment with that change.
* Update the `tosa::createMulOpAndCast()` function to create a const shift TOSA tensor based on the provided shift value. This should be the API used to create tosa.mul from now on (instead of using `rewriter.create<tosa::MulOp>()`); see the sketch below.
* Update the `tosa::CreateOpAndInfer()` function to call `tosa::CreateOpAndInferShape()`.
* Update LIT tests.

Signed-off-by: Justin Ngo <[email protected]>
Change-Id: I84aeccacbb33eee65e5923725ace86a78f877869
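As a rough illustration of the new call pattern, here is a minimal sketch of how a Torch-to-TOSA rewrite might now build tosa.mul through the helper. Only the function name `tosa::createMulOpAndCast()` comes from the commit message above; the parameter list, return type, and include path are assumptions, not copied from the torch-mlir headers.

```cpp
// Hedged sketch, not the actual torch-mlir implementation: the parameter
// list of tosa::createMulOpAndCast() is assumed from the commit description.
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/PatternMatch.h"
#include "torch-mlir/Conversion/TorchToTosa/TosaLegalizeUtils.h"

using namespace mlir;

static Value lowerMulToTosa(PatternRewriter &rewriter, Operation *op,
                            TensorType outType, Value lhs, Value rhs) {
  // The helper is expected to materialize the rank-1 i8 const shift tensor
  // required by TOSA v1.0 internally, so callers pass a plain integer shift
  // (0 for floating-point multiplies) instead of creating tosa.mul directly
  // with rewriter.create<tosa::MulOp>().
  return tosa::createMulOpAndCast(rewriter, op, outType, lhs, rhs,
                                  /*shift=*/0)
      .getResult();
}
```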
* In TOSA v1.0, tosa.slice's `start` and `size` are !tosa.shape values rather than attributes. Update tosa.slice in Torch to TOSA in alignment with that change; see the sketch below.
* Update LIT tests.

Signed-off-by: Justin Ngo <[email protected]>
Change-Id: Icf878ea4dc43ec1af3bd498b5ae96f514fe0f04a
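A minimal sketch of what the new operand form looks like from a lowering's perspective. `tosa::getTosaConstShape()` and the builder arguments shown here are assumptions based on the upstream !tosa.shape work, not verified against this exact LLVM revision.

```cpp
// Hedged sketch: tosa.slice now consumes !tosa.shape SSA values for start
// and size instead of i64 array attributes. getTosaConstShape() and its
// signature are assumed, not verified against this LLVM revision.
#include "mlir/Dialect/Tosa/IR/TosaOps.h"
#include "mlir/Dialect/Tosa/Utils/ConversionUtils.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

static Value buildTosaSlice(PatternRewriter &rewriter, Location loc,
                            TensorType resultType, Value input,
                            ArrayRef<int64_t> start, ArrayRef<int64_t> size) {
  // Materialize start/size as tosa.const_shape values of !tosa.shape type
  // and pass them as operands to tosa.slice.
  Value startShape = tosa::getTosaConstShape(rewriter, loc, start);
  Value sizeShape = tosa::getTosaConstShape(rewriter, loc, size);
  return rewriter.create<tosa::SliceOp>(loc, resultType, input, startShape,
                                        sizeShape);
}
```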
* Previously, in Torch to TOSA, there were three ways to create a tosa.cast op:
- `rewriter.create<tosa::CastOp>()`
- `tosa::promoteType()`
- `tosa::tosaCastTensorToType()`
* This commit combines the three APIs above into
`tosa::tosaCastTensorToType()` with the following features (see the usage
sketch after this commit message):
- Checking whether the source and destination element types are the same
before casting. If they are the same, the cast is skipped.
- Custom float to integer cast behavior added from this PR:
llvm#3946
TLDR: PyTorch's and TOSA's float-to-integer casting behaviors differ
(round toward zero vs. round to nearest, respectively), which requires
custom cast handling here.
- Future `TODO`: add a --strict mode which includes
`checkValidityOfCast()` to ensure that the casting pairs follow TOSA
specifications.
* Update LIT tests.
Signed-off-by: Justin Ngo <[email protected]>
Change-Id: I2aef3c79d8f2d98b93e671d5b815b8eab33e697e
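To make the intended usage concrete, here is a minimal sketch of a pattern calling the consolidated cast helper. Only the name `tosa::tosaCastTensorToType()` and the behavior described above (no-op when element types match, PyTorch-style round-toward-zero float-to-int casts) come from the commit message; the return type and parameters are assumptions.

```cpp
// Hedged sketch: assumed signature for the consolidated cast helper; only
// the function name and described behavior come from the commit message.
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/PatternMatch.h"
#include "torch-mlir/Conversion/TorchToTosa/TosaLegalizeUtils.h"

using namespace mlir;

static LogicalResult castToDestType(PatternRewriter &rewriter, Operation *op,
                                    Value src, TensorType destType,
                                    Value &result) {
  // Expected behavior per the commit: return the input unchanged when the
  // element types already match, otherwise emit a tosa.cast (plus the extra
  // ops needed to emulate PyTorch's round-toward-zero float-to-int cast).
  FailureOr<Value> cast = tosa::tosaCastTensorToType(rewriter, src, destType);
  if (failed(cast))
    return rewriter.notifyMatchFailure(op, "unsupported tosa.cast");
  result = *cast;
  return success();
}
```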
* Update tosa.reshape's `shape` from an attribute to an input operand of !tosa.shape type; see the sketch below.
* Update LIT tests.

Signed-off-by: Justin Ngo <[email protected]>
Change-Id: I72a9989912356895029787ff1f91d382e1de368a
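A minimal sketch of the corresponding builder change, under the same assumption about `tosa::getTosaConstShape()` as in the tosa.slice sketch above; the tosa.reshape builder arguments are likewise assumed rather than verified.

```cpp
// Hedged sketch: tosa.reshape's target shape is now a !tosa.shape operand
// rather than an attribute. Helper and builder arguments are assumed.
#include "mlir/Dialect/Tosa/IR/TosaOps.h"
#include "mlir/Dialect/Tosa/Utils/ConversionUtils.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

static Value buildTosaReshape(PatternRewriter &rewriter, Location loc,
                              TensorType resultType, Value input,
                              ArrayRef<int64_t> newShape) {
  // Materialize the target shape as a tosa.const_shape of !tosa.shape type
  // and pass it as an operand instead of a DenseI64ArrayAttr.
  Value shapeValue = tosa::getTosaConstShape(rewriter, loc, newShape);
  return rewriter.create<tosa::ReshapeOp>(loc, resultType, input, shapeValue);
}
```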
Update LLVM to llvm/llvm-project@a854c26

TOSA Updates Summary:
[TOSA] Update tosa.mul's shift as input
[TOSA] Update tosa.slice's start and size to !tosa.shape type
[TOSA] Refactor: Use tosaCastTensorToType() function to create tosa.cast

---------

Signed-off-by: Vivek Khandelwal <[email protected]>
Co-authored-by: Justin Ngo <[email protected]>
Force-pushed 83bbd58 to a96258d
@run
# CHECK-LABEL: test_mutable_buffer_not_supported_without_hooks
# CHECK: EXPECTED ERROR: Store of a mutation to {{.*}} is not supported
def test_mutable_buffer_not_supported_without_hooks():
    class Basic(nn.Module):
        def __init__(self):
            super().__init__()
            self.register_buffer("buffer", torch.randn(3, 4))

        def forward(self, x):
            self.buffer.mul_(x)
            return x

    try:
        m = fx.export_and_import(
            Basic(),
            torch.randn(3, 4),
            experimental_support_mutation=True,
        )
    except NotImplementedError as e:
        print("EXPECTED ERROR:", str(e))
The CI passes when this test is removed; otherwise it keeps running until it times out. The test works fine for me locally, so I'm not able to understand what the exact issue is.
@stellaraccident, what do you suggest here?
Hi all, the PR got merged because auto-merge was enabled; I didn't intend to merge it with the test removed. I will re-add the test, but for that we need the CI issue with this test to be fixed.