Conversation

@vivekkhandelwal1
Collaborator

Update LLVM to llvm/llvm-project@a854c26

TOSA Updates Summary:

[TOSA] Update tosa.mul's shift as input
[TOSA] Update tosa.slice's start and size to !tosa.shape type
[TOSA] Refactor: Use tosaCastTensorToType() function to create tosa.cast


Signed-off-by: Vivek Khandelwal [email protected]
Co-authored-by: Justin Ngo [email protected]

Collaborator

@sjarus sjarus left a comment


PRs have been individually tested by @justin-ngo-arm and reviewed. Approving.

@vivekkhandelwal1 vivekkhandelwal1 enabled auto-merge (squash) February 11, 2025 06:55
@bjacob
Contributor

bjacob commented Feb 12, 2025

Thank you for working on this! The IREE<-LLVM integrate is blocked on this, since we have had to revert llvm/llvm-project#125789, and the revert is now conflicting with newer TOSA changes.

justin-ngo-arm and others added 5 commits February 17, 2025 06:21
* In TOSA v1.0, tosa.mul's `shift` is an input. This commit updates
  Torch to TOSA in alignment with that change.
* Update `tosa::createMulOpAndCast()` function to create a const shift
  TOSA tensor based on the provided shift value. This should be the API
  used to create tosa.mul from now on (instead of using
  `rewriter.create<tosa::MulOp>()`).
* Update `tosa::CreateOpAndInfer()` function to call
  `tosa::CreateOpAndInferShape()` function.
* Update LIT tests.

Signed-off-by: Justin Ngo <[email protected]>
Change-Id: I84aeccacbb33eee65e5923725ace86a78f877869
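For context on the semantics being lowered here: for integer tensors, tosa.mul's `shift` applies a rounded arithmetic right shift to the wide product. The scalar sketch below is illustrative only — the function name and the round-half-up rounding are assumptions about the spec, not the actual reference-model or conversion code:

```python
# Hypothetical sketch of tosa.mul's integer semantics with a shift operand.
# The product is computed at wider precision and right-shifted with
# (assumed) round-half-up rounding when shift > 0.
def tosa_mul_int(a: int, b: int, shift: int) -> int:
    product = a * b  # wide product; Python ints never overflow
    if shift > 0:
        # add half of the shift quantum before the arithmetic right shift
        product = (product + (1 << (shift - 1))) >> shift
    return product

print(tosa_mul_int(3, 5, 0))      # plain multiply
print(tosa_mul_int(100, 100, 4))  # scaled multiply
```

With `shift` now an operand rather than an attribute, `tosa::createMulOpAndCast()` materializes it as a const tensor, which is why the commit routes all tosa.mul creation through that helper.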
* In TOSA 1.0, tosa.slice's `start` and `size` are !tosa.shape types.
  Update tosa.slice in Torch to TOSA in alignment with that.
* Update LIT tests.

Signed-off-by: Justin Ngo <[email protected]>
Change-Id: Icf878ea4dc43ec1af3bd498b5ae96f514fe0f04a
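To make the slice change concrete: `start` and `size` are per-dimension values (now carried as `!tosa.shape` operands rather than attributes), and the result is the region `tensor[start[i] : start[i] + size[i]]` in each dimension. A minimal nested-list sketch of that semantics, with illustrative names only:

```python
# Pure-Python sketch of tosa.slice semantics on nested lists.
# start[i] and size[i] select the half-open window in dimension i.
def tosa_slice(tensor, start, size):
    assert len(start) == len(size)
    if not start:  # rank-0: nothing left to slice
        return tensor
    s, n = start[0], size[0]
    return [tosa_slice(row, start[1:], size[1:]) for row in tensor[s:s + n]]

x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(tosa_slice(x, [1, 0], [2, 2]))  # rows 1..2, cols 0..1
```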
* Previously, in Torch to TOSA, there were 3 ways to create a tosa.cast op:
  - `rewriter.create<tosa::CastOp>()`
  - `tosa::promoteType()`
  - `tosa::tosaCastTensorToType()`
* This commit combines the three APIs above into
  `tosa::tosaCastTensorToType()` with the following features:
  - Checking whether source and destination element types are the same
    before casting. If they are the same, the cast is skipped.
  - Custom float to integer cast behavior added from this PR:
    llvm#3946
    TLDR: PyTorch's and TOSA's float to integer casting behaviors are
    different (round to zero vs round to nearest, respectively), which
    requires a custom casting here.
  - Future `TODO`: add a --strict mode which includes
    `checkValidityOfCast()` to ensure that the casting pairs follow TOSA
    specifications.
* Update LIT tests.

Signed-off-by: Justin Ngo <[email protected]>
Change-Id: I2aef3c79d8f2d98b93e671d5b815b8eab33e697e
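The rounding mismatch behind the custom float-to-integer cast can be seen with plain Python scalars: `math.trunc` mimics PyTorch's round-toward-zero cast, while `round` approximates TOSA's round-to-nearest (Python rounds ties to even, so treat this as a sketch rather than the exact spec behavior; the function names are illustrative):

```python
import math

# PyTorch truncates (round toward zero) when casting float to int,
# while TOSA's CAST rounds to nearest. tosaCastTensorToType() has to
# reproduce the PyTorch behavior; this is a scalar illustration only.
def pytorch_style_f2i(x: float) -> int:
    return math.trunc(x)   # round toward zero

def tosa_style_f2i(x: float) -> int:
    return round(x)        # round to nearest (ties to even in Python)

for v in (2.7, -2.7):
    print(v, pytorch_style_f2i(v), tosa_style_f2i(v))
```

The two disagree on values like -2.7 (truncation gives -2, round-to-nearest gives -3), which is why a plain tosa.cast is not a faithful lowering of a PyTorch float-to-int conversion.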
* Convert tosa.reshape's `shape` from an attribute to an input of
  !tosa.shape type.
* Update LIT tests.

Signed-off-by: Justin Ngo <[email protected]>
Change-Id: I72a9989912356895029787ff1f91d382e1de368a
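For completeness, tosa.reshape itself just regroups the flattened elements under the new shape (which this change carries as a `!tosa.shape` operand instead of an attribute). A small illustrative sketch on flat Python lists, with assumed names:

```python
# Sketch of tosa.reshape semantics: regroup a flat element list into
# nested lists according to `shape`. Illustrative only.
def tosa_reshape(flat, shape):
    if len(shape) == 1:
        assert len(flat) == shape[0]
        return list(flat)
    step = len(flat) // shape[0]
    return [tosa_reshape(flat[i * step:(i + 1) * step], shape[1:])
            for i in range(shape[0])]

print(tosa_reshape([1, 2, 3, 4, 5, 6], [2, 3]))
```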
Update LLVM to llvm/llvm-project@a854c26

TOSA Updates Summary:

[TOSA] Update tosa.mul's shift as input
[TOSA] Update tosa.slice's start and size to !tosa.shape type
[TOSA] Refactor: Use tosaCastTensorToType() function to create tosa.cast

---------

Signed-off-by: Vivek Khandelwal <[email protected]>
Co-authored-by: Justin Ngo <[email protected]>
@vivekkhandelwal1 vivekkhandelwal1 merged commit b0b7057 into llvm:main Feb 17, 2025
3 checks passed
Comment on lines -174 to -196


@run
# CHECK-LABEL: test_mutable_buffer_not_supported_without_hooks
# CHECK: EXPECTED ERROR: Store of a mutation to {{.*}} is not supported
def test_mutable_buffer_not_supported_without_hooks():
    class Basic(nn.Module):
        def __init__(self):
            super().__init__()
            self.register_buffer("buffer", torch.randn(3, 4))

        def forward(self, x):
            self.buffer.mul_(x)
            return x

    try:
        m = fx.export_and_import(
            Basic(),
            torch.randn(3, 4),
            experimental_support_mutation=True,
        )
    except NotImplementedError as e:
        print("EXPECTED ERROR:", str(e))
Collaborator Author


The CI passes after removing this test; otherwise it keeps running until it times out. The test works fine locally for me, so I am not able to understand what the exact issue is here.

@stellaraccident what do you suggest here?

@vivekkhandelwal1
Collaborator Author

Hi all, the PR got merged since auto-merge was enabled. I didn't intend to merge it with the test removed. I will re-add the test, but for that we need the CI issue with this test to be fixed.

