
🐛 [Bug] aten::abs converter does not support int32 #1231

@mfeliz-cruise

Description


Bug Description

The current implementation of the aten::abs converter relies on the UnaryLayer kABS implementation, which does not support integer inputs:

auto unary = ctx->net->addUnary(*in, nvinfer1::UnaryOperation::trt_type); \

From the TensorRT INetworkDefinition::addUnary documentation (https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_network_definition.html#a77831224c9a72ad02587a56ded93c672):

Generally the input must have a floating-point type (or kINT8 as a quantized float), except for the following operations:

  • kSIGN accepts a floating-point or Int32 tensor.
  • kNOT requires a Bool tensor.

To Reproduce

Steps to reproduce the behavior:

  1. Attempt to compile a model with an aten::abs op with integer inputs (a TensorRT-level sketch of the failing pattern follows).
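
The restriction can also be reproduced at the TensorRT level, independent of Torch-TensorRT. Below is a minimal sketch using standard TensorRT builder boilerplate; depending on the TensorRT version, the unsupported-type error is reported when the layer is added or when the engine is built:

#include <NvInfer.h>
#include <cstdint>
#include <iostream>

// Minimal logger so TensorRT can report errors.
class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    std::cerr << msg << std::endl;
  }
};

int main() {
  Logger logger;
  auto* builder = nvinfer1::createInferBuilder(logger);
  auto* net = builder->createNetworkV2(
      1U << static_cast<uint32_t>(
          nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));

  // Int32 input tensor of shape [4].
  nvinfer1::Dims dims;
  dims.nbDims = 1;
  dims.d[0] = 4;
  auto* in = net->addInput("x", nvinfer1::DataType::kINT32, dims);

  // kABS on an Int32 tensor violates the unary layer's type restrictions
  // quoted above, so this network is rejected.
  auto* abs_layer = net->addUnary(*in, nvinfer1::UnaryOperation::kABS);
  if (abs_layer != nullptr) {
    net->markOutput(*abs_layer->getOutput(0));
  }
  return 0;
}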

Expected behavior

This can be supported with an element-wise implementation of the op in cases where the unary layer does not support the input type, using the identity abs(x) = max(x, x * -1).
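
A minimal sketch of that fallback, assuming only the TensorRT C++ API (the helper name abs_int_fallback is hypothetical; in the Torch-TensorRT converter, net would be ctx->net):

#include <NvInfer.h>
#include <cstdint>

// Hypothetical element-wise abs fallback for Int32 inputs:
// abs(x) = max(x, x * -1), built from layers that do accept Int32.
nvinfer1::ITensor* abs_int_fallback(nvinfer1::INetworkDefinition* net,
                                    nvinfer1::ITensor* in) {
  // Scalar -1 constant with the same rank as the input so it broadcasts.
  // `static` keeps the weight storage alive until the engine is built.
  static const int32_t kNegOne = -1;
  nvinfer1::Dims scalar_dims = in->getDimensions();
  for (int32_t i = 0; i < scalar_dims.nbDims; ++i) {
    scalar_dims.d[i] = 1;
  }
  nvinfer1::Weights neg_one_w{nvinfer1::DataType::kINT32, &kNegOne, 1};
  auto* neg_one = net->addConstant(scalar_dims, neg_one_w);

  // neg = x * -1
  auto* neg = net->addElementWise(*in, *neg_one->getOutput(0),
                                  nvinfer1::ElementWiseOperation::kPROD);

  // abs = max(x, neg)
  auto* abs = net->addElementWise(*in, *neg->getOutput(0),
                                  nvinfer1::ElementWiseOperation::kMAX);
  return abs->getOutput(0);
}

The converter could keep the existing addUnary path for floating-point inputs and dispatch to this element-wise path only when the input type is kINT32.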

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0):
  • PyTorch Version (e.g. 1.0):
  • CPU Architecture:
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, libtorch, source):
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version:
  • CUDA version:
  • GPU models and configuration:
  • Any other relevant information:

Additional context

Metadata

Labels

bug (Something isn't working), component: converters (Issues re: Specific op converters)
