Add support for TensorRT-RTX #3753
base: main
Conversation
No ciflow labels are configured for this repo.
@lanluo-nvidia just remove the PTQ Calibrator feature from Python and C++ and put in deprecation errors.
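A minimal sketch of what the C++ side of that could look like (the struct and function names here are illustrative, not the actual Torch-TensorRT API):

```cpp
// Illustrative only: reject a legacy PTQ calibrator with a deprecation error
// instead of threading it through to the builder.
#include <stdexcept>

struct CompileSpecSketch {
  void* ptq_calibrator = nullptr;  // legacy INT8 calibrator handle, if any
};

void reject_deprecated_ptq(const CompileSpecSketch& spec) {
  if (spec.ptq_calibrator != nullptr) {
    throw std::runtime_error(
        "PTQ calibrators are deprecated and no longer supported; "
        "use an explicit quantization workflow instead.");
  }
}
```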
@@ -59,11 +59,16 @@ ConversionCtx::ConversionCtx(BuilderSettings build_settings)
  for (auto p = settings.enabled_precisions.begin(); p != settings.enabled_precisions.end(); ++p) {
Can't we condition this full block, since enabled precisions don't matter to RTX? Also, the else case of the macro should be a warning that the enabled_precisions setting does not do anything for TensorRT-RTX.
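A compilable sketch of that shape, assuming the TRT_MAJOR_RTX macro introduced in this PR; the loop body and the logging are placeholders for the existing ConversionCtx code:

```cpp
#include <iostream>
#include <set>

// Sketch only: compile out the precision handling for TensorRT-RTX builds and
// warn instead of silently ignoring the setting.
void configure_precisions(const std::set<int>& enabled_precisions) {
#ifndef TRT_MAJOR_RTX
  for (auto p = enabled_precisions.begin(); p != enabled_precisions.end(); ++p) {
    // ... map each requested precision onto a builder flag (elided) ...
    std::cout << "enabling precision " << *p << "\n";
  }
#else
  (void)enabled_precisions;  // the setting has no effect for TensorRT-RTX
  std::cerr << "WARNING: enabled_precisions does nothing when building against TensorRT-RTX\n";
#endif
}
```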
@@ -204,7 +204,7 @@ void AddInputs(ConversionCtx* ctx, c10::ArrayRef<const torch::jit::Value*> input
     "Optimization profile is invalid, please check the input range provided (conversion.AddInputs)");

  ctx->cfg->addOptimizationProfile(profile);
-#if NV_TENSORRT_MAJOR > 7 || (NV_TENSORRT_MAJOR == 7 && NV_TENSORRT_MINOR >= 1)
+#ifndef TRT_MAJOR_RTX && (NV_TENSORRT_MAJOR > 7 || (NV_TENSORRT_MAJOR == 7 && NV_TENSORRT_MINOR >= 1))
Does this work? I.e., an #ifndef combined with another conditional?
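For reference, `#ifndef` only accepts a single macro name; compilers typically warn about the extra tokens and ignore everything after `TRT_MAJOR_RTX`, so the version check would never actually be evaluated. Combining both conditions needs an `#if` with `defined()`, roughly:

```cpp
// Combine the RTX exclusion with the TensorRT version check in a single #if.
#if !defined(TRT_MAJOR_RTX) && (NV_TENSORRT_MAJOR > 7 || (NV_TENSORRT_MAJOR == 7 && NV_TENSORRT_MINOR >= 1))
  // ... optimization-profile handling (elided) ...
#endif
```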
@@ -334,8 +336,12 @@ int main(int argc, char** argv) {
  if (calibration_cache_file) {
    calibration_cache_file_path = torchtrtc::fileio::resolve_path(args::get(calibration_cache_file));
  }

+#ifndef TRT_MAJOR_RTX
Same thing here: the else cases should log a warning if the user tries to use the API, or in this case maybe an error.
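A rough sketch of that, mirroring the guard in this hunk (variable names are taken from the surrounding torchtrtc code, and erroring out with a message plus a non-zero exit is just one option):

```cpp
#ifndef TRT_MAJOR_RTX
  if (calibration_cache_file) {
    calibration_cache_file_path = torchtrtc::fileio::resolve_path(args::get(calibration_cache_file));
  }
#else
  if (calibration_cache_file) {
    // Calibration caches only make sense for the legacy PTQ path, which a
    // TensorRT-RTX build does not support; fail instead of ignoring the flag.
    std::cerr << "ERROR: calibration cache files are not supported when torchtrtc is built against TensorRT-RTX\n";
    return 1;
  }
#endif
```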
@@ -12,6 +12,7 @@
 import torch
 from torch.fx.immutable_collections import immutable_list
 from torch.fx.node import Argument, Target
+from torch_tensorrt._utils import is_tensorrt_version_supported
Do we need these FX frontend updates?
Description
Add initial support for TensorRT-RTX.
The following are the currently identified issues:
RTX team side:
5439176
5400490
5407733
5402295
Our side:
PRs in progress:
fix: atan2 strong type support & bug fix for integer dynamic shape #3751
add strong typing fix #3749
🐛 [Bug] TensorRT-RTX BatchNorm constant fold got nan #3699
🐛 [Bug] Refitter test failed when constant fold is disabled #3752
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
Checklist: