Issues: NVIDIA/TensorRT
BertQA sample throws segmentation fault (TensorRT 10.3) when running on GPU Jetson Orin Nano
#4220 opened Oct 23, 2024 by krishnarajk
Can TensorRT calculate the number of Params and FLOPs for the model?
#4219 opened Oct 23, 2024 by demuxin
AttributeError: 'tensorrt_bindings.tensorrt.ICudaEngine' object has no attribute 'num_bindings'
#4216 opened Oct 21, 2024 by metehanozdeniz
out of memory failure of TensorRT 10.5 when running flux dit on GPU L40S
#4214 opened Oct 21, 2024 by QZH-eng
How to strictly limit the maximum GPU memory usage and clear the GPU memory cache?
#4211 opened Oct 19, 2024 by EmmaThompson123
"Device to shape host node should not be folded into myelin" failure of TensorRT 10.5 when running trtexec on GPU L4
Export: torch.onnx
https://pytorch.org/docs/stable/onnx.html
internal-bug-tracked
triaged
Issue has been triaged by maintainers
#4210 opened Oct 18, 2024 by sean-xiang-applovin
Different versions of TensorRT get different model inference results
Accuracy
triaged
#4209 opened Oct 18, 2024 by demuxin
Does Flux not support int8?
Demo: Diffusion
Issues regarding demoDiffusion
Precision: INT8
triaged
#4208 opened Oct 17, 2024 by algorithmconquer
flux model engine_from_bytes(bytes_from_path(self.engine_path)) OutOfMemory
Demo: Diffusion
triaged
#4207 opened Oct 17, 2024 by algorithmconquer
When converting ONNX to TensorRT, how can the output shape be determined by the input value instead of the input size?
question
Further information is requested
triaged
#4206 opened Oct 17, 2024 by OswaldoBornemann
flux-demo failure of TensorRT 10.5 when running on a single L40 GPU; how to implement 2-GPU inference with L40
Demo: Diffusion
triaged
#4205 opened Oct 17, 2024 by algorithmconquer
optimization profile is missing values for shape input
triaged
#4204 opened Oct 16, 2024 by OswaldoBornemann
TensorRT output shape is different from ONNX output shape
triaged
#4203 opened Oct 16, 2024 by OswaldoBornemann
Deploy DeBERTa to Triton Inference Server
triaged
#4202 opened Oct 16, 2024 by nbroad1881
The output of the ONNX model is different from the model inferred by TensorRT
Export: torch.onnx
triaged
#4201 opened Oct 15, 2024 by AmazDeng
ONNX to TensorRT Error: Internal Error (kv_slice: optimization profile is missing values for shape input)
triaged
#4199 opened Oct 14, 2024 by zengrh3
TensorRT 10.5 pytorch-quantization has a compile bug
triaged
#4197 opened Oct 13, 2024 by lix19937
Polygraphy: How to write data_loader.py to supply the calibration data?
Tools: Polygraphy
triaged
#4196 opened Oct 12, 2024 by Kongsea
Do modulatedDeformConvPlugin and multiscaleDeformableAttnPlugin support quantization? What quantization tools can be used?
Plugins
Quantization: PTQ
triaged
#4195 opened Oct 12, 2024 by IEIAuto