When trying to use fp16, I get the following error:

I'm configuring this as follows:

Replies: 1 comment
You should use `enabled_precisions` as follows:

```python
import torch
import torch_tensorrt  # importing this registers the "tensorrt" backend with torch.compile

model = torch.compile(
    model,
    fullgraph=True,
    backend="tensorrt",
    options={"enabled_precisions": {torch.float16}},
)
```
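For completeness, here is a minimal end-to-end sketch of how this option is typically used; the toy model and input shapes are illustrative assumptions, not taken from this thread:

```python
import torch
import torch_tensorrt  # noqa: F401  # registers the "tensorrt" backend

# Illustrative stand-in model; the asker's actual model is not shown here.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval().cuda()

compiled = torch.compile(
    model,
    fullgraph=True,
    backend="tensorrt",
    options={"enabled_precisions": {torch.float16}},
)

# Inputs can stay fp32; enabled_precisions only permits TensorRT to
# select fp16 kernels internally where it judges them acceptable.
x = torch.randn(8, 128, device="cuda")
with torch.no_grad():
    out = compiled(x)  # first call triggers TensorRT engine compilation
print(out.shape)  # torch.Size([8, 10])
```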