Fp16 models return no detections #6766
Comments
@fnando1995 this is a CUDA/conda/windows bug that affects some install configurations. Most of the time this is caused by CUDA 11 incompatibilities and will be resolved by downgrading to CUDA 10, i.e. 10.2.
Hello @glenn-jocher, I tested in these two environments: Environment 1: Environment 2: Any thoughts?
Did you run the …
@DavidBaldsiefen That was the problem! I was making the change directly in the source, but the argparse default for half was overriding it. Thanks.
Search before asking
Question
Hello, I used the validate and detect files with different models, switching between FP32 and FP16 for the TensorRT framework. I noticed that FP16 models return NO detections (0 detects). Can anyone point out what could cause this? I ran many experiments, changing the precision, batch size, model version, and image size, but all FP16 models return no detections; only the FP32 ones do.
Additional
No response