I tested this repo with a 416x416 input size, on Windows 10 with an NVIDIA GTX 1060 GPU.
Average inference time is about 112 ms per image.
But when I use the standard yolov3.cfg to detect ordinary objects, each image takes about 38 ms on average.
Why is this network so much slower than the YOLOv3 network?
blackwool changed the title from "The inference speed is slow than yolov3" to "The inference speed is slower than yolov3" on Jul 20, 2020
Hi, in the issue "the test speed", the author says that on a 2080 Ti with input size 608x608, the image ./test_imgs/input/selfie.jpg was predicted in 6.259 ms. Maybe you did not enable options such as "GPU=1" in the Makefile, so your test actually ran on the CPU? That is just my guess and may not be true, but you could give it a try.
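For what it's worth, a quick way to check whether a Darknet-style build was compiled for the GPU is to inspect the Makefile flags and watch GPU utilization while detection runs (the file names and flags here assume a standard Darknet checkout):

```shell
# From the repo root (paths and flag names are assumptions for a Darknet-style Makefile):
grep -E '^(GPU|CUDNN)=' Makefile    # expect GPU=1 and CUDNN=1; rebuild with `make` after changing them
# While detection runs, a GPU build should show non-zero utilization:
nvidia-smi --query-gpu=utilization.gpu --format=csv -l 1
```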
I did build with GPU=1 in the Makefile, and it calls the cuDNN API when it runs.
I printed the time spent in each layer and found that the convolutional layers with a large number of groups take most of the time.
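The per-layer timing described above can be sketched with a simple harness that wraps each layer call in a timer; this is a minimal stdlib-only illustration, not the Darknet code itself, and the layer names are made up:

```python
import time

def timed_forward(layers, x):
    """Run x through each (name, layer) pair, printing per-layer time in ms.

    Wrapping every layer call like this is how you find which layers
    (e.g. grouped convolutions) dominate the total inference time.
    """
    for i, (name, layer) in enumerate(layers):
        t0 = time.perf_counter()
        x = layer(x)
        elapsed_ms = (time.perf_counter() - t0) * 1000.0
        print(f"layer {i:3d} [{name}]: {elapsed_ms:8.3f} ms")
    return x

# Dummy stand-ins for real layers, just to show the harness working.
layers = [
    ("conv_3x3", lambda v: v * 2),
    ("conv_grouped", lambda v: v + 1),
]
result = timed_forward(layers, 10)
print(result)  # 10 * 2 + 1 = 21
```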