Hi, I have tried the inference code on the NYU dataset, but I can't reproduce the "real-time" performance reported in your paper.
For batch size = 1: frame rate is 12 fps
For batch size = 3: frame rate is 17 fps
Neither is fast enough for real time (less than 24 fps).
I wonder why the inference speed is so low. Did you run inference on TWO Titans?
Thank you very much.
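For reference, this is roughly how I measured the frame rates above (a minimal sketch, not the repo's code; `run_inference` stands in for one forward pass of the network on a batch):

```python
import time

def measure_fps(run_inference, n_frames=100, batch_size=1):
    """Rough throughput check: average frames per second over n_frames inputs.

    run_inference(batch_size) should perform one forward pass on a batch;
    timing includes every pass, so warm-up/compilation overhead is counted.
    """
    start = time.perf_counter()
    for _ in range(0, n_frames, batch_size):
        run_inference(batch_size)
    elapsed = time.perf_counter() - start
    return n_frames / elapsed
```

With a fixed per-batch cost, a larger batch size raises throughput (as seen going from 12 fps at batch 1 to 17 fps at batch 3), but it also adds latency, so batching only helps "real-time" use up to a point.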
@wdkwyf were you able to run it on the GPU?
When I run the test with "python model/hourglass_um_crop_tiny.py", TensorFlow falls back to the CPU. Any advice on how to make it use the GPU?
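One thing worth checking first is whether your TensorFlow build sees the GPU at all; if not, the usual causes are a CPU-only `tensorflow` package or a CUDA/cuDNN version mismatch. A minimal sketch (using the TF 1.x-era `device_lib` helper, which this repo's TensorFlow version should have):

```python
def available_gpus():
    """Return the GPU device names TensorFlow can see, or [] if none
    (or if TensorFlow itself is not installed in this environment)."""
    try:
        from tensorflow.python.client import device_lib
    except ImportError:
        return []
    return [d.name for d in device_lib.list_local_devices()
            if d.device_type == 'GPU']

# An empty list here means the session will silently run on CPU,
# which would explain frame rates far below the paper's numbers.
print(available_gpus())
```

If this prints an empty list, check `nvidia-smi` and make sure the GPU-enabled TensorFlow package matching your CUDA version is installed.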