
Real-time performance isn't so good #17

Open
wdkwyf opened this issue Oct 24, 2018 · 2 comments
wdkwyf commented Oct 24, 2018

Hi, I have tried the inference code on the NYU dataset. However, I can't achieve the "real-time" performance mentioned in your paper.
For batch size = 1: the frame rate is 12 fps.
For batch size = 3: the frame rate is 17 fps.
Neither is fast enough for real time (less than 24 fps).
I wonder why the inference speed is so low. Did you run inference on TWO Titans?
Thank you very much.
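For reference, here is a minimal sketch of how such frame-rate numbers can be measured. The `infer` callable is a hypothetical stand-in for the model's forward pass (the repo's actual inference entry point is not shown in this thread); the dummy workload below just sleeps per batch to mimic a slow model.

```python
import time

def measure_fps(infer, frames, batch_size=1):
    """Measure end-to-end frames per second for an inference callable.

    `infer` is any callable that accepts one batch (a slice of frames);
    it stands in for the model's forward pass.
    """
    start = time.perf_counter()
    for i in range(0, len(frames), batch_size):
        infer(frames[i:i + batch_size])
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Dummy "inference" that sleeps ~80 ms per batch, mimicking a slow model.
fps = measure_fps(lambda batch: time.sleep(0.08),
                  frames=list(range(24)), batch_size=1)
print(f"{fps:.1f} fps")
```

Larger batch sizes raise throughput (frames per second) but not latency per frame, which is why batch size 3 looks faster here while still not being "real time" for a live camera feed.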

@Abdul-Mukit

@wdkwyf were you able to run it on the GPU?
When I run the test with `python model/hourglass_um_crop_tiny.py`, TensorFlow falls back to the CPU. Any advice on how to use the GPU?


wdkwyf commented Jan 17, 2019

@Abdul-Mukit I think you should update gpu_config.py. You can refer to #6.
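The exact contents of this repo's gpu_config.py are not shown in this thread, but advice like this usually boils down to selecting a visible GPU before TensorFlow initializes. A minimal sketch, assuming the standard `CUDA_VISIBLE_DEVICES` mechanism (the settings dict below is hypothetical, not the repo's actual format):

```python
import os

# Select GPU 0 before TensorFlow is imported.
# An empty string here would force TensorFlow onto the CPU,
# which is one common cause of the CPU fallback described above.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# Hypothetical config values of the kind a gpu_config.py typically holds
# (TF 1.x-era session options: cap GPU memory use and allow growth).
gpu_config = {
    "per_process_gpu_memory_fraction": 0.9,
    "allow_growth": True,
}
```

Also worth checking that the GPU-enabled TensorFlow package (`tensorflow-gpu` in the TF 1.x era) is installed, not the CPU-only one.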
