
How to improve performance? #44

Open
ccl-private opened this issue Aug 9, 2019 · 10 comments

@ccl-private

First, thank you for sharing. I tested your pre-trained model on Linux (GTX 1080), but my FPS was only about 5. Do you have any ideas for making it real-time? Really appreciated.

@YangZeyu95
Owner

You can resize the input image to a smaller one at line 103.
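The suggestion above, as a minimal sketch: downscale each frame before feeding it to the network. In practice `cv2.resize` would be the usual call; a NumPy nearest-neighbour stand-in keeps the example self-contained, and the frame shape and 320×320 target are assumptions, not values from the repo.

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Naive nearest-neighbour downscale, a stand-in for cv2.resize."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows[:, None], cols]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # hypothetical camera frame
small = resize_nearest(frame, 320, 320)           # feed this to the network
print(small.shape)  # (320, 320, 3)
```

Smaller inputs trade localisation accuracy for speed, which is why the FPS roughly doubles below.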

@YangZeyu95 changed the title from "How to import the performance?" to "How to improve performance?" Aug 9, 2019
@ccl-private
Author

Thanks :)
It worked! FPS reached 10 when I changed the size to 320×320.
I also noticed that there are two smaller network models in the vgg.py file, named vgg_a and vgg_16. But when I made some adjustments in train.py and tried to train these models, it always tells me the sizes don't match. How can I train these smaller models?

@ccl-private
Author

[Screenshot from 2019-08-12 08-39-20]
Here is where I made the adjustments in train.py, including changing two scope names to args.name.

@YangZeyu95
Owner

That's because VGGs with different numbers of layers downsample the image to different scales. You need to know which downsample scale the backbone network you chose uses, and edit 'scale' at line 78 accordingly.
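A quick way to reason about that scale (a sketch, not the repo's code): each 2×2 max-pool halves the spatial resolution, so a VGG-style backbone's total downsample factor is a power of two determined by how many pooling layers are kept, not the pool count itself.

```python
def downsample_factor(num_pool_layers: int) -> int:
    """Total stride of a VGG-style backbone: each 2x2 max-pool halves H and W."""
    return 2 ** num_pool_layers

# A VGG truncated after 3 pools emits feature maps 1/8 the input size,
# after 5 pools 1/32 -- so 'scale' should be a value like 8 or 32, never 10.
print(downsample_factor(3))                 # 8
print(downsample_factor(5))                 # 32
print(368 // downsample_factor(3))          # 46: feature-map side for a 368x368 input
```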

@ccl-private
Author

ccl-private commented Aug 13, 2019

I'm having trouble with vgg_a and vgg_16 in vgg.py. How is 'scale' at line 78 related to the VGG downsampling? At first I thought the max-pool layers determined the 'scale', so I set 'scale' to 10 for vgg_16. However, it reported the same error:
[Screenshot from 2019-08-13 10-08-38]
I also set 'scale' to 5, 6, 8, and so on; the error report did not change, not even the dimension values.

@YangZeyu95
Owner

Make sure you have changed the backbone network.

@ccl-private
Author

ccl-private commented Aug 13, 2019

I have set it to None at line 19.
[Screenshot from 2019-08-13 10-48-30]
And here is where the error above is reported:
[Screenshot from 2019-08-13 10-56-53]

@YangZeyu95
Owner

You need to make sure that the ground-truth tensor and the model output have the same shape.
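A minimal shape check illustrating the point. All sizes here are assumptions for illustration (a 368×368 input, a stride-8 backbone, and 19 heatmap channels), not values read from the repo:

```python
import numpy as np

def gt_shape(input_size, scale, channels):
    """Spatial size of the ground-truth target for a given 'scale'."""
    return (input_size // scale, input_size // scale, channels)

input_size = 368       # assumed training resolution
net_stride = 8         # assumed stride of the chosen backbone
channels = 19          # e.g. 18 joints + background

heatmap_gt = np.zeros(gt_shape(input_size, net_stride, channels))
net_output = np.zeros((input_size // net_stride, input_size // net_stride, channels))

# If these differ, the loss op raises a dimension-mismatch error like the one above.
assert heatmap_gt.shape == net_output.shape
print(heatmap_gt.shape)  # (46, 46, 19)
```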

@ccl-private
Author

I am still struggling with vgg_16 and vgg_a. It seems that only vgg_19 yields an output shape matching the ground truth. Maybe I'm going in the wrong direction.

I would really like a smaller network, such as MobileNet. Will a MobileNet version be possible in the future?

@YangZeyu95 reopened this Aug 30, 2019
@YangZeyu95
Owner

Yes, the shape of each VGG's output is fixed, but you can change the heatmap and PAF shapes when generating the ground truth so they match the VGG's output shape. You can change that using 'scale' at line 78 in train.py.
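As a sketch of that idea (a hypothetical helper, not the repo's actual generator): the ground-truth heatmap is rendered at `input_size // scale` per side, and keypoint coordinates are divided by the same `scale`, so changing 'scale' alone resizes the target to whatever the backbone emits.

```python
import numpy as np

def make_heatmap(keypoints, input_size=368, scale=8, sigma=7.0):
    """Gaussian keypoint heatmaps rendered at (input_size // scale) per side."""
    side = input_size // scale
    ys, xs = np.mgrid[0:side, 0:side].astype(np.float32)
    hm = np.zeros((side, side, len(keypoints)), dtype=np.float32)
    for c, (x, y) in enumerate(keypoints):
        gx, gy = x / scale, y / scale      # input pixels -> heatmap cells
        hm[:, :, c] = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2 * sigma ** 2))
    return hm

# One keypoint at the image centre; with scale=8 the target is 46x46,
# matching what a stride-8 backbone would emit for a 368x368 input.
hm = make_heatmap([(184.0, 184.0)])
print(hm.shape)  # (46, 46, 1)
```

Picking a different backbone then only requires passing its stride as `scale`, e.g. `make_heatmap(kps, scale=32)` for a VGG kept through all five pools.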
