
Error: Could not find coverage layer while parsing output #2

Open
ankitgajera8368 opened this issue Jul 18, 2019 · 0 comments
ankitgajera8368 commented Jul 18, 2019

I was able to optimize the model 'ssd_inception_v2_coco_2017_11_17' using the main.py script and saved the engine file for TensorRT inference.

Now, to use this model in the deepstream test app, I am providing the engine file as 'model-engine-file' in the application's parameter file, along with 'libflattenconcat.so' as 'custom-lib-path'.

The issue is: the application starts and plays the video on the sink, but no detection boxes appear, even though there are many cars in the sample video provided with the DeepStream SDK. One error is printed continuously: 'Error: Could not find coverage layer while parsing output'.

By default, DeepStream only supports the resnet-style output parser, so if I use SSD with a custom plugin, I think I also have to provide a function name in 'parse-bbox-func-name'. However, I do not have the source of this libflattenconcat.so. Can you please provide the function name so that the application can parse the outputs?
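For reference, here is a minimal sketch of the nvinfer config group I am describing. The key names are from the DeepStream nvinfer documentation; the engine file path is a placeholder, and the 'parse-bbox-func-name' value below is only a guess, which is exactly what I am asking about:

```ini
[property]
# Placeholder path to the serialized TensorRT engine built by main.py
model-engine-file=sample_ssd.engine
# Library containing the FlattenConcat plugin (I do not have its source)
custom-lib-path=libflattenconcat.so
# Guessed symbol name; the actual exported parser function is unknown to me
parse-bbox-func-name=NvDsInferParseCustomSSD
```

Without the correct exported symbol in 'parse-bbox-func-name', nvinfer falls back to the default resnet parser, which looks for a coverage layer and fails with the error above.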
