
Detectnet, detection count on raw data output page. #1412

Open
ontheway16 opened this issue Jan 24, 2017 · 8 comments

@ontheway16

Hi,
On the inference screen, I am testing multiple files using the file-list feature. If I choose "Bounding box", the total number of objects detected is shown on the inference output page. If I choose "Raw data", no detection count is presented. Would it be difficult to put a detection count on the raw data output page, too?

The reason I want this: I am using 4.5-megapixel test images for inference. Visualizing, resizing, and related processing take a large share of the time before a "Test Many" inference completes. It also consumes memory very quickly, exceeding 32 GB and spilling into swap when enough test files are supplied.
"Raw data" is much faster and uses far less memory. All I want to know is the total number of detections, especially when comparing the performance of different training sets on the same test data. I am using DIGITS 5.1-dev with CUDA 8. (It would also be nice to have the individual number of detections next to each image's raw data box.)

@gheinrich
Contributor

Hello, I suggest you look at this API doc to run inference from the command line. From there it's very easy to parse the network output in a script and count the number of bounding boxes.
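
For example, here is a minimal sketch (not official DIGITS tooling) that posts one image to the REST endpoint and counts the returned boxes. The server URL and job ID are placeholders, and the all-zero-row check reflects DetectNet's fixed-size output list, which may pad with zeros:

```python
# Minimal sketch: count DetectNet detections via the DIGITS REST API.
# The URL and job ID below are placeholders -- replace with your own.
import requests

DIGITS_URL = "http://localhost:5000"      # assumed local DIGITS server
JOB_ID = "<jobid>"                        # your DetectNet model job ID

def count_detections(image_path):
    with open(image_path, "rb") as f:
        resp = requests.post(
            DIGITS_URL + "/models/images/generic/infer_one.json",
            data={"job_id": JOB_ID},
            files={"image_file": f},
        )
    resp.raise_for_status()
    # Each box is [x_min, y_min, x_max, y_max, confidence]; DetectNet's
    # fixed-size output may pad with all-zero rows, which we skip.
    boxes = resp.json()["outputs"]["bbox-list"][0]
    return sum(1 for box in boxes if any(box))

print(count_detections("test.jpg"))
```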

@jmformenti

Hello, I've used the API to extract bounding boxes; this is a sample response:

```json
{ "outputs": { "bbox-list": [ [ [ 746.0, 61.0, 914.0, 116.0, 1.609375 ],
```

The first four parameters are x_min, y_min, x_max, y_max, but what is the fifth parameter?
Thanks in advance

@gheinrich
Contributor

Hello, the last parameter is a confidence score (arbitrary units).
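
Since the units are arbitrary, a score is only meaningful relative to a cutoff you choose yourself. As a hypothetical illustration (the threshold and box values here are arbitrary placeholders):

```python
# Hypothetical example: keep only boxes whose fifth value (confidence)
# clears a user-chosen threshold. 1.0 is an arbitrary placeholder cutoff.
THRESHOLD = 1.0
boxes = [[746.0, 61.0, 914.0, 116.0, 1.609375],
         [120.0, 40.0, 230.0, 95.0, 0.42]]
kept = [b for b in boxes if b[4] >= THRESHOLD]
print(len(kept), "box(es) above threshold")
```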

@jmformenti

Thanks @gheinrich. Is there a way to evaluate this confidence? How can I know if it is a good or bad score?

@ontheway16
Author

@jmformenti Could you please provide a copy of the command you used for this output? Thanks.

@gheinrich
Contributor

@jmformenti you can use the score and predictions for a number of images and draw the precision-recall curve. Then you can pick an appropriate threshold for the score.
https://en.wikipedia.org/wiki/Precision_and_recall
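
As a minimal sketch, assuming you have already matched predicted boxes to ground truth (e.g. by IoU, not shown here) so that each box has a 1/0 label and a confidence score, scikit-learn can compute the curve; the labels and scores below are placeholder data:

```python
# Minimal sketch: compute a precision-recall curve from per-box labels
# and confidence scores. Matching predictions to ground truth (e.g. by
# IoU) is assumed to have been done already and is not shown.
from sklearn.metrics import precision_recall_curve

y_true = [1, 1, 0, 1, 0, 0, 1]                   # 1 = true positive match
scores = [1.6, 1.2, 1.1, 0.9, 0.7, 0.5, 0.4]     # DetectNet confidences

precision, recall, thresholds = precision_recall_curve(y_true, scores)
for p, r, t in zip(precision, recall, thresholds):
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```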

@jmformenti

@gheinrich, thanks for the idea, I'll give it a try.

@ontheway16 here is the command to run inference on one image:

```
curl http://localhost:5000/models/images/generic/infer_one.json -XPOST -F job_id=<jobid> -F image_file=@<path_to_image>
```

Replace the URL (http://localhost:5000) with your own.

For many images:

```
curl http://localhost:5000/models/images/generic/infer_many.json -XPOST -F job_id=<jobid> -F image_list=@<path_to_file_with_images_filename> -F image_folder=<parent_dir_where_are_the_images>
```

The file for image_list is a text file with one image filename per line. The image_folder parameter is optional; it's not needed if you put the full path for each image in the text file.
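
To get back to the original request (a detection count per image), here is a minimal sketch of the same infer_many call from Python instead of curl. The URL, job ID, and paths are placeholders, and the response layout is assumed to mirror the infer_one sample above, with one list of boxes per input image; verify against your server's actual JSON:

```python
# Minimal sketch: call infer_many.json and print a detection count per
# image. URL, job ID, and paths are placeholders -- replace with yours.
import requests

DIGITS_URL = "http://localhost:5000"      # assumed local DIGITS server
JOB_ID = "<jobid>"                        # your DetectNet model job ID

with open("image_list.txt", "rb") as f:
    resp = requests.post(
        DIGITS_URL + "/models/images/generic/infer_many.json",
        data={"job_id": JOB_ID, "image_folder": "/data/test_images"},
        files={"image_list": f},
    )
resp.raise_for_status()

# Assumed layout: one list of boxes per input image, in list order;
# all-zero rows (DetectNet padding) are not counted.
for i, boxes in enumerate(resp.json()["outputs"]["bbox-list"]):
    n = sum(1 for box in boxes if any(box))
    print(f"image {i}: {n} detection(s)")
```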

@ontheway16
Author

@jmformenti Excellent, I was looking for the development-server version, thanks.
