Custom object detection, no mAP in graph. #1279
More data is [almost] always better, yes. I'd be shocked if you got DetectNet to converge with only 98 training images.
Make sure you select the object detection visualizations in the form before running inference.
Thank you for the answer. Then I will increase the number of images soon; I guess about 1000 images will do it. Can you see any other problems with my approach (single object in definitions, etc.)? I turned the visualizations on and off, and although I do not understand what these images are telling me, I was able to see that my object in the test pictures was colored differently in some visualization graphs. One other concern is object size: sizes vary between 30x30 and 60x60 pixels. I hope that's not the cause of the issue above.
Well, as @lukeyeager suggested, after increasing the number of training images to 2700+, I immediately started seeing mAP and then the red boxes. Training was pretty fast for this number of images at 1248x384.
Great news, thanks for letting us know!
I was having a similar issue. It seems that with ~500 training images, I need >50 epochs to get any bboxes at all (and therefore, any mAP > 0). This takes quite some time, but once something is learned, it seems to be amazingly accurate! Now, I was wondering if there's a way to make it quickly give at least some results, even if inaccurate, to see if the direction is right. @ontheway16, you wrote:

> And of course, no pretrained model.

Can I ask why the "of course"? Using a pretrained model is strongly recommended e.g. here, so there must be something I'm missing?
@reunanen I am dealing with particles on a sheet and not real-world scene photos, therefore I am not sure how much a pretrained model can help here.
@ontheway16 Ok, thanks for the explanation – I didn't notice this was the case. That said, I'd expect the pretrained edge filters etc. might still be quite helpful in your case as well. Perhaps worth at least testing?
I agree with @reunanen. I have never seen a case where using a pre-trained model did not help.
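For reference, picking a pretrained model in the DIGITS training form amounts to initializing the network from existing weights instead of from scratch. A rough pycaffe sketch of the same idea, with hypothetical file names:

```python
import caffe

caffe.set_mode_gpu()

# Hypothetical file names: initialize DetectNet training from pretrained
# GoogLeNet weights. Layers whose names match the pretrained net are copied;
# the rest start from random initialization as usual.
solver = caffe.SGDSolver('detectnet_solver.prototxt')
solver.net.copy_from('bvlc_googlenet.caffemodel')
solver.solve()
```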
@ontheway16 @reunanen I'm facing a similar issue to the one both of you initially had, namely a small dataset. But I'm a bit hesitant to go all out with tagging in a bid to get more images, because what I'm really trying to achieve is a proof of concept. I don't need any fantastic results at the moment; I just want to get a non-zero mAP, and then I can go all out to augment my data. I've tried DetectNet with the KITTI dataset and it works fine, but when I try it on my own dataset I don't get any useful results. At the moment my dataset consists of 133 training images, 29 validation images and 28 test images. Also, I'm new to machine learning and I'm not sure if DetectNet is even suitable for what I'm trying to achieve, because of the way the objects in my images are and my dataset itself.
PS: Sorry if some of my questions sound silly; I'm simply new to machine learning, and since I've been stuck for the past two weeks, I've had no other choice but to post my problem on this forum.
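Since augmentation came up: one cheap way to multiply a small labeled set is horizontal flipping, as long as the bbox coordinates in the KITTI-style labels are mirrored too. A minimal sketch, assuming the 1248-pixel-wide images and single-line 'Car' labels described elsewhere in this thread (paths and helper name are hypothetical):

```python
from PIL import Image

IMG_W = 1248  # image width used in this thread

def flip_example(img_path, label_path, out_img_path, out_label_path):
    # Horizontally flip one image and mirror the bbox columns
    # (fields 4-7: left, top, right, bottom) of its KITTI label file.
    img = Image.open(img_path)
    img.transpose(Image.FLIP_LEFT_RIGHT).save(out_img_path)

    with open(label_path) as f, open(out_label_path, 'w') as out:
        for line in f:
            fields = line.split()
            left, top, right, bottom = map(float, fields[4:8])
            fields[4] = '%.1f' % (IMG_W - right)  # new left edge
            fields[6] = '%.1f' % (IMG_W - left)   # new right edge
            out.write(' '.join(fields) + '\n')
```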
I have prepared a very small custom dataset (currently 98 images, but it will be about 1000 soon). All images are 1248x384 .png files, and the train and val directories are inside the same directory as the "kitti-data" folder. I strictly followed the readme for dataset and training preparation, except that the minimum bounding box value was changed to 15 instead of the default 25. And of course, no pretrained model.
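For comparison, the directory layout the DIGITS object detection walkthrough produces (names assumed from its KITTI preparation script; worth verifying against the readme) looks roughly like:

```
kitti-data/
├── train/
│   ├── images/   000000.png, 000001.png, ...
│   └── labels/   000000.txt, 000001.txt, ...
└── val/
    ├── images/
    └── labels/
```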
All image and label file names are in the nnnnnn.png and nnnnnn.txt style (six digits before the file extension).
All label files consist of a single line (even if some images contain 2 or 3 target objects; the same image is copied under another "number.png").
All label definitions start with 'Car' (without quotes), and I have no other object type in the label definitions.
Here is an example:
Car 0.0 0 0.0 1052 93 1105 123 0.0 0.0 0.0 0.0 0.0 0.0 0.0
What can be wrong here? Too small a number of samples? Do label definitions need more than one object type?
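One quick sanity check before suspecting the sample count: verify that every label line has the 15 KITTI fields, the expected class name, and a bounding box no smaller than the configured minimum. A minimal sketch, assuming the single-class 'Car' labels and the minimum box size of 15 mentioned above (the directory path is hypothetical):

```python
import glob

MIN_BOX = 15  # the custom minimum bounding-box side used in this thread

def check_labels(label_dir):
    # Walk all KITTI-style label files and flag malformed lines or tiny boxes.
    for path in sorted(glob.glob(label_dir + '/*.txt')):
        with open(path) as f:
            for line in f:
                fields = line.split()
                # KITTI: type truncated occluded alpha bbox(4) dims(3) loc(3) rot_y
                if len(fields) != 15:
                    print('%s: expected 15 fields, got %d' % (path, len(fields)))
                    continue
                if fields[0] != 'Car':
                    print('%s: unexpected class %s' % (path, fields[0]))
                left, top, right, bottom = map(float, fields[4:8])
                w, h = right - left, bottom - top
                if w < MIN_BOX or h < MIN_BOX:
                    print('%s: %.0fx%.0f box is below the minimum' % (path, w, h))

check_labels('kitti-data/train/labels')  # hypothetical path
```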
When I try to test with a sample image (from the training dataset), I get only the following:
Inference visualization
bbox-list [[ 0. 0. 0. 0. 0.] ... [ 0. 0. 0. 0. 0.]] (a 50×5 array, every row all zeros)
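For what it's worth, that bbox-list is a fixed-size output padded with all-zero rows, so a healthy run shows some nonzero rows and a failed one looks exactly like the dump above. A small sketch for filtering it, assuming each row is (x1, y1, x2, y2, confidence):

```python
import numpy as np

def valid_boxes(bbox_list):
    # Keep only rows with at least one nonzero value; the rest is padding.
    bbox_list = np.asarray(bbox_list)
    return bbox_list[np.any(bbox_list != 0, axis=1)]

print(valid_boxes(np.zeros((50, 5))))  # -> empty array: no detections, as in this issue
```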
And here is the graph (a zoom of the final zone, but mAP is not available at the beginning either; the mAP legend is activated by keeping the mouse over it):