Terminal output prints: Class Images Labels, but value under Images is Validation set, and value under Labels is Train #9044
Comments
@robotwhispering first progress bar is training, second is val. The numbers correspond to your val set. You have 1450 images and 12642 labels in your val set.
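To illustrate the point above: in YOLO-format datasets each image has one `.txt` label file, but that file can contain many rows, one per annotated object instance, which is why the "Labels" count can be far larger than the "Images" count. Below is a minimal, hypothetical sketch (not YOLOv5's actual loader code) that counts label files versus total instance rows in a labels directory:

```python
from pathlib import Path


def count_images_and_instances(labels_dir: str) -> tuple[int, int]:
    """Return (number of label files, total instance rows) under labels_dir.

    Each YOLO-format label file holds one row per object instance, so the
    second number can be much larger than the first.
    """
    files = list(Path(labels_dir).glob("*.txt"))
    instances = sum(len(f.read_text().splitlines()) for f in files)
    return len(files), instances
```

With 1450 val images averaging roughly nine objects each, a result like `(1450, 12642)` is entirely plausible rather than a sign of a broken dataset.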
@robotwhispering BTW I see you have zero mAP showing. There was a zero mAP bug recently fixed so make sure you are on the latest master.
Yeah, the 0's were definitely related to that bug that I pulled last night. I rolled back so I could keep training, but my sessions aren't complete yet, so I'm holding off on moving back to current until the models converge... I gotta say, I'm looking forward to some of the new additions. As far as the images/labels, I thought that at first too. I quickly dug into my dataset to make sure I didn't fat-finger something... AGAIN. But you can see in these screenshots that the only disparity presenting, in terms of quantities, is the 1200-ish background images I added to hopefully help mitigate FPs. I dunno, it doesn't appear to have any negative effect, other than startling me when I first noticed it and thought my dataset was borked. So I do have 15598 training images (1233 of those are unlabeled background images) and 14365 training labels. On the validation side, I have 1450 validation images and 1450 validation labels.
@robotwhispering yeah I agree it's a little confusing. Nothing's wrong, but it could be explained better. We tried to do a better job of presenting everything in HUB. Actually, I see we call 'labels' 'instances' there. Which term do you think is clearer? We should at least align the wording.
That's interesting. I actually had a very similar conversation with somebody, man, I think just a few days ago. We use the term "labels," and if we don't provide more context, our listener doesn't really know whether we're talking about labeling a dataset for training or labeling an image after inference has been run on it. I also sent your support team an email this morning about our interest in engaging with you guys on a more specific, customized level, if that's something you're open to.
@robotwhispering see #9066 |
Search before asking
Question
Pretty much what the subject mentions. I notice that the terminal output during training lists Class Images Labels, and then underneath those is: all 1450 12642. In my case, the 1450 value corresponds with my quantity of validation images and labels. And the 12642 value is how many training images and labels I have in the dataset. Printed this way, it makes it seem as though I have only 10% of the images listed in my labels.
There's also a problem with the P, R, and mAP values, but I believe we've already documented that bug today. I'm fairly certain this "Images/Labels" ambiguity was present before this NaN loss issue.
You can see in this screenshot that right after "Checks Passed" it prints out the information correctly. But during the streaming output, it's a different story.
EDIT: Oh, the 1233 "missing" labels indicated after the Checks Passed stage are a collection of background images I intentionally included with no labels as a negative reinforcement to help reduce false positives. I've trained close to a dozen models from this dataset in the past week or so.
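A quick way to sanity-check the 1233-image discrepancy described above is to count how many images lack a label file (or have an empty one), since those are exactly the intentional background images. This is a hypothetical sketch; the `images/`/`labels/` sibling-directory layout and `.jpg` extension are assumptions for illustration, not YOLOv5's exact loader logic:

```python
from pathlib import Path


def count_backgrounds(images_dir: str, labels_dir: str) -> int:
    """Count images whose label file is missing or empty.

    In YOLO-format datasets these are "background" (negative) images
    that contribute to training without adding any label instances.
    """
    backgrounds = 0
    for img in Path(images_dir).glob("*.jpg"):
        lbl = Path(labels_dir) / (img.stem + ".txt")
        if not lbl.exists() or lbl.read_text().strip() == "":
            backgrounds += 1
    return backgrounds
```

Running something like this against the training split should report 1233, confirming the "missing" labels are the deliberately unlabeled negatives rather than a dataset error.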
Additional
No response