Use own dataset #7
Comments
And what's the meaning of the parameter `self.no_text_mode`? I see that its default value is `False`.
No, this is not the trend I observed with EndoVis; for me the loss was decreasing consistently. Also, for your dataset, make sure to generate the predicted images and check that the model is not producing blank labels all the time. That may be one reason, since the training data may have blank labels in a majority of images. You might need to change the hyperparameters of the loss function in that case, or train with K negative samples per positive sample.
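As a quick sanity check on the blank-label concern above, you can measure what fraction of your ground-truth masks contain no foreground at all. This is a hypothetical helper (not part of the repository), assuming masks are 2-D binary numpy arrays:

```python
import numpy as np

def blank_label_fraction(masks):
    """Return the fraction of masks with no foreground pixels.

    Hypothetical helper for dataset diagnostics; `masks` is any
    iterable of 2-D binary numpy arrays.
    """
    masks = list(masks)
    blank = sum(1 for m in masks if not np.any(m))
    return blank / len(masks)

# Toy example: two blank masks out of four.
masks = [np.zeros((4, 4)), np.ones((4, 4)),
         np.zeros((4, 4)), np.eye(4)]
print(blank_label_fraction(masks))  # 0.5
```

If this fraction is high for the prompted class, the loss can be dominated by all-background examples, which is when reweighting or negative sampling becomes worth trying.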
`no_text_mode` is something I added in order to compare with baselines that are not promptable. These would not take text and would generate masks for all labels, for example UNet or MedT. You can leave it as `False`.
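The flag as described above amounts to a switch in the forward pass. This is a hypothetical sketch (the class and method names are illustrative, not the repository's actual code):

```python
class Segmenter:
    """Illustrative sketch of a model with a `no_text_mode` switch."""

    def __init__(self, no_text_mode=False):
        self.no_text_mode = no_text_mode

    def forward(self, image, text_prompt=None):
        if self.no_text_mode:
            # Non-promptable baseline (UNet/MedT style): ignore the text
            # and predict masks for all labels at once.
            return ("all_labels", image)
        # Promptable path: predict one mask for the prompted class.
        return (text_prompt, image)

print(Segmenter(no_text_mode=True).forward("img", "kidney"))   # ('all_labels', 'img')
print(Segmenter(no_text_mode=False).forward("img", "kidney"))  # ('kidney', 'img')
```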
This is my record of 40 epochs of training on EndoVis18. Is this loss correct?
Yeah, looks alright. If you want to use additional loss functions for your dataset, I would recommend the following change:
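The specific change is not shown above, but a common choice of additional segmentation loss is a combined soft-Dice plus binary cross-entropy term. This is an illustrative numpy sketch, not necessarily the change the author had in mind:

```python
import numpy as np

def dice_bce_loss(pred, target, eps=1e-6):
    """Combined soft-Dice + binary cross-entropy.

    `pred` holds foreground probabilities in (0, 1), `target` holds
    binary ground truth. Illustrative sketch only; the repository's
    exact loss may differ.
    """
    pred = np.clip(pred, eps, 1 - eps)
    # Pixel-wise binary cross-entropy.
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Soft-Dice term: 1 - (2 * overlap) / (total mass).
    inter = np.sum(pred * target)
    dice = 1 - (2 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    return bce + dice

pred = np.array([0.9, 0.8, 0.2, 0.1])
target = np.array([1.0, 1.0, 0.0, 0.0])
print(dice_bce_loss(pred, target))  # small value, since predictions are good
```

The Dice term helps when the foreground class is small relative to the background, which is typical for CT organ masks.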
Hi,
When I train with my own CT dataset, the results for the first epoch are:
There's obviously some problem here, but I'm not quite sure why; I'm referring to the KvasirSeg_Dataset implementation.
Also, when I train on the EndoVis18 dataset, the train loss is large and the val loss gradually rises to 200+. Is this normal?