
Use own dataset #7

Open
zzzyzh opened this issue Jan 30, 2024 · 5 comments

Comments

@zzzyzh

zzzyzh commented Jan 30, 2024

Hi,
When I train with my own CT dataset, the results for the first epoch are:

train Loss: 19.6049 Dice: 0.9865 IoU: 0.9865142703056335
val Loss: 0.0000 Dice: 1.0000 IoU: 1.0

There's obviously something wrong here, but I'm not quite sure why. My dataset class follows the KvasirSeg_Dataset implementation.

Also, when I train on the EndoVis18 dataset, the train loss stays large and the val loss gradually rises past 200. Is this normal?
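
One plausible mechanism for the perfect val score (an assumption about the metric, not confirmed from this repo's code): a smoothed Dice scores an all-blank prediction against an all-blank ground-truth mask as exactly 1.0, which would match the reported val numbers if the CT masks are mostly empty.

```python
# Smoothed Dice degenerates to 1.0 when both pred and target are blank.
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor, eps: float = 1.0) -> float:
    """Smoothed Dice on binary {0,1} tensors."""
    inter = (pred * target).sum()
    return ((2 * inter + eps) / (pred.sum() + target.sum() + eps)).item()

blank = torch.zeros(1, 256, 256)
print(dice_score(blank, blank))  # 1.0 -- blank pred vs blank mask looks "perfect"
```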

@zzzyzh
Author

zzzyzh commented Jan 30, 2024

Also, what is the meaning of the parameter self.no_text_mode? I see that the default value is False.

@JayParanjape
Owner

No, this is not the trend I observed with EndoVis; for me the loss decreased consistently. Also, for your own dataset, make sure to generate the predicted images and check that the model isn't producing blank masks all the time. That may be one reason, since the training data may have blank labels in a majority of images. In that case you might need to change the hyperparameters of the loss function, or train with K negative samples per positive sample.
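
For illustration, a minimal sketch of that sampling scheme, under the assumption that the dataset is a list of (image path, mask path) pairs; the helper names here are hypothetical, not from this repo:

```python
# Keep every sample whose mask has foreground, and subsample the
# blank-mask images down to at most k negatives per positive.
import random
import numpy as np
from PIL import Image

def is_blank(mask_path: str) -> bool:
    """True if the mask at mask_path has no foreground pixels."""
    mask = np.array(Image.open(mask_path).convert("L"))
    return (mask > 0).sum() == 0

def subsample_negatives(pairs, k: int = 1, seed: int = 0):
    """pairs: list of (image_path, mask_path) tuples.
    Returns all positives plus at most k negatives per positive."""
    positives = [p for p in pairs if not is_blank(p[1])]
    negatives = [p for p in pairs if is_blank(p[1])]
    rng = random.Random(seed)
    rng.shuffle(negatives)
    return positives + negatives[: k * len(positives)]
```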

@JayParanjape
Owner

no_text_mode was something I added in order to compare against baselines that are not promptable, such as UNet or MedT. These don't take text and generate masks for all labels. You can leave it as False.
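
For readers following along, a toy dataset illustrating how such a flag typically changes what __getitem__ returns (assumed behavior, not the repo's exact code):

```python
# With text prompts: one (image, prompt, single-label mask) triple per label.
# With no_text_mode: the image plus the full multi-label mask, which is what
# non-promptable baselines like UNet or MedT consume.
from typing import List
import torch
from torch.utils.data import Dataset

class ToyPromptableDataset(Dataset):
    """masks[i] has shape (num_labels, H, W); labels names the channels."""
    def __init__(self, images: List[torch.Tensor], masks: List[torch.Tensor],
                 labels: List[str], no_text_mode: bool = False):
        self.images, self.masks, self.labels = images, masks, labels
        self.no_text_mode = no_text_mode

    def __len__(self):
        # One sample per image without text; one per (image, label) pair with text
        return len(self.images) * (1 if self.no_text_mode else len(self.labels))

    def __getitem__(self, idx):
        if self.no_text_mode:
            # No prompt: return the full multi-label mask
            return self.images[idx], "", self.masks[idx]
        img_idx, lbl_idx = divmod(idx, len(self.labels))
        # Promptable training: the text selects one label's binary mask
        return self.images[img_idx], self.labels[lbl_idx], self.masks[img_idx][lbl_idx]
```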

@zzzyzh
Author

zzzyzh commented Jan 31, 2024

This is my log from 40 epochs of training on EndoVis18. Does this loss look correct?

[2024-01-31 02:13:30,081][train.py][line:121][INFO] Epoch 35/39
[2024-01-31 02:13:30,082][train.py][line:122][INFO] ----------
[2024-01-31 02:26:54,207][train.py][line:207][INFO] all 0 sanity check for preds: True
[2024-01-31 02:26:54,207][train.py][line:208][INFO] all 1 sanity check for preds: True
[2024-01-31 02:26:54,208][train.py][line:209][INFO] train Loss: 121.0901 Dice: 0.9002 IoU: 0.865285336971283
[2024-01-31 02:30:53,437][train.py][line:207][INFO] all 0 sanity check for preds: True
[2024-01-31 02:30:53,437][train.py][line:208][INFO] all 1 sanity check for preds: True
[2024-01-31 02:30:53,438][train.py][line:209][INFO] val Loss: 1203.2557 Dice: 0.6221 IoU: 0.5799221396446228
[2024-01-31 02:30:53,438][train.py][line:121][INFO] Epoch 36/39
[2024-01-31 02:30:53,438][train.py][line:122][INFO] ----------
[2024-01-31 02:44:22,753][train.py][line:207][INFO] all 0 sanity check for preds: True
[2024-01-31 02:44:22,754][train.py][line:208][INFO] all 1 sanity check for preds: True
[2024-01-31 02:44:22,754][train.py][line:209][INFO] train Loss: 118.6438 Dice: 0.9032 IoU: 0.8684555292129517
[2024-01-31 02:48:15,284][train.py][line:207][INFO] all 0 sanity check for preds: True
[2024-01-31 02:48:15,285][train.py][line:208][INFO] all 1 sanity check for preds: True
[2024-01-31 02:48:15,285][train.py][line:209][INFO] val Loss: 1108.1448 Dice: 0.6293 IoU: 0.5854318141937256
[2024-01-31 02:48:15,285][train.py][line:121][INFO] Epoch 37/39
[2024-01-31 02:48:15,285][train.py][line:122][INFO] ----------
[2024-01-31 03:03:07,891][train.py][line:207][INFO] all 0 sanity check for preds: True
[2024-01-31 03:03:07,892][train.py][line:208][INFO] all 1 sanity check for preds: True
[2024-01-31 03:03:07,892][train.py][line:209][INFO] train Loss: 117.5177 Dice: 0.9032 IoU: 0.8687588572502136
[2024-01-31 03:07:07,369][train.py][line:207][INFO] all 0 sanity check for preds: True
[2024-01-31 03:07:07,369][train.py][line:208][INFO] all 1 sanity check for preds: True
[2024-01-31 03:07:07,370][train.py][line:209][INFO] val Loss: 1179.1814 Dice: 0.6097 IoU: 0.5665776133537292
[2024-01-31 03:07:07,370][train.py][line:121][INFO] Epoch 38/39
[2024-01-31 03:07:07,370][train.py][line:122][INFO] ----------
[2024-01-31 03:21:02,264][train.py][line:207][INFO] all 0 sanity check for preds: True
[2024-01-31 03:21:02,265][train.py][line:208][INFO] all 1 sanity check for preds: True
[2024-01-31 03:21:02,265][train.py][line:209][INFO] train Loss: 118.9117 Dice: 0.9053 IoU: 0.8707560300827026
[2024-01-31 03:24:54,675][train.py][line:207][INFO] all 0 sanity check for preds: True
[2024-01-31 03:24:54,676][train.py][line:208][INFO] all 1 sanity check for preds: True
[2024-01-31 03:24:54,676][train.py][line:209][INFO] val Loss: 1228.8108 Dice: 0.6176 IoU: 0.5755595564842224
[2024-01-31 03:24:54,677][train.py][line:121][INFO] Epoch 39/39
[2024-01-31 03:24:54,677][train.py][line:122][INFO] ----------
[2024-01-31 03:39:42,924][train.py][line:207][INFO] all 0 sanity check for preds: True
[2024-01-31 03:39:42,924][train.py][line:208][INFO] all 1 sanity check for preds: True
[2024-01-31 03:39:42,925][train.py][line:209][INFO] train Loss: 113.3087 Dice: 0.9092 IoU: 0.875225305557251
[2024-01-31 03:43:30,656][train.py][line:207][INFO] all 0 sanity check for preds: True
[2024-01-31 03:43:30,656][train.py][line:208][INFO] all 1 sanity check for preds: True
[2024-01-31 03:43:30,656][train.py][line:209][INFO] val Loss: 1201.9516 Dice: 0.6364 IoU: 0.5938707590103149
[2024-01-31 03:43:31,792][train.py][line:226][INFO] Best val loss: 804.370840, best val accuracy: 0.476486

@JayParanjape
Owner

JayParanjape commented Feb 4, 2024

Yeah, that looks alright. If you want to use additional loss functions for your dataset, I would recommend the following change: in utils.py, in the calculation of the focal loss, I return the sum across all pixels. If you change that to the mean, the loss will be in decimals, but the Dice should be similar. You might also need to change the learning rate. This will make sure the losses are at the same scale.
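
As an illustration of the suggested change, here is a re-implementation of a binary focal loss with a switchable reduction; this is a sketch of the standard formulation, not the exact code in utils.py:

```python
# Binary focal loss on raw logits. reduction="sum" adds the per-pixel losses
# (values in the hundreds for large images); reduction="mean" keeps the loss
# in decimals, on the same scale as a Dice loss.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0, reduction="sum"):
    """targets are 0/1 floats with the same shape as logits."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    loss = alpha_t * (1 - p_t) ** gamma * bce
    if reduction == "mean":
        return loss.mean()  # decimal-scale loss; may need a different LR
    return loss.sum()       # original behavior: sum over all pixels
```

With reduction="mean" the focal term lands on roughly the same scale as a Dice loss, so combining the two won't let one dominate the gradient.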
