
training issue #1

Open
zerone-fg opened this issue Oct 7, 2022 · 1 comment
Comments

@zerone-fg

Hi, I am interested in your work published at MICCAI 2022, but when I try to reproduce the results presented in your paper, the training process is very slow: the training loss decreases very slowly. I am wondering whether this is normal, or whether pretrained transformer weights should be loaded. I downloaded the BraTS2020 data and fed it into the network as described, without additional processing. I am looking forward to your reply, thank you.

@920232796
Owner

Thank you for your interest. Because the NestedFormer network uses four encoders to process the BraTS2020 dataset, it is normal for training to run somewhat slower, but it should not be very slow. What device are you using? I use a V100 or A100 GPU. You can also try adjusting the network configuration to reduce the number of parameters.
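To gauge how much a width reduction would help, here is a rough back-of-envelope sketch of how the parameter count of a standard transformer block scales with the embedding dimension. This assumes a vanilla block (Q/K/V/output projections plus a 4x-expansion MLP, biases ignored); the function name and the 4x ratio are illustrative assumptions, not NestedFormer's actual configuration.

```python
def transformer_block_params(d_model: int, mlp_ratio: int = 4) -> int:
    """Approximate weight count of one transformer encoder block.

    Attention: four d_model x d_model projections (Q, K, V, output).
    MLP: two linear layers, d_model -> mlp_ratio*d_model -> d_model.
    Biases and layer norms are omitted; they are O(d_model) and negligible.
    """
    attn = 4 * d_model * d_model
    mlp = 2 * mlp_ratio * d_model * d_model
    return attn + mlp


if __name__ == "__main__":
    # Halving the embedding dimension cuts per-block parameters ~4x,
    # since the count scales quadratically in d_model.
    full = transformer_block_params(384)   # 12 * 384^2 = 1,769,472
    half = transformer_block_params(192)   # 12 * 192^2 =   442,368
    print(full, half, full // half)
```

Since four encoders each carry their own transformer blocks, even a modest reduction in embedding dimension (or in the number of blocks) compounds across all four branches.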
