ENH: Tag memory based on data shape, annotate T2SMap #2898
Conversation
Force-pushed: f03e168 → a839d13
Force-pushed: a839d13 → 0bc23f5
@mgxd Do you have time for a review?
This looks good to me. Have you profiled how this affects other parts of the workflow that depend on mem_gb?
From my understanding, this should generally increase the allocated memory; I wonder if we're now allocating too much in certain steps.
I haven't profiled. I think our general problem has been underestimating actual memory usage, but we don't have a documented process for assessing the quality of the estimation. I feel like I've seen some graphs from @soichih or @HippocampusGirl that show overall memory usage, but I'm not sure what tools they use. (If either of you have suggestions, please let us know!) Oscar pointed at nipreps/mriqc#984 for recent changes to profiling in MRIQC that we should adopt.
Changes proposed in this pull request
T2SMap was tagged with the default memory consumption (~100 MB), when it should instead be based on the size of the input files. This implements #1100 to obtain a direct estimate of that size, then passes it along with an additional factor based on the number of echoes and a safety factor of 2.
Fixes #1100.
Fixes #2728.
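The shape-based estimate described above can be sketched roughly as follows. This is a minimal illustration, not the actual fMRIPrep implementation: the function name `estimate_bold_mem_gb`, the example shape, and the float32 dtype are assumptions; only the "scale by number of echoes and a safety factor of 2" logic comes from the description above.

```python
import numpy as np


def estimate_bold_mem_gb(shape, dtype=np.float32, n_echoes=1, safety_factor=2.0):
    """Hypothetical helper: estimate RAM (in GB) to hold a BOLD series.

    bytes = (product of array dimensions) * itemsize,
    scaled by the number of echoes and a safety factor.
    """
    nvox = int(np.prod(shape))
    bytes_needed = nvox * np.dtype(dtype).itemsize * n_echoes * safety_factor
    return bytes_needed / (1024 ** 3)


# Example: a 96x96x60 volume with 500 timepoints and 3 echoes.
print(round(estimate_bold_mem_gb((96, 96, 60, 500), n_echoes=3), 2))  # → 6.18
```

In practice the shape would come from the input image header (e.g. via nibabel) rather than being hard-coded, so the estimate reflects each dataset without loading the data into memory.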
Documentation that should be reviewed