ImageMath TimeSeriesAssemble #1689

Closed
butellyn opened this issue Feb 29, 2024 · 9 comments

@butellyn

I am trying to reassemble a time series using ImageMath's TimeSeriesAssemble that I originally split here using ImageMath's TimeSeriesDisassemble and then transformed using antsApplyTransforms, as I asked about here. I tried ImageMath 4 ${VolumefMRI_fs} TimeSeriesAssemble ${funcoutdir}/TR* but got the following error:

"terminate called after throwing an instance of 'itk::ExceptionObject'
what(): /software/sources/builds/ants/2.3.4/ANTsX-ANTs-6829396/build/staging/include/ITK-5.2/itkImageBase.hxx:177:
itk::ERROR: itk::ERROR: Image(0x29aead0): A spacing of 0 is not allowed: Spacing is [0.8, 0.8, 0.8, 0]
Aborted (core dumped)"

This call comes out as:

ImageMath 4 /projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/sub-MWMH212_ses-2_task-rest_space-fsnative_desc-preproc_bold.nii.gz \
TimeSeriesAssemble \
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/TR1000.nii.gz \
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/TR1001.nii.gz \
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/TR1002.nii.gz \
...
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/TR2109.nii.gz

I get the same output when I run it with the -v 1 flag. How can I address this issue?

@cookpa
Member

cookpa commented Feb 29, 2024

TimeSeriesAssemble : Outputs a 4D time-series image from a list of 3D volumes. Usage : TimeSeriesAssemble time_spacing time_origin *images.nii.gz

It requires two initial arguments for the time dimension: the spacing (TR) and the origin (usually 0).
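
For example, a sketch of a corrected call using the variables from the original post (the spacing value is a placeholder; substitute the actual TR in seconds):

# The time spacing (TR, in seconds) and time origin come before the file list;
# <TR_in_seconds> is a placeholder, and 0 is a typical origin.
ImageMath 4 ${VolumefMRI_fs} TimeSeriesAssemble <TR_in_seconds> 0 ${funcoutdir}/TR*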

@butellyn
Author

Oh! I see. What does "origin" mean here?

@cookpa
Member

cookpa commented Feb 29, 2024

It should go into the "toffset" field of the NIFTI header, but it appears not to work. I've never used it, and I don't know whether it's supported in ITK.

@cookpa
Member

cookpa commented Mar 1, 2024

It looks like the origin is set correctly in ImageMath but is not written to disk. I would go ahead and use 0 here.
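
One way to check what actually ends up in the header after assembly (a sketch assuming nibabel is available; out.nii.gz stands in for the assembled image):

# Print the toffset field of the NIfTI header of the assembled image.
python3 -c "import nibabel as nib; print(nib.load('out.nii.gz').header['toffset'])"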

@butellyn
Author

butellyn commented Mar 4, 2024

Thanks! I tried TimeSeriesAssemble, but I keep ending up with a core dump even with up to 40G of memory, so it looks like I am running into a similar problem to when I tried to use antsApplyTransforms directly. I'm trying to do the assembling in smaller chunks, in the hope that that will fix the problem. The first chunk of 100 TRs worked with 40G (I haven't tried a different chunk size). Here's the call I used for all of them at once:

ImageMath 4 \
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/sub-MWMH212_ses-2_task-rest_space-fsnative_desc-postproc_bold.nii.gz \
TimeSeriesAssemble 0.555 0 \
/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func/TR*

And the exact error I got: "Bus error (core dumped)".
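
A minimal sketch of the chunked approach described above, assuming the TR1000–TR2109 naming from the earlier call, the 0.555 spacing, and chunks of 100 volumes (the chunk size and output names are illustrative):

funcdir=/projects/b1108/studies/mwmh/data/processed/neuroimaging/surf/sub-MWMH212/ses-2/func
# Assemble 100 volumes at a time into chunk_<start>.nii.gz files.
for start in $(seq 1000 100 2100); do
  end=$(( start + 99 )); (( end > 2109 )) && end=2109
  files=$(seq -f "${funcdir}/TR%g.nii.gz" ${start} ${end})
  ImageMath 4 ${funcdir}/chunk_${start}.nii.gz TimeSeriesAssemble 0.555 0 ${files}
done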

@cookpa
Member

cookpa commented Mar 4, 2024

I think it should require less memory than antsApplyTransforms because it doesn't need to allocate both the input and the output image, but even the output image itself is very large: 320 × 320 × 320 × 1100 × 4 bytes, or about 135 GB.

Did you intend to resample the data to the template at full resolution? If not, you can downsample it with ResampleImageBySpacing and probably get away with calling antsApplyTransforms directly.
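
A rough sketch of that route (the 2 mm target spacing, file names, and transform name are assumptions, not values from this thread):

# Downsample the FreeSurfer T1w to 2 mm isotropic (trailing 0 = no pre-smoothing).
ResampleImageBySpacing 3 T1w_fs.nii.gz T1w_fs_2mm.nii.gz 2 2 2 0
# Apply the transform to the 4D BOLD, using the downsampled image as the reference
# grid (-e 3 treats the input as a time series).
antsApplyTransforms -d 3 -e 3 \
  -i sub-MWMH212_ses-2_task-rest_space-T1w_desc-preproc_bold.nii.gz \
  -r T1w_fs_2mm.nii.gz \
  -t fmriprep_to_fs_0GenericAffine.mat \
  -o sub-MWMH212_ses-2_task-rest_space-fsnative_desc-preproc_bold_2mm.nii.gz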

@butellyn
Author

butellyn commented Mar 4, 2024

Oh wow, I didn't realize how large the output image was. That definitely has me questioning my broader framework. I definitely can't store many images of that size...

Basically, I am putting the preprocessed functional data from fmriprep into T1w space (the subject's anat space) using fmriprep's --output-spaces flag, and then applying the transform from the fmriprep T1w space to the FreeSurfer T1w space, because I want to go through the subject's native surface to get to fsLR space (a group surface plus MNI volumetric subcortical space). By not going through MNI, as I have seen a few people do, I think I will better retain the layout of networks on the surface, because I will not lose any data if, for instance, someone has more gyri than the MNI template.

So if I can resample freesurfer's T1w image to a lower resolution and still project the functional data to the surface in this downsampled space, then I don't need the data at full resolution. I'll try downsampling and see if that fixes things.

The pipeline for getting the data to the surface, as it exists right now: https://github.com/NU-ACNLab/mwmh/blob/main/scripts/process/create_ciftis.sh

@cookpa
Member

cookpa commented Mar 4, 2024

Sounds sensible. Does fmriprep's --output-spaces fsLR not do this? I don't know the details of its implementation, just curious if there's a reason you aren't using that directly.

@butellyn
Author

butellyn commented Mar 4, 2024

I looked into it briefly, but it would have required me to rework my postprocessing pipeline to work on the surface. I wasn't even sure if that was possible, so I decided to take this route, maybe foolishly thinking that it would be easier 😰

butellyn closed this as completed Mar 5, 2024