
Implement virtually moving head position for evokeds #12714

Open
wmvanvliet opened this issue Jul 14, 2024 · 3 comments
@wmvanvliet (Contributor)

Describe the new feature or enhancement

Through maxfilter, we can virtually move the head position relative to the helmet. This is super useful if you want to average across epochs taken from different recordings. The official maxfilter also supports doing this for evokeds, and I wish MNE-Python's implementation could do it too, for the following reason:

We frequently do "long" experiments where the subject is in the scanner for an hour. So we have regular breaks. During breaks, we close the current raw file and start a new one (this is called a "run" in BIDS lingo). We end up with a bunch of raw files that together constitute the contribution of a single participant. The participant tends to shift their head a little during breaks and signals change a little over time. So, instead of concatenating them and treating them as a single raw file, we prefer to pre-process them separately (i.e. fit ICA on each of them separately). Since we don't want to virtually move the head unless we really need to, we also create different forward/inverse models for each run. Then, finally, when everything is source localized, we start concatenating and averaging things.

But what if we want a sensor-level analysis? We don't want to move the head at the very beginning; we would like to move it at the end. This means operating on Evoked objects instead of Raw.
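
For context, here is roughly how this is done today at the Raw level (a minimal sketch; the file names are placeholders). Head-position alignment is currently only exposed through maxwell_filter's destination argument, which forces us to move the head at the very start of the pipeline:

```python
import mne

# Placeholder file names for the runs of one participant.
fnames = ["run_01_raw.fif", "run_02_raw.fif", "run_03_raw.fif"]
raws = [mne.io.read_raw_fif(fname, preload=True) for fname in fnames]

# Today, aligning runs to a common (virtual) head position has to happen
# here, on Raw, via maxwell_filter's `destination` argument -- i.e. at the
# very beginning of the pipeline rather than at the end.
raws_aligned = [
    mne.preprocessing.maxwell_filter(raw, destination=(0.0, 0.0, 0.04))
    for raw in raws
]
```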

Describe your proposed implementation

If possible, split the existing maxwell_filter function into separate functions for doing the SSS, moving the virtual head position, etc. This way, some of these functions could be made to operate on Raw, Epochs, and Evoked objects. Many things about maxfilter only make sense for Raw objects, but some, notably virtual head position, make a lot of sense for other objects as well.
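
A purely hypothetical sketch of what such a split could look like; neither compute_sss_basis nor apply_head_position exists in MNE-Python today, the names and signatures are illustrative only:

```python
# Purely hypothetical API sketch -- none of these function names exist in
# MNE-Python; they only illustrate how maxwell_filter could be split.
import mne

evoked = mne.read_evokeds("run_01_ave.fif", condition=0)  # placeholder file name

# Step 1 (hypothetical): compute the SSS basis for the current head position.
# basis = mne.preprocessing.compute_sss_basis(evoked.info, int_order=8, ext_order=3)

# Step 2 (hypothetical): re-project the data onto a new virtual head position.
# Only this step would need to know about the destination.
# evoked_moved = mne.preprocessing.apply_head_position(
#     evoked, basis, destination=(0.0, 0.0, 0.04)
# )
```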

Describe possible alternatives

Currently, the alternative is to use the official maxfilter software (requires a license, Linux only).

For MNE-Python, we could follow the lead of that program and allow the maxwell_filter function to take an Evoked object as well. When given an Evoked object, many parameters of maxwell_filter would be ignored (no tSSS, for example), but it would still do everything it can on the Evoked.
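
In that case the call could look something like this (hypothetical; maxwell_filter currently only accepts Raw, so this does not work today, and the file names are placeholders):

```python
import mne
from mne.preprocessing import maxwell_filter

# Placeholder file names for the per-run evokeds of one participant.
fnames = ["run_01_ave.fif", "run_02_ave.fif", "run_03_ave.fif"]
evokeds = [mne.read_evokeds(fname, condition=0) for fname in fnames]

# Hypothetical: maxwell_filter accepting Evoked and applying only the steps
# that make sense for it (no tSSS), here moving every run to a common
# virtual head position before combining at the sensor level.
evokeds_moved = [maxwell_filter(ev, destination=(0.0, 0.0, 0.04)) for ev in evokeds]
grand_average = mne.combine_evoked(evokeds_moved, weights="nave")
```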

Ultimately, the best implementation depends on how hard it is to cut the algorithm up into separate steps.

Additional context

No response

@wmvanvliet wmvanvliet added the ENH label Jul 14, 2024
@wmvanvliet (Contributor, Author)

@larsoner do you think this is feasible?

@larsoner (Member)

Have you considered using the more general API from #9609? It encompasses what you want, I think, and would (in principle) take only a few lines to implement. It wouldn't use the Maxwell filter / SSS basis but rather a minimum-norm approach, but the end results should hopefully be fairly similar. I would start there.

If you want the MF version for consistency or some other reason, rather than trying to break apart maxwell_filter too much (which would be a huge pain), just pull apart what's needed to realign based on the info["dev_head_t"]s -- it wouldn't be very much. This would also probably be only a dozen or so lines; the code you need is already pretty well factored out, I think (creation of a forward and inverse SSS model).
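
Roughly, the realignment itself boils down to: decompose the data in the SSS (multipole) basis computed for the original head position, then re-expand it with the basis computed for the destination. A rough NumPy sketch of just that principle (the basis matrices below are random stand-ins for what the internal SSS basis computation would produce for each info["dev_head_t"]):

```python
# Sketch of the realignment principle only. S_orig and S_dest are random
# stand-ins; in practice they would come from the SSS/multipole basis
# computation for the original and destination head positions.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_moments, n_times = 306, 80, 200

S_orig = rng.standard_normal((n_channels, n_moments))  # basis at original head position
S_dest = rng.standard_normal((n_channels, n_moments))  # basis at destination head position
data = rng.standard_normal((n_channels, n_times))      # stand-in for evoked.data

# Least-squares fit of the multipole moments, then re-projection at the
# destination head position.
moments = np.linalg.pinv(S_orig) @ data
data_moved = S_dest @ moments
```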

@wmvanvliet (Contributor, Author)

I will give it a try, thanks for the pointers.
