ENH: Support OPM coreg #11276
@larsoner I'm hitting this issue as well. Wondering if a first easy step would be to add the ability to visualize MEG sensor locations in the coreg GUI. Currently, one has to use ...
With some Kernel OPM data I have I can do:
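(The original code block is not preserved in this thread; below is a minimal sketch of what launching the coreg GUI with an OPM recording might look like, with hypothetical file paths and subject names.)

```python
import mne

# Hypothetical paths; the actual Kernel OPM file and FreeSurfer subject
# from the thread are not shown here.
raw_fname = "kernel_opm_raw.fif"
mne.gui.coregistration(
    inst=raw_fname,                       # recording whose sensors should be shown
    subject="sub-01",                     # FreeSurfer subject name (hypothetical)
    subjects_dir="/path/to/subjects_dir",
)
```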
and if I click "Show MEG Helmet" on the left I get the convex hull of the sensor positions (which is the "helmet" according to MNE-Python when no proper MEG helmet is found). Can you start with this? From there we can tweak appearances, etc.
Let's continue in #11405
That's exactly what I want! But I can't seem to reproduce it. It just loads the generic helmet for me ... how can I get the convex hull helmet to show up?
By "generic" do you mean VectorView? This suggests your This is what is done in the existing OPM tutorial using their own coil def, which produces the convex hull helmet seen here: If it's already producing the convex hull of the sensors, this is the best we do currently. At some point we might want to take the convex hull surface and try to make it smoother somehow... that could be done with the spherical spline interpolator probably. But we can think about that later, first let's make sure you can get the convex hull "helmet" to show up... |
Indeed, I fixed the coil_type and that did it. Thank you! One piece of feedback I have is that it might be helpful to see the actual sensor locations in addition to / instead of the convex hull itself, because many users do not have whole-head systems and are using only subsets of sensor locations. One could check that the locations appear to match those from a photograph taken during the experiment.
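(For illustration, one existing way to show the individual sensor locations together with the convex-hull helmet is `mne.viz.plot_alignment`; the file names below are hypothetical.)

```python
import mne

raw = mne.io.read_raw_fif("opm_raw.fif")      # hypothetical OPM recording
trans = mne.read_trans("sub-01-trans.fif")    # hypothetical coreg result

# Show both the individual sensors and the (convex hull) "helmet" so sparse,
# non-whole-head OPM layouts can be checked against photos from the experiment.
mne.viz.plot_alignment(
    raw.info,
    trans=trans,
    subject="sub-01",
    subjects_dir="/path/to/subjects_dir",
    meg=("helmet", "sensors"),
    dig=True,
    surfaces=("head",),
)
```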
Want to try adding a ...
From a dev-meeting discussion with @jasmainak, one idea would be to add an API to visualize subject-specific (e.g., 3D-printed) OPM helmets, as they also work with them at MGH. I haven't thought about an API for this, but I think the idea would be to support passing a subject-specific helmet mesh. To get started, @georgeoneill @neurofractal do you have the subject-specific mesh for the existing UCL auditory OPM dataset?
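(There is no such API yet; purely as an illustration of the idea, a subject-specific helmet mesh could be overlaid on the sensor positions with pyvista. The STL file name and the assumption that the mesh already lives in the device coordinate frame are hypothetical.)

```python
import mne
import numpy as np
import pyvista as pv

raw = mne.io.read_raw_fif("opm_raw.fif")  # hypothetical OPM recording
helmet = pv.read("printed_helmet.stl")    # hypothetical 3D-printed helmet mesh

# Sensor locations from the measurement info (first 3 entries of 'loc' are the
# position of each channel in the device coordinate frame).
picks = mne.pick_types(raw.info, meg=True)
pos = np.array([raw.info["chs"][p]["loc"][:3] for p in picks])

plotter = pv.Plotter()
plotter.add_mesh(helmet, opacity=0.3)     # assumes the mesh is in the same frame as the sensors
plotter.add_points(pos, color="red", point_size=10, render_points_as_spheres=True)
plotter.show()
```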
Hey good to hear from you @larsoner - do you mean the participant's headshape or a mesh of the actual 3D-printed helmet? I can generate the former but not the latter.
I was hoping for the 3D-printed helmet mesh (though the participant's headshape would be a nice addition as well). Do you usually have those available? If so, and you have another already-publicly-accessible dataset ready to go, we could create a new MNE dataset.
Friendly ping @neurofractal as I'm starting to think about this issue again... do you have a mesh of the 3D-printed OPM helmet for an open dataset that we could use (especially the existing UCL auditory OPM dataset)?
Hey @larsoner good to hear from you. We don't have this information - the manufacturer just sends the positions of the sensors in relation to the MRI mesh. I could generate headshape information for you?
Any chance you have an anonymized (or un-anonymized, with permission to share the original) MRI for the participant from that dataset? I could run FreeSurfer's recon-all etc. (which would give the headshape) and update the dataset. Then we could source localize the auditory response, which would be nice. I'd also need the transformation from the sensor positions to the MRI space, though, in whatever format you all use (which sounds like it is at most a translation?). If this is too much work to track down, that's alright!
I'll send you an email with the link - no worries :) The MRI should be in the same space as the sensors, so no need for any translations.
Got it, thanks! 👍
Continuing from #11257 (comment) with @georgeoneill
Yes we'll have to think about this. Let's just consider the rigid-helmet case for now maybe to make our lives easier :)
One thing to know is that, in MNE-Python, all sensor locations (for EEG) are supposed to live in the "head" coordinate frame, defined by the line between LPA and RPA (which become -X and +X) and the line perpendicular to this one through the nasion (+Y) in a right-handed coordinate system (making +Z up).
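(To make the head frame definition concrete, here is a small sketch using made-up fiducial coordinates; after applying the transform, LPA and RPA land on the x-axis and the nasion on +y.)

```python
import numpy as np
from mne.transforms import get_ras_to_neuromag_trans, apply_trans

# Made-up fiducial positions in some arbitrary (e.g., scanner RAS) frame, in meters.
nasion = np.array([0.00, 0.09, -0.03])
lpa = np.array([-0.08, 0.00, -0.04])
rpa = np.array([0.08, 0.00, -0.04])

# Build the transform into MNE's "head" frame: LPA -> (-x, 0, 0), RPA -> (+x, 0, 0),
# nasion -> (0, +y, 0), with +z pointing up (right-handed).
head_t = get_ras_to_neuromag_trans(nasion, lpa, rpa)
print(apply_trans(head_t, np.stack([lpa, nasion, rpa])))
```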
`mne coreg` is really meant to coregister points in this head coordinate frame with the MRI coordinate frame defined during MRI acquisition. For MEG data, each system can additionally have its own "MEG device" coordinate frame (usually near the center of the sensor "sphere" of the helmet). The `info['dev_head_t']` is usually set during acquisition to say how to translate from MEG to head, and then `mne coreg` gets you from MRI to head, so you can go from any frame to any other one.

One way I think we could get this all to work in this framework is:

1. Show the `N` sensor positions in a point cloud visualization in a simple GUI (maybe the iEEG GUI could be repurposed, but if not, I don't think it's hard using pyvista)
2. Update the `info` of the raw to contain the extra head shape points in `info['dig']`, including some dummy/wrong LPA/Nasion/RPA (this will just make things easier in MNE-Python), i.e., present but in an anatomically incorrect "head" coordinate frame
3. Use `mne coreg` to coregister the MEG sensors to the MRI, i.e., obtain the MEG<->MRI transform
4. Extend `mne coreg` to use the "MRI fiducials" -- which are easily and accurately marked manually on the MRI, or estimated from the MNI<->MRI transform given by FreeSurfer -- to overwrite the existing dummy fiducials in the head coordinate frame, which will then overwrite/update the `info['dev_head_t']` and also adjust all existing dig points to be in an anatomically correct head coordinate frame

At this point we'd have all the transforms we need for things to be defined according to MNE-Python's conventions.
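(A sketch of the resulting transform chain, assuming the steps above are done and using hypothetical file names: `dev_head_t` takes device to head, and the `-trans.fif` from `mne coreg` relates head and MRI.)

```python
import mne
from mne.io.constants import FIFF
from mne.transforms import combine_transforms, invert_transform, apply_trans

raw = mne.io.read_raw_fif("opm_raw.fif")    # hypothetical recording
trans = mne.read_trans("sub-01-trans.fif")  # hypothetical mne coreg output

# Make sure we have head -> MRI (the file may store the inverse direction).
if trans["from"] != FIFF.FIFFV_COORD_HEAD:
    trans = invert_transform(trans)

# Chain device -> head -> MRI so sensor positions can be expressed in MRI coordinates.
dev_mri_t = combine_transforms(raw.info["dev_head_t"], trans, "meg", "mri")
sensor_pos_dev = [ch["loc"][:3] for ch in raw.info["chs"] if ch["kind"] == FIFF.FIFFV_MEG_CH]
sensor_pos_mri = apply_trans(dev_mri_t, sensor_pos_dev)
```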
It's a few hoops to jump through, but if we do this then all viz functions should behave properly, and things like BIDS anonymization and uploading should "just work", etc.
One way to move forward with this would actually be for me to try this with our existing OPM dataset, because IIRC its head coordinate frame is not defined correctly. So I could try to make these adjustments to the dataset, and re-upload it.