Discuss project #1

Open
MarkNelson86 opened this issue May 19, 2020 · 1 comment

Comments

@MarkNelson86

Hi Isabelle,
Your project sounds interesting. Can you tell me more about it? So you present an auditory signal, record the generated FFR, and then compare it with structural scans for source localization?
(1) Where does the classification come in?
(2) What contrast are your structural scans?
(3) What kind of auditory stimuli are you using to generate the FFR?

@arsisabelle
Collaborator

arsisabelle commented May 24, 2020

Hi MarkNelson86,
Thank you for your questions! This exercise is actually associated with my doctoral thesis work, which is much broader and more complex. In short, for my BHS project we presented two different auditory stimuli to musicians and non-musicians, recorded their FFRs with EEG, and then compared the FFR signal to the stimuli used to generate it and to variables associated with the subjects.

To answer each of your points:

  1. Within the scope of my BHS project, the classification comes in when we want to confirm a hypothesis based on only a reduced number of FFR trials. The FFR is a very noisy signal and typically requires averaging thousands (even tens of thousands) of FFR trials to obtain a usable signal. The amplitudes are so small that patterns in the data remain inaccessible with traditional analysis techniques. This makes it nearly impossible for us to examine the FFR in more natural listening conditions, and it constrains the conditions we can use for comparison. However, ML classifiers recently developed for the speech FFR (originally trained on datasets of thousands of trials) have been able to identify patterns in the FFR that are associated with different variables, even with a very low number of trials. Once transferred, the classifiers can distinguish characteristics of the FFR from just a few trials (e.g., whether the subject is a musician or not, whether the FFR comes from a speech or a music sound, etc.). This is particularly useful for group analyses in our field, as we know that musicians have an enhanced FFR (Strait & Kraus, 2014). Thus, the classifiers can tell whether the FFR relates to another (here binary) variable, even though the signal would be too noisy for traditional analysis. ML here is more a means than an end ;-) (see the rough classifier sketch after this list).

  2. In my doctoral thesis I will use T1-weighted scans, but within the scope of the BHS course I will stick to EEG signal classification. (We can do source localization with EEG, but as you know, the spatial resolution is quite limited. So in my doctoral project we actually do the source localization with the MEG-FFR and the participants' structural MRI (T1).)

  3. Any complex periodic sound (e.g., speech or music) can generate an FFR. In this dataset, we use a 100 ms "dah" and piano tones, both at 100 Hz.
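
To make point 1 a bit more concrete, here is a minimal sketch of the kind of few-trial classification I have in mind. Everything in it is a placeholder rather than the project's actual pipeline: the arrays `ffr_trials` and `labels` are hypothetical (single-trial FFR epochs and musician/non-musician labels), and the feature choice (spectral amplitude around the 100 Hz fundamental and its harmonics) and the linear SVM are assumptions of mine for illustration only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical inputs (placeholders, not the actual project data):
#   ffr_trials: (n_trials, n_samples) single-trial FFR epochs
#   labels:     (n_trials,) 1 = musician, 0 = non-musician
fs = 16000                                       # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
ffr_trials = rng.standard_normal((200, 1600))    # 200 fake 100 ms epochs
labels = rng.integers(0, 2, size=200)            # fake binary labels

def harmonic_features(trials, fs, f0=100.0, n_harmonics=5, bw=10.0):
    """Mean spectral amplitude around F0 and its harmonics, per trial."""
    freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(trials, axis=1))
    feats = []
    for h in range(1, n_harmonics + 1):
        band = (freqs > h * f0 - bw) & (freqs < h * f0 + bw)
        feats.append(spectra[:, band].mean(axis=1))
    return np.column_stack(feats)

X = harmonic_features(ffr_trials, fs)

# Simple linear SVM with cross-validation; with real data the idea is to
# train on many trials and then classify small held-out subsets of trials.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, labels, cv=5)
print("Cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```

With the random placeholder data above, the accuracy should hover around chance; the point is only to show the shape of the approach (per-trial features around the 100 Hz fundamental, then a standard classifier), not a result.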
