Question about signal extraction

Submitted by phil1022 on

 

Dear Chao-Gan,

I was using DPARSFA to extract time courses from the AAL template. In
the preprocessing, I used Head Motion Scrubbing Regressors and chose the
default settings: FD (Power) > 0.5, regressing out one TR before and two TRs
after.

After the time course extraction, I encountered a problem: in some
participants, the extracted signals corresponding to some ROIs are NaN.
The affected ROIs were not consistent; different participants showed
different patterns. I tested this after eliminating all the cerebellar ROIs,
assuming that the cerebellum may not be fully covered during the scan, but
the issue persisted in quite a lot of subjects.

Is that normal? How can I fix it?

Thank you very much!

Sincerely,
Xiaosong

 

Dear Chao-Gan,

    That is the most confusing part. Indeed, I noticed that in some subjects, due to their severe head motion, the scrubbing resulted in meaningless data. In these subjects, all the extracted time points in all the ROIs are zero, so for AAL I have 116 flat lines at zero. This is understandable, since the scrubbing is applied to all of the voxels. There are also a few other subjects with a lot of time points showing FD_Power > 0.5; their extracted data were not exactly zero, but close. So I used MATLAB to recreate the scrubbing time series and counted how many time points had been regressed out. Since I have a relatively short scan (120 time points), I took 12 (10%) as the threshold and excluded all subjects with more than 12 time points regressed out. However, this method still could not fix my NaN problem.
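The counting step described above (flag every volume with FD above threshold, plus one volume before and two after, then count the flags) can be sketched in a few lines. This is an illustrative pure-Python reconstruction, not DPARSFA's actual code; the function name `scrubbed_mask` and the toy FD values are my own.

```python
def scrubbed_mask(fd, threshold=0.5, n_before=1, n_after=2):
    """Flag volumes to scrub: FD above threshold, plus neighbouring volumes."""
    n = len(fd)
    bad = [False] * n
    for i, v in enumerate(fd):
        if v > threshold:
            # Also flag n_before volumes before and n_after volumes after.
            for j in range(max(0, i - n_before), min(n, i + n_after + 1)):
                bad[j] = True
    return bad

# A toy FD trace for a 6-volume scan: only volume 2 exceeds 0.5.
fd = [0.1, 0.2, 0.7, 0.1, 0.1, 0.1]
mask = scrubbed_mask(fd)
print(sum(mask))  # volumes 1-4 flagged -> 4
```

Comparing `sum(mask)` against 10% of the scan length reproduces the exclusion rule described above.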

    The NaN can be found in any ROI, and it varied across subjects. I assumed it was related to signal loss, because most of the NaNs were found in cerebellar regions that may not be fully covered by our scan. However, I do believe we covered most of the cerebral cortex. We may have a little signal loss in a few areas, such as the top of the brain (in a few subjects), or mainly the orbitofrontal/inferior temporal regions, where it is caused by air in the sinuses. The weird thing is that, taking one subject as an example, the ROI showing NaN is the first one in the AAL template, the left precentral gyrus. It is a big, lateral ROI, so it does not make sense at all. I checked the preprocessed image, which looked normal (see attached).

    I think your script somehow makes a judgment before extracting the signals. Based on the preprocessed data, every voxel in the space has some value, so if the script simply averaged the values regardless, there would be no way for it to give a NaN. I am not saying making a judgment is a bad thing, but I still need to figure out how to extract the signals from ROIs where there are obviously valid values.

    The other thing I am concerned about is a potential mismatch between the template and the image. I am sure you know more about that than I do. Just for your information, I did all the preprocessing using DPARSFA, and the check for normalization looked normal. I noticed that if I use the extract-ROI-time-courses function in DPARSFA, the script resamples the ROI mask before extracting the data. However, if I directly use the ROI Signal Extractor in the Utilities of DPABI, it stops and reports an error if the mask does not spatially match the image exactly (for example, as you may have noticed, the preprocessed data following the default pipeline is 91*109*91, while the AAL template provided in the toolbox is 181*217*181).
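For reference, resampling a label template such as AAL onto the functional grid must use nearest-neighbour interpolation, so that region labels stay integers. Below is a minimal nearest-neighbour sketch on nested lists; the function name `nn_resample` and the toy volume are hypothetical and this is not the resampling code DPARSFA actually uses.

```python
def nn_resample(vol, out_shape):
    """Nearest-neighbour resample of a 3D label volume (nested lists).

    Labels must not be interpolated linearly, or region boundaries
    would turn into meaningless fractional labels."""
    in_shape = (len(vol), len(vol[0]), len(vol[0][0]))
    # Precompute, per axis, which input index each output index maps to.
    idx = [
        [min(in_shape[d] - 1, int(o * in_shape[d] / out_shape[d]))
         for o in range(out_shape[d])]
        for d in range(3)
    ]
    return [[[vol[idx[0][x]][idx[1][y]][idx[2][z]]
              for z in range(out_shape[2])]
             for y in range(out_shape[1])]
            for x in range(out_shape[0])]

# Downsample a 4x4x4 two-label volume to 2x2x2: labels survive intact.
vol = [[[1 if x < 2 else 2 for _ in range(4)] for _ in range(4)]
       for x in range(4)]
small = nn_resample(vol, (2, 2, 2))
print(small[0][0][0], small[1][0][0])  # 1 2
```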

    I am really looking forward to your reply, and if needed I can provide the preprocessed data for you to test on further. I got 48 out of 181 subjects with this issue, which is indeed bothering me a lot!

    Thank you very much for your help!

    Sincerely,

    Xiaosong

 

Hi Xiaosong,
1. Make sure no voxel contains NaN. If a voxel within an ROI contains NaN, then the ROI time course will be NaN after averaging.

2. That's right, DPARSFA resamples the masks automatically. You will get an error in Utilities if the masks mismatch.
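Point 1 above is just NaN arithmetic propagating through the mean: a single NaN voxel poisons the whole ROI average. A minimal sketch (illustrative only; the function name `roi_mean_timecourse` is my own, not DPARSFA's):

```python
import math

def roi_mean_timecourse(voxel_timeseries):
    """Average a list of voxel time series into one ROI time course."""
    n_vox = len(voxel_timeseries)
    n_t = len(voxel_timeseries[0])
    return [sum(ts[t] for ts in voxel_timeseries) / n_vox
            for t in range(n_t)]

clean = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]
print(roi_mean_timecourse(clean))  # [2.0, 3.0, 4.0]

# One NaN value poisons every averaged time point it touches.
dirty = [[1.0, 2.0, 3.0], [float("nan"), 4.0, 5.0]]
print(math.isnan(roi_mean_timecourse(dirty)[0]))  # True
```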

Best,

Chao-Gan

Dear Chao-Gan,

    It is a pleasure to discuss with you, since you can always lead me to the point.

    Based on your suggestion, I checked the 4D image data to see whether any voxel contains NaN. Indeed, I found several NaN voxels in the normalized and filtered data. Then I went back stage by stage to see when the NaNs first appeared. In the end, I found that the NaN voxels are created during the normalization process (I chose "Normalize by using EPI templates"). I don't know why. In my previous work, when I used SPM8 directly for normalization, I never encountered such an issue. I am not sure if it is a bug in DPARSFA or something else. I should emphasize that the head motion of the subject I checked was perfectly controlled, and not a single volume was scrubbed.
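The stage-by-stage check above amounts to scanning each intermediate 4D volume for voxels whose time series contains any NaN. A self-contained sketch on nested lists (in practice one would load each NIfTI stage with a MATLAB or Python reader first; `find_nan_voxels` and the toy data are my own illustration):

```python
import math

def find_nan_voxels(data_4d):
    """List (x, y, z) coordinates of voxels whose time series has any NaN.

    data_4d is a nested list indexed as data_4d[x][y][z][t]."""
    hits = []
    for x, plane in enumerate(data_4d):
        for y, row in enumerate(plane):
            for z, ts in enumerate(row):
                if any(math.isnan(v) for v in ts):
                    hits.append((x, y, z))
    return hits

# A 1x1x2 toy volume with 3 time points; the second voxel is corrupted.
data = [[[[0.1, 0.2, 0.3], [0.4, float("nan"), 0.6]]]]
print(find_nan_voxels(data))  # [(0, 0, 1)]
```

Running this on the output of each preprocessing stage shows exactly where the NaNs first appear, which is how the normalization step was isolated above.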

    I am currently extracting ROI time courses using the "warp masks into individual space" option, so that I can bypass the normalization step. However, I have another concern: the temporal filtering is only done after normalization. Does that mean the signal extracted in individual space is not filtered?

    Thank you very much for your help.

    With kind regards.

    Sincerely,

    Xiaosong

Hi Xiaosong,

DPARSF is based on SPM, so you could use SPM to reprocess the data and check the NaN issue.

If you don't have T1 images, warping back will not work based on EPI images alone.

If you do have them, then the images can be filtered in the original space; just don't check Normalize.

Best,

Chao-Gan

Dear Chao-Gan,
 
    Based on your suggestion, I used SPM to normalize the data. As you might expect, the issue remained.
 
    Exploring this issue further, I found that if I normalize the data before regressing out head motion/detrending, I do not encounter this issue; however, if I normalize the data after covariate regression, the issue emerges.
 
    I also tried normalization by DARTEL. This method gave me nice data without any NaN. However, in my study I am inclined not to use smoothing, and, as you may remember, normalization by DARTEL without smoothing creates an issue that looks like signal loss in wave-like shapes.
 
   In the end, I took your advice: I re-filtered my data in the original space and extracted the signals there with the warped template. By bypassing normalization, I finally got rid of this issue. However, it will still be a pain if I need normalized images in the future. I can only hope that I won't.
 
      With kind regards.
 
      Sincerely,
      Xiaosong