Working in native space
Submitted by David
Hi,
As I see it, DPABI has the option to calculate measures like ALFF in native space and then to normalize and smooth the derivatives. Which article or method is this based on?
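For context, ALFF is the mean amplitude of a voxel's time series within the low-frequency band. A minimal MATLAB sketch of that computation, assuming a hypothetical TR of 2 s and the conventional 0.01-0.08 Hz band (an illustration only, not DPABI's actual code):

ts = randn(240, 1);              % stand-in time series; replace with a real voxel time course
TR = 2;                          % repetition time in seconds (assumed)
x  = detrend(ts);                % remove the linear trend first, as is conventional
n  = numel(x);
f  = (0:n-1)' / (n * TR);        % frequency grid of the FFT
amp = 2 * abs(fft(x)) / n;       % amplitude spectrum
band = f >= 0.01 & f <= 0.08;    % conventional low-frequency band
ALFF = mean(amp(band));          % mean amplitude within the band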
In Yan, C.G., Cheung, B., Kelly, C., Colcombe, S., Craddock, R.C., Di Martino, A., Li, Q., Zuo, X.N., Castellanos, F.X., Milham, M.P., 2013. A comprehensive assessment of regional variation in the impact of head micromovements on functional connectomics. Neuroimage 76, 183-201.
We did it that way.
Thanks for the reply,
I have a follow-up question regarding the Cambridge dataset you used in this paper. I tried preprocessing the Cambridge data with DPARSF 4.3 and SPM12, and I get a lot of failed DARTEL normalizations (>30 subjects). In the paper you excluded only 4 subjects due to bad normalization, so I don't understand why I get so many when it is the same dataset and the same DPARSF.
Moreover, in almost all of the 1000 FCP datasets, either the segmentation fails with this error:
Failed 'Segment'
Error using svd
Input to SVD must not contain NaN or Inf.
or the DARTEL normalization is totally bad in all subjects, so I can't use the data. Since you also processed the FCP data with DPARSF in one of your papers, I was wondering whether you had similar problems or have any ideas what the problem could be?
Any help appreciated!
Hi,
I haven't had such problems.
1. Did you try BET as well for the Cambridge dataset?
2. For the data with that error, try to examine whether there are NaN values in the T1 image (see the sketch below).
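A minimal sketch of that check, assuming SPM is on the MATLAB path ('T1.nii' is a placeholder filename):

V = spm_vol('T1.nii');                  % map the NIfTI header
Y = spm_read_vols(V);                   % load the voxel data into a 3-D array
nBad = nnz(isnan(Y) | isinf(Y));        % count NaN/Inf voxels
fprintf('%d NaN/Inf voxels in %s\n', nBad, V.fname);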
Thank you! It seems using the already skull-stripped files generated those problems. Using the anonymized non-skull-stripped files and running BET afterwards produces better results!
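For reference, a minimal sketch of that workflow, assuming FSL is installed and on the system path (filenames are placeholders):

% Skull-strip the anonymized, non-skull-stripped T1 with FSL's bet, then
% feed the result to DPARSF; -f 0.5 is bet's default fractional intensity threshold.
system('bet anon_T1.nii.gz anon_T1_brain.nii.gz -f 0.5');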