How to use the first eigenvariate to calculate functional connectivity in resting-state fMRI.

When we used seed-based ROIs to calculate functional connectivity, the methods section read: “A seed reference time course was obtained within each ROI. Correlation analyses were conducted between the seed reference and the whole brain in a voxel-wise manner for each ROI.” Recently, the reviewers of an article in which we used this method advised us to use the first eigenvariate instead of a seed reference time course. Our question is how to use the first eigenvariate to calculate functional connectivity in resting-state fMRI.
The reviewers' opinion: For selection of the reference time course within a ROI, the paper just writes "A seed time course was obtained within each ROI." This is not sufficient. It would be very interesting to understand how the authors computed the reference time course from the data. Is it just the average? I do not suggest the average, as the regions are quite large and the smoothness is rather small. Instead, I would suggest choosing the first eigenvariate, as suggested e.g. in SPM.
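For reference, the first eigenvariate can be computed from the ROI's time × voxel data matrix with an SVD, along the lines of what SPM's spm_regions does. This is a minimal numpy sketch, not SPM's actual code; the sign convention and scaling here are assumptions:

```python
import numpy as np

def first_eigenvariate(Y):
    """First eigenvariate of a time x voxel data matrix Y.

    A sketch following the SVD approach used by SPM's spm_regions
    (not the actual SPM code): the first left singular vector gives
    the dominant temporal mode of the ROI.
    """
    Y = Y - Y.mean(axis=0)                    # demean each voxel's time course
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    u = U[:, 0]                               # first temporal mode
    v = Vt[0, :]                              # corresponding spatial weights
    # Flip the sign so the eigenvariate correlates positively
    # with the bulk of the ROI's voxels (SVD signs are arbitrary).
    sign = np.sign(v.sum()) or 1.0
    return sign * u * s[0] / np.sqrt(Y.shape[1])
```

Unlike the plain average, this weights voxels by how strongly they express the dominant signal, so a few noisy or poorly registered voxels in a large ROI pull the reference time course less.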

I suggest you argue back. List the references that used the average.

Dear Prof. Yan, thanks for your quick reply. I still have some questions when replying to the reviewers' comments:

Question 1:  Do you mean the seed reference time course we obtained from DPABI is the average? Is there any reference comparing the average time course with the first eigenvariate, or describing the first-eigenvariate method?

Question 2:  The reviewer pointed to an article, “Controlling the Family-wise Error Rate in Functional Neuroimaging: A Comparative Review.” Statistical Methods in Medical Research 12, no. 5 (October 2003): 419–46, which suggested “The recommended rule of thumb is three voxels FWHM smoothness.”  Thus, with a 3.4 mm original resolution, the minimum smoothness should be 10.2 mm. But we used y_grf_threshold in DPABI to estimate the smoothness (similar to FSL's easythresh), and the estimated smoothness is 7.9 mm. Does the DPABI statistical software we used have any references supporting the rationality of this analysis?
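The arithmetic behind that rule of thumb is just a multiplication (a quick check, assuming 3.4 mm isotropic voxels; note that the 7.9 mm figure presumably refers to the smoothness estimated from the data, which is a different quantity from the applied smoothing kernel):

```python
voxel_size_mm = 3.4
rule_of_thumb_voxels = 3          # "three voxels FWHM" (Nichols & Hayasaka, 2003)
min_fwhm_mm = rule_of_thumb_voxels * voxel_size_mm   # 3 * 3.4 = 10.2 mm
```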

Question 3: Are the permutation-based TFCE method and the permutation-based cluster-level correction method different at Z = 3.3 (p = 0.001)? Or is the TFCE method stricter?
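One concrete difference: TFCE has no single cluster-forming threshold at all. Each voxel's score integrates cluster extent over all thresholds (Smith & Nichols, 2009), so it is not directly comparable to one cut at Z = 3.3. A minimal 1-D numpy sketch of the TFCE transform, using the common default parameters E = 0.5, H = 2 (the run-labelling here is illustrative, not an actual toolbox implementation):

```python
import numpy as np

def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
    """TFCE score for a 1-D statistic map: for each voxel, integrate
    extent**E * height**H over cluster-forming thresholds h."""
    out = np.zeros_like(stat, dtype=float)
    heights = np.arange(dh, stat.max() + dh, dh)
    for h in heights:
        above = stat >= h
        start = None
        # walk the map, crediting each contiguous supra-threshold run
        for i, a in enumerate(np.append(above, False)):
            if a and start is None:
                start = i
            elif not a and start is not None:
                extent = i - start
                out[start:i] += (extent ** E) * (h ** H) * dh
                start = None
    return out
```

Because tall-and-wide signal accumulates support across many thresholds, TFCE tends to be more sensitive to distributed effects, rather than uniformly stricter or looser than a fixed-threshold cluster test.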

Question 4:  Is permutation-based cluster-level correction completely equivalent to GRF correction when the voxel-level p is set to 0.001 in both? If not, could we choose permutation-based cluster-level correction instead of TFCE when the reviewer recommends using a permutation test for verification?
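They are generally not equivalent: both use the same cluster-forming threshold, but the permutation approach builds the null distribution of the maximum cluster size empirically instead of deriving it from random-field theory. A toy 1-D sketch of a one-sample sign-flip permutation cluster test (illustrative only; real tools such as PALM or DPABI's permutation module operate on 3-D images with proper connectivity):

```python
import numpy as np

def cluster_sizes(stat, thresh):
    """Sizes of contiguous supra-threshold clusters in a 1-D map."""
    sizes, run = [], 0
    for a in stat > thresh:
        if a:
            run += 1
        elif run:
            sizes.append(run)
            run = 0
    if run:
        sizes.append(run)
    return sizes

def perm_cluster_test(data, thresh=3.3, n_perm=1000, seed=0):
    """Sign-flip permutation test with cluster-extent correction
    on a subjects x voxels array. Returns observed cluster sizes
    and the corrected p-value for the largest cluster."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]

    def tmap(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n))

    obs = cluster_sizes(tmap(data), thresh)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(n, 1))   # flip each subject's sign
        max_null[i] = max(cluster_sizes(tmap(data * flips), thresh), default=0)
    p_max = (max_null >= max(obs, default=0)).mean()
    return obs, p_max
```

Since the null here is empirical, the corrected p-values stay valid even when the GRF assumptions (high smoothness, Gaussian residuals) are questionable.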

Question 5:  According to the teaching video, a GRF voxel-level p = 0.001 was recommended, but the 2016 PNAS article by Eklund et al. seems to suggest that p = 0.001 is still not as good as a permutation test. Our reviewer raises the concern that there are still false positives at voxel-level p = 0.001 with cluster-level p = 0.05; how should we answer it? Is the CDT (cluster-defining threshold) approach not recommended in the future?

Thank you in advance!


1. Mean.

2-5. Why not use permutation test + TFCE as recommended? The methods all differ to some extent.

There is no consensus on the exact size of the smoothing kernel. There is clearly a trade-off: a larger kernel improves the signal-to-noise ratio but wipes out anatomical information at the voxel level. The smoothing level in DPABI is in the medium range, so it may balance the pros and cons of smoothing. However, empirical evidence is still needed.
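The trade-off can be illustrated in one dimension: smoothing a noisy signal with a Gaussian kernel shrinks the noise, but also blurs sharp boundaries. A toy numpy sketch (the FWHM-to-sigma conversion is the standard FWHM = 2 * sqrt(2 * ln 2) * sigma):

```python
import numpy as np

def gaussian_smooth_1d(x, fwhm_vox):
    """Convolve a 1-D signal with a Gaussian kernel of the given FWHM (in voxels)."""
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    radius = int(np.ceil(3 * sigma))
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()                    # normalise so flat regions are preserved
    return np.convolve(x, kernel, mode="same")

# Noisy boxcar: a 20-voxel "activation" embedded in noise.
rng = np.random.default_rng(0)
truth = np.zeros(100)
truth[40:60] = 1.0
noisy = truth + 0.5 * rng.standard_normal(100)
smooth = gaussian_smooth_1d(noisy, fwhm_vox=3)
# Noise shrinks inside flat regions, but the activation edge
# at voxel 40 is spread across its neighbours.
```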