Hi, I have a question about how to obtain the Z score. In particular, I don't know how the correction for the degrees of freedom is calculated (it seems to be a value around 2), since I could not find the actual calculation. Does anyone know how it is computed?

I would also like to use Yan's program y_Corr2p(r,n), but I don't understand what the "n" is. The comments in the program say that it is the number of pairs, but I don't understand what that means.

thanks a lot, many greetings


Dear Pablo,
       If you have two time series, then you can calculate the Pierson's correlation coefficient (r) between these two time series. And you can transform the correlation coefficient to z value by Fisher's r-to-z transform. The formulas like this (Vincent JL, Patel GH, Fox MD, Snyder AZ, Baker JT, Van Essen DC, Zempel JM, Snyder LH, Corbetta M, Raichle ME. 2007. Intrinsic functional architecture in the anaesthetized monkey brain. Nature. 447:83-86.):

       If you use y_Corr2p(r,n), then n is the number of pairs of observations. For example, if you have two time series, each with 230 time points, then n = 230.
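       For what it's worth, this can be sketched in Python. The Fisher transform itself is standard; that y_Corr2p applies the 1/sqrt(n - 3) scaling (the theoretical standard deviation of the Fisher z under independence) in exactly this way is an assumption, not confirmed from its source.

```python
import math

def fisher_r_to_z(r):
    # Fisher r-to-z transform: z = 0.5 * ln((1 + r) / (1 - r))
    return 0.5 * math.log((1 + r) / (1 - r))

def corr_to_zscore(r, n):
    # Divide the Fisher z by its theoretical standard deviation
    # 1 / sqrt(n - 3), where n is the number of pairs of time points.
    return fisher_r_to_z(r) * math.sqrt(n - 3)

# Two time series, each with 230 time points -> n = 230
print(round(corr_to_zscore(0.3, 230), 3))  # → 4.663
```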
       Hope this is helpful! Further discussion is welcome at restfmri.net.
       Best wishes!

Dear Yan, thanks a lot for your response, it helped me a lot.

I have two further questions. In the paper you cite, the authors say that "z(v) was divided by the square root of the theoretical variance, computed as 1/sqrt(n - 3), where n is the degrees of freedom. To account for autocorrelation in the BOLD signal according to Bartlett's theory, n was taken as the total number of time points (functional volumes) used to compute z(v), divided by the time integral of the square of the lagged autocorrelation function [ref. 41]. Z score maps were combined across subjects using a fixed-effects analysis (sum and divide by the square root of the number of subjects)."

I am still not getting these two things:
1 - how to calculate the time integral of the square of the lagged autocorrelation function.
2 - how to combine the Z scores across subjects. Instead of taking the mean of Z for each voxel, do they compute sum(Zi) / sqrt(NumberOfSubjects)?

thanks a lot!


Dear Pablo,
       REST takes z(v) as the Z score (stored in zFCMap*.img).
       The steps you quote ("z(v) was divided by the square root of the theoretical variance, computed as 1/sqrt(n - 3), where n is the degrees of freedom...") are taken specifically by their group. We usually do not use them. We performed a t-test on the z scores (see details in Yan C, Liu D, He Y, Zou Q, Zhu C, Zuo X, Long X, Zang Y. 2009. Spontaneous brain activity in the default mode network is sensitive to different resting-state conditions with limited cognitive load. PLoS ONE. 4:e5743.).
          Here is an example from another group:
          "For each seed set, Pearson correlation coefficients were calculated for each pair of regions, for each subject and each scan. The resulting correlation coefficients were either Fisher z-transformed for subsequent calculation of ICC, or were transformed into a distance measure (1 - r), for use in subsequent consistency (Kendall's W) and clustering analyses. To assess the significance of the correlation between each pair of regions in each seed set, we carried out a one-sample t-test on the z-transformed correlation coefficients for the 26 participants." (Shehzad Z, Kelly AM, Reiss PT, Gee DG, Gotimer K, Uddin LQ, Lee SH, Margulies DS, Roy AK, Biswal BB, Petkova E, Castellanos FX, Milham MP. 2009. The Resting Brain: Unconstrained yet Reliable. Cereb Cortex. doi:10.1093/cercor/bhn256)
       So you can choose either way to do the t-tests.
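       The t-test approach can be sketched as follows; the z values here are simulated for illustration, not real data, and the group size of 26 just mirrors the Shehzad et al. example above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Fisher z-transformed correlation values, one per
# subject (26 participants, simulated for illustration).
z = rng.normal(loc=0.25, scale=0.1, size=26)

# One-sample t-test against zero, computed by hand:
# t = mean / (sd / sqrt(n)), with df = n - 1.
t = z.mean() / (z.std(ddof=1) / np.sqrt(z.size))

# The two-tailed critical value for df = 25 at alpha = 0.05 is
# about 2.060; t beyond that indicates a significant group-level
# correlation.
print(t > 2.060)
```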
        Best wishes!

Thanks a lot, it was a great help!
I'll be in touch


If you really care how to correct such z values, you can find it in:



Thank you very much, the correction on the DoF is interesting.
Best wishes!

Dear Xinian,

Thank you for suggesting this. I was wondering what the exact equation for the autocorrelation is, in particular the one that works in combination with the expression for correcting the DoFs given in the paper you suggested (assuming that expression is correct). Does the autocorrelation need to be normalised? I somehow cannot make the adjustment work properly, as I keep getting really small values for the DoFs (I know my data are quite smooth, but still, (df_corrected - 3) should be greater than zero, shouldn't it?).
Would welcome any thoughts on this.

Many thanks,
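
The open question above can be illustrated with one literal reading of the quoted Bartlett correction: normalize the autocorrelation so that rho(0) = 1 and take the "time integral" as a discrete sum over all lags, in units of one TR. Both choices are assumptions, not the paper's actual code, but the sketch reproduces the symptom described in the last post, namely that the effective df can come out surprisingly small.

```python
import numpy as np

def effective_df(ts):
    # Bartlett-style effective number of time points: N divided by
    # the discrete "time integral" (sum over all lags, in units of
    # one TR) of the squared, normalized autocorrelation function.
    ts = ts - ts.mean()
    N = ts.size
    acov = np.correlate(ts, ts, mode="full") / N  # lags -(N-1)..(N-1)
    rho = acov / acov[N - 1]                      # so that rho(0) == 1
    return N / np.sum(rho ** 2)

rng = np.random.default_rng(0)
white = rng.normal(size=230)
smooth = np.convolve(white, np.ones(5) / 5, mode="same")

# Even for white noise the summed squared autocorrelation exceeds 1,
# because the estimated rho at nonzero lags is noisy; smoothing the
# series inflates the autocorrelation and shrinks the effective df
# further.
print(effective_df(white) > effective_df(smooth) > 3)
```

In practice the sum is often restricted or tapered to a few meaningful lags; summing all N - 1 noisy lag estimates, as above, inflates the integral, which may be one reason (df_corrected - 3) comes out so small on smooth data.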